id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2307.05049 | An Abstract Look at Awareness Models and Their Dynamics | This work builds upon a well-established research tradition on modal logics
of awareness. One of its aims is to export tools and techniques to other areas
within modal logic. To this end, we illustrate a number of significant bridges
with abstract argumentation, justification logics, the epistemic logic of
knowing-what and deontic logic, where basic notions and definitional concepts
can be expressed in terms of the awareness operator combined with the box
modality. Furthermore, these conceptual links point to interesting properties
of awareness sets beyond those standardly assumed in awareness logics, i.e.
positive and negative introspection. We show that the properties we list are
characterised by corresponding canonical formulas, so as to obtain a series of
off-the-shelf axiomatisations for them. As a second focus, we investigate the
general dynamics of this framework by means of event models. Of specific
interest in this context is to know under which conditions, given a model that
satisfies some property, the update with an event model keeps it within the
intended class. This is known as the closure problem in general dynamic
epistemic logics. As a main contribution, we prove a number of closure theorems
providing sufficient conditions for the preservation of our properties. Again,
these results enable us to axiomatize our dynamic logics by means of reduction
axioms. | Carlo Proietti, Fernando R. Velázquez-Quesada, Antonio Yuste-Ginel | 2023-07-11T06:57:42Z | http://arxiv.org/abs/2307.05049v1 | # An Abstract Look at
Awareness Models and Their Dynamics
Carlo Proietti (ILC, CNR, Genova, Italy)
Fernando R. Velázquez-Quesada (University of Bergen, Norway)
Antonio Yuste-Ginel (Universidad Complutense de Madrid, Spain)
###### Abstract
This work builds upon a well-established research tradition on modal logics of awareness. One of its aims is to export tools and techniques to other areas within modal logic. To this end, we illustrate a number of significant bridges with abstract argumentation, justification logics, the epistemic logic of knowing-what and deontic logic, where basic notions and definitional concepts can be expressed in terms of the awareness operator combined with the \(\Box\) modality. Furthermore, these conceptual links point to interesting properties of awareness sets beyond those standardly assumed in awareness logics - i.e. positive and negative introspection. We show that the properties we list are characterised by corresponding canonical formulas, so as to obtain a series of off-the-shelf axiomatisations for them. As a second focus, we investigate the general dynamics of this framework by means of event models. Of specific interest in this context is to know under which conditions, given a model that satisfies some property, the update with an event model keeps it within the intended class. This is known as the closure problem in general dynamic epistemic logics. As a main contribution, we prove a number of closure theorems providing sufficient conditions for the preservation of our properties. Again, these results enable us to axiomatize our dynamic logics by means of reduction axioms.
## 1 Introduction
Epistemic logics of awareness [20, 34] are extensions of propositional epistemic logic (EL; [26]) introduced for modelling a form of (_explicit_) knowledge that lacks closure under logical consequence (therefore avoiding the _logical omniscience_ problem). The idea is that knowledge requires both lack of uncertainty (the standard \(\Box\) modality) _and_ awareness, with the latter a unary modality that, semantically, verifies whether the given formula belongs to a specified world-dependent _awareness_ set. One can deal with specific awareness properties (e.g., awareness introspection) by specifying not only the properties of the awareness sets but also their interaction with the accessibility relations. One can also look at dynamics of awareness in the dynamic epistemic logic style (DEL; [6, 17, 9, 7]), defining model-changing actions for representing acts of awareness _elicitation_ or _forgetting_[11, 38, 15, 21].
The epistemic awareness setting can also be interpreted more generally by abstracting away from this specific reading (see Section 2). At a general level, one can read the awareness entities as a set **O** of generic objects, and the corresponding awareness modality as capturing the notion of "owning some
abstract object \(o\in\mathbf{O}\)". By doing so, one can find connections with other modal logics where abstract objects are used as additional or definitional concepts. For example, other approaches in epistemic and deontic logic [32, 29, 24, 38] can be seen as instances of a more general awareness-like framework. From this perspective, model properties connecting the "owning-the-object" operator "\(\mathbf{O}\)" with \(\Box\) constitute interesting desiderata. This paper defines a number of such properties and characterises them with formulas of the \(\mathbf{O}\)-language.
A second aim of this work is to investigate the dynamics of general \(\mathbf{O}\)-models. We use _event models_ as in [6] for their power to encode epistemic and factual changes at an extreme level of granularity [19]. Yet, a drawback of this approach is the often non-trivial _closure problem_: guaranteeing that, for a given class \(\mathfrak{M}\) of models, the product update of an \(\mathfrak{M}\)-model with an event model remains in \(\mathfrak{M}\). Closure results clarify the general constraints for the executability of actions, and therefore provide safe guidance for modelling them. Some closure theorems are available for DEL, establishing sufficient conditions for the preservation of accessibility relations [4, 8]. However, this issue is relatively underexplored for properties relating accessibility relations and awareness sets, such as the ones mentioned above (with the exception of [34]). As a central contribution of our work, we prove closure theorems for these properties. As an important byproduct, this opens a direct road to axiomatisation via reduction axioms.
The paper proceeds as follows. Section 2 introduces the general \(\mathbf{O}\)-framework, illustrating some of its applications. Crucially, it also lists meaningful model properties (at both the individual and multi-agent level; Subsection 2.1), providing their syntactic characterizations as well as their complete axiomatisations as a main result. Section 3 is about the dynamics of \(\mathbf{O}\)-models, semantically: we introduce event models and the closure problem, identifying sufficient conditions for the preservation of the discussed model properties. Section 4 looks at dynamics from the syntactic side, providing sound and complete axiomatisations for dynamic \(\mathbf{O}\)-logics. We end with a discussion of our results in Section 5. Sketches of proofs are to be found in the Appendix.
## 2 Basic framework
Throughout this document, let \(\mathsf{Ag}\) be a finite non-empty set of agents, \(\mathsf{At}\) be a countable set of propositional variables, and \(\mathbf{O}\) be a countable non-empty set of abstract objects. An \(\mathbf{O}\)-model is just a multi-relational model together with a function that assigns, to each agent, a set of objects from \(\mathbf{O}\) at each possible world.
**Definition 1** (O-Model): _An \(\mathbf{O}\)-model is a tuple \(\mathcal{M}=(\mathcal{W},\mathcal{R},\mathcal{O},\mathcal{V})\) where \(\mathcal{W}\neq\emptyset\) is a set whose elements are called possible worlds, \(\mathcal{R}:\mathsf{Ag}\rightarrow\wp(\mathcal{W}\times\mathcal{W})\) assigns a binary relation on \(\mathcal{W}\) to each agent \(i\in\mathsf{Ag}\), \(\mathcal{O}:(\mathsf{Ag}\times\mathcal{W})\rightarrow\wp(\mathbf{O})\) assigns a set of objects to each agent \(i\in\mathsf{Ag}\) at each world \(w\in\mathcal{W}\), and \(\mathcal{V}:\mathsf{At}\rightarrow\wp(\mathcal{W})\) is an atomic valuation function. Note: \(\mathcal{R}_{i}\) abbreviates \(\mathcal{R}(i)\), and \(\mathcal{O}_{i}(w)\) stands for \(\mathcal{O}(i,w)\). The set of worlds of a given \(\mathcal{M}\) is referred to as \(\mathcal{W}[\mathcal{M}]\); the same convention applies to \(\mathcal{R}\), \(\mathcal{O}\) and \(\mathcal{V}\). We use infix notation for each \(\mathcal{R}_{i}\). A pointed \(\mathbf{O}\)-model is a tuple \((\mathcal{M},w)\) with \(\mathcal{M}\) an \(\mathbf{O}\)-model and \(w\in\mathcal{W}[\mathcal{M}]\). Finally, \(\mathfrak{M}^{\mathbf{O}}\) denotes the class of all \(\mathbf{O}\)-models. \(\blacktriangleleft\)_
The language for describing \(\mathbf{O}\)-models is the following.
**Definition 2** (Language \(\mathcal{L}\)): _Given \(\mathsf{Ag}\), \(\mathsf{At}\), and \(\mathbf{O}\) as above, formulas \(\varphi\) of the language \(\mathcal{L}\) are given by_
\[\varphi::=\top\mid p\mid\mathsf{O}_{i}o\mid\neg\varphi\mid\varphi\wedge\varphi \mid\Box_{i}\varphi\]
_with \(p\in\mathsf{At}\), \(i\in\mathsf{Ag}\) and \(o\in\mathbf{O}\). Other Boolean constants/operators are defined as usual; likewise for the modal dual \(\Diamond_{i}\varphi\), defined as \(\neg\Box_{i}\neg\varphi\). Formulas of \(\mathcal{L}\) are interpreted at pointed \(\mathbf{O}\)-models. The truth-clauses for the multi-modal fragment of \(\mathcal{L}\) are the standard ones; for the new formulas,
\[\mathcal{M},w\models\mathrm{O}_{i}o\quad\text{iff}\quad o\in\mathcal{O}_{i}(w).\]
_Global truth of a formula and a set of formulas in a model is defined as usual [13], and denoted \(\mathcal{M}\models\varphi\) and \(\mathcal{M}\models\Phi\), respectively. Likewise for the notion of validity (notation: \(\models\varphi\)). _
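To make Definitions 1 and 2 concrete, here is a minimal Python sketch of pointed \(\mathbf{O}\)-models and the truth clauses; it is an illustration added here, not part of the original development, and the names (`OModel`, `holds`) and the tuple encoding of formulas are our own choices.

```python
from dataclasses import dataclass

# Formulas as nested tuples:
#   ('top',), ('p', name), ('O', i, o), ('not', f), ('and', f, g), ('box', i, f)

@dataclass
class OModel:
    worlds: set      # W
    R: dict          # agent -> set of (w, u) pairs
    O: dict          # (agent, world) -> set of objects owned there
    V: dict          # atom -> set of worlds where it is true

def holds(M, w, f):
    """Truth clauses of Definition 2 at the pointed model (M, w)."""
    tag = f[0]
    if tag == 'top':
        return True
    if tag == 'p':
        return w in M.V.get(f[1], set())
    if tag == 'O':                              # M, w |= O_i o  iff  o in O_i(w)
        _, i, o = f
        return o in M.O.get((i, w), set())
    if tag == 'not':
        return not holds(M, w, f[1])
    if tag == 'and':
        return holds(M, w, f[1]) and holds(M, w, f[2])
    if tag == 'box':                            # M, w |= []_i phi  iff  phi holds at every R_i-successor
        _, i, g = f
        return all(holds(M, u, g) for (v, u) in M.R.get(i, set()) if v == w)
    raise ValueError(f'unknown formula {f}')

# Tiny example: one agent 'a', two worlds, object o1 owned only at w1.
M = OModel(worlds={'w1', 'w2'},
           R={'a': {('w1', 'w2'), ('w2', 'w2')}},
           O={('a', 'w1'): {'o1'}, ('a', 'w2'): set()},
           V={'p': {'w2'}})
print(holds(M, 'w1', ('O', 'a', 'o1')))                 # True
print(holds(M, 'w1', ('box', 'a', ('O', 'a', 'o1'))))   # False: R_a does not preserve O_a here
```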
Let us now present some particular interpretations and instantiations of \(\mathbf{O}\)-models.
**Models for general and atomic awareness**: A model for _general_ awareness [21] is an \(\mathbf{O}\)-model where \(\mathbf{O}\) is the language \(\mathcal{L}\) itself. In this context, \(\mathcal{O}_{i}\) is called the _awareness function_ and it is denoted by \(\mathcal{A}_{i}\); notationally, the operator \(\mathrm{O}_{i}\) is replaced by the awareness operator \(\mathrm{A}_{i}\). A model for _atomic awareness_[21, 23] is instead one where the awareness function \(\mathcal{A}\) returns a set of atoms from \(\mathrm{At}\), with agent \(i\) aware of \(\varphi\) at a world \(w\) if and only if the set of atoms in \(\varphi\) is a subset of \(\mathcal{A}_{i}(w)\). These structures correspond to \(\mathbf{O}\)-models where \(\mathbf{O}\) is a set of atoms \(\mathrm{At}\). Syntactically, \(\mathcal{M},w\models\mathrm{A}_{i}p\) iff \(p\in\mathcal{A}_{i}(w)\), and then one can define inductively an additional modality \(\widetilde{\mathrm{A}}_{i}\) that works over arbitrary formulas:
\[\begin{aligned} \widetilde{\mathrm{A}}_{i}\top &:=\top, & \widetilde{\mathrm{A}}_{i}\neg\varphi &:=\widetilde{\mathrm{A}}_{i}\varphi, & \widetilde{\mathrm{A}}_{i}\square_{j}\varphi &:=\widetilde{\mathrm{A}}_{i}\varphi,\\ \widetilde{\mathrm{A}}_{i}p &:=\mathrm{A}_{i}p, & \widetilde{\mathrm{A}}_{i}(\varphi\wedge\psi) &:=\widetilde{\mathrm{A}}_{i}\varphi\wedge\widetilde{\mathrm{A}}_{i}\psi, & \widetilde{\mathrm{A}}_{i}\mathrm{A}_{j}p &:=\widetilde{\mathrm{A}}_{i}p.\end{aligned}\]
In this way, \(\mathcal{M},w\models\widetilde{\mathrm{A}}_{i}\varphi\) if and only if \(\mathrm{atm}(\varphi)\subseteq\mathcal{A}_{i}(w)\), with \(\mathrm{atm}(\varphi)\) the set of atoms occurring in \(\varphi\).
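The derived modality \(\widetilde{\mathrm{A}}_{i}\) then amounts to an inclusion test between \(\mathrm{atm}(\varphi)\) and the awareness set. A small sketch of this reduction (ours, reusing the tuple encoding and the `OModel` class from the previous snippet):

```python
def atoms(f):
    """atm(phi): the propositional atoms occurring in phi (tuple encoding as above)."""
    tag = f[0]
    if tag == 'top':
        return set()
    if tag == 'p':
        return {f[1]}
    if tag == 'O':          # in the atomic-awareness reading, A_j p mentions exactly the atom p
        return {f[2]}
    if tag == 'not':
        return atoms(f[1])
    if tag == 'and':
        return atoms(f[1]) | atoms(f[2])
    if tag == 'box':
        return atoms(f[2])
    raise ValueError(f'unknown formula {f}')

def aware_tilde(M, w, i, f):
    """M, w |= ~A_i phi  iff  atm(phi) is included in the awareness set A_i(w) (stored in M.O)."""
    return atoms(f) <= M.O.get((i, w), set())
```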
**Models for awareness of arguments**: One can also conceive \(\mathbf{O}\) as a set of _abstract arguments_ and \(\mathcal{O}\) as a function indicating the set of arguments that each agent is aware of at each world [36], so that \(\mathbf{O}_{i}a\) means "agent \(i\) is aware of argument \(a\)" or "agent \(i\) is able to use argument \(a\)". The resulting models constitute 'epistemic' versions of the abstract models of argumentation introduced in [20]. The main idea behind _abstract argumentation_ is to represent arguments as nodes of a graph, and attacks among them as arrows of the graph. There, argumentative notions such as argument acceptability are reduced to graph-theoretical notions, such as stability of a set within a graph. In the modalized (multi-agent) versions, a possible world is constituted by one such graph plus the specification of which arguments and attacks each agent is aware of. This enables us to express higher-order uncertainty about awareness of arguments [36], which is in turn crucial for modelling strategic reasoning in an argumentative environment [37] and its dynamics [33, 34]. In a similar vein, \(\mathbf{O}\)-models have been applied to more structured frameworks for argumentation [14, 15], with \(\mathbf{O}\) understood as a set of ASPIC\({}^{+}\) arguments [31].
**Justification logics**: In the justification logics of [3], justifications are abstract objects which have structure and operations on them. Formally, the set of _justification terms_\(J\) is built from sets of justification constants and justification variables by means of the operations of application ('\(\cdot\)') and sum ('\(+\)'). Thanks to them, one can define the language \(\mathcal{L}_{J}\) as the basic (multi)modal language plus expressions of the form \(t\colon\varphi\) (with \(t\) a term and \(\varphi\) a formula), read as _"\(t\) is a justification for \(\varphi\)"_. Formulas of this extended language are interpreted over _justification models_, tuples \(M=(\mathcal{W},\mathcal{R},\mathcal{E},\mathcal{V})\) where \(\mathcal{W}\), \(\mathcal{R}\) and \(\mathcal{V}\) are as in an \(\mathbf{O}\)-model. The new component, the evidence function \(\mathcal{E}:(J\times\mathcal{L}_{J})\to\wp(\mathcal{W})\), provides the set of worlds \(\mathcal{E}(t,\varphi)\) in which the term \(t\) is relevant/admissible evidence for the formula \(\varphi\). For this to work properly, \(\mathcal{E}\) should satisfy both
\[\mathcal{E}(s,\varphi\to\psi)\cap\mathcal{E}(t,\varphi)\subseteq\mathcal{E}(s \cdot t,\psi)\quad\text{and}\quad\mathcal{E}(s,\varphi)\cup\mathcal{E}(t, \varphi)\subseteq\mathcal{E}(s+t,\varphi).\]
Then, \((M,w)\models t\colon\varphi\) if and only if both \(w\in\mathcal{E}(t,\varphi)\) and \(\varphi\) holds in all worlds \(\mathcal{R}\)-reachable from \(w\).
A justification model can be seen as an \(\mathbf{O}\)-model in which the codomain of \(\mathcal{O}\) is a set of pairs of the form (justification, formula). Indeed, the evidence function can be equivalently defined as \(\mathcal{E}^{\prime}:\mathcal{W}\to\wp(J\times\mathcal{L}_{J})\), with \((t,\varphi)\in\mathcal{E}^{\prime}(w)\) indicating that \(t\) is relevant/admissible for \(\varphi\) at \(w\). Its constraints become
\[\{(s,\varphi\to\psi),(t,\varphi)\}\subseteq\mathcal{E}^{\prime}(w)\,\Rightarrow\,(s\cdot t,\psi)\in\mathcal{E}^{\prime}(w)\quad\text{and}\quad(s,\varphi)\in\mathcal{E}^{\prime}(w)\,\Rightarrow\,\{(s+t,\varphi),(t+s,\varphi)\}\subseteq\mathcal{E}^{\prime}(w)\]
and thus a justification model \(M=(\mathcal{W},\mathcal{R},\mathcal{E},\mathcal{V})\) can be equivalently stated as \(M^{\prime}=(\mathcal{W},\mathcal{R},\mathcal{E}^{\prime},\mathcal{V})\). Finally, for the language, one can simply define \(t\colon\!\varphi\mathrel{\mathop{:}}=\operatorname{O}(t,\varphi)\wedge\Box\varphi\).
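Reading the evidence function world by world, the two constraints describe a closure of each set \(\mathcal{E}^{\prime}(w)\) under application and sum. The snippet below is a rough illustration of this reading; the encodings and the bounded saturation are our own simplifications (full closure under '\(+\)' produces infinitely many terms, so only a few rounds are applied):

```python
from itertools import product

def derive(E, rounds=2):
    """Apply the two evidence constraints for a bounded number of rounds to a finite set E
       of (term, formula) pairs.  Formulas: strings or ('imp', phi, psi); terms: strings,
       ('app', s, t) for application, ('sum', s, t) for sum."""
    E = set(E)
    for _ in range(rounds):
        new = set()
        for (s, f), (t, g) in product(E, E):
            if isinstance(f, tuple) and f[0] == 'imp' and f[1] == g:
                new.add((('app', s, t), f[2]))   # application constraint
            new.add((('sum', s, t), f))          # sum constraint (both orders)
            new.add((('sum', t, s), f))
        E |= new
    return E

# t1 is evidence for p -> q and t2 is evidence for p, hence t1 . t2 is evidence for q.
E0 = {('t1', ('imp', 'p', 'q')), ('t2', 'p')}
print((('app', 't1', 't2'), 'q') in derive(E0))                 # True
print((('sum', 't1', 't2'), ('imp', 'p', 'q')) in derive(E0))   # True
```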
**Models for knowing-what** Plaza's analysis of the _knowing-what-the-value-of-a-constant-is_ operator (_knowing-what_ for short; [32]) has played a crucial role in the emergence of a _new generation of epistemic logics_[40] that go beyond standard _knowing-that_ modalities. Adding \(\mathsf{D}\) as a denumerable set of constants (rigid designators) to the framework, a \(\mathsf{D}\)-model extends a multi-relational model with a function \(\mathcal{V}_{\mathsf{D}}\mathrel{\mathop{:}}(\mathcal{W}\times\mathsf{D})\to S\), assigning a value in \(S\) to each object in \(\mathsf{D}\) at each world in \(\mathcal{W}\). Syntactically, the language extends the standard modal language with expressions of the form \(Kv_{i}d\) (for \(i\in\mathsf{Ag}\) and \(d\in\mathsf{D}\)), intuitively read as "agent \(i\) knows the value of constant \(d\)". Semantically, this is the case iff \(d\) denotes the same object in all \(i\)'s epistemically accessible worlds:
\[M,w\models_{v}Kv_{i}d\quad\text{iff}\quad\forall u,u^{\prime}\in\mathcal{W},\, w\mathcal{R}_{i}u\text{ and }w\mathcal{R}_{i}u^{\prime}\text{ imply }\mathcal{V}_{\mathsf{D}}(u,d)= \mathcal{V}_{\mathsf{D}}(u^{\prime},d).\]
A \(\mathsf{D}\)-model can be seen as an \(\mathbf{O}\)-model where \(\mathbf{O}\) is the set of tuples \(D\times S\), and with each possible world \(w\) having a single set \(\mathcal{O}(w)\). 1 Moreover, these sets should contain exactly one pair \((d,s)\) for each \(d\in\mathsf{D}\). Finally, using the 'owning' operator \(\operatorname{O}\), the formula \(Kv_{i}d\) is definable as \(Kv_{i}d\mathrel{\mathop{:}}=\Diamond_{i}\operatorname{O}(d,s)\to\square_{i} \operatorname{O}(d,s)\).
Footnote 1: Alternatively, all \(\mathcal{O}_{i}\)-sets are the same at each possible world.
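For instance, on a small \(\mathsf{D}\)-model represented as an \(\mathbf{O}\)-model (reusing the `OModel` sketch above), the knowing-what clause can be checked directly; `knows_value` is an illustrative helper of ours, not notation from the paper:

```python
def knows_value(M, w, i, d):
    """M, w |= Kv_i d  iff  the constant d has the same value at all R_i-successors of w.
       Values are read off the pairs (d, s) stored in the O-sets (one pair per constant)."""
    successors = [u for (v, u) in M.R.get(i, set()) if v == w]
    values = {s for u in successors for (c, s) in M.O.get((i, u), set()) if c == d}
    return len(values) <= 1

# Both successors assign value 3 to 'd', so the agent knows the value of d at w.
M = OModel(worlds={'w', 'u1', 'u2'},
           R={'a': {('w', 'u1'), ('w', 'u2')}},
           O={('a', 'w'): {('d', 7)}, ('a', 'u1'): {('d', 3)}, ('a', 'u2'): {('d', 3)}},
           V={})
print(knows_value(M, 'w', 'a', 'd'))   # True
```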
**Deontic logic** The _Kanger-Anderson reductionist approach_ to deontic logic [29, 2] consists in expressing the \(OB\) operator 'it is obligatory that' by means of the alethic modality \(\square\) plus a new propositional constant. In Kanger's terms, the propositional constant \(d\) has the intuitive meaning 'all normative demands are satisfied' (i.e., the situation is 'ideal'). The \(OB\varphi\) operator is defined as \(\square(d\to\varphi)\), and Kanger's system of deontic logic is obtained by adding, to the modal logic \(K\), the axiom \(\Diamond d\), which semantically defines _strong seriality_: for any world \(w\) there is a \(v\) s.t. \(w\mathcal{R}v\) and \(v\) is ideal. From our perspective, it is natural to interpret \(\mathbf{O}\) as representing the set of normative demands. Interestingly, when \(\mathbf{O}\) is finite, it is easy to rewrite \(d\) as \(\bigwedge_{o\in\mathbf{O}}o\) and capture its intended meaning. Indeed, the following holds:
**Remark 1**: _In the class of \(\mathbf{O}\)-models with \(\mathbf{O}\) finite, the formula \(\Diamond\bigwedge_{o\in\mathbf{O}}o\) characterizes strong seriality._
While the original Kanger-Anderson framework cannot handle contrary-to-duty obligations, further refinements, dating back to [24], allow this. The key idea is to use the \(\Diamond\) operator to express betterness as a pre-order among worlds, where \(\Diamond\varphi\) means that \(\varphi\) is the case in some world that is at least as good as the actual one. As suggested by [38], it is also natural to encode betterness by syntactic means, i.e. via an ordering \(\prec\) between formulas, where if \(\varphi\prec\psi\) then \(\psi\) logically implies \(\varphi\). Along similar lines, by regarding our objects as normative demands (desirable properties), one can define a betterness ordering as, e.g., \(\bigwedge_{o\in S}o\prec\bigwedge_{o\in S^{\prime}}o\) iff \(S\subsetneq S^{\prime}\), where \(S,S^{\prime}\subseteq\mathbf{O}\), and therefore \(\bigwedge_{o\in\mathbf{O}}o\) is the maximal element.
**Remark 2**: _Under this reading, the formula \(\mathbf{O}o\to\square\mathbf{O}o\) characterizes the fact that \(\mathcal{R}\) is a betterness relation: only worlds that are at least as ideal can be seen. Further, \(\neg\mathbf{O}o\to\Diamond\mathbf{O}o\) says that all non-ideal worlds failing some normative demand have access to some world satisfying it. Together with \(\mathbf{O}o\to\square\mathbf{O}o\) and the axiom \(\Diamond\Diamond p\to\Diamond p\) for transitivity, this implies strong seriality._
### Some useful/important properties of \(\mathbf{O}\)-models
Depending on the particular interpretation, an \(\mathbf{O}\)-model may be asked to satisfy requirements connecting the \(\mathcal{O}\)-sets with the accessibility relations \(\mathcal{R}_{i}\). This section lists some examples, providing their syntactic characterisations and discussing the settings in which they might be useful/important.
**Individual properties** We start with the simplest properties relating accessibility relations with objects: those whose formulations involve a single agent. These _individual properties_ are summarised in Table 1,
with a model \(\mathcal{M}\) satisfying an individual property (e.g., preservation of \(\mathcal{O}\)) iff it satisfies it for every agent \(i\in\mathsf{Ag}\). _Preservation_ and _anti-preservation_ come from awareness logic [21, 26, 35], where they capture the idea of _awareness introspection_. Indeed, if \(\mathcal{R}_{i}\) preserves (anti-preserves) \(\mathcal{O}_{i}\), then agent \(i\)'s awareness is positively (negatively) introspective: whenever she is (not) aware of something, she knows/believes so. The _invariance_ property, the conjunction of preservation and anti-preservation, captures perfect/total awareness introspection. Finally, the _inversion_ properties are mathematical variations of the preservation properties: they ask for the accessibility relation to _invert_ the 'opinion' of a set towards an object. To the best of our knowledge, none of them has been studied, and yet they can be seen as intuitively appealing in some contexts. For instance, \(\mathcal{R}\) inverting \(\mathcal{O}\) seems appropriate to talk, in the spirit of [2], about normative violations in a deontic reading of \(\mathbf{O}\)-models: if an agent has a bad habit, then she would prefer not to have it. Analogously, \(\mathcal{R}\) anti-inverting \(\mathcal{O}\) works well as a formal property for normative demands such as those of [29]: if the agent lacks a demand, then she prefers to have it.
The following proposition states the definability of the listed individual properties in \(\mathcal{L}\).
**Proposition 1**: _Let \(\mathbb{P}\) be an individual property (left-hand column of Table 1); let \(\Gamma(\mathbb{P})\) be the set of all instances of the corresponding schema in the right-hand column. For any \(\mathbf{O}\)-model \(\mathcal{M}\), we have that_
\[\mathcal{M}\text{ satisfies }\mathbb{P}\quad\text{ iff }\quad\mathcal{M}\models \Gamma(\mathbb{P}).\]
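On finite models with a finite object set, Proposition 1 can be sanity-checked by brute force. The following sketch (again building on the `OModel`/`holds` snippet above; the helper names are ours) tests, for a single agent, the semantic side of preservation and global truth of its characterising schema:

```python
def preserves(M, i):
    """Semantic condition of Table 1:  w R_i u  =>  O_i(w) subseteq O_i(u)."""
    return all(M.O.get((i, w), set()) <= M.O.get((i, u), set())
               for (w, u) in M.R.get(i, set()))

def schema_preservation(M, i, objects):
    """Global truth of the schema  O_i o -> []_i O_i o  for every o in the finite object set."""
    imp = lambda a, b: ('not', ('and', a, ('not', b)))     # material implication
    return all(holds(M, w, imp(('O', i, o), ('box', i, ('O', i, o))))
               for w in M.worlds for o in objects)

objects = {'o1', 'o2'}
M1 = OModel(worlds={'w1', 'w2'},
            R={'a': {('w1', 'w2')}},
            O={('a', 'w1'): {'o1'}, ('a', 'w2'): {'o1', 'o2'}},
            V={})
print(preserves(M1, 'a'), schema_preservation(M1, 'a', objects))   # True True
M1.O[('a', 'w2')] = set()                                           # break preservation
print(preserves(M1, 'a'), schema_preservation(M1, 'a', objects))   # False False
```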
#### Group properties
These properties express how the set of objects of one agent 'affects'/'influences' the set of objects of other agents in the worlds accessible to the first. As explained below, the notion of "a model \(\mathcal{M}\) satisfying a group property \(\mathbb{P}\)" should be parametrised to avoid trivialisations (e.g., all agents are aware of everything). The properties are listed in Table 2, with \(f:\mathsf{Ag}\to\wp(\mathsf{Ag})\setminus\{\emptyset\}\) a possibly partial function whose domain is non-empty. If \(\mathbb{P}\) is a group property, we say that \(\mathcal{M}\) _\(f\)-satisfies \(\mathbb{P}\)_ iff for every \(i\in Dom(f)\), \(\mathcal{M}\) satisfies \(\mathbb{P}\) for \(i\) and \(f(i)\). Moreover, we call _universal (resp. existential) group properties_ those that contain "for all" (resp. "for some") in their formulation. Regarding their use, the property of anti-preservation of \(\mathbf{O}\) for everyone in \(f(i)\) was first brought up by [36] in the context of epistemic logics for abstract argumentation: if the agent is not aware of an argument, she thinks no one else is. As suggested by [34], this property makes general sense under a _de re_ reading of the epistemic possibility of attributing someone else a given item. The remaining versions of preservation and anti-preservation are natural mathematical variations of the first, and it is not difficult to find intuitive readings for them. For instance, in an awareness context, preservation _for all_ indicates that each agent \(i\) knows/believes that everybody in \(f(i)\) is aware of what she is aware of. Analogously, preservation _for some_ indicates that each agent \(i\) knows/believes that at least someone in \(f(i)\) is aware of what she is aware of.
| \((\mathcal{W},\mathcal{R},\mathcal{O},\mathcal{V})\) **is s.t.** | **iff, for every \(w,u\in\mathcal{W}\),** | **Characterising schema** |
|---|---|---|
| \(\mathcal{R}_{i}\) preserves \(\mathcal{O}_{i}\) | \(w\mathcal{R}_{i}u\Rightarrow\mathcal{O}_{i}(w)\subseteq\mathcal{O}_{i}(u)\) | \(\mathsf{O}_{i}o\to\square_{i}\mathsf{O}_{i}o\) |
| \(\mathcal{R}_{i}\) anti-preserves \(\mathcal{O}_{i}\) | \(w\mathcal{R}_{i}u\Rightarrow\mathcal{O}_{i}(u)\subseteq\mathcal{O}_{i}(w)\) | \(\neg\mathsf{O}_{i}o\to\square_{i}\neg\mathsf{O}_{i}o\) |
| \(\mathcal{O}_{i}\) is invariant under \(\mathcal{R}_{i}\) | \(w\mathcal{R}_{i}u\Rightarrow\mathcal{O}_{i}(w)=\mathcal{O}_{i}(u)\) | \((\mathsf{O}_{i}o\to\square_{i}\mathsf{O}_{i}o)\wedge(\neg\mathsf{O}_{i}o\to\square_{i}\neg\mathsf{O}_{i}o)\) |
| \(\mathcal{R}_{i}\) inverts \(\mathcal{O}_{i}\) | \(w\mathcal{R}_{i}u\Rightarrow\mathcal{O}_{i}(w)\cap\mathcal{O}_{i}(u)=\emptyset\) | \(\mathsf{O}_{i}o\to\square_{i}\neg\mathsf{O}_{i}o\) |
| \(\mathcal{R}_{i}\) anti-inverts \(\mathcal{O}_{i}\) | \(w\mathcal{R}_{i}u\Rightarrow\mathcal{O}_{i}(w)\cup\mathcal{O}_{i}(u)=\mathbf{O}\) | \(\neg\mathsf{O}_{i}o\to\square_{i}\mathsf{O}_{i}o\) |

Table 1: Some individual properties.
The following proposition justifies the parametrisation of the group properties. In awareness epistemic terms, the first bullet says that, when combined with knowledge (or any other factive epistemic attitude), preservation and anti-preservation together imply that every agent is aware of the same things, and that this is common knowledge among all agents. This is clearly a trivialisation. The second bullet shows that knowledge cannot be combined with the universal group version of inversion or anti-inversion.
**Proposition 2**: _Let \(f_{gen}=\{(i,\mathsf{Ag})\mid i\in\mathsf{Ag}\}\), let \(\mathcal{M}\) be a reflexive \(\mathbf{O}\)-model._
* _If_ \(\mathcal{M}\) __\(f_{gen}\)_-satisfies universal preservation or anti-preservation, then all agents have available the same objects at each pair of worlds_ \(w,v\in\mathcal{W}\) _connected by the transitive closure of_ \(\bigcup_{i\in\mathsf{Ag}}\mathcal{R}_{i}\)_._
* \(\mathcal{M}\)__\(f_{gen}\)_-satisfies neither universal inversion nor universal anti-inversion._
**Remark 3**: _The individual versions of the (anti-)preservation and (anti-)inversion properties for \(i\in\mathsf{Ag}\) are the group versions (both universal and existential) for \(f_{indv}=\{(i,\{i\})\mid i\in\mathsf{Ag}\}\). \(\blacksquare\)_
Finally, we can characterise the group properties using \(\mathcal{L}\).
**Proposition 3**: _Let \(f:\mathsf{Ag}\to\wp(\mathsf{Ag})\setminus\{\emptyset\}\) be as described above; let \(\mathbb{P}_{i}^{f}\) be any of the group properties of the left-hand column of Table 2 (e.g., anti-inversion for \(i\) and someone in \(f(i)\)) and let \(\varphi(\mathbb{P}_{i}^{f})\) be its corresponding schema in the right-hand column. Let \(\Gamma(\mathbb{P}^{f})\) be the set of all instances of \(\varphi(\mathbb{P}_{i}^{f})\) for all \(i\in\mathsf{Ag}\), and let \(\mathcal{M}\) be an \(\mathbf{O}\)-model. Then,_
\[\mathcal{M}\ f\text{-satisfies }\mathbb{P}\quad\text{ iff }\quad\mathcal{M}\models\Gamma(\mathbb{P}^{f}).\ \blacksquare\]
Finally, here is the definition of the class of \(\mathbf{O}\)-models satisfying a collection of properties.
**Definition 3** (Classes of models): _Let \((f_{1},\ldots,f_{n})\) be a sequence with \(f_{k}:\mathsf{Ag}\to\wp(\mathsf{Ag})\setminus\{\emptyset\}\) being a function as described above for every \(1\leq k\leq n\), and let \((\mathbb{P}_{1},\ldots,\mathbb{P}_{n})\) be a sequence of group properties. We denote as \(\mathfrak{M}(f_{1}\mathbb{P}_{1},\ldots,f_{n}\mathbb{P}_{n})\) the class of all \(\mathbf{O}\)-models \(\mathcal{M}\) s.t. for every \(k\), \(\mathcal{M}\)\(f_{k}\)-satisfies \(\mathbb{P}_{k}\). \(\blacktriangle\)_
### Axiom system
Axiomatizing validities over \(\mathfrak{M}^{\mathbf{O}}\) (the class of all \(\mathbf{O}\)-models) is straightforward, as formulas with the 'owning' modality \(\mathsf{O}_{i}\) can be seen as particular atoms connected to a dedicated valuation function \(\mathcal{O}_{i}\). Since the \(\mathcal{O}_{i}\) sets have no particular requirements, the modal logic axiomatisation is enough.
When the focus is the class of models satisfying a certain collection of properties, additional work is needed; for this, Proposition 3 will be useful. Define the notion of local semantic consequence w.r.t. a given class of models in the standard way [13], denoting it by \(\Phi\models_{\mathfrak{M}(f_{1}\mathbb{P}_{1},\ldots,f_{n}\mathbb{P}_{n})}\varphi\).
| \((\mathcal{W},\mathcal{R},\mathcal{O},\mathcal{V})\) **is s.t.** | **iff, for every \(w,u\in\mathcal{W}\),** | **Characterising schema** |
|---|---|---|
| \(\mathcal{R}_{i}\) preserves \(\mathcal{O}_{j}\) for all \(j\in f(i)\subseteq\mathsf{Ag}\) | \(w\mathcal{R}_{i}u\Rightarrow\mathcal{O}_{i}(w)\subseteq\bigcap_{j\in f(i)}\mathcal{O}_{j}(u)\) | \(\mathsf{O}_{i}o\to\square_{i}\bigwedge_{j\in f(i)}\mathsf{O}_{j}o\) |
| \(\mathcal{R}_{i}\) preserves \(\mathcal{O}_{j}\) for some \(j\in f(i)\subseteq\mathsf{Ag}\) | \(w\mathcal{R}_{i}u\Rightarrow\mathcal{O}_{i}(w)\subseteq\bigcup_{j\in f(i)}\mathcal{O}_{j}(u)\) | \(\mathsf{O}_{i}o\to\square_{i}\bigvee_{j\in f(i)}\mathsf{O}_{j}o\) |
| \(\mathcal{R}_{i}\) anti-preserves \(\mathcal{O}_{j}\) for all \(j\in f(i)\subseteq\mathsf{Ag}\) | \(w\mathcal{R}_{i}u\Rightarrow\bigcup_{j\in f(i)}\mathcal{O}_{j}(u)\subseteq\mathcal{O}_{i}(w)\) | \(\neg\mathsf{O}_{i}o\to\square_{i}\bigwedge_{j\in f(i)}\neg\mathsf{O}_{j}o\) |
| \(\mathcal{R}_{i}\) anti-preserves \(\mathcal{O}_{j}\) for some \(j\in f(i)\subseteq\mathsf{Ag}\) | \(w\mathcal{R}_{i}u\Rightarrow\bigcap_{j\in f(i)}\mathcal{O}_{j}(u)\subseteq\mathcal{O}_{i}(w)\) | \(\neg\mathsf{O}_{i}o\to\square_{i}\bigvee_{j\in f(i)}\neg\mathsf{O}_{j}o\) |
| \(\mathcal{R}_{i}\) inverts \(\mathcal{O}_{j}\) for all \(j\in f(i)\subseteq\mathsf{Ag}\) | \(w\mathcal{R}_{i}u\Rightarrow\mathcal{O}_{i}(w)\cap\bigcup_{j\in f(i)}\mathcal{O}_{j}(u)=\emptyset\) | \(\mathsf{O}_{i}o\to\square_{i}\bigwedge_{j\in f(i)}\neg\mathsf{O}_{j}o\) |
| \(\mathcal{R}_{i}\) inverts \(\mathcal{O}_{j}\) for some \(j\in f(i)\subseteq\mathsf{Ag}\) | \(w\mathcal{R}_{i}u\Rightarrow\mathcal{O}_{i}(w)\cap\bigcap_{j\in f(i)}\mathcal{O}_{j}(u)=\emptyset\) | \(\mathsf{O}_{i}o\to\square_{i}\bigvee_{j\in f(i)}\neg\mathsf{O}_{j}o\) |
| \(\mathcal{R}_{i}\) anti-inverts \(\mathcal{O}_{j}\) for all \(j\in f(i)\subseteq\mathsf{Ag}\) | \(w\mathcal{R}_{i}u\Rightarrow\mathcal{O}_{i}(w)\cup\bigcap_{j\in f(i)}\mathcal{O}_{j}(u)=\mathbf{O}\) | \(\neg\mathsf{O}_{i}o\to\square_{i}\bigwedge_{j\in f(i)}\mathsf{O}_{j}o\) |
| \(\mathcal{R}_{i}\) anti-inverts \(\mathcal{O}_{j}\) for some \(j\in f(i)\subseteq\mathsf{Ag}\) | \(w\mathcal{R}_{i}u\Rightarrow\mathcal{O}_{i}(w)\cup\bigcup_{j\in f(i)}\mathcal{O}_{j}(u)=\mathbf{O}\) | \(\neg\mathsf{O}_{i}o\to\square_{i}\bigvee_{j\in f(i)}\mathsf{O}_{j}o\) |

Table 2: Some group properties.
**Definition 4** (Static logics): _The logic \(\mathsf{K}\) is the smallest set containing all instances of the axiom schemas of Table 3 that is moreover closed under both inference rules of the same table. The extension of \(\mathsf{K}\) by \(\Phi\subseteq\mathcal{L}\) is the smallest set of formulas containing all instances of schemas of Table 3, all formulas in \(\Phi\), and that is closed under both inference rules. Let \((f_{1},\ldots,f_{n})\) be a sequence of functions \(\mathsf{Ag}\to\wp(\mathsf{Ag})\setminus\{\emptyset\}\) as described above, and let \((\mathbb{P}_{1},\ldots,\mathbb{P}_{n})\) be a sequence of group properties. Then, we denote by \(\mathsf{L}(f_{1}\mbox{-}\mathbb{P}_{1},\ldots,f_{n}\mbox{-}\mathbb{P}_{n})\) the extension of \(\mathsf{K}\) with \(\bigcup_{1\leq k\leq n}\mathsf{\Gamma}(\mathbb{P}_{k}^{f_{k}})\).2 Note that when \(n=0\), \(\mathsf{L}(f_{1}\mbox{-}\mathbb{P}_{1},\ldots,f_{n}\mbox{-}\mathbb{P}_{n})= \mathsf{K}\). \(\blacktriangleleft\)_
Footnote 2: See propositions 1 and 3 for the meaning of \(\mathsf{\Gamma}(\mathbb{P}^{f})\).
The notions of \(\mathsf{L}(f_{1}\mbox{-}\mathbb{P}_{1},\ldots,f_{n}\mbox{-}\mathbb{P}_{n})\)-proof and \(\mathsf{L}(f_{1}\mbox{-}\mathbb{P}_{1},\ldots,f_{n}\mbox{-}\mathbb{P}_{n})\)-deduction from assumptions (noted \(\Phi\vdash_{\mathsf{L}(f_{1}\mbox{-}\mathbb{P}_{1},\ldots,f_{n}\mbox{-}\mathbb{P}_{n})}\varphi\)) are the standard ones in modal logic (see e.g., [13]).
**Theorem 1** (Static completeness): _Let \((f_{1},\ldots,f_{n})\) be a sequence of functions \(\mathsf{Ag}\to\wp(\mathsf{Ag})\setminus\{\emptyset\}\) as described above, and let \((\mathbb{P}_{1},\ldots,\mathbb{P}_{n})\) be a sequence of group properties. We have that:_
\[\mathsf{L}(f_{1}\mbox{-}\mathbb{P}_{1},\ldots,f_{n}\mbox{-}\mathbb{P}_{n})\]
_is sound and strongly complete with respect to \(\mathfrak{M}(f_{1}\mbox{-}\mathbb{P}_{1},\ldots,f_{n}\mbox{-}\mathbb{P}_{n})\). \(\blacksquare\)_
## 3 Dynamics of O-models, semantically
Changes in different modal attitudes (knowledge, beliefs, preferences and so on) have been the main topic of DEL. The main feature that distinguishes DEL from other approaches for modelling dynamics (e.g., propositional dynamic logic [25] or automata theory [28]) is that changes are not represented as (binary) relations, but rather as operations that modify the underlying semantic structure. Indeed, DEL can be understood, more broadly, as the study of modal logics of model change [8]. Here we focus on the _event models_ of [7, 6]: structures that, when 'applied' to a relational model (by means of a _product update_), produce another relational model. They were initially introduced as a way of modelling non-public acts of communication, and have since then been widely employed to model other forms of informational and factual changes [9, 19, 12, 16]. Besides their versatility, they have an important technical advantage: as proved in [19], any pointed relational model can be turned into any other by means of the product update with some event model that allows factual change.3 The rest of this section will discuss an extension of these event models that works for describing dynamics of \(\mathbf{O}\)-models.
Footnote 3: Slightly more precisely, given pointed models \((\mathcal{M},w)\) and \((\mathcal{M}^{\prime},w^{\prime})\), there is ‘almost always’ an event model that, when applied to \((\mathcal{M},w)\), produces a pointed model \((\mathcal{M}^{\prime\prime},w^{\prime\prime})\) that is, from the point of view of the language of propositional dynamic logic [25] (an extension of the basic modal language), indistinguishable from \((\mathcal{M}^{\prime},w^{\prime})\). See [19] for details.
**Definition 5** (Event O-Model): _An event \(\mathbf{O}\)-model is a tuple \(\mathcal{E}=(\mathcal{S},\mathcal{T},\mathsf{pre},\mathsf{eff})\) where \(\mathcal{S}\neq\emptyset\) is a finite set of events, \(\mathcal{T}:\mathsf{Ag}\to\wp(\mathcal{S}\times\mathcal{S})\) assigns to each agent a binary relation, \(\mathsf{pre}:\mathcal{S}\to\mathcal{L}\) assigns a precondition to each event, and \(\mathsf{eff}:(\mathsf{Ag}\times\{+,-\}\times\mathcal{S})\to\wp(\mathbf{O})\) is a function indicating, for each event, its (positive and negative) effects on the set of objects available to each agent (write \(\mathsf{eff}(i,\pm,s)\) as \(\mathsf{eff}_{i}^{\pm}(s)\) for \(\pm\in\{+,-\}\)). We assume that, for every \(s\in\mathcal{S}\) and every \(i\in\mathsf{Ag}\), the sets \(\mathsf{eff}_{i}^{+}(s)\) and \(\mathsf{eff}_{i}^{-}(s)\) are finite and disjoint. Note: \(\mathcal{T}_{i}\) abbreviates \(\mathcal{T}(i)\). The set of events of a given \(\mathcal{E}\) is referred to as \(\mathcal{S}[\mathcal{E}]\) (and the same convention applies for the other components of \(\mathcal{E}\)). We use infix notation for each \(\mathcal{T}_{i}\). A pointed event \(\mathbf{O}\)-model is a tuple \((\mathcal{E},s)\) with \(\mathcal{E}=(\mathcal{S},\mathcal{T},\mathsf{pre},\mathsf{eff})\) an event \(\mathbf{O}\)-model and \(s\in\mathcal{S}[\mathcal{E}]\)._

| **Axioms** | | **Rules** | |
|---|---|---|---|
| TAUT: | All propositional tautologies | MP: | From \(\varphi\) and \(\varphi\to\psi\), infer \(\psi\) |
| K: | \(\square_{i}(\varphi\to\psi)\to(\square_{i}\varphi\to\square_{i}\psi)\) | NEC: | From \(\varphi\), infer \(\square_{i}\varphi\) |

Table 3: The minimal modal logic \(\mathsf{K}\).
The above definition does not include the _post-condition function_ (see, e.g., [16, 17]), as we want to focus on non-factual changes (i.e., changes on accessibility relations and \(\mathbf{O}\)-sets, but not on atomic valuations). We think, however, that incorporating them does not pose any challenge, since our framework can in fact be seen as a variation of event models for factual change, where one deals with agent-indexed predicates instead of purely atomic propositions.
**Definition 6** (Product update): _Let \(\mathcal{M}=(\mathcal{W},\mathcal{R},\mathcal{O},\mathcal{V})\) be an \(\mathbf{O}\)-model and let \(\mathcal{E}=(\mathcal{S},\mathcal{T},\mathsf{pre},\mathsf{eff})\) be an event \(\mathbf{O}\)-model. The product update of \(\mathcal{M}\) and \(\mathcal{E}\) produces the model \(\mathcal{M}\otimes\mathcal{E}=(\mathcal{W}^{\prime},\mathcal{R}^{\prime}, \mathcal{O}^{\prime},\mathcal{V}^{\prime})\), where:_
* \(\mathcal{W}^{\prime}:=\{(w,s)\in\mathcal{W}\times\mathcal{S}\mid\mathcal{M},w \models\mathsf{pre}(s)\}\)_._
* \(\mathcal{R}^{\prime}_{i}:=\{\left((w,s),(u,t)\right)\in\mathcal{W}^{\prime}\times\mathcal{W}^{\prime}\mid w\mathcal{R}_{i}u\text{ and }s\mathcal{T}_{i}t\}\)_._
* \(\mathcal{O}^{\prime}_{i}((w,s)):=\big{(}\mathcal{O}_{i}(w)\setminus\mathsf{eff}_{i}^{-}(s)\big{)}\cup\mathsf{eff}_{i}^{+}(s)\)_._
* \(\mathcal{V}^{\prime}(p):=\{(w,s)\in\mathcal{W}^{\prime}\mid w\in\mathcal{V}(p)\}\)_._
_Note: \(\mathcal{W}^{\prime}\) is empty (and thus \(\mathcal{M}\otimes\mathcal{E}\) is not defined) when no possible world satisfies any precondition. Thus, \(\otimes\) is a partial function. When \(\mathcal{W}^{\prime}\neq\emptyset\), we say that \(\mathcal{M}\otimes\mathcal{E}\) is defined. _
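A minimal executable reading of Definition 6, continuing the Python sketches from Section 2 (`EventOModel`, `product_update` and the choice to pass the agent set explicitly are ours):

```python
from dataclasses import dataclass

@dataclass
class EventOModel:
    events: set        # S
    T: dict            # agent -> set of (s, t) pairs
    pre: dict          # event -> formula (tuple encoding as before)
    eff_plus: dict     # (agent, event) -> finite set of objects added
    eff_minus: dict    # (agent, event) -> finite set of objects removed

def product_update(M, E, agents):
    """Compute M (x) E as in Definition 6; returns None when no pair (w, s) survives."""
    W2 = {(w, s) for w in M.worlds for s in E.events if holds(M, w, E.pre[s])}
    if not W2:
        return None
    R2 = {i: {((w, s), (u, t))
              for (w, s) in W2 for (u, t) in W2
              if (w, u) in M.R.get(i, set()) and (s, t) in E.T.get(i, set())}
          for i in agents}
    O2 = {(i, (w, s)): (M.O.get((i, w), set()) - E.eff_minus.get((i, s), set()))
                       | E.eff_plus.get((i, s), set())
          for i in agents for (w, s) in W2}
    V2 = {p: {(w, s) for (w, s) in W2 if w in ws} for p, ws in M.V.items()}
    return OModel(worlds=W2, R=R2, O=O2, V=V2)
```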
**The closure problem**: Given a class of \(\mathbf{O}\)-models \(\mathfrak{M}\), the _closure_ problem [4, 5] asks to find a class of event \(\mathbf{O}\)-models \(\mathfrak{E}\neq\emptyset\) s.t. \(\mathcal{M}\in\mathfrak{M}\) and \(\mathcal{E}\in\mathfrak{E}\) imply \(\mathcal{M}\otimes\mathcal{E}\in\mathfrak{M}\). This is not trivial for the properties in Tables 1 and 2: it is clear that executing certain event \(\mathbf{O}\)-models in certain \(\mathbf{O}\)-models leads to the loss of, e.g., individual preservation. This paper focusses on group properties (Remark 3), using \(\mathsf{EMP}(\mathbb{P})\) to refer to the event-model property in Table 4 that corresponds to the group property \(\mathbb{P}\) in Table 2.4
Footnote 4: For instance, if \(\mathbb{P}\) is anti-inversion for someone, then \(\mathsf{EMP}(\mathbb{P})=\mathsf{EMP}^{\mathsf{anti-inversion-\exists}}\).
**Definition 7** (Classes of event models): _Let \((f_{1},\ldots,f_{n})\) be a sequence of functions \(\mathsf{Ag}\to\wp(\mathsf{Ag})\setminus\{\emptyset\}\) as described above, and let \((\mathsf{EMP}_{1},\ldots,\mathsf{EMP}_{n})\) be a sequence of group properties for event models (Table 4). We denote as \(\mathfrak{E}(f_{1}\mathsf{-EMP}_{1},\ldots,f_{n}\mathsf{-EMP}_{n})\) the class of all event \(\mathbf{O}\)-models \(\mathcal{E}\) s.t. for every \(1\leq k\leq n\), \(\mathcal{E}\)\(f_{k}\)-satisfies \(\mathsf{EMP}_{k}\). _
With properties of event models defined, here is the main result.
**Theorem 2** (Closure for group properties): _Let \(f:\mathsf{Ag}\to\wp(\mathsf{Ag})\setminus\{\emptyset\}\) be as described above. Let \(\mathcal{M}\) be an \(\mathbf{O}\)-model and \(\mathcal{E}\) an event \(\mathbf{O}\)-model s.t. \(\mathcal{M}\otimes\mathcal{E}\) is defined. For any property \(\mathbb{P}\) in Table 2, if \(\mathcal{M}\)\(f\)-satisfies \(\mathbb{P}\) and \(\mathcal{E}\)\(f\)-satisfies \(\mathsf{EMP}(\mathbb{P})\), then \(\mathcal{M}\otimes\mathcal{E}\)\(f\)-satisfies \(\mathbb{P}\). _
| \(\mathcal{E}\) \(f\)**-satisfies** | **iff for every \(i\in Dom(f)\), \(s,t\in\mathcal{S}[\mathcal{E}]\)** | **is safe for** |
|---|---|---|
| \(\mathsf{EMP}^{\mathsf{pres}-\forall}\) | \(s\mathcal{T}_{i}t\Rightarrow\mathsf{eff}_{i}^{+}(s)\subseteq\bigcap_{j\in f(i)}\mathsf{eff}_{j}^{+}(t)\) and \(\bigcup_{j\in f(i)}\mathsf{eff}_{j}^{-}(t)\subseteq\mathsf{eff}_{i}^{-}(s)\) | preservation for everyone |
| \(\mathsf{EMP}^{\mathsf{pres}-\exists}\) | \(s\mathcal{T}_{i}t\Rightarrow\mathsf{eff}_{i}^{+}(s)\subseteq\bigcup_{j\in f(i)}\mathsf{eff}_{j}^{+}(t)\) and \(\bigcup_{j\in f(i)}\mathsf{eff}_{j}^{-}(t)\subseteq\mathsf{eff}_{i}^{-}(s)\) | preservation for someone |
| \(\mathsf{EMP}^{\mathsf{anti-pres}-\forall}\) | \(s\mathcal{T}_{i}t\Rightarrow\bigcup_{j\in f(i)}\mathsf{eff}_{j}^{+}(t)\subseteq\mathsf{eff}_{i}^{+}(s)\) and \(\mathsf{eff}_{i}^{-}(s)\subseteq\bigcap_{j\in f(i)}\mathsf{eff}_{j}^{-}(t)\) | anti-preservation for everyone |
| \(\mathsf{EMP}^{\mathsf{anti-pres}-\exists}\) | \(s\mathcal{T}_{i}t\Rightarrow\bigcup_{j\in f(i)}\mathsf{eff}_{j}^{+}(t)\subseteq\mathsf{eff}_{i}^{+}(s)\) and \(\mathsf{eff}_{i}^{-}(s)\subseteq\bigcup_{j\in f(i)}\mathsf{eff}_{j}^{-}(t)\) | anti-preservation for someone |
| \(\mathsf{EMP}^{\mathsf{inv}-\forall}\) | \(s\mathcal{T}_{i}t\Rightarrow\mathsf{eff}_{i}^{+}(s)\subseteq\bigcap_{j\in f(i)}\mathsf{eff}_{j}^{-}(t)\) and \(\bigcup_{j\in f(i)}\mathsf{eff}_{j}^{+}(t)\subseteq\mathsf{eff}_{i}^{-}(s)\) | inversion for everyone |
| \(\mathsf{EMP}^{\mathsf{inv}-\exists}\) | \(s\mathcal{T}_{i}t\Rightarrow\mathsf{eff}_{i}^{-}(s)\subseteq\bigcap_{j\in f(i)}\mathsf{eff}_{j}^{+}(t)\) and \(\bigcup_{j\in f(i)}\mathsf{eff}_{j}^{-}(t)\subseteq\mathsf{eff}_{i}^{+}(s)\) | inversion for someone |
| \(\mathsf{EMP}^{\mathsf{anti-inv}-\forall}\) | \(s\mathcal{T}_{i}t\Rightarrow\mathsf{eff}_{i}^{-}(s)\subseteq\bigcup_{j\in f(i)}\mathsf{eff}_{j}^{+}(t)\) and \(\bigcup_{j\in f(i)}\mathsf{eff}_{j}^{-}(t)\subseteq\mathsf{eff}_{i}^{+}(s)\) | anti-inversion for everyone |
| \(\mathsf{EMP}^{\mathsf{anti-inv}-\exists}\) | \(s\mathcal{T}_{i}t\Rightarrow\mathsf{eff}_{i}^{-}(s)\subseteq\bigcup_{j\in f(i)}\mathsf{eff}_{j}^{+}(t)\) and \(\bigcup_{j\in f(i)}\mathsf{eff}_{j}^{-}(t)\subseteq\mathsf{eff}_{i}^{+}(s)\) | anti-inversion for someone |

Table 4: Properties of event \(\mathbf{O}\)-models.
**Example 1** (Different forms of forgetting): _Theorem 2 helps to test the compatibility between the model of a notion/concept and the model of its dynamics: a single action might be modelled by different event models, and the choice might depend on the specific model requirements. As an example, and in the awareness context, consider an action through which agent \(i\) becomes unaware of the atom \(p\) without anybody else noticing it. In [12], this action corresponds to the event model \(\mathsf{Pri}_{i}^{p}=(\{\bullet,\circ\},\mathcal{T},\mathsf{pre},\mathsf{eff})\) with \(\mathcal{T}_{i}=\{(\bullet,\bullet),(\circ,\circ)\}\) and \(\mathcal{T}_{j}=\{(\bullet,\circ),(\circ,\circ)\}\) for \(j\neq i\), and with \(\mathsf{eff}_{i}^{-}(\bullet)=\{p\}\) and \(\mathsf{eff}_{j}^{-}(\bullet)=\mathsf{eff}_{j}^{\pm}(\circ)=\mathsf{eff}_{i}^{\pm}(\circ)=\emptyset\). When \(\mathsf{Ag}=\{1,2\}\) and \(i=1\), the event model can be represented as_
[Diagram of \(\mathsf{Pri}_{1}^{p}\): the actual event \(\bullet\), labelled \(\mathsf{eff}_{1}^{-}(\bullet)=\{p\}\), \(\mathsf{eff}_{1}^{+}(\bullet)=\emptyset\), \(\mathsf{eff}_{2}^{\pm}(\bullet)=\emptyset\), and the effect-free event \(\circ\); agent \(1\) loops on \(\bullet\) and \(\circ\), agent \(2\) points from \(\bullet\) to \(\circ\) and loops on \(\circ\).]
_This event model does the job when awareness is not required to have special properties.5 However, it is not appropriate, e.g., when \(\mathcal{R}\) is required to \(f_{\text{gen}}\)-anti-preserve \(\mathcal{O}\) for everyone, for \(f_{\text{gen}}=\{(i,\mathsf{Ag})\mid i\in\mathsf{Ag}\}\) (as in the case of awareness of arguments of [36, 34]). Fortunately, there is another event model that captures the central intuition of the action (that is, that agent \(1\) privately loses awareness of \(p\) and she is the only one suffering this loss in the actual event \(\bullet\)) while also preserving the property. Indeed, consider \(\mathsf{AlPri}_{i}^{p}=(\{\bullet,\circ,\triangle\},\mathcal{T},\mathsf{pre},\mathsf{eff})\) with \(\mathcal{T}_{i}=\{(\bullet,\triangle),(\triangle,\triangle),(\circ,\circ)\}\) and \(\mathcal{T}_{j}=\{(\bullet,\circ),(\triangle,\triangle),(\circ,\circ)\}\) for \(j\neq i\), and with \(\mathsf{eff}_{i}^{-}(\bullet)=\mathsf{eff}_{i}^{-}(\triangle)=\mathsf{eff}_{j}^{-}(\triangle)=\{p\}\) and \(\mathsf{eff}_{i}^{+}(\bullet)=\mathsf{eff}_{j}^{\pm}(\bullet)=\mathsf{eff}_{i}^{+}(\triangle)=\mathsf{eff}_{j}^{+}(\triangle)=\mathsf{eff}_{i}^{\pm}(\circ)=\mathsf{eff}_{j}^{\pm}(\circ)=\emptyset\). When \(\mathsf{Ag}=\{1,2\}\) and \(i=1\), the event model is_
Footnote 5: It even preserves the individual version of invariance (Table 1).
[Diagram of \(\mathsf{AlPri}_{1}^{p}\): the actual event \(\bullet\) with \(\mathsf{eff}_{1}^{-}(\bullet)=\{p\}\) and all other effects empty; the event \(\triangle\) with \(\mathsf{eff}_{1}^{-}(\triangle)=\mathsf{eff}_{2}^{-}(\triangle)=\{p\}\) and \(\mathsf{eff}_{1}^{+}(\triangle)=\mathsf{eff}_{2}^{+}(\triangle)=\emptyset\); and the effect-free event \(\circ\); agent \(1\) points from \(\bullet\) to \(\triangle\) and loops on \(\triangle\) and \(\circ\), agent \(2\) points from \(\bullet\) to \(\circ\) and loops on \(\triangle\) and \(\circ\).]
_Just as before, agent \(1\) drops \(p\) (the effect of \(\bullet\)), and this change is private, since \(2\) believes that nothing happened (\(\circ\)). Additionally, and due to the nature of universal anti-preservation, \(1\) thinks that everyone loses awareness of \(p\) as well (the effects of \(\triangle\)). Note, moreover, that \(\mathsf{AlPri}_{i}^{p}\)\(f_{\text{gen}}\)-satisfies \(\mathsf{EMP}^{\mathsf{anti-pres}-\forall}\) (our sufficient condition for the preservation of universal anti-preservation). _
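The claims of Example 1 can also be checked mechanically: instantiate both event models and test the relevant row of Table 4. The encoding below (with `b`, `o`, `d` standing for \(\bullet\), \(\circ\), \(\triangle\)) and the helper `emp_anti_pres_forall` are our own sketch, built on the `EventOModel` class above:

```python
def emp_anti_pres_forall(E, f):
    """EMP^{anti-pres-forall}: for every i in dom(f) and every s T_i t,
       union_j eff_j^+(t) subseteq eff_i^+(s)  and  eff_i^-(s) subseteq intersection_j eff_j^-(t)."""
    ok = True
    for i in f:
        for (s, t) in E.T.get(i, set()):
            plus_union = set().union(*(E.eff_plus.get((j, t), set()) for j in f[i]))
            minus_inter = set.intersection(*(E.eff_minus.get((j, t), set()) for j in f[i]))
            ok = ok and plus_union <= E.eff_plus.get((i, s), set())
            ok = ok and E.eff_minus.get((i, s), set()) <= minus_inter
    return ok

f_gen, TOP = {1: {1, 2}, 2: {1, 2}}, ('top',)

Pri = EventOModel(events={'b', 'o'},
                  T={1: {('b', 'b'), ('o', 'o')}, 2: {('b', 'o'), ('o', 'o')}},
                  pre={'b': TOP, 'o': TOP},
                  eff_plus={}, eff_minus={(1, 'b'): {'p'}})

AlPri = EventOModel(events={'b', 'o', 'd'},
                    T={1: {('b', 'd'), ('d', 'd'), ('o', 'o')},
                       2: {('b', 'o'), ('d', 'd'), ('o', 'o')}},
                    pre={'b': TOP, 'o': TOP, 'd': TOP},
                    eff_plus={},
                    eff_minus={(1, 'b'): {'p'}, (1, 'd'): {'p'}, (2, 'd'): {'p'}})

print(emp_anti_pres_forall(Pri, f_gen))    # False: not safe for universal anti-preservation
print(emp_anti_pres_forall(AlPri, f_gen))  # True
```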
## 4 Dynamics of \(\mathbf{O}\)-models, syntactically
Here is the language used to describe the effect of product updates.
**Definition 8** (Language \(\mathcal{L}(\star)\)): _Let \(\mathfrak{E}^{\mathbf{O}}\) be the class of all event \(\mathbf{O}\)-models, and let \(\star\subseteq\mathfrak{E}^{\mathbf{O}}\) be a non-empty subclass. The dynamic language \(\mathcal{L}(\star)\) is given by_
\[\varphi::=\top\mid p\mid\mathsf{O}_{i}o\mid\neg\varphi\mid\varphi\wedge\varphi \mid\square_{i}\varphi\mid[\mathcal{E},s]\varphi\]
_with \(p\in\mathsf{At}\), \(i\in\mathsf{Ag}\), \(o\in\mathbf{O}\), \(\mathcal{E}\in\star\) and \(s\in\mathcal{S}[\mathcal{E}]\). The truth clause for the new kinds of formulas is:_
\[\mathcal{M},w\models[\mathcal{E},s]\varphi\quad\text{iff}\quad\mathcal{M},w \models\mathsf{pre}(s)\text{ implies }\mathcal{M}\otimes\mathcal{E},(w,s)\models\varphi.\qed\]
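Extending the earlier `holds` sketch with this clause is straightforward; for simplicity, the illustration below assumes that dynamic operators occur only at the outermost level of a formula:

```python
def holds_dynamic(M, w, f, agents):
    """Adds the clause  M, w |= [E, s] phi  iff  (M, w |= pre(s)  implies  M (x) E, (w, s) |= phi)."""
    if f[0] == 'dyn':                      # f == ('dyn', E, s, phi)
        _, E, s, g = f
        if not holds(M, w, E.pre[s]):
            return True
        return holds_dynamic(product_update(M, E, agents), (w, s), g, agents)
    return holds(M, w, f)                  # static formulas are handled as before
```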
**Definition 9** (Dynamic logics): _Let \(\mathsf{L}(f_{1}\mbox{-}\mathbb{P}_{1},\ldots,f_{n}\mbox{-}\mathbb{P}_{n})\) be a static logic of Definition 4. The logic \(\mathsf{L}^{!}(f_{1}\mbox{-}\mathbb{P}_{1},\ldots,f_{n}\mbox{-}\mathbb{P}_{n})\) extends \(\mathsf{L}(f_{1}\mbox{-}\mathbb{P}_{1},\ldots,f_{n}\mbox{-}\mathbb{P}_{n})\) with all axioms and rules of Table 5 that can be written in \(\mathcal{L}(\mathfrak{E}(f_{1}\mbox{-}\mathsf{EMP}(\mathbb{P}_{1}),\ldots,f_{n}\mbox{-}\mathsf{EMP}(\mathbb{P}_{n})))\). _
Then, the completeness result.
**Theorem 3** (Dynamic completeness): _Let \((f_{1},\ldots,f_{n})\) be a sequence of functions \(\mathsf{Ag}\to\wp(\mathsf{Ag})\setminus\{\emptyset\}\) as described above, and let \((\mathbb{P}_{1},\ldots,\mathbb{P}_{n})\) be a sequence of group properties. We have that:_
\[\mathsf{L}^{\,!}(f_{1}\mbox{-}\mathbb{P}_{1},\ldots,f_{n}\mbox{-}\mathbb{P}_{n})\]
_is sound and strongly complete with respect to \(\mathfrak{M}(f_{1}\mbox{-}\mathbb{P}_{1},\ldots,f_{n}\mbox{-}\mathbb{P}_{n})\)._
## 5 Conclusion and future work
The paper provides an abstract look at awareness epistemic models, understanding them not as a representation of the formulas the agents are aware of, but rather as a more general setting for dealing with a notion of 'owning abstract objects'. As discussed in Section 2, several well-known proposals from different areas can be seen as particular instances of this general type of structure.
When modelling specific phenomena, a general \(\mathbf{O}\)-structure may be asked to satisfy specific requirements. Of particular interest are those that relate \(\mathcal{O}\)-sets with accessibility relations, and Subsection 2.1 listed some possibilities, together with their characterising formula. Maybe more importantly, these requirements should be preserved by model operations representing dynamics of the modelled phenomena. Section 3 focussed on model operations defined in terms of event models, introducing classes of the latter that, under the product update operation, guarantee the preservation of the specified requirements. This establishes a form of 'compatibility' between the represented phenomena and the chosen event models. Section 4 closed the discussion, obtaining complete axiomatisations via reduction axioms.
There are branches open for further exploration; here are two of them. The first one is to work out the details of the instantiations of \(\mathbf{O}\)-models that were sketched in Section 2. The second one is to study more systematically the trivialisation of awareness (\(\mathcal{O}\)-sets) for extreme cases of \(f\) (e.g., for \(f_{gen}\)) so as to underpin our notion of \(f\)-satisfiability.
|
2307.12535 | Operator Norm Bounds on the Correlation Matrix of the SK Model at High
Temperature | We prove that the two point correlation matrix $ \textbf{M}= (\langle
\sigma_i ; \sigma_j\rangle)_{1\leq i,j\leq N} \in \mathbb{R}^{N\times N}$ of
the Sherrington-Kirkpatrick model has the property that for every $\epsilon>0$
there exists $K_\epsilon>0$, that is independent of $N$, such that
\[ \mathbb{P}\big( \| \textbf{M} \|_{\text{op}} \leq K_{\epsilon}\big) \geq
1- \epsilon \] for $N$ large enough, for suitable interaction and external
field parameters $(\beta,h)$ in the replica symmetric region. In other words,
the operator norm of $\textbf{M}$ is of order one with high probability. Our
results are in particular valid for all $ (\beta,h)\in (0,1)\times (0,\infty) $
and thus complement recently obtained results in \cite{EAG,BSXY} that imply the
operator norm boundedness of $\textbf{M}$ for all $\beta<1$ in the special case
of vanishing external field. | Christian Brennecke, Changji Xu, Horng-Tzer Yau | 2023-07-24T05:43:59Z | http://arxiv.org/abs/2307.12535v1 | # Operator Norm Bounds on the Correlation Matrix of the SK Model at High Temperature
###### Abstract
We prove that the two point correlation matrix \(\mathbf{M}=(\langle\sigma_{i};\sigma_{j}\rangle)_{1\leq i,j\leq N}\in\mathbb{ R}^{N\times N}\) of the Sherrington-Kirkpatrick model has the property that for every \(\epsilon>0\) there exists \(K_{\epsilon}>0\), that is independent of \(N\), such that
\[\mathbb{P}\big{(}\|\mathbf{M}\|_{\mathrm{op}}\leq K_{\epsilon}\big{)}\geq 1-\epsilon\]
for \(N\) large enough, for suitable interaction and external field parameters \((\beta,h)\) in the replica symmetric region. In other words, the operator norm of \(\mathbf{M}\) is of order one with high probability. Our results are in particular valid for all \((\beta,h)\in(0,1)\times(0,\infty)\) and thus complement recently obtained results in [16, 7] that imply the operator norm boundedness of \(\mathbf{M}\) for all \(\beta<1\) in the special case of vanishing external field.
## 1 Setup and Main Results
We consider systems of \(N\) interacting spins \(\sigma_{i}\in\{-1,1\},i\in[N]=\{1,\ldots,N\}\), described by the Sherrington-Kirkpatrick [25] Hamiltonian \(H_{N}:\{-1,1\}^{N}\to\mathbb{R}\) which is defined by
\[H_{N}(\sigma)=\beta\sum_{1\leq i<j\leq N}g_{ij}\sigma_{i}\sigma_{j}+h\sum_{i= 1}^{N}\sigma_{i}=\frac{\beta}{2}(\sigma,\mathbf{G}\sigma)+h\,(\mathbf{1}, \sigma). \tag{1.1}\]
The symmetric matrix \(\mathbf{G}=(g_{ij})_{1\leq i,j\leq N}\) is a GOE matrix, that is, up to the symmetry constraint its entries are i.i.d. centered Gaussian random variables of variance \(N^{-1}\) for \(i\neq j\) and we set \(g_{ii}=0\) for simplicity. The standard Euclidean inner product in \(\mathbb{R}^{N}\) and its induced norm are denoted by \((\cdot,\cdot)\) and \(\|\cdot\|\), respectively. We assume \(\beta>0\), \(h>0\) and
we assume the \(\{g_{ij}\}\) to be realized in some probability space \((\Omega,\mathcal{F},\mathbb{P})\). The expectation with regards to \(\mathbb{P}\) is denoted by \(\mathbb{E}(\cdot)\) and the \(L^{p}(\Omega,\mathcal{F},\mathbb{P})\) norms are \(\|\cdot\|_{p}=(\mathbb{E}\,|\cdot|^{p})^{1/p}\).
Based on a novel representation of the entries of the two point correlation matrix
\[\mathbf{M}=\mathbf{M}_{\beta,h}=(m_{ij})_{1\leq i,j\leq N}=(\langle\sigma_{i}; \sigma_{j}\rangle)_{1\leq i,j\leq N}\in\mathbb{R}^{N\times N}\]
as sums over weights of self-avoiding paths, which is motivated by the results of [1], we have recently proved in [7] for the special case \(h=0\) that at high temperature, \(\mathbf{M}\) is asymptotically close to a resolvent of \(\mathbf{G}\), in the sense that
\[\lim_{N\to\infty}\ \left\|\mathbf{M}_{\beta,h=0}-(1+\beta^{2}-\beta\mathbf{G})^{ -1}\right\|_{\rm op}=0 \tag{1.2}\]
in probability. Here, \(\|\mathbf{A}\|_{\rm op}=\sup_{\mathbf{u}\in\mathbb{R}^{N}:\|\mathbf{u}\|=1}\| \mathbf{A}\mathbf{u}\|\) denotes the standard operator norm, for \(\mathbf{A}\in\mathbb{R}^{N\times N}\), and \(\langle\cdot\rangle=\langle\cdot\rangle_{\beta,h}\) denotes the Gibbs measure induced by \(H_{N}\), i.e.
\[\langle f\rangle=\frac{1}{Z_{N}}\sum_{\sigma\in\{-1,1\}^{N}}f(\sigma)\,e^{H_{N }(\sigma)}\quad\text{ with }\quad Z_{N}=\sum_{\sigma\in\{-1,1\}^{N}}e^{H_{N}(\sigma)}\]
for \(f:\{-1,1\}^{N}\to\mathbb{R}\) and \(\langle f;g\rangle=\langle fg\rangle-\langle f\rangle\langle g\rangle\). By standard properties of GOE matrices, the validity of (1.2) naturally suggests the (well-known) existence of a phase transition at \(\beta=1\) and [7] verifies the validity of (1.2) indeed in the full replica symmetric region at \(h=0\), that is, for all \(\beta<1\). It implies in particular that
\[\lim_{N\to\infty}\|\mathbf{M}\|_{\rm op}=\ (1-\beta)^{-2},\]
so that the operator norm of \(\mathbf{M}\) is typically of order one as long as \(\beta<1\). The boundedness of \(\|\mathbf{M}\|_{\rm op}\) has also been proved independently in [16], in the sense that \(\mathbb{E}\,\|\mathbf{M}\|_{\rm op}=O(1)\).
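As a quick numerical illustration of (1.2) and of the limiting value \((1-\beta)^{-2}\) (this snippet is ours and not part of the paper), one can sample a GOE matrix with the normalization of (1.1) and compute the operator norm of the resolvent:

```python
import numpy as np

def goe(N, rng):
    """Symmetric matrix with centered Gaussian entries of variance 1/N off the diagonal
       and zero diagonal, as in (1.1)."""
    A = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
    G = (A + A.T) / np.sqrt(2)
    np.fill_diagonal(G, 0.0)
    return G

rng = np.random.default_rng(0)
N, beta = 2000, 0.7
G = goe(N, rng)
R = np.linalg.inv((1 + beta**2) * np.eye(N) - beta * G)
# Both numbers should be close to (1 - 0.7)^{-2} ~= 11.1, up to finite-N fluctuations.
print(np.abs(np.linalg.eigvalsh(R)).max(), (1 - beta)**(-2))
```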
A natural question is whether \(\mathbf{M}\) has bounded norm in the general case \(h\neq 0\) as well and whether this is connected to the replica symmetric phase, like in the case \(h=0\). Although a complete proof is to date still lacking, it is generally expected that the SK model is replica symmetric for all \((\beta,h)\) satisfying the AT [15] condition1
Footnote 1: Expectations over effective random variables, like for instance \(Z\) in Eq. (1.3) and Eq. (1.4), that are independent of the disorder \(\{g_{ij}\}\) in (1.1), are denoted by \(E(\cdot)\), which is to be distinguished from expectations \(\mathbb{E}(\cdot)\) over the disorder in (1.1).
\[\beta^{2}E\,{\rm sech}^{4}(h+\beta\sqrt{q}Z)<1, \tag{1.3}\]
where here and in the following \(q=q_{\beta,h}\) denotes the unique solution to
\[q=E\tanh^{2}(h+\beta\sqrt{q}Z). \tag{1.4}\]
Moreover, \(Z\sim\mathcal{N}(0,1)\) denotes a standard Gaussian independent of the disorder \(\{g_{ij}\}\).
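For concreteness, \(q_{\beta,h}\) and the left-hand side of the AT condition (1.3) can be evaluated numerically, e.g. with Gauss-Hermite quadrature and a plain fixed-point iteration; this is only an illustration, not part of the paper's arguments:

```python
import numpy as np

# Gauss-Hermite quadrature for E f(Z), Z ~ N(0,1):  E f(Z) ~= sum_k w_k f(sqrt(2) x_k) / sqrt(pi).
x, wts = np.polynomial.hermite.hermgauss(80)
nodes, wts = np.sqrt(2.0) * x, wts / np.sqrt(np.pi)
E = lambda f: np.dot(wts, f(nodes))

def solve_q(beta, h, iters=200):
    """Fixed-point iteration for  q = E tanh^2(h + beta sqrt(q) Z),  Eq. (1.4)."""
    q = 0.5
    for _ in range(iters):
        q = E(lambda Z: np.tanh(h + beta * np.sqrt(q) * Z) ** 2)
    return q

def at_value(beta, h):
    """beta^2 E sech^4(h + beta sqrt(q) Z); the AT condition (1.3) asks for this to be < 1."""
    q = solve_q(beta, h)
    return beta ** 2 * E(lambda Z: np.cosh(h + beta * np.sqrt(q) * Z) ** (-4))

print(solve_q(0.9, 0.3))   # q_{beta,h}
print(at_value(0.9, 0.3))  # < 1 here: sech^4 <= 1 and beta < 1, so (1.3) holds
```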
Proving the norm boundedness of \(\mathbf{M}\) (or an analogue of (1.2)) for \(h\neq 0\) under (1.3) is a challenging problem. In this work instead, we impose further assumptions on \((\beta,h)\) which ensure exponential concentration of the overlap, based on results obtained by Talagrand in [28, 29]. More precisely, we assume in the sequel the AT condition (1.3) together with
\[(\partial_{m}\Phi)(m,q^{\prime})\big{|}_{m=1}<0\text{ for all }q<q^{\prime}\leq 1, \tag{1.5}\]
where for \(0\leq m\leq 1\) and \(q\leq q^{\prime}\leq 1\) we set
\[\Phi(m,q^{\prime})=\log 2+\frac{\beta^{2}}{4}(1-q^{\prime})^{2}-\frac{\beta^{2 }}{4}m({q^{\prime}}^{2}-q^{2})+\frac{1}{m}E\log E_{Z^{\prime}}\cosh^{m}(h+ \beta\sqrt{q}Z+\beta\sqrt{q^{\prime}-q}Z^{\prime})\,.\]
Here, \((Z,Z^{\prime})\sim\mathcal{N}(0,\mathrm{id}_{\mathbb{R}^{2}})\) and \(E_{Z^{\prime}}(\cdot)\) denotes the expectation over \(Z^{\prime}\). Conditions (1.3) & (1.5) ensure that we consider parameters \((\beta,h)\) inside a suitable interior region of the replica symmetric phase of the model; see [29, Chapter 13] for more details on this. They imply in particular locally uniform in \(\beta\) exponential concentration of the overlap, which in turn enables an efficient computation of many observables.
We recall in Section 2 the results from [28, 29] that are based on (1.3) & (1.5) and used in the sequel; let us already point out that (1.3) & (1.5) are satisfied for every external field strength \(h>0\) whenever \(\beta<1\): indeed, following [29], one has
\[(\partial_{m}\Phi)(m,q^{\prime})\big{|}_{m=1}(q) =0,\hskip 14.226378pt\partial_{q^{\prime}}\Big{(}(\partial_{m} \Phi)(m,q^{\prime})\big{|}_{m=1}\Big{)}(q)=0,\] \[\partial_{q^{\prime}}^{2}\Big{(}(\partial_{m}\Phi)(m,q^{\prime}) \big{|}_{m=1}\Big{)} =\frac{\beta^{2}}{2}\Big{(}\beta^{2}E\frac{E_{Z^{\prime}}\operatorname{ sech}^{3}(h+\beta\sqrt{q}Z+\beta\sqrt{q^{\prime}-q}Z^{\prime})}{E_{Z^{\prime}} \cosh(h+\beta\sqrt{q}Z+\beta\sqrt{q^{\prime}-q}Z^{\prime})}-1\Big{)}\]
and simple monotonicity arguments as in [13, Section 3] imply
\[\beta^{2}\frac{E_{Z^{\prime}}\operatorname{sech}^{3}(h+\beta\sqrt{q}Z+\beta \sqrt{q^{\prime}-q}Z^{\prime})}{E_{Z^{\prime}}\cosh(h+\beta\sqrt{q}Z+\beta \sqrt{q^{\prime}-q}Z^{\prime})}\leq\beta^{2}\,E_{Z^{\prime}}\operatorname{ sech}^{4}(h+\beta\sqrt{q}Z+\beta\sqrt{q^{\prime}-q}Z^{\prime}).\]
Thus, \((\partial_{m}\Phi)(m,q^{\prime})\big{|}_{m=1}<0\) for all \(q<q^{\prime}\leq 1\) whenever \(\beta<1\).
Assuming from now on (1.3) & (1.5), our strategy to prove the norm boundedness of \(\mathbf{M}\) combines recent ideas from [5, 6, 1, 14, 9, 7] and provides several results of independent interest. In the first step of our analysis, we compute \(\mathbf{M}\) up to an error which has vanishing Frobenius norm in the limit \(N\to\infty\). To state this more precisely, we set from now on
\[\mathbf{m}=(m_{1},\ldots,m_{N})=(\langle\sigma_{1}\rangle,\ldots,\langle\sigma _{N}\rangle)\in\mathbb{R}^{N}\]
and we denote by \(\langle\cdot\rangle^{(i)}\) the Gibbs measure obtained from \(\langle\cdot\rangle\) by removing the spin \(\sigma_{i}\).
**Proposition 1.1**.: _Assume that \((\beta,h)\) satisfy (1.3) & (1.5). Then, there exists a constant \(C=C_{\beta,h}>0\) such that for all \(1\leq i\neq j\leq N\), we have that_
\[\mathbb{E}\Big{(}m_{ij}-\beta\big{(}1-m_{i}^{2}\big{)}\sum_{k=1}^{N}g_{ik}m_{ kj}^{(i)}\Big{)}^{2}\leq CN^{-5/2}. \tag{1.6}\]
**Remark 1.1**.: _Eq. (1.6) shows that \(m_{ij}\approx\beta\big{(}1-m_{i}^{2}\big{)}\sum_{k=1}^{N}g_{ik}m_{kj}^{(i)}\). Iterating the r.h.s., one obtains a representation of \(m_{ij}\) as a sum over weights \(w(\gamma)\) of self-avoiding paths \(\gamma\) from \(i\) to \(j\): each edge \(e=\{i_{1},i_{2}\}\in\gamma\) contributes a factor \(\beta g_{i_{1}i_{2}}\) to \(w(\gamma)\) and each vertex a factor \(\big{(}1-(m_{i}^{(S)})^{2}\big{)}\), for suitable \(S\subset[N]\). Since we do not use this representation to control the operator norm \(\|\mathbf{M}\|_{\mathrm{op}}\), we omit further details on its derivation, but only mention that it generalizes the path representation of \(\textbf{M}_{\beta,h=0}\) obtained in [7] in the range \(\beta<1\)._
As discussed in detail in [1] (for \(\beta\) sufficiently small), after expressing the \(m_{kj}^{(i)}\) through the original two-point functions \(m_{kj}\), (1.6) suggests the validity of the resolvent equation
\[\mathbf{M}\approx\frac{1}{\mathbf{D}+\beta^{2}(1-q)-2\beta^{2}N^{-1}\mathbf{ m}\otimes\mathbf{m}-\beta\mathbf{G}}, \tag{1.7}\]
where \(\mathbf{D}\in\mathbb{R}^{N\times N}\) is the diagonal matrix with entries
\[(\mathbf{D})_{ij}=m_{ii}^{-1}\delta_{ij}=\frac{1}{1-m_{i}^{2}}\delta_{ij}. \tag{1.8}\]
Motivated by this approximation, the second step of our analysis proves the following.
**Proposition 1.2**.: _Assume that \((\beta,h)\) satisfy (1.3) \(\&\) (1.5), let \(q\) denote the solution of (1.4) and let \(\textbf{D}\in\mathbb{R}^{N\times N}\) be defined as in (1.8). Then we have that_
\[\|(\mathbf{D}+\beta^{2}(1-q)-\beta\mathbf{G})\mathbf{M}-\mathrm{id}_{\mathbb{ R}^{N}}\|_{\mathrm{op}}\leq o(1)\|\mathbf{M}\|_{\mathrm{op}}+O(1)\]
_for an error \(o(1)\to 0\) as \(N\to\infty\) in probability and where the error \(O(1)\geq 0\) is of order one with high probability: there exists a constant \(C=C_{\beta,h}>0\) such that_
\[\mathbb{P}\big{(}O(1)\leq K\big{)}\geq 1-CK^{-2}\]
_for every \(K>0\)._
**Remark 1.2**.: _In view of the resolvent heuristics (1.7), it seems natural to expect that_
\[\lim_{N\to\infty}\big{\|}(\mathbf{D}+\beta^{2}(1-q)-2\beta^{2}N^{-1}\textbf{m} \otimes\textbf{m}-\beta\mathbf{G})\mathbf{M}-\mathrm{id}_{\mathbb{R}^{N}} \big{\|}_{\mathrm{op}}=0,\]
_with high probability. While the methods presented in this paper do not seem to allow a simple proof of this norm convergence, we plan to address this point in a follow-up work._
Finally, the norm boundedness of \(\mathbf{M}\) follows by deriving a uniform lower bound on the matrix \(\mathbf{D}+\beta^{2}(1-q)-\beta\mathbf{G}\) appearing in Prop. 1.2. This follows from the next result, which is a direct consequence of combining recent results and ideas from [5, 6, 14, 9]. In its statement, \((\mathbf{m}^{(k)})_{k\geq 1}\) denotes Bolthausen's iterative TAP solution [5, 6], whose construction is recalled in detail in Section 4 below.
**Proposition 1.3**.: _Assume that \((\beta,h)\) satisfy the AT condition (1.3) and let \(q\) denote the solution of (1.4). Then, there exists \(c=c_{\beta,h}>0\), which is independent of \(N\in\mathbb{N}\), so that_
\[\mathbf{D}_{\boldsymbol{m}^{(k)}}+\beta^{2}(1-q)-\frac{2\beta^{2}}{N} \boldsymbol{m}^{(k)}\otimes\boldsymbol{m}^{(k)}-\beta\mathbf{G}\geq c \tag{1.9}\]
_for \(k\) large enough, with probability tending to one as \(N\to\infty\). Here, \(\boldsymbol{m}^{(k)}\) denotes Bolthausen's iterative TAP solution at step \(k\) and \(\boldsymbol{D}_{\boldsymbol{m}^{(k)}}\in\mathbb{R}^{N\times N}\) has entries_
\[(\boldsymbol{D}_{\boldsymbol{m}^{(k)}})_{ij}=\frac{1}{1-\big{(}m_{i}^{(k)} \big{)}^{2}}\delta_{ij}.\]
_Moreover, assuming \((\beta,h)\) to satisfy both (1.3) & (1.5), it follows that_
\[\mathbf{D}+\beta^{2}(1-q)-\beta\mathbf{G}\geq\mathbf{D}+\beta^{2}(1-q)-\frac{ 2\beta^{2}}{N}\boldsymbol{m}\otimes\boldsymbol{m}-\beta\mathbf{G}\geq c\]
_with probability tending to one as \(N\to\infty\), where \(\boldsymbol{D}\in\mathbb{R}^{N\times N}\) is defined as in (1.8). In particular, \(\mathbf{D}+\beta^{2}(1-q)-\beta\mathbf{G}\) is invertible with high probability and satisfies_
\[\big{\|}\big{(}\mathbf{D}+\beta^{2}(1-q)-\beta\mathbf{G}\big{)}^{-1}\big{\|}_ {\mathrm{op}}\leq c^{-1}<\infty.\]
**Remark 1.3**.: _We point out that the proof of (1.9) only requires \((\beta,h)\) to satisfy (1.3). The additional assumption (1.5) is used to show that \(\boldsymbol{m}\) is close to \(\boldsymbol{m}^{(k)}\), applying the main result of [14] (see Prop. 4.5 for the details). The proof of (1.9) follows, up to a few modifications, from translating the arguments in [9] to the present setting. Related to this point, notice that the matrix on the l.h.s. in (1.9) is, up to a negligible error2, equal to the negative of the Hessian of the TAP free energy functional [2] at \(\boldsymbol{m}^{(k)}\). In particular, the lower bound in (1.9) resolves a recent open question from [21], which studies the limiting spectral distribution of the Hessian at \(\boldsymbol{m}^{(k)}\) and which provides an interesting characterization of Plefka's conditions [24]. For further recent work related to the TAP approach, see also [4, 12, 11, 10, 8, 17, 3, 23, 19] and the references therein._
Footnote 2: Under (1.3), it holds true that \(q=N^{-1}\sum_{i=1}^{N}\big{(}m_{i}^{(k)}\big{)}^{2}+o(1)\) for an error \(o(1)\) which is such that \(\lim_{k\to\infty}\limsup_{N\to\infty}o(1)=0\), in probability.
Collecting Propositions 1.1, 1.2 and 1.3, we arrive at the following main result.
**Theorem 1.4**.: _Assume that \((\beta,h)\) satisfy (1.3) \(\&\) (1.5). Then, for every \(\epsilon>0\) there exists a constant \(K_{\epsilon}>0\), that is independent of \(N\), such that_
\[\mathbb{P}\big{(}\|\mathbf{M}\,\|_{\mathrm{op}}\leq K_{\epsilon}\big{)}\geq 1-\epsilon\]
_for all \(N\) large enough._
Proof.: On a set of probability tending to one as \(N\to\infty\), Prop. 1.3 shows that
\[\big{\|}\big{(}\boldsymbol{D}+\beta^{2}(1-q)-\beta\,\boldsymbol{G}\big{)}^{-1}\big{\|}_{\mathrm{op}}\leq c_{\beta,h}^{-1}\]
for some constant \(c_{\beta,h}>0\), that is independent of \(N\). Using Prop. 1.2 and the norm bound \(\|\mathbf{A}\mathbf{B}\|_{\mathrm{op}}\leq\|\mathbf{A}\|_{\mathrm{op}}\| \mathbf{B}\|_{\mathrm{op}}\) for all \(\mathbf{A},\mathbf{B}\in\mathbb{R}^{N\times N}\), this implies
\[\|\mathbf{M}\|_{\mathrm{op}} \leq c_{\beta,h}{}^{-1}\big{(}1+\|(\mathbf{D}+\beta^{2}(1-q)- \beta\mathbf{G})\mathbf{M}-\mathrm{id}_{\mathbb{R}^{N}}\|_{\mathrm{op}}\big{)}\] \[\leq c_{\beta,h}{}^{-1}\big{(}1+O(1)\big{)}+c_{\beta,h}{}^{-1}o(1 )\|\mathbf{M}\|_{\mathrm{op}}\]
so that
\[\|\mathbf{M}\|_{\mathrm{op}}\leq\frac{c_{\beta,h}^{-1}+O(1)}{1-c_{\beta,h}^{- 1}o(1)}.\]
The claim now follows from the fact that \(\lim_{N\to\infty}o(1)=0\) in the sense of probability and the bound \(\mathbb{P}\big{(}O(1)>K\big{)}\leq CK^{-2}\), for some \(C>0\) and every \(K>0\).
The rest of this paper is devoted to the proofs of Prop. 1.1, Prop. 1.2 and Prop. 1.3. Section 2 recalls several useful results from [28, 29] and uses them to derive decay estimates on suitable correlation functions. Applying tools from stochastic calculus as in [1], this implies Prop. 1.1 and Prop. 1.2, which is explained in Section 3. Finally, in Section 4 we use the results of [5, 6, 14, 9] to deduce Prop. 1.3.
## 2 Bounds on \((k,p)\)-Point Correlation Functions
In this section, we derive suitable decay bounds on a certain class of correlation functions that occur naturally in the proofs of Propositions 1.1 and 1.2. Our bounds combine ideas from [1] and results by Talagrand [28, 29] that follow from the assumptions (1.3) & (1.5).
To efficiently employ the results from [28, 29] that are recalled below, it is convenient to work in a slightly more general setting than in Section 1. To be more precise, we consider in this section spin systems with Hamiltonian of the form
\[H_{N}(\sigma)=\beta\sum_{1\leq i<j\leq N}g_{ij}\sigma_{i}\sigma_{j}+\sum_{i=1 }^{N}h_{i}\sigma_{i}=\frac{\beta}{2}(\sigma,\mathbf{G}\sigma)+(\mathbf{h}, \sigma), \tag{2.1}\]
where \(\mathbf{h}\in\mathbb{R}^{N}\) is assumed to be a Gaussian random vector whose components are i.i.d. copies of a Gaussian random variable \(h\) with mean and variance denoted by
\[h\sim\mathcal{N}(\mu_{h},\sigma_{h}^{2}).\]
We allow for the possibility that \(\sigma_{h}^{2}=0\) (so that all of the following results apply in particular to systems with deterministic field as in (1.1)) and we assume that \(E\,h^{2}>0\) (in
particular, the Gaussian field \(\mathbf{h}\) need not be centered). Under the latter assumption, recall from [20] and [28, Chapter 1] that there is a unique solution
\[q=q_{\beta,\mu_{h},\sigma_{h}^{2}}\in(0,1)\]
to (1.4). Let us point out that in this slightly more general setting, the expectation \(E(\cdot)\) in (1.3), (1.4) and (1.5) is taken over all of the independent random variables \(Z,Z^{\prime}\) and \(h\).
In the following, we use dynamical methods to control the correlation functions of SK models with Hamiltonians as in (2.1), but with possibly modified interaction coupling \(\beta^{\prime}\) and Gaussian field \(\mathbf{h}^{\prime}\) (whose entries are i.i.d. copies of a modified Gaussian random variable \(h^{\prime}\sim\mathcal{N}(\mu_{h^{\prime}},\sigma_{h^{\prime}}^{2})\)). All considered parameters \((\beta^{\prime},h^{\prime})\) satisfy, however, conditions (1.3) & (1.5) and this motivates us to define the parameter set
\[\mathcal{A}_{RS^{-}}=\big{\{}(\beta,\mu_{h},\sigma_{h}^{2})\in(0,\infty)\times\mathbb{R}\times[0,\infty):(\beta,h)\text{ satisfy (1.3) \& (1.5), for }h\sim\mathcal{N}(\mu_{h},\sigma_{h}^{2})\big{\}}.\]
Using the continuous dependence \((\beta,\mu_{h},\sigma_{h}^{2})\mapsto q_{\beta,h}=q_{\beta,\mu_{h},\sigma_{h} ^{2}}\), it is straightforward to check that the set \(\mathcal{A}_{RS^{-}}\subset(0,\infty)\times\mathbb{R}\times[0,\infty)\) is open, which is frequently used below.
We denote by \((\sigma^{l})_{l\in\mathbb{Z}}\) a sequence of i.i.d. samples from \(\langle\cdot\rangle\), called replicas. Following standard conventions, the \(l\)-th replica \(\sigma^{l}\) corresponds to the \(l\)-th coordinate in \(\prod_{j\in\mathbb{Z}}\{-1,1\}^{N}\) with product Gibbs measure \(\bigotimes_{j\in\mathbb{Z}}\langle\cdot\rangle\). By slight abuse of notation, we write \(\langle\cdot\rangle\) also for expectations over functions of several replicas. Consider e.g. the overlap \(R:\{-1,1\}^{N}\times\{-1,1\}^{N}\to\mathbb{R}\) which equals the normalized inner product of two replicas:
\[R(\sigma^{l},\sigma^{l^{\prime}})=N^{-1}\sum_{i=1}^{N}\sigma_{i}^{l}\sigma_{i }^{l^{\prime}}=N^{-1}(\sigma^{l},\sigma^{l^{\prime}}).\]
Below, we abbreviate for simplicity \(R_{l,l^{\prime}}=R(\sigma^{l},\sigma^{l^{\prime}})\) when computing Gibbs expectations of functions of the overlap of multiple replicas. Following the previous remark, we thus write for instance \(\big{\langle}f(R_{1,2},R_{3,4})\big{\rangle}=\big{\langle}f\big{(}N^{-1}(\sigma^{1},\sigma^{2}),N^{-1}(\sigma^{3},\sigma^{4})\big{)}\big{\rangle}^{\otimes 4}\) for every \(f:\mathbb{R}^{2}\to\mathbb{R}\).
**Theorem 2.1** ([29, Theorem 13.7.1.]).: _Assume \((\beta,\mu_{h},\sigma_{h}^{2})\in\mathcal{A}_{RS^{-}}\) and \(h\sim\mathcal{N}(\mu_{h},\sigma_{h}^{2})\). Let \(q_{\beta,h}\) denote the unique solution of (1.4). Then, there is a constant \(K_{\beta,h}>0\) such that_
\[\mathbb{E}\,\big{\langle}\exp\big{(}N(R_{1,2}-q_{\beta,h})^{2}/K_{\beta,h} \big{)}\big{\rangle}_{\beta,h}\leq 2.\]
_Moreover, \(K_{\beta,h}=K_{\beta,\mu_{h},\sigma_{h}^{2}}\) is locally bounded in \((\beta,\mu_{h},\sigma_{h}^{2})\): for \(\delta=\delta_{\beta,h}>0\) small enough s.t. \((\beta-\delta,\beta+\delta)\times(\mu_{h}-\delta,\mu_{h}+\delta)\times[( \sigma_{h}^{2}-\delta)_{+},\sigma_{h}^{2}+\delta)\subset\mathcal{A}_{RS^{-}}\), we find \(K>0\) such that_
\[\sup_{\begin{subarray}{c}\beta^{\prime}\in(\beta-\delta,\beta+\delta),\\ \mu_{h}^{\prime}\in(\mu_{h}-\delta,\mu_{h}+\delta),\\ \sigma_{h^{\prime}}^{2}\in[(\sigma_{h}^{2}-\delta)_{+},\sigma_{h}^{2}+\delta) \end{subarray}}\mathbb{E}\,\big{\langle}\exp\big{(}N(R_{1,2}-q_{\beta^{\prime },h^{\prime}})^{2}/K\big{)}\big{\rangle}_{\beta^{\prime},h^{\prime}}\leq 2, \tag{2.2}\]
_where \((\sigma_{h}^{2}-\delta)_{+}=\max(0,\sigma_{h}^{2}-\delta)\) and where \(h^{\prime}\sim\mathcal{N}(\mu_{h^{\prime}},\sigma_{h^{\prime}}^{2})\)._
**Remark 2.1**.: _That \(K_{\beta,h}>0\) is locally bounded in \((\beta,\mu_{h},\sigma_{h}^{2})\) is not explicitly stated in [29, Theorem 13.7.1.], but it readily follows from the arguments in [29, Sections 13.4-13.7]._
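As a toy illustration of this concentration (a rough sketch only: the parameters are arbitrary, the system sizes are far from asymptotic, and we look at the Gibbs variance of \(R_{1,2}\) for a single disorder sample rather than at the exponential moment of the theorem), one can enumerate the configurations for small \(N\):

```python
import itertools
import numpy as np

def overlap_variance(N, beta=0.8, h=0.3, seed=0):
    rng = np.random.default_rng(seed)
    G = rng.normal(0.0, np.sqrt(1.0 / N), size=(N, N))
    G = np.triu(G, k=1)
    G = G + G.T
    S = np.array(list(itertools.product([-1, 1], repeat=N)), dtype=float)
    energies = 0.5 * beta * np.einsum('ci,ij,cj->c', S, G, S) + h * S.sum(axis=1)
    p = np.exp(energies - energies.max())
    p /= p.sum()
    R = (S @ S.T) / N                        # overlaps between all pairs of configurations
    mean_R = p @ R @ p                       # <R_{1,2}> for two independent replicas
    return mean_R, p @ (R ** 2) @ p - mean_R ** 2   # <R_{1,2}>, variance of R_{1,2}

for N in (6, 8, 10):
    print(N, overlap_variance(N))
```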
Theorem 2.1 implies useful decay bounds on a large class of correlation functions. Here, we rely on a direct consequence of (2.2) explained in detail in [28, Sections 1.8 & 1.10]. For the precise statement, we recall from [28] for \(l,l^{\prime}\in\mathbb{Z}\) the notation
\[T_{l,l^{\prime}}=N^{-1}(\sigma^{l}-\mathbf{m},\sigma^{l^{\prime}}-\mathbf{m}),\hskip 14.226378ptT_{l}=N^{-1}(\sigma^{l}-\mathbf{m},\mathbf{m}),\hskip 14.226378ptT =N^{-1}(\mathbf{m},\mathbf{m}) \tag{2.3}\]
for replicas \((\sigma^{l})_{l\in\mathbb{Z}}\) as well as
\[\begin{split}\nu_{1}^{2}&=\frac{1-2q+q_{4}}{1-\beta ^{2}(1-2q+q_{4}\,)},\hskip 14.226378pt\nu_{2}^{2}=\frac{q-q_{4}}{\big{(}1- \beta^{2}(1-2q+q_{4}\,)\big{)}\big{(}1-\beta^{2}(1-4q+3q_{4}\,)\big{)}},\\ \nu_{3}^{2}&=\frac{q_{4}-q^{2}}{1-\beta^{2}(1-4q+3q _{4}\,)}+\frac{\beta^{2}(q_{4}-q^{2})A^{2}}{1-\beta^{2}(1-4q+3q_{4}\,)}+\frac{2 \beta^{2}(2q+q^{2}-q_{4})B^{2}}{1-\beta^{2}(1-4q+3q_{4}\,)},\end{split} \tag{2.4}\]
where \(q\) is as in (1.4) and where \(q_{4}=E\tanh^{4}(h+\beta\sqrt{q}Z)\).
**Theorem 2.2** ([28, Theorem 1.10.1]).: _Assume \((\beta,\mu_{h},\sigma_{h}^{2})\in\mathcal{A}_{RS^{-}}\) and \(h\sim\mathcal{N}(\mu_{h},\sigma_{h}^{2})\). Denote by \(U_{l,l^{\prime}},U_{l}\) and \(U\) independent centered Gaussian random variables with variance \(\nu_{1}^{2},\nu_{2}^{2}\) and, respectively, \(\nu_{3}^{2}\), as in (2.4). Let \(T_{l,l^{\prime}},T_{l}\) and \(T\) be defined as in (2.3). Then, for fixed \(n\in\mathbb{N}_{0}\), \(k(l,l^{\prime})\in\mathbb{N}_{0}\) for \(1\leq l<l^{\prime}\leq n\), \(k(l)\in\mathbb{N}_{0}\) for \(1\leq l\leq n\), \(k\in\mathbb{N}_{0}\) as well as_
\[m=\sum_{1\leq l<l^{\prime}\leq n}k(l,l^{\prime})+\sum_{1\leq l\leq n}k(l)+k,\]
_we have that_
\[\left|\mathbb{E}\left\langle\prod_{1\leq l<l^{\prime}\leq n}T_{l,l^{\prime}}^{ k(l,l^{\prime})}\prod_{1\leq l\leq n}T_{l}^{k(l)}T^{k}\right\rangle-N^{-\frac{m}{2 }}E\prod_{1\leq l<l^{\prime}\leq n}U_{l,l^{\prime}}^{k(l,l^{\prime})}\prod_{1 \leq l\leq n}U_{l}^{k(l)}U^{k}\right|\leq CN^{-\frac{m+1}{2}}.\]
_Moreover, the constant \(C=C_{\beta,h}\) is locally bounded in \((\beta,\mu_{h},\sigma_{h}^{2})\)._
**Remark 2.2**.: _We remark that [28, Theorem 1.10.1] assumes \(\beta<1/2\) in order to apply [28, Eq. (1.88)] to obtain the error bounds in [28, Eq. (1.296)]. The latter is the key result that implies [28, Lemma 1.10.2 & Corollary 1.10.3], which in turn implies [28, Theorem 1.10.1]. Instead of assuming \(\beta<1/2\), we can equally assume \((\beta,\mu_{h},\sigma_{h}^{2})\in\mathcal{A}_{RS^{-}}\) and \(h\sim\mathcal{N}(\mu_{h},\sigma_{h}^{2})\) to get [28, Eq. (1.296)], by Theorem 2.1. Given [28, Eq. (1.296)], Theorem 2.2 can then be proved exactly as in [28, Section 1.10]. Moreover, the fact that the constant \(C_{\beta,h}\) is locally bounded in \((\beta,\mu_{h},\sigma_{h}^{2})\) is not explicitly explained in [28, Section 1.10], but it readily follows as a consequence of Theorem 2.1 and the relevant tools from [28, Sections 1.6, 1.8 & 1.10], in particular [28, Eq. (1.151), (1.154), (1.215) & (1.216)]._
We are now ready to apply the preceding results to derive suitable decay bounds on correlation functions. For \(k\in\mathbb{N}\) and \(i_{1},\ldots,i_{k}\in[N]\), we define the \(k\)-point functions by
\[m_{i_{1}i_{2}\ldots i_{k}}=\partial_{h_{i_{1}}}\partial_{h_{i_{2}}}\ldots \partial_{h_{i_{k}}}\log Z_{N}(\mathbf{h}). \tag{2.5}\]
To derive our main results, it is useful to consider a certain class of \((k,p)\)-point correlation functions, which are defined as follows: fixing \(k,p\in\mathbb{N}_{0}\) and \(j_{1},..,j_{p}\in[N]\), we define
\[m_{j_{1}\ldots j_{p};k}=N^{-k}\sum_{i_{1},\ldots,i_{k}=1}^{N}m_{i_{1}i_{1}i_{2 }i_{2}\ldots i_{k}i_{k}j_{1}j_{2}\ldots j_{p}}\,. \tag{2.6}\]
Notice that each of the indices \(i_{1},\ldots,i_{k}\) over which we average occurs exactly twice.
**Lemma 2.3**.: _Assume \((\beta,\mu_{h},\sigma_{h}^{2})\in\mathcal{A}_{RS^{-}}\) and \(h\sim\mathcal{N}(\mu_{h},\sigma_{h}^{2})\). Set \(A_{m}=m/2\) if \(m\leq 2\) and \(A_{m}=m/2+1/2\) if \(m\geq 3\). Then, we have for every \(k\geq 0,p\geq 1\) with \(2k+p\geq 2\) that_
\[N^{-p}\sum_{j_{1},\ldots,j_{p}=1}^{N}\mathbb{E}\,m_{j_{1}\ldots j_{p};k}^{2} \leq C_{\beta,h}N^{-A_{2k+p}}\]
_for some \(C_{\beta,h}>0\) that is locally bounded in \((\beta,\mu_{h},\sigma_{h}^{2})\)._
Proof.: For simplicity, we give a detailed proof for the cases \(k=0\) and \(k=1\); the remaining cases \(k\geq 2\) are proved with analogous arguments. We use the representation
\[m_{j_{1}\ldots j_{p}}=\Big{\langle}\sigma_{j_{1}}^{1}\prod_{u=1}^{p-1}\Big{(} \sum_{v=1}^{u}(\sigma_{j_{u+1}}^{v}-\sigma_{j_{u+1}}^{u+1})\Big{)}\Big{\rangle}, \tag{2.7}\]
where we recall that \((\sigma^{l})_{l\in\mathbb{N}}\) denote i.i.d. replicas sampled from \(\langle\cdot\rangle\). Eq. (2.7) follows from
\[\partial_{h_{i}}\left\langle f(\sigma^{1},\ldots,\sigma^{u})\right\rangle =\sum_{\sigma^{1},\ldots,\sigma^{u}\in\{-1,1\}^{N}}f(\sigma^{1}, \ldots,\sigma^{u})\,\partial_{h_{i}}\frac{e^{H_{N}(\sigma^{1})+\ldots+H_{N}( \sigma^{u})}}{Z_{N}^{u}}\] \[=\Big{\langle}f(\sigma^{1},\ldots,\sigma^{u})\Big{(}\sum_{v=1}^{ u}\sigma_{i}^{v}-u\sigma_{i}^{u+1}\Big{)}\Big{\rangle}\] \[=\Big{\langle}f(\sigma^{1},\ldots,\sigma^{u})\sum_{v=1}^{u}( \sigma_{i}^{v}-\sigma_{i}^{u+1})\Big{\rangle}\]
and induction. Observe that the previous identity implies in particular that
\[\Big{\langle}\prod_{u=1}^{p-1}\Big{(}\sum_{v=1}^{u}(\sigma_{j_{u+1}}^{v}- \sigma_{j_{u+1}}^{u+1})\Big{)}\Big{\rangle}=\partial_{h_{j_{2}}}\ldots \partial_{h_{j_{p}}}\left\langle 1\right\rangle=0\,. \tag{2.8}\]
Now, consider first the case \(k=0\). Recalling that \((\sigma^{l})_{l\in\mathbb{Z}}\) denotes a sequence of i.i.d. replicas sampled from \(\langle\cdot\rangle\), Eq. (2.7) and (2.8) imply that we can express \(m^{2}_{j_{1}\ldots j_{p}}\) as
\[m^{2}_{j_{1}\ldots j_{p}}=\Big{\langle}\big{(}\sigma^{1}_{j_{1}}\sigma^{-1}_{j_ {1}}-N^{-1}\|\mathbf{m}\|^{2}\big{)}\prod_{u=1}^{p-1}\Big{(}\sum_{v=1}^{u}( \sigma^{v}_{j_{u+1}}-\sigma^{u+1}_{j_{u+1}})\Big{)}\Big{(}\sum_{v^{\prime}=-1} ^{-u}(\sigma^{v^{\prime}}_{j_{u+1}}-\sigma^{-(u+1)}_{j_{u+1}})\Big{)}\Big{\rangle}\]
so that
\[N^{-p}\sum_{j_{1},\ldots,j_{p}=1}^{N}m^{2}_{j_{1}\ldots j_{p}}\] \[=\Big{\langle}\big{(}R_{1,-1}-N^{-1}\|\mathbf{m}\|^{2}\big{)}\prod_{u=1}^{p-1}\sum_{v=1}^{u}\sum_{v^{\prime}=-1}^{-u}\big{(}R_{vv^{\prime}}-R_{v,-(u+1)}-R_{u+1,v^{\prime}}+R_{u+1,-(u+1)}\big{)}\Big{\rangle}\] \[=\Big{\langle}(T_{1,-1}+T_{1}+T_{-1})\prod_{u=1}^{p-1}\sum_{v=1}^{u}\sum_{v^{\prime}=-1}^{-u}\big{(}T_{vv^{\prime}}-T_{v,-(u+1)}-T_{u+1,v^{\prime}}+T_{u+1,-(u+1)}\big{)}\Big{\rangle}.\]
Here, we used in the second step that \(R_{l,l^{\prime}}=T_{l,l^{\prime}}+T_{l}+T_{l^{\prime}}+N^{-1}\|\mathbf{m}\|^ {2}\) and that
\[R_{vv^{\prime}}-R_{v,-(u+1)}-R_{u+1,v^{\prime}}+R_{u+1,-(u+1)}\] \[=T_{vv^{\prime}}+T_{v}+T_{v^{\prime}}+N^{-1}\|\mathbf{m}\|^{2}-T _{v,-(u+1)}-T_{v}-T_{-(u+1)}-N^{-1}\|\mathbf{m}\|^{2}\] \[\quad-T_{u+1,v^{\prime}}-T_{u+1}-T_{v^{\prime}}-N^{-1}\|\mathbf{ m}\|^{2}+T_{u+1,-(u+1)}+T_{u+1}+T_{-(u+1)}+N^{-1}\|\mathbf{m}\|^{2}\] \[=T_{vv^{\prime}}-T_{v^{\prime},-(u+1)}-T_{u+1,v^{\prime}}+T_{u+1, -(u+1)}.\]
Now, applying Theorem 2.2 and using the same notation as there, we get
\[\Big{|}N^{-p}\sum_{j_{1},\ldots,j_{p}=1}^{N}\mathbb{E}\,m^{2}_{j_{1}\ldots j_ {p}}-N^{-\frac{p}{2}}E\prod_{u=0}^{p-1}Z_{u}\Big{|}\leq CN^{-\frac{p+1}{2}}, \tag{2.9}\]
where \((Z_{u})_{0\leq u\leq p-1}\) is a Gaussian random vector whose entries are defined by
\[Z_{u}=\begin{cases}U_{1,-1}+U_{1}+U_{-1}&:u=0,\\ \sum_{v=1}^{u}\sum_{v^{\prime}=-1}^{-u}\big{(}U_{vv^{\prime}}-U_{v,-(u+1)}-U_ {u+1,v^{\prime}}+U_{u+1,-(u+1)}\big{)}&:u\geq 1.\end{cases}\]
This proves the lemma for the case \(k=0\) and \(p=2\). To obtain an additional decay factor \(N^{-1/2}\) for the cases \(p\geq 3\), notice that the entries of \((Z_{u})_{1\leq u\leq p-1}\) are independent:
\[EZ_{u}Z_{u^{\prime}}= \,E\sum_{v=1}^{u}\sum_{v^{\prime}=-1}^{-u}\sum_{w=1}^{u^{\prime} }\sum_{w^{\prime}=-1}^{-u^{\prime}}\big{(}U_{vv^{\prime}}-U_{v,-(u+1)}-U_{u+1,v^{\prime}}+U_{u+1,-(u+1)}\big{)}U_{ww^{\prime}}\] \[=\sum_{v=1}^{u}\sum_{v^{\prime}=-1}^{-u}\big{(}E\,U^{2}_{vv^{ \prime}}-E\,U^{2}_{v,-(u+1)}-E\,U^{2}_{u+1,v^{\prime}}+E\,U^{2}_{u+1,-(u+1)} \big{)}=0\]
for all \(1\leq u<u^{\prime}\leq p-1\). Since \((Z_{u})_{0\leq u\leq p-1}\) is a Gaussian vector, we can then write
\[Z_{0}=\sum_{u=1}^{p-1}x_{u}Z_{u}+Z^{\prime},\]
where \(\mathbf{x}=(x_{1},\ldots,x_{p-1})\in\mathbb{R}^{p-1}\) and where \(Z^{\prime}\) is a Gaussian random variable independent of the remaining entries \(Z_{1},\ldots,Z_{p-1}\). This implies for \(p\geq 3\) that
\[E\prod_{u=0}^{p-1}Z_{u}=\sum_{u=1}^{p-1}x_{u}\big{(}E\,Z_{u}^{2}\big{)}\,E\prod _{\begin{subarray}{c}u^{\prime}=1,\\ u^{\prime}\neq u\end{subarray}}^{p-1}Z_{u^{\prime}}=0\]
and combining this with (2.9), we conclude that
\[\Big{|}\ N^{-p}\sum_{j_{1},\ldots,j_{p}=1}^{N}\mathbb{E}\,m_{j_{1}\ldots j_{ p}}^{2}\Big{|}\ \leq CN^{-\frac{p+1}{2}}.\]
Next, consider the case \(k=1\). By (2.7), we have that
\[m_{j_{1}\ldots j_{p};1}=\frac{1}{N}\sum_{i_{1}=1}^{N}m_{i_{1}i_{ 1}j_{1}\ldots j_{p}} =\frac{1}{N}\sum_{i_{1}=1}^{N}\Big{\langle}\sigma_{i_{1}}^{1}( \sigma_{i_{1}}^{1}-\sigma_{i_{1}}^{2})\prod_{u=2}^{p+1}\Big{(}\sum_{v=1}^{u} \big{(}\sigma_{j_{u-1}}^{v}-\sigma_{j_{u-1}}^{u+1}\big{)}\Big{)}\Big{\rangle}\] \[=\Big{\langle}(1-R_{1,2})\prod_{u=2}^{p+1}\Big{(}\sum_{v=1}^{u} \big{(}\sigma_{j_{u-1}}^{v}-\sigma_{j_{u-1}}^{u+1}\big{)}\Big{)}\Big{\rangle}\] \[=\Big{\langle}(N^{-1}\|\mathbf{m}\|^{2}-R_{1,2})\prod_{u=2}^{p+1 }\Big{(}\sum_{v=1}^{u}\big{(}\sigma_{j_{u-1}}^{v}-\sigma_{j_{u-1}}^{u+1}\big{)} \Big{)}\Big{\rangle},\]
where we used (2.8) in the last step. Hence
\[m_{j_{1}\ldots j_{p};1}^{2}=\Big{\langle}\big{(}N^{-1}\|\mathbf{ m}\|^{2}-R_{12}\big{)}\big{(}N^{-1}\|\mathbf{m}\|^{2}-R_{-1,-2}\big{)}\] \[\qquad\qquad\times\prod_{u=2}^{p+1}\Big{(}\sum_{v=1}^{u}(\sigma_ {j_{u-1}}^{v}-\sigma_{j_{u-1}}^{u+1})\Big{)}\Big{(}\sum_{v^{\prime}=-1}^{-u}( \sigma_{j_{u-1}}^{v^{\prime}}-\sigma_{j_{u-1}}^{-(u+1)})\Big{)}\Big{\rangle}\]
and thus, arguing as in the first step,
\[\begin{split} N^{-p}\sum_{j_{1},\ldots,j_{p}=1}^{N}m_{j_{1}\ldots j_{p};1}^{2}&=\Big{\langle}\big{(}T_{1,2}+T_{1}+T_{2}\big{)}\big{(}T_{-1,-2}+T_{-1}+T_{-2}\big{)}\\ &\qquad\times\prod_{u=2}^{p+1}\sum_{v=1}^{u}\sum_{v^{\prime}=-1}^{-u}\big{(}T_{vv^{\prime}}-T_{v,-(u+1)}-T_{u+1,v^{\prime}}+T_{u+1,-(u+1)}\big{)}\Big{\rangle}.\end{split}\]
Applying Theorem 2.2 and using the same notation as there, we obtain that
\[\Big{|}\ N^{-p}\!\!\sum_{j_{1},\ldots,j_{p}=1}^{N}\!\!\mathbb{E}\,m_{j_{1}\ldots j _{p};1}^{2}-E\,(U_{1}+U_{2}+U_{12})(U_{-1}+U_{-2}+U_{-1,-2})Y_{1,p}\Big{|}\leq CN^ {-\frac{p+3}{2}},\]
where we set
\[Y_{1,p}=\prod_{u=2}^{p+1}\Big{(}\sum_{v=1}^{u}\sum_{v^{\prime}=-1}^{-u}\big{(} U_{vv^{\prime}}-U_{v,-(u+1)}-U_{u+1,v^{\prime}}+U_{u+1,-(u+1)}\big{)}\Big{)}.\]
Notice in particular that \(Y_{1,p}\) is independent of the random variables \((U_{l,l^{\prime}})_{l,l^{\prime}\in\mathbb{Z}:l^{\prime}>0}\) and \((U_{l})_{l\in\mathbb{Z}}\) and that \(EY_{1,p}=0\), arguing as in the previous step. We therefore conclude that
\[\Big{|}\ N^{-p}\!\!\sum_{j_{1},\ldots,j_{p}=1}^{N}\!\!\mathbb{E}\,m_{j_{1} \ldots j_{p};1}^{2}\Big{|}\leq CN^{-\frac{p+3}{2}}=CN^{-A_{2+p}}.\]
For the remaining cases \(k\geq 2\), using the same arguments as above, we find that
\[\Big{|}\ N^{-p}\!\!\sum_{j_{1},\ldots,j_{p}=1}^{N}\!\!\mathbb{E}\,m_{j_{1}\ldots j_{p};k}^{2}-E\,X_{k,p}Y_{k,p}\Big{|}\leq CN^{-\frac{2k+p+1}{2}}=CN^{-A_{2k+p}},\]
where
\[Y_{k,p}=\prod_{u=2k}^{p+2k-1}\Big{(}\sum_{v=1}^{u}\sum_{v^{\prime}=-1}^{-u} \big{(}U_{vv^{\prime}}-U_{v,-(u+1)}-U_{u+1,v^{\prime}}+U_{u+1,-(u+1)}\big{)} \Big{)}\]
so that by independence \(E\,Y_{k,p}=0\), and where \(X_{k,p}\) is a finite polynomial of the random variables \((U_{l,l^{\prime}})_{l,l^{\prime}\in\mathbb{Z}:l^{\prime}>0}\) and \((U_{l})_{l\in\mathbb{Z}}\) so that \(E[X_{k,p}Y_{k,p}]=0\).
We conclude this section with a few corollaries of Lemma 2.3 that provide useful bounds on the correlation functions, conditionally on a fixed spin \(\sigma_{i}\). Recall that \(\mathbf{m}\in\mathbb{R}^{N}\) denotes the magnetization vector (i.e. \(m_{i}=\langle\sigma_{i}\rangle\)) and that \(\mathbf{m}^{(i)}\) denotes the magnetization of the system after removing \(\sigma_{i}\). We write \(\langle\cdot\rangle^{(i)}\) for the corresponding Gibbs measure so that
\[\mathbf{m}^{(i)}=\big{(}m_{1}^{(i)},\ldots,m_{i-1}^{(i)},m_{i+1}^{(i)},\ldots,m_{N}^{(i)}\big{)}=\big{(}\langle\sigma_{1}\rangle^{(i)},\ldots,\langle\sigma _{i-1}\rangle^{(i)},\langle\sigma_{i+1}\rangle^{(i)},\ldots,\langle\sigma_{N }\rangle^{(i)}\big{)}.\]
Note that \(\langle\cdot\rangle^{(i)}\) is a Gibbs measure induced by the SK Hamiltonian as in (1.1), but with \(N-1\) particles and with coupling \(\beta^{\prime}=(1-1/N)^{1/2}\beta\) (which is close to \(\beta\) for \(N\gg 1\)).
More generally, for fixed \(i\in[N]\), let us identify \((g_{ij})_{j:j\neq i}\) with \((g_{ij}(1))_{j:j\neq i}\), where the components of \(\big{(}(g_{ij}(t))_{j:j\neq i}\big{)}_{t\in[0,1]}\) denote \(N-1\) independent Brownian motions of speed
\(1/N\), i.e. \(\mathbb{E}(g_{ij}(t))^{2}=t/N\). Then, for \(\sigma_{i}\in\{-1,1\}\) and \(t\in[0,1]\), we denote by \(\langle\cdot\rangle^{[i]}(\sigma_{i},t)\) the Gibbs measure induced by the SK Hamiltonian
\[H_{N-1}^{[i]}(\sigma) = \sum_{\begin{subarray}{c}1\leq u<v\leq N,\\ u,v\neq i\end{subarray}}\beta^{\prime}g^{\prime}_{uv}\sigma_{u}\sigma_{v}+ \sum_{\begin{subarray}{c}1\leq u\leq N:\\ u\neq i\end{subarray}}^{N}\big{(}h_{u}+\beta g_{iu}(t)\sigma_{i}\big{)}\sigma_ {u}\] \[= \sum_{\begin{subarray}{c}1\leq u<v\leq N,\\ u,v\neq i\end{subarray}}\beta^{\prime}g^{\prime}_{uv}\sigma_{u}\sigma_{v}+ \sum_{\begin{subarray}{c}1\leq u\leq N:\\ u\neq i\end{subarray}}^{N}h^{\prime}_{u}(\sigma_{i},t)\sigma_{u}\]
for \(\sigma=(\sigma_{j})_{j\in[N],j\neq i}\in\{-1,1\}^{N-1}\). Here, the interactions \(g^{\prime}_{uv}=(1-1/N)^{-1/2}g_{uv}\) define a GOE matrix \(\mathbf{G}^{\prime}\in\mathbb{R}^{N-1\times N-1}\) and the random field \(\big{(}h^{\prime}_{j}(\sigma_{i},t)\big{)}_{j\in[N],j\neq i}\) consists of independent Gaussian copies of \(h^{\prime}\sim\mathcal{N}(\mu_{h^{\prime}},\sigma_{h^{\prime}}^{2})=\mathcal{ N}(\mu_{h},\sigma_{h}^{2}+\beta^{2}t/N)\). Observe that
\[|\mu_{h}-\mu_{h^{\prime}}|=0,\ \ \ \ |\beta-\beta^{\prime}|\to 0,\ \ \ \ |\sigma_{h}^{2}-\sigma_{h^{\prime}}^{2}|\to 0 \tag{2.10}\]
as \(N\to\infty\) and that \(\langle\cdot\rangle^{[i]}(\sigma_{i},0)=\langle\cdot\rangle^{(i)}\). Moreover, observe that \(\langle\cdot\rangle^{[i]}(\cdot,1)\) corresponds to the Gibbs expectation \((\langle\cdot\rangle|\sigma_{i})\) conditionally on \(\sigma_{i}\).
In the sequel, we denote by \(m^{[i]}_{j_{1}\ldots j_{p}}=m^{[i]}_{j_{1}\ldots j_{p}}(\sigma_{i},t)\) the \(p\)-point and by \(m^{[i]}_{j_{1}\ldots j_{p};k}=m^{[i]}_{j_{1}\ldots j_{p};k}(\sigma_{i},t)\) the \((k,p)\)-point correlation functions with respect to \(\langle\cdot\rangle^{[i]}\), defined as in (2.5) and (2.6). Following [1], we finally define for observables \(f:\{-1,1\}^{N-1}\to\mathbb{R}\)
\[\delta_{i}\langle f\rangle^{[i]} =\big{(}\delta_{i}\langle f\rangle^{[i]}\big{)}(t)=\frac{1}{2} \langle f\rangle^{[i]}(1,t)-\frac{1}{2}\langle f\rangle^{[i]}(-1,t),\] \[\epsilon_{i}\langle f\rangle^{[i]} =\big{(}\epsilon_{i}\langle f\rangle^{[i]}\big{)}(t)=\frac{1}{2} \langle f\rangle^{[i]}(1,t)+\frac{1}{2}\langle f\rangle^{[i]}(-1,t),\] \[\Delta_{i}\langle f\rangle^{[i]} =\big{(}\Delta_{i}\langle f\rangle^{[i]}\big{)}(t)=\big{(} \epsilon_{i}\langle f\rangle^{[i]}\big{)}(t)-\langle f\rangle^{(i)}.\]
The next lemmas estimate certain correlation functions, conditionally on \(\sigma_{i}\). To ease the notation, we write \(\sum\) and understand implicitly that all variables are averaged over \([N]\backslash\{i\}\).
**Lemma 2.4**.: _Assume \((\beta,\mu_{h},\sigma_{h}^{2})\in\mathcal{A}_{RS^{-}}\) and \(h\sim\mathcal{N}(\mu_{h},\sigma_{h}^{2})\). Then, for every \(k\geq 0,p\geq 1\) with \(2k+p\geq 1\), there exists a constant \(C>0\) such that_
\[\sup_{t\in[0,1]}N^{-p}\sum_{j_{1},\ldots,j_{p}}\big{\|}\big{(} \Delta_{i}m^{[i]}_{j_{1}\ldots j_{p};k}\big{)}(t)\big{\|}_{2}^{2}\leq CN^{-A _{2k+p+2}},\] \[\sup_{t\in[0,1]}N^{-p}\sum_{j_{1},\ldots,j_{p}}\big{\|}\big{(} \delta_{i}m^{[i]}_{j_{1}\ldots j_{p};k}\big{)}(t)\big{\|}_{2}^{2}\leq CN^{-A _{2k+p+1}}.\]
Proof.: Applying Ito's lemma w.r.t. \(\big{(}(g_{ij}(t))_{j\in[N]:j\neq i}\big{)}_{t\in[0,1]}\), we find by definition (2.6) that
\[\mathrm{d}\big{(}\Delta_{i}m^{[i]}_{j_{1}\ldots j_{p};k}\big{)}= \beta\sum_{l}\big{(}\delta_{i}m^{[i]}_{lj_{1}\ldots j_{p};k}\big{)}\,\mathrm{d }g_{il}+\frac{\beta^{2}}{2}(1-N^{-1})\big{(}\epsilon_{i}m^{[i]}_{j_{1}\ldots j _{p};k+1}\big{)}\,\mathrm{d}t\,,\] \[\mathrm{d}\big{(}\delta_{i}m^{[i]}_{j_{1}\ldots j_{p};k}\big{)}= \beta\sum_{l}\big{(}\epsilon_{i}m^{[i]}_{lj_{1}\ldots j_{p};k}\big{)}\, \mathrm{d}g_{il}+\frac{\beta^{2}}{2}(1-N^{-1})\big{(}\delta_{i}m^{[i]}_{j_{1} \ldots j_{p};k+1}\big{)}\,\mathrm{d}t\,.\]
Notice that the integrands on the r.h.s. in the previous equations are linear combinations of suitable \((k,p)\)-point functions with interaction coupling and external field parameters all satisfying (1.3) & (1.5) for \(N\) large (by the remarks around (2.10)) s.t. by Lemma 2.3
\[\sup_{t\in[0,1]}N^{-p-1}\sum_{l,j_{1},\ldots,j_{p}}\big{\|}\big{(}\epsilon_{i}m^{[i]}_{lj_{1}\ldots j_{p};k}\big{)}(t)\big{\|}_{2}^{2}\leq CN^{-A_{2k+p+1}},\] \[\sup_{t\in[0,1]}N^{-p}\sum_{j_{1},\ldots,j_{p}}\big{\|}\big{(}\delta_{i}m^{[i]}_{j_{1}\ldots j_{p};k+1}\big{)}(t)\big{\|}_{2}^{2}\leq CN^{-A_{2k+p+2}}.\]
Hence, employing the Ito isometry yields
\[\sup_{t\in[0,1]}N^{-p}\sum_{j_{1},\ldots,j_{p}}\big{\|}\big{(} \delta_{i}m^{[i]}_{j_{1}\ldots j_{p};k}\big{)}(t)\big{\|}_{2}^{2}\leq CN^{-A _{2k+p+1}}\]
and plugging this into the dynamical equation for \(\Delta_{i}m^{[i]}_{j_{1}\ldots j_{p};k}\), we get
\[\sup_{t\in[0,1]}N^{-p}\sum_{j_{1},\ldots,j_{p}}\big{\|}\big{(} \Delta_{i}m^{[i]}_{j_{1}\ldots j_{p};k}\big{)}(t)\big{\|}_{2}^{2}\leq CN^{-A _{2k+p+2}}.\]
**Lemma 2.5**.: _Assume \((\beta,\mu_{h},\sigma_{h}^{2})\in\mathcal{A}_{RS^{-}}\) and \(h\sim\mathcal{N}(\mu_{h},\sigma_{h}^{2})\). Then, for every \(k\geq 0,p\geq 1\) with \(2k+p\geq 1\), there exists a constant \(C>0\) such that_
\[\sup_{t\in[0,1]}N^{-p}\sum_{j_{1},\ldots,j_{p}}\Big{\|}\sum_{j}g_ {ij}(t)\big{(}\Delta_{i}m^{[i]}_{jj_{1}\ldots j_{p};k}\big{)}(t)\Big{\|}_{2}^{2} \leq CN^{-A_{2k+p+3}}, \tag{2.11}\] \[\sup_{t\in[0,1]}N^{-p}\sum_{j_{1},\ldots,j_{p}}\Big{\|}\sum_{j}g_ {ij}(t)\big{(}\delta_{i}m^{[i]}_{jj_{1}\ldots j_{p};k}\big{)}(t)\Big{\|}_{2}^{2} \leq CN^{-A_{2k+p+2}}.\]
Proof.: The bounds follow from integration by parts applied to \(g_{ij}(t)\) and \(g_{ij^{\prime}}(t)\) in
\[N^{-p}\sum_{j_{1},\ldots,j_{p}}\sum_{j,j^{\prime}}\mathbb{E}\,g_{ ij}g_{ij^{\prime}}\big{(}\Delta_{i}m^{[i]}_{jj_{1}\ldots j_{p};k}\big{)} \big{(}\Delta_{i}m^{[i]}_{j^{\prime}j_{1}\ldots j_{p};k}\big{)}\]
and applying Lemma 2.4. Since the resulting bounds are uniform in \(t\in[0,1]\), we suppress the \(t\)-dependence in the notation for simplicity. Applying Gaussian integration by parts first to the \(g_{ij^{\prime}}\) implies
\[N^{-p}\sum_{j_{1},\ldots,j_{p}}\sum_{j,j^{\prime}}\mathbb{E}\,g_{ ij}g_{ij^{\prime}}\big{(}\Delta_{i}m^{[i]}_{jj_{1}\ldots j_{p};k}\big{)}\big{(} \Delta_{i}m^{[i]}_{j^{\prime}j_{1}\ldots j_{p};k}\big{)}\] \[=N^{-p}\sum_{j_{1},\ldots,j_{p}=1}^{N}\,\bigg{(}\frac{t}{N}\sum_{ j=1}^{N}\mathbb{E}\big{(}\Delta_{i}m^{[i]}_{jj_{1}\ldots j_{p};k}\big{)}^{2}\] \[\qquad\qquad\qquad\qquad\qquad+\beta^{2}t(1-N^{-1})\sum_{j=1}^{N }\mathbb{E}\,g_{ij}\big{(}\Delta_{i}m^{[i]}_{jj_{1}\ldots j_{p};k}\big{)} \big{(}\Delta_{i}m^{[i]}_{j_{1}\ldots j_{p};k+1}\big{)}\] \[\qquad\qquad\qquad\qquad+\beta^{2}\frac{t}{N}\sum_{j,j^{\prime}= 1}^{N}\mathbb{E}\,g_{ij}\big{(}\Delta_{i}m^{[i]}_{jj^{\prime}j_{1}\ldots j_{ p};k}\big{)}\big{(}\Delta_{i}m^{[i]}_{j^{\prime}j_{1}\ldots j_{p};k}\big{)} \bigg{)}\]
and then, applying Gaussian integration by parts to the \(g_{ij}\), we arrive at
\[N^{-p}\sum_{j_{1},\ldots,j_{p}}\sum_{j,j^{\prime}}\mathbb{E}\,g_ {ij}g_{ij^{\prime}}\big{(}\Delta_{i}m^{[i]}_{jj_{1}\ldots j_{p};k}\big{)}\big{(} \Delta_{i}m^{[i]}_{j^{\prime}j_{1}\ldots j_{p};k}\big{)}\] \[=N^{-p}\sum_{j_{1},\ldots,j_{p}}\bigg{(}\frac{t}{N}\sum_{j} \mathbb{E}\big{(}\Delta_{i}m^{[i]}_{jj_{1}\ldots j_{p};k}\big{)}^{2}+\beta^{4 }t^{2}(1-N^{-1})\mathbb{E}\,\big{(}\Delta_{i}m^{[i]}_{j_{1}\ldots j_{p};k+1} \big{)}^{2}\] \[\qquad\qquad\qquad\qquad+\frac{\beta^{4}t^{2}}{N}(1-N^{-1})\sum_{ j}\mathbb{E}\,\big{(}\Delta_{i}m^{[i]}_{jj_{1}\ldots j_{p};k}\big{)}\big{(} \Delta_{i}m^{[i]}_{jj_{1}\ldots j_{p};k+1}\big{)}\] \[\qquad\qquad\qquad+\frac{\beta^{4}t^{2}}{N^{2}}\sum_{j,j^{\prime} }\mathbb{E}\,\big{(}\Delta_{i}m^{[i]}_{jj^{\prime}j_{1}\ldots j_{p};k}\big{)} ^{2}\bigg{)}.\]
Applying Lemma 2.4 and Cauchy-Schwarz, we thus obtain
\[\sup_{t\in[0,1]}N^{-p}\sum_{j_{1},\ldots,j_{p}}\Big{\|}\sum_{j}g_{ij}(t)\big{(} \Delta_{i}m^{[i]}_{jj_{1}\ldots j_{p};k}\big{)}(t)\Big{\|}_{2}^{2}\leq CN^{-A _{2k+p+3}}.\]
Repeating the computation while replacing \(\Delta_{i}\) by \(\delta_{i}\) in all integrands, we obtain (2.11).
## 3 Proof of Propositions 1.1 and 1.2
We start with the proof of Prop. 1.1 which is a simple consequence of Lemma 2.4.
Proof of Prop. 1.1.: A direct computation shows that
\[m_{ij}=(1-m_{i}^{2})\,\delta_{i}m_{j}^{[i]}.\]
Now, applying Ito's lemma as in the previous section on the \(i\)-th row of \(\mathbf{G}\), we find
\[\frac{m_{ij}}{1-m_{i}^{2}}-\beta\sum_{l}g_{il}m_{lj}^{(i)} =\beta\sum_{k}\int_{0}^{1}\big{(}\Delta_{i}m_{jk}^{[i]}\big{)}(t )\,\mathrm{d}g_{ik}(t)+\frac{\beta^{2}}{2N}\sum_{k}\int_{0}^{1}\big{(}\delta_{i }m_{kkj}^{[i]}\big{)}(t)\,\mathrm{d}t\] \[=\beta\sum_{k}\int_{0}^{1}\big{(}\Delta_{i}m_{jk}^{[i]}\big{)}(t )\,\mathrm{d}g_{ik}(t)+\frac{\beta^{2}}{2}(1-N^{-1})\int_{0}^{1}\big{(}\delta_{ i}m_{j;1}^{[i]}\big{)}(t)\,\mathrm{d}t.\]
Using that \((1-m_{i}^{2})\leq 1\) and applying Lemma 2.4, we conclude that
\[\begin{split}&\Big{\|}\frac{m_{ij}}{1-m_{i}^{2}}-\beta\sum_{l\neq i}g_{il}m_{lj}^{(i)}\Big{\|}_{2}^{2}\\ &\leq\frac{C}{N}\sum_{k}\sup_{t\in[0,1]}\big{\|}\big{(}\Delta_{i}m_{jk}^{[i]}\big{)}(t)\big{\|}_{2}^{2}+C\sup_{t\in[0,1]}\big{\|}\big{(}\delta_{i}m_{j;1}^{[i]}\big{)}(t)\big{\|}_{2}^{2}\\ &=C\sup_{t\in[0,1]}\bigg{(}\frac{1}{(N-1)^{2}}\sum_{j_{1},j_{2}}\big{\|}\big{(}\Delta_{i}m_{j_{1}j_{2}}^{[i]}\big{)}(t)\big{\|}_{2}^{2}+\frac{1}{N-1}\sum_{j}\big{\|}\big{(}\delta_{i}m_{j;1}^{[i]}\big{)}(t)\big{\|}_{2}^{2}\bigg{)}\leq CN^{-5/2}.\end{split} \tag{3.1}\]
Notice that in the second step of the last bound we used the symmetry among the sites \(j\in[N]\setminus\{i\}\) in order to average over both indices of the two-point functions.
We conclude this section with the proof of Prop. 1.2. Our goal is to show that
\[\mathbf{Y}=\big{(}\mathbf{D}+\beta^{2}(1-q)-\beta\mathbf{G}\big{)}\mathbf{M}- \mathrm{id}_{\mathbb{R}^{N}}\in\mathbb{R}^{N\times N}\]
with entries
\[y_{ik}=\begin{cases}(1-m_{i}^{2})\big{(}\beta^{2}(1-q)-\beta\sum_{j}g_{ij} \delta_{i}m_{j}^{[i]}\big{)}&:i=k,\\ (1-m_{i}^{2})^{-1}m_{ik}+\beta^{2}(1-q)m_{ik}-\beta\sum_{j}g_{ij}m_{jk}&:i\neq k,\end{cases} \tag{3.2}\]
has operator norm \(\|\mathbf{Y}\|_{\mathrm{op}}\) bounded by \(o(1)\|\mathbf{M}\|_{\mathrm{op}}\), up to a quantity \(O(1)\) which is of order one with high probability. Our proof relies on Prop. 1.1. In the following, the standard Frobenius (or Hilbert-Schmidt) norm is denoted by \(\|\cdot\|_{\mathrm{F}}\); recall that \(\|\cdot\|_{\mathrm{op}}\leq\|\cdot\|_{\mathrm{F}}\).
Proof of Prop. 1.2.: A direct computation shows that
\[m_{jk}=\langle m_{j}^{[i]};m_{k}^{[i]}\rangle+\langle m_{jk}^{[i]}\rangle= \big{(}\delta_{i}m_{j}^{[i]}\big{)}m_{ik}+\epsilon_{i}m_{jk}^{[i]}+m_{i} \big{(}\delta_{i}m_{jk}^{[i]}\big{)}\]
so that, motivated by Prop. 1.1, we can decompose \(y_{ik}\) for \(i\neq k\) into
\[y_{ik} =\frac{m_{ik}}{1-m_{i}^{2}}-\beta\sum_{j}g_{ij}m_{jk}^{(i)}-\beta \sum_{j}g_{ij}\big{(}m_{jk}-m_{jk}^{(i)}\big{)}+\beta^{2}(1-q)m_{ik}\] \[=\Big{(}\beta^{2}(1-q)-\beta\sum_{j}g_{ij}\delta_{i}m_{j}^{[i]} \Big{)}m_{ik}+\Big{(}\frac{m_{ik}}{1-m_{i}^{2}}-\beta\sum_{j}g_{ij}m_{jk}^{(i) }\Big{)}\] \[\quad-\beta\sum_{j}g_{ij}\big{(}\Delta_{i}m_{jk}^{[i]}\big{)}- \beta m_{i}\sum_{j}g_{ij}\big{(}\delta_{i}m_{jk}^{[i]}\big{)}.\]
Comparing this with the diagonal entries of \(\mathbf{Y}\) in Eq. (3.2), we can split
\[\mathbf{Y}=\mathbf{Y}_{1}\mathbf{M}+\mathbf{Y}_{2}-\mathbf{Y}_{3}-\mathbf{Y}_ {4},\]
where the \(\mathbf{Y}_{j}\in\mathbb{R}^{N\times N}\), for \(j\in\{1,2,3,4\}\), are defined by
\[(\mathbf{Y}_{1})_{ik} =\Big{(}\beta^{2}(1-q)-\beta\sum_{j}g_{ij}\delta_{i}m_{j}^{[i]} \Big{)}\delta_{ik}, (\mathbf{Y}_{2})_{ik} =\begin{cases}0&:i=k,\\ \frac{m_{ik}}{1-m_{i}^{2}}-\beta\sum_{j}g_{ij}m_{jk}^{(i)}&:i\neq k,\end{cases}\] \[(\mathbf{Y}_{3})_{ik} =\begin{cases}0&:i=k,\\ \beta\sum_{j}g_{ij}\big{(}\Delta_{i}m_{jk}^{[i]}\big{)}&:i\neq k,\end{cases} (\mathbf{Y}_{4})_{ik} =\begin{cases}0&:i=k,\\ \beta m_{i}\sum_{j}g_{ij}\big{(}\delta_{i}m_{jk}^{[i]}\big{)}&:i\neq k.\end{cases}\]
The matrix \(\mathbf{Y}_{2}\) vanishes in norm when \(N\to\infty\), in the sense of probability, by Eq. (3.1) and Markov's inequality: for every \(\delta>0\), we find that
\[\mathbb{P}\big{(}\|\mathbf{Y}_{2}\|_{\mathrm{op}}>\delta\big{)}\leq\delta^{- 2}\mathbb{E}\|\mathbf{Y}_{2}\|_{\mathrm{op}}^{2}\leq\delta^{-2}\mathbb{E}\| \mathbf{Y}_{2}\|_{\mathrm{F}}^{2}=N^{2}\max_{i,j\in[N]:i\neq j}\mathbb{E}\big{(} \mathbf{Y}_{2})_{ij}^{2}\leq C\delta^{-2}N^{-1/2}.\]
By Lemma 2.5, the same argument implies \(\lim_{N\to\infty}\|\mathbf{Y}_{3}\|_{\mathrm{op}}=0\), and we have that
\[\mathbb{P}\big{(}\|\mathbf{Y}_{4}\|_{\mathrm{op}}>K\big{)} \leq K^{-2}\beta^{2}N^{2}\max_{i,j\in[N]:i\neq j}\mathbb{E}\big{(} \mathbf{Y}_{4})_{ij}^{2}\] \[\leq CK^{-2}N^{2}N^{-1}\sum_{j_{1}}\Big{\|}\sum_{j}g_{1j}\big{(} \delta_{1}m_{jj_{1}}^{[1]}\big{)}\Big{\|}_{2}^{2}\leq CK^{-2},\]
for some \(C>0\), independent of \(N\), where we used symmetry among the sites to reduce the computation to the first row of \(\mathbf{G}\). Combining these remarks, Prop. 1.2 follows if
\[\lim_{N\to\infty}\|\mathbf{Y}_{1}\|_{\mathrm{op}}=0 \tag{3.3}\]
in probability.
To prove (3.3), we argue first as in (3.1) to obtain that
\[\mathbb{E}\Big{|}\sum_{j}g_{ij}\Big{(}\delta_{i}m_{j}^{[i]}-\beta\sum_{k}g_{ik }m_{kj}^{(i)}\Big{)}\Big{|}^{2}\leq CN^{-5/2}.\]
Indeed, this bound is a direct consequence of Ito's lemma, which shows that
\[\mathrm{d}\sum_{j}g_{ij}\Big{(}\delta_{i}m_{j}^{[i]}-\beta\sum_{k}g _{ik}m_{kj}^{(i)}\Big{)} =\beta\!\sum_{j}\Big{(}\delta_{i}m_{j}^{[i]}\!-\!\beta\sum_{k}g_{ik }m_{kj}^{(i)}\Big{)}\,\mathrm{d}g_{ij}\!+\!\beta\sum_{j,k}g_{ij}\big{(}\Delta_{ i}m_{jk}^{[i]}\big{)}\mathrm{d}g_{ik}\] \[\quad+\frac{\beta^{2}}{2}(1-N^{-1})\sum_{j}g_{ij}\big{(}\delta_{i} m_{j;1}^{[i]}\big{)}\,\mathrm{d}t+\frac{\beta}{N}\sum_{j}\big{(}\Delta_{i}m_{jk}^{[ i]}\big{)}\,\mathrm{d}t,\]
and estimating the different contributions on the r.h.s. of the previous identity using Lemmas 2.4 and 2.5. Combining the previous bound with Markov's inequality thus implies
\[\lim_{N\to\infty}\sup_{i\in[N]}\Big{|}(\mathbf{Y}_{1})_{ii}-\Big{(}\beta^{2}(1 -q)-\beta^{2}\sum_{j}g_{ij}\sum_{k}g_{ik}m_{kj}^{(i)}\Big{)}\Big{|}=0\]
in probability.
We now split
\[\beta^{2}(1-q)-\beta^{2}\sum_{j}g_{ij}\sum_{k}g_{ik}m_{kj}^{(i)}\] \[=\beta^{2}\big{\langle}R_{1,2}-q\big{\rangle}^{(i)}-\beta^{2}\sum _{j}\big{(}g_{ij}^{2}-(N-1)^{-1}\big{)}\big{(}1-\big{(}m_{j}^{(i)}\big{)}^{2} \big{)}-\beta^{2}\sum_{j,k:j\neq k}g_{ij}g_{ik}m_{kj}^{(i)},\]
and it is straightforward to deduce from Theorem 2.1 that
\[\lim_{N\to\infty}\sup_{i\in[N]}\big{|}\big{\langle}R_{1,2}-q\big{\rangle}^{(i )}\big{|}=0\]
in probability. Indeed, this follows from Markov's inequality combined with Theorem 2.1, the continuity of \((\beta,\mu_{h},\sigma_{h}^{2})\to q_{\beta,h}=q_{\beta,\mu_{h},\sigma_{h}^{2}}\) and the fact that \(\langle\cdot\rangle^{(i)}\) is a SK measure with interaction coupling \(\beta^{\prime}=\beta^{\prime}_{N}\) and Gaussian external field \(h^{\prime}=h^{\prime}_{N}\) such that \(|\beta-\beta^{\prime}|\to 0\), \(|\sigma_{h}^{2}-\sigma_{h^{\prime}}^{2}|\to 0\) as \(N\to\infty\) and \(|\mu_{h}-\mu_{h^{\prime}}|=0\), as remarked already at (2.10).
On the other hand, a standard exponential concentration bound yields
\[\lim_{N\to\infty}\sup_{i\in[N]}\Big{|}\sum_{j}\big{(}g_{ij}^{2}-(N-1)^{-1} \big{)}\big{(}1-\big{(}m_{j}^{(i)}\big{)}^{2}\big{)}\Big{|}=\lim_{N\to\infty} \sup_{i\in[N]}\Big{|}\sum_{j}\big{(}g_{ij}^{2}-N^{-1}\big{)}\Big{|}=0\]
and, finally, applying once more Theorem 2.1, a straightforward computation shows that
\[\mathbb{E}\Big{(}\sum_{j,k:j\neq k}g_{ij}g_{ik}m_{kj}^{(i)}\Big{)}^{4}\leq CN ^{-4}\sum_{j_{1},j_{2},j_{3},j_{4}}\mathbb{E}\left(m_{j_{1}j_{2}}^{(i)}\right) ^{2}\!\big{(}m_{j_{3}j_{4}}^{(i)}\big{)}^{2}.\]
Then, using the identity
\[\big{(}m_{j_{1}j_{2}}^{(i)}\big{)}^{2}=\big{\langle}\big{(}\sigma_{j_{1}}^{1}- \sigma_{j_{1}}^{2}\big{)}\big{(}\sigma_{j_{2}}^{1}-\sigma_{j_{2}}^{3}\big{)} \big{(}\sigma_{j_{1}}^{-1}-\sigma_{j_{1}}^{-2}\big{)}\big{(}\sigma_{j_{2}}^{- 1}-\sigma_{j_{2}}^{-3}\big{)}\big{\rangle}^{(i)}\]
together with Theorem 2.1 and Cauchy-Schwarz, we arrive at
\[\mathbb{E}\Big{(}\sum_{j,k:j\neq k}g_{ij}g_{ik}m^{(i)}_{kj}\Big{)}^{4}\leq C\, \mathbb{E}\left(\big{\langle}\big{(}R_{1,-1}+R_{2,-2}-R_{1,-2}-R_{2,-1}\big{)}^{ 2}\big{\rangle}^{(i)}\right)^{2}\leq CN^{-2}.\]
Hence, by Markov's inequality, we obtain that
\[\lim_{N\to\infty}\sup_{i\in[N]}\Big{|}\sum_{j,k:j\neq k}g_{ij}g_{ik}m^{(i)}_{kj }\Big{|}=0,\]
in probability. This proves \(\lim_{N\to\infty}\|\mathbf{Y}_{1}\|_{\mathrm{op}}=0\) and concludes Prop. 1.2.
## 4 Proof of Proposition 1.3
In this section we conclude the proof of Theorem 1.4 by proving Prop. 1.3. Our arguments rely on the main results of [5, 6, 14] and translate, up to a few modifications, the main ideas from [9] to the present context. For the rest of this paper, \(h>0\) denotes a deterministic field strength as in the definition of \(H_{N}\) in Eq. (1.1) in Section 1.
We first recall the main result of [14], which states that, in a suitable subregion of the replica symmetric phase, the magnetization vector \(\mathbf{m}\) is well approximated by an iterative solution to the TAP [2] equations introduced by Bolthausen in [5, 6]. To state this precisely, we need to recall Bolthausen's construction of the TAP solution and collect some of its properties. Here, we find it convenient to follow the conventions and notation used in [6, 8]. We start with the sequences \((\alpha_{k})_{k\in\mathbb{N}}\), \((\gamma_{k})_{k\in\mathbb{N}}\) and \((\Gamma_{k})_{k\in\mathbb{N}}\) which have initializations
\[\alpha_{1}=\sqrt{q}\gamma_{1},\ \ \ \ \gamma_{1}=E\tanh(h+\beta\sqrt{q}Z),\ \ \ \ \Gamma_{1}^{2}=\gamma_{1}^{2}\]
and we define \(\psi:[0,q]\to[0,q]\) by
\[\psi(t)=E\tanh\left(h+\beta\sqrt{t}Z+\beta\sqrt{q-t}Z^{\prime}\right)\tanh \left(h+\beta\sqrt{t}Z+\beta\sqrt{q-t}Z^{\prime\prime}\right)\]
Then, we set recursively
\[\alpha_{k}=\psi(\alpha_{k-1}),\ \ \ \ \gamma_{k}=\frac{\alpha_{k}-\Gamma_{k-1}^{2} }{\sqrt{q-\Gamma_{k-1}^{2}}},\ \ \ \ \Gamma_{k}^{2}=\sum_{j=1}^{k}\gamma_{j}^{2}.\]
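As a numerical aside (a minimal sketch with illustrative parameter values, not used anywhere in the proofs), these sequences can be computed with Gauss-Hermite quadrature; for \(\psi\) we use that, conditionally on \(Z\), the two \(\tanh\) factors are independent, so \(\psi(t)=E_{Z}\big[\big(E_{Z^{\prime}}\tanh(h+\beta\sqrt{t}Z+\beta\sqrt{q-t}Z^{\prime})\big)^{2}\big]\).

```python
import numpy as np

nodes, wts = np.polynomial.hermite_e.hermegauss(60)
wts = wts / wts.sum()

def gauss_E(f):
    """E f(Z) for Z ~ N(0,1), with f vectorised over its argument."""
    return np.dot(wts, f(nodes))

def bolthausen_sequences(beta, h, K=10):
    q = 0.5
    for _ in range(500):                     # fixed point q of (1.4)
        q = gauss_E(lambda z: np.tanh(h + beta * np.sqrt(q) * z) ** 2)

    def psi(t):
        inner = np.array([gauss_E(lambda zp, z=z: np.tanh(h + beta * np.sqrt(t) * z
                                                          + beta * np.sqrt(q - t) * zp))
                          for z in nodes])
        return np.dot(wts, inner ** 2)       # E_Z [ (E_{Z'} tanh(...))^2 ]

    gamma1 = gauss_E(lambda z: np.tanh(h + beta * np.sqrt(q) * z))
    alphas, gammas, Gamma2 = [np.sqrt(q) * gamma1], [gamma1], gamma1 ** 2
    for _ in range(2, K + 1):
        a = psi(alphas[-1])                              # alpha_k = psi(alpha_{k-1})
        g = (a - Gamma2) / np.sqrt(q - Gamma2)           # gamma_k
        alphas.append(a)
        gammas.append(g)
        Gamma2 += g ** 2                                 # Gamma_k^2
    return q, alphas, gammas, Gamma2

q, alphas, gammas, Gamma2 = bolthausen_sequences(beta=0.8, h=0.3)
print("q =", q, " alpha_K =", alphas[-1], " Gamma_K^2 =", Gamma2)   # both approach q
```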
**Lemma 4.1**.: _([5, Lemma 2.2, Corollary 2.3, Lemma 2.4], [6, Lemma 2])_
1. \(\psi\) _is strictly increasing and convex in_ \([0,q]\) _with_ \(0<\psi(0)<\psi(q)=q\)_. If (_1.3_) is satisfied, then_ \(q\) _is the unique fixed point of_ \(\psi\) _in_ \([0,q]\)_._
2. \((\alpha_{k})_{k\in\mathbb{N}}\) _is increasing and_ \(\alpha_{k}>0\) _for all_ \(k\in\mathbb{N}\)_. If (_1.3_) is satisfied, then_ \(\lim_{k\to\infty}\alpha_{k}=q\)
3. _For_ \(k\geq 2\)_, we have that_ \(0<\Gamma_{k-1}^{2}<\alpha_{k}<q\) _and that_ \(0<\gamma_{k}<\sqrt{q-\Gamma_{k-1}^{2}}\)_. If (_1.3_) is satisfied, then_ \(\lim_{k\to\infty}\Gamma_{k}^{2}=q\) _and, consequently,_ \(\lim_{k\to\infty}\gamma_{k}=0\)_._
Next, we recall Bolthausen's decomposition of the interaction matrix \(\mathbf{G}\), which yields a convenient representation of the iterative TAP solution. Without loss of generality, we may assume that the interaction matrix \(\mathbf{G}\in\mathbb{R}^{N\times N}\) in (1.1) is equal to
\[\mathbf{G}=\frac{\mathbf{W}+\mathbf{W}^{T}}{\sqrt{2}}=\bar{\mathbf{W}}\]
for a random matrix \(\mathbf{W}=(w_{ij})_{1\leq i,j\leq N}\in\mathbb{R}^{N\times N}\) with zero diagonal and i.i.d. Gaussian entries \(w_{ij}\sim\mathcal{N}(0,N^{-1})\) (without symmetry constraint). Here and in the following, we abbreviate \(\bar{\mathbf{X}}=(\mathbf{X}+\mathbf{X}^{T})/\sqrt{2}\) for \(\mathbf{X}\in\mathbb{R}^{N\times N}\). Then, we set
\[\mathbf{W}^{(1)}=\mathbf{W},\ \ \ \ \mathbf{G}^{(1)}=\bar{\mathbf{W}}^{(1)},\ \ \ \ \phi^{(1)}=N^{-1/2}\,\mathbf{1}\in\mathbb{R}^{N},\ \ \ \ \mathbf{m}^{(1)}=\sqrt{q}\mathbf{1}\in\mathbb{R}^{N}\]
and, assuming \(\mathbf{W}^{(s)},\mathbf{G}^{(s)}=\bar{\mathbf{W}}^{(s)},\phi^{(s)},\mathbf{m}^{(s)}\) are defined for \(1\leq s\leq k\), we set3

\[\zeta^{(s)}=\mathbf{G}^{(s)}N^{1/2}\phi^{(s)}.\]

Footnote 3: Notice that here we use the convention that \((\phi^{(s)})_{s=1}^{k}\) forms an orthonormal sequence in \((\mathbb{R}^{N},(\cdot,\cdot))\). In contrast to that, in [6, 8] the inner product \(\langle\cdot,\cdot\rangle\) and the tensor product \(\otimes\) are rescaled by a factor \(N^{-1}\).
Moreover, \(\mathcal{G}_{k}\) denotes the \(\sigma\)-algebra
\[\mathcal{G}_{k}=\sigma\big{(}\mathbf{W}^{(s)}N^{1/2}\phi^{(s)},(\mathbf{W}^{( s)})^{T}N^{1/2}\phi^{(s)}:1\leq s\leq k\big{)}.\]
Conditional expectations and probabilities with respect to \(\mathcal{G}_{k}\) are denoted by \(\mathbb{E}_{k}\) and \(\mathbb{P}_{k}\), respectively. We then define the iterative cavity field \(\mathbf{z}^{(k+1)}\in\mathbb{R}^{N}\) and the iterative TAP solution \(\mathbf{m}^{(k+1)}\in\mathbb{R}^{N}\) at step \(k+1\) by
\[\mathbf{z}^{(k+1)}=\sum_{s=1}^{k-1}\gamma_{s}\zeta^{(s)}+\sqrt{q-\Gamma_{k-1}^ {2}}\zeta^{(k)},\ \ \ \ \mathbf{m}^{(k+1)}=\tanh\big{(}h\mathbf{1}+\beta\,\mathbf{z}^{(k+1)}\big{)} \tag{4.1}\]
and we also set
\[\phi^{(k+1)}=\frac{\mathbf{m}^{(k+1)}-\sum_{s=1}^{k}(\mathbf{m}^{(k+1)},\phi^ {(s)})\phi^{(s)}}{\left\|\mathbf{m}^{(k+1)}-\sum_{s=1}^{k}(\mathbf{m}^{(k+1)},\phi^{(s)})\phi^{(s)}\right\|},\]
recalling that \(\phi^{(k+1)}\) is well-defined for all \(k<N\) [6, Lemma 5]. Finally, we define
\[\mathbf{W}^{(k+1)}= \,\mathbf{W}^{(k)}-\rho^{(k)},\ \ \ \ \text{for}\] \[\rho^{(k)}= \,\mathbf{W}^{(k)}\phi^{(k)}\otimes\phi^{(k)}+\phi^{(k)}\otimes(\mathbf{W}^{(k)})^{T}\phi^{(k)}-(\mathbf{W}^{(k)}\phi^{(k)},\phi^{(k)})\,\phi^{(k)}\otimes\phi^{(k)},\]
where \((\mathbf{x}\otimes\mathbf{y})(\mathbf{z})=(\mathbf{y},\mathbf{z})\,\mathbf{x}\), and by symmetrization
\[\mathbf{G}^{(k+1)} =\bar{\mathbf{W}}^{(k+1)}=\mathbf{G}^{(k)}-\bar{\rho}^{(k)},\quad \text{ for }\] \[\bar{\rho}^{(k)} =N^{-1/2}\zeta^{(k)}\otimes\phi^{(k)}+N^{-1/2}\phi^{(k)}\otimes \zeta^{(k)}-N^{-1/2}(\zeta^{(k)},\phi^{(k)})\,\phi^{(k)}\otimes\phi^{(k)}.\]
The sequence \((\phi^{(s)})_{s=1}^{k}\) is orthonormal in \((\mathbb{R}^{N},(\cdot,\cdot))\), and \(\mathbf{P}^{(k)}\), \(\mathbf{Q}^{(k)}\) denote the orthogonal projections
\[\mathbf{P}^{(k)}=\sum_{s=1}^{k}\phi^{(s)}\otimes\phi^{(s)}=(P^{(k)}_{ij})_{1 \leq i,j\leq N},\quad\ \mathbf{Q}^{(k)}=\mathbf{1}_{\mathbb{R}^{N}}-\mathbf{P}^{(k)}=(Q^{(k)}_{ij})_ {1\leq i,j\leq N}.\]
Notice that \(\mathbf{P}^{(k)}\) equals the orthogonal projection onto
\[\text{span}\left(\mathbf{m}^{(s)}:1\leq s\leq k\right)=\text{span}\left(\phi^{ (s)}:1\leq s\leq k\right).\]
In the next result we collect several useful facts about \((\mathbf{z}^{(s)})_{s=1}^{k},(\mathbf{m}^{(s)})_{s=1}^{k}\) and \((\mathbf{G}^{(s)})_{s=1}^{k}\). For sequences \((X_{N})_{N\geq 1},(Y_{N})_{N\geq 1}\) that may depend on parameters like \(\beta,h\), etc., we write
\[X_{N}\simeq Y_{N}\]
if and only if there exist positive constants \(c,C>0\), which may depend on the parameters, but which are independent of \(N\), such that for every \(t>0\) we have
\[\mathbb{P}(|X_{N}-Y_{N}|>t)\leq Ce^{-cNt^{2}}.\]
**Proposition 4.2**.: _([5, Prop. 2.5], [6, Prop. 4, Prop. 6, Lemmas 3, 11, 14 & 16])_
1. \(\mathbf{m}^{(k)}\) _and_ \(\phi^{(k)}\) _are_ \(\mathcal{G}_{k-1}\)_-measurable for every_ \(k\geq 1\) _and_ \[\mathbf{W}^{(k)}\phi^{(s)}=(\mathbf{W}^{(k)})^{T}\phi^{(s)}=\mathbf{G}^{(k)} \phi^{(s)}=0,\ \forall\ s<k.\]
2. _Conditionally on_ \(\mathcal{G}_{k-2}\)_,_ \(\mathbf{W}^{(k)}\) _and_ \(\mathbf{W}^{(k-1)}\) _are Gaussian, and we have that_ \[\mathbb{E}_{k-2}\,w^{(k)}_{ij}w^{(k)}_{st}=\frac{1}{N}Q^{(k-1)}_{is}Q^{(k-1)}_ {jt}\] _and, consequently, that_ \[\mathbb{E}_{k-2}\,g^{(k)}_{ij}g^{(k)}_{st}=\frac{1}{N}Q^{(k-1)}_{is}Q^{(k-1)} _{jt}+\frac{1}{N}Q^{(k-1)}_{it}Q^{(k-1)}_{js}.\]
3. _Conditionally on_ \(\mathcal{G}_{k-2}\)_,_ \(\mathbf{W}^{(k)}\) _is independent of_ \(\mathcal{G}_{k-1}\)_. In particular, conditionally on_ \(\mathcal{G}_{k-1}\)_,_ \(\mathbf{W}^{(k)}\) _and_ \(\mathbf{G}^{(k)}\) _are Gaussian with the same covariance as in 2)._
4. _For every_ \(k\geq 1\)_,_ \(N^{-1/2}(\zeta^{(k)},\phi^{(k)})\) _is unconditionally Gaussian with variance_ \(2/N\)_._
5. _Conditionally on_ \(\mathcal{G}_{k-1}\)_, the random variables_ \(\zeta^{(k)}\) _are Gaussian with_ \[\mathbb{E}_{k-1}\zeta^{(k)}_{i}\zeta^{(k)}_{j}=Q^{(k-1)}_{ij}+\phi^{(k)}_{i}\phi^ {(k)}_{j}.\]
6. _For every_ \(k\geq 1\) _and_ \(s<k\)_, one has_ \[N^{-1/2}(\boldsymbol{m}^{(k)},\phi^{(s)})\simeq\gamma_{s},\;\;N^ {-1/2}(\boldsymbol{m}^{(k)},\phi^{(k)})\simeq\sqrt{q-\Gamma_{k-1}^{2}},\] \[N^{-1}(\boldsymbol{m}^{(k)},\boldsymbol{m}^{(s)})\simeq\alpha_{s}, \;\;N^{-1}(\boldsymbol{m}^{(k)},\boldsymbol{m}^{(k)})\simeq q.\]
_In particular, \(\lim_{N\to\infty}N^{-1}\mathbb{E}\left\|\boldsymbol{m}^{(k)}\right\|^{2}=q\) and, assuming (1.3), we have that_
\[\limsup_{j,k\to\infty}\lim_{N\to\infty}N^{-1}\mathbb{E}\left\|\boldsymbol{m}^{ (j)}-\boldsymbol{m}^{(k)}\right\|^{2}=0.\]
7. _For every_ \(k\geq 1\)_, we have that_ \[\lim_{N\to\infty}\left\|N^{-1/2}\big{(}\zeta^{(k)},\boldsymbol{m}^{(k+1)}\big{)}-\beta(1-q)\sqrt{q-\Gamma_{k-1}^{2}}\right\|_{2}=0,\] _and, for_ \(1\leq s\leq k-1\)_, we have that_ \[\lim_{N\to\infty}\left\|N^{-1/2}\big{(}\zeta^{(s)},\boldsymbol{m}^{(k+1)}\big{)}-\beta(1-q)\gamma_{s}\right\|_{2}=0.\]
In addition to the properties listed in Prop. 4.2 it is well-known that the sequences \((\boldsymbol{m}^{(s)})_{s=1}^{k}\) and \((\boldsymbol{z}^{(s)})_{s=2}^{k+1}\) have an explicit joint limiting law, which we recall in the next proposition. In its statement, convergence of a sequence \((\mu_{n})_{n\in\mathbb{N}}\) of probability measures on \(\mathbb{R}^{d}\), \(\mu_{n}\in\mathcal{P}(\mathbb{R}^{d})\) for each \(n\in\mathbb{N}\), to a limiting measure \(\mu\in\mathcal{P}(\mathbb{R}^{d})\) in \(\mathcal{W}_{2}(\mathbb{R}^{d})\) means that
\[\lim_{n\to\infty}\mathcal{W}_{2}(\mu_{n},\mu)=0,\;\;\;\;\;\text{where}\;\;\; \;\;\;\mathcal{W}_{2}(\mu,\nu)=\inf_{\Pi}\sqrt{E\|\mathbf{X}-\mathbf{Y}\|^{2}}\]
denotes the usual Wasserstein 2-distance between two probability measures \(\mu,\nu\in\mathcal{P}(\mathbb{R}^{d})\) (the infimum is taken over all couplings \(\Pi\in\mathcal{P}(\mathbb{R}^{2d})\) of \(\mu\) and \(\nu\), and \((\mathbf{X},\mathbf{Y})\) has joint distribution \((\mathbf{X},\mathbf{Y})\sim\Pi\)). The next theorem follows from combining [22, 5, 6].
**Theorem 4.3**.: _In the sense of probability (w.r.t. the disorder **G**), we have that_
\[\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}\delta_{m^{(1)}_{i},\ldots,m^{(k)}_{ i},z^{(2)}_{i},\ldots,z^{(k+1)}_{i}}=\mathcal{L}_{(M_{1},\ldots,M_{k},Z_{2}, \ldots,Z_{k+1})}\]
_in \(\mathcal{W}_{2}(\mathbb{R}^{2k})\), where \(M_{1}=\sqrt{q}\), \(M_{s}\stackrel{{s>1}}{{=}}\tanh(h+\beta Z_{s})\), \((Z_{2},\ldots,Z_{k+1})\sim\mathcal{N}(\boldsymbol{0},\mathbf{K}_{\leq k})\) is a Gaussian vector in \(\mathbb{R}^{k}\) and \(\mathcal{L}_{(M_{1},\ldots,M_{k},Z_{2},\ldots,Z_{k+1})}\) is the law of \((M_{1},\ldots,M_{k},Z_{2},\ldots,Z_{k+1})\). The covariance \(\mathbf{K}_{\leq k}\in\mathbb{R}^{k\times k}\) equals \(K_{st}=\operatorname{Cov}(Z_{s+1},Z_{t+1})=EM_{s}M_{t}\); in particular_
\[K_{st}\stackrel{{ s,t>1}}{{=}}E\tanh(h+\beta Z_{s})\tanh(h+\beta Z _{t})=EM_{s}M_{t}=\begin{cases}q&:s=t,\\ \alpha_{s\wedge t}&:s\neq t.\end{cases}\]
**Remark 4.3**.: _In [22, 5], the iterative TAP sequence is defined differently compared to (4.1). Namely, in the language of [22, 5] one sets \(\widetilde{\boldsymbol{m}}^{(0)}=\boldsymbol{0},\widetilde{\boldsymbol{m}}^{(1) }=\sqrt{q}\boldsymbol{1}\) and for \(k\geq 1\)_
\[\widetilde{\boldsymbol{z}}^{(k+1)}=\boldsymbol{G}\widetilde{\boldsymbol{m}}^{ (k)}-\beta(1-q)\widetilde{\boldsymbol{m}}^{(k-1)},\ \ \ \ \widetilde{\boldsymbol{m}}^{(k+1)}=\tanh\big{(}h \boldsymbol{1}+\beta\,\widetilde{\boldsymbol{z}}^{(k+1)}\big{)}.\]
_The validity of Theorem 4.3 then follows from_
\[\lim_{N\to\infty}N^{-1}\mathbb{E}\left\|\widetilde{\boldsymbol{z}}^{(k)}- \boldsymbol{z}^{(k)}\right\|^{2}=0,\ \ \ \ \lim_{N\to\infty}N^{-1}\mathbb{E}\left\|\widetilde{\boldsymbol{m}}^{(k)}- \boldsymbol{m}^{(k)}\right\|^{2}=0\]
_for every \(k\geq 2\), which can be proved inductively (note that \(\widetilde{\boldsymbol{m}}^{(1)}=\boldsymbol{m}^{(1)},\widetilde{\boldsymbol{ m}}^{(2)}=\boldsymbol{m}^{(2)}\)), based on Prop. 4.2 and the decomposition \(\boldsymbol{G}=\boldsymbol{G}^{(k)}+\sum_{s=1}^{k-1}\bar{\rho}^{(s)}\), and which implies that_
\[\lim_{N\to\infty}\mathcal{W}_{2}\bigg{(}\frac{1}{N}\sum_{i=1}^{N}\delta_{m_{i }^{(1)},\ldots,m_{i}^{(k)},z_{i}^{(2)},\ldots,z_{i}^{(k+1)}},\frac{1}{N}\sum_ {i=1}^{N}\delta_{\widetilde{m}_{i}^{(1)},\ldots,\widetilde{m}_{i}^{(k)}, \widetilde{z}_{i}^{(2)},\ldots,\widetilde{z}_{i}^{(k+1)}}\bigg{)}=0.\]
_in probability._
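To make the recursion in Remark 4.3 concrete, here is a minimal numerical sketch (ours, not code from [22, 5] or [14]). It draws a GOE-type disorder matrix with zero diagonal and off-diagonal variance \(1/N\), computes \(q\) from the replica-symmetric fixed-point equation \(q=E\tanh^{2}(h+\beta\sqrt{q}Z)\), \(Z\sim\mathcal{N}(0,1)\) (consistent with the limiting law in Theorem 4.3), and runs \(\widetilde{\boldsymbol{z}}^{(k+1)}=\boldsymbol{G}\widetilde{\boldsymbol{m}}^{(k)}-\beta(1-q)\widetilde{\boldsymbol{m}}^{(k-1)}\), \(\widetilde{\boldsymbol{m}}^{(k+1)}=\tanh\big{(}h\boldsymbol{1}+\beta\widetilde{\boldsymbol{z}}^{(k+1)}\big{)}\). The parameter values are arbitrary illustrations; the variant of Remark 4.4 below is obtained by replacing \(1-q\) with \(1-N^{-1}\|\widetilde{\boldsymbol{m}}^{(k)}\|^{2}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def rs_fixed_point(beta, h, n_mc=200_000, n_iter=200):
    # Fixed point of q = E tanh^2(h + beta*sqrt(q)*Z), Z ~ N(0,1), approximated
    # by Monte Carlo over a fixed Gaussian sample (crude but fine for illustration).
    z = rng.standard_normal(n_mc)
    q = 0.5
    for _ in range(n_iter):
        q = np.mean(np.tanh(h + beta * np.sqrt(q) * z) ** 2)
    return q

def tap_iteration(beta, h, N=2000, n_steps=10):
    # GOE-type disorder: symmetric, zero diagonal, off-diagonal variance 1/N.
    A = rng.standard_normal((N, N)) / np.sqrt(N)
    G = (A + A.T) / np.sqrt(2)
    np.fill_diagonal(G, 0.0)
    q = rs_fixed_point(beta, h)
    m_prev = np.zeros(N)              # tilde m^(0) = 0
    m = np.sqrt(q) * np.ones(N)       # tilde m^(1) = sqrt(q) * 1
    for _ in range(n_steps):
        z = G @ m - beta * (1.0 - q) * m_prev      # field with Onsager correction
        m_prev, m = m, np.tanh(h + beta * z)
    return G, q, m

G, q, m = tap_iteration(beta=0.3, h=0.5)
print("q =", round(q, 4), "  N^{-1}||m||^2 =", round(float(np.mean(m ** 2)), 4))
```

For \(\beta\) well inside the high-temperature region, \(N^{-1}\|\widetilde{\boldsymbol{m}}^{(k)}\|^{2}\) stabilises near \(q\), in line with Theorem 4.3.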
The properties of Bolthausen's iterative solution \((\boldsymbol{m}^{(k)})_{k\geq 1}\) suggest that it is close to the magnetization vector \(\boldsymbol{m}\) of the SK model, at least in a suitable subregion of the RS phase. This is proved in [14], based on a locally uniform overlap concentration assumption.
**Theorem 4.4** ([14]).: _Assume that \(\beta,h>0\) are such that for some \(\delta>0\) we have_
\[\lim_{N\to\infty}\sup_{\beta-\delta\leq\beta^{\prime}\leq\beta}\mathbb{E} \left\langle|R_{12}-q_{\beta^{\prime},h}|^{2}\right\rangle_{\beta^{\prime},h} =0\,. \tag{4.2}\]
_Then we have that_
\[\lim_{k\to\infty}\lim_{N\to\infty}N^{-1}\,\mathbb{E}\left\|\boldsymbol{m}- \boldsymbol{m}^{(k)}\right\|^{2}=0. \tag{4.3}\]
**Remark 4.4**.: _Observe that (4.2) is satisfied under (1.3) & (1.5), by Eq. (2.2) in Theorem 2.1. Notice furthermore that [14] defines the TAP iteration differently compared to (4.1):_
\[(\boldsymbol{z}^{\prime})^{(k+1)}=\ \boldsymbol{G}\,(\boldsymbol{m}^{\prime})^{(k )}-\beta\big{(}1-N^{-1}\|(\boldsymbol{m}^{\prime})^{(k)}\|^{2}\big{)}\,( \boldsymbol{m}^{\prime})^{(k-1)},\ \ \ (\boldsymbol{m}^{\prime})^{(k+1)}=\tanh\big{(}h+\beta( \boldsymbol{z}^{\prime})^{(k+1)}\big{)}.\]
_Similar remarks as for Theorem 4.3 apply: based on Prop. 4.2 and induction, one obtains_
\[\lim_{N\to\infty}N^{-1}\mathbb{E}\left\|(\boldsymbol{z}^{\prime})^{(k)}- \boldsymbol{z}^{(k)}\right\|^{2}=0,\ \ \ \lim_{N\to\infty}N^{-1}\mathbb{E}\left\|(\boldsymbol{m}^{\prime})^{(k)}- \boldsymbol{m}^{(k)}\right\|^{2}=0,\]
_which implies (4.3) with \(\boldsymbol{m}^{(k)}\) as defined in (4.1)._
Equipped with the above preparations, we now turn to the proof of Prop. 1.3, which is a direct consequence of the next proposition. For a sequence \((X_{N})_{N\in\mathbb{N}}\), we set
\[\text{p-}\liminf_{N\to\infty}X_{N} =\sup\Big{\{}t\in\mathbb{R}\colon\lim_{N\to\infty}\mathbb{P}(X_{N }\leq t)=0\Big{\}},\] \[\text{p-}\limsup_{N\to\infty}X_{N} =\inf\Big{\{}t\in\mathbb{R}\colon\lim_{N\to\infty}\mathbb{P}(X_{N }\geq t)=0\Big{\}}.\]
The next result follows by translating the main ideas of [9, Section 4] to the present context. For completeness, we carry out the key steps, with a few modifications, in detail below.
**Proposition 4.5**.: _Assume that \((\beta,h)\) satisfy (1.3), and define for \(\textbf{v}\in(-1,1)^{N}\) the diagonal matrix \(\textbf{D}_{\textbf{v}}\in\mathbb{R}^{N\times N}\) and the set \(S_{N,\varepsilon,k}\subset\mathbb{R}^{N}\times(-1,1)^{N}\) by_
\[\big{(}\textbf{D}_{\textbf{v}}\big{)}_{ij}=\frac{\delta_{ij}}{1-v_{i}^{2}},\ \ S_{N, \varepsilon,k}=\Big{\{}(\textbf{u},\textbf{v})\in\mathbb{R}^{N}\times(-1,1)^{N }:\|\textbf{u}\|^{2}=1,\|\textbf{v}-\textbf{m}^{(k)}\|^{2}\leq N\varepsilon \Big{\}}. \tag{4.4}\]
_Then, there exists a constant \(c=c_{\beta,h}>0\), that is independent of \(N\), such that_
\[\liminf_{\varepsilon\to 0}\ \liminf_{k\to\infty}\ \text{p-}\liminf_{N\to\infty} \left[\inf_{(\textbf{u},\textbf{v})\in S_{N,\varepsilon,k}}\Big{(}\textbf{u},\big{(}\textbf{D}_{\textbf{v}}+\beta^{2}(1-q)-\frac{2\beta^{2}}{N}\textbf{v} \otimes\textbf{v}-\beta\textbf{G}\big{)}\textbf{u}\Big{)}\right]\geq c.\]
**Remark 4.5**.: _As already remarked in Section 1, note that Prop. 4.5 only requires \((\beta,h)\) to satisfy the AT condition (1.3). The additional assumption (1.5) in Prop. 1.3 is used to approximate the magnetization **m** of the SK model by the iterative TAP solution \(\textbf{m}^{(k)}\)._
Before giving the proof of Prop. 4.5, let us first record that it implies Prop. 1.3.
Proof of Prop. 1.3, given Prop. 4.5.: Observe, first of all, that \(\textbf{D}=\textbf{D}_{\textbf{m}}\), by (1.8) & (4.4). Now, choosing \(\varepsilon>0\) sufficiently small and \(k\in\mathbb{N}\) sufficiently large, Prop. 4.5 implies that
\[\inf_{(\textbf{u},\textbf{v})\in S_{N,\varepsilon,k}}\Big{(}\textbf{u},\big{(} \textbf{D}_{\textbf{v}}+\beta^{2}(1-q)-\frac{2\beta^{2}}{N}\textbf{v}\otimes \textbf{v}-\beta\textbf{G}\big{)}\textbf{u}\Big{)}\geq\widetilde{c}/2\]
for some constant \(\widetilde{c}>0\), with probability tending to one as \(N\to\infty\). By the assumptions (1.3) & (1.5), we can apply Theorem 4.4, which, combined with Markov's inequality, implies
\[\big{\|}\textbf{m}-\textbf{m}^{(k)}\big{\|}^{2}\leq N\varepsilon\]
with probability tending to one as \(N\to\infty\). Thus
\[\inf_{\|\textbf{u}\|^{2}=1}\big{(}\textbf{u},\big{(}\textbf{D}+ \beta^{2}(1-q)-\beta\textbf{G}\big{)}\textbf{u}\big{)}\] \[\geq\inf_{\|\textbf{u}\|^{2}=1}\big{(}\textbf{u},\big{(}\textbf{D} _{\textbf{m}}+\beta^{2}(1-q)-\frac{2\beta^{2}}{N}\textbf{m}\otimes\textbf{m}- \beta\textbf{G}\big{)}\textbf{u}\big{)}\] \[\geq\inf_{(\textbf{u},\textbf{v})\in S_{N,\varepsilon,k}}\Big{(} \textbf{u},\big{(}\textbf{D}_{\textbf{v}}+\beta^{2}(1-q)-\frac{2\beta^{2}}{N} \textbf{v}\otimes\textbf{v}-\beta\textbf{G}\big{)}\textbf{u}\Big{)}\geq \widetilde{c}/2,\]
i.e. \(\textbf{D}+\beta^{2}(1-q)-\beta\textbf{G}\geq c\) for \(c=\widetilde{c}/2\), with probability tending to one as \(N\to\infty\).
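As a purely illustrative sanity check of this conclusion (ours, not a substitute for the proof, with arbitrary parameters chosen well inside the high-temperature regime), one can form the matrix \(\mathbf{D}_{\mathbf{v}}+\beta^{2}(1-q)\operatorname{id}-\beta\mathbf{G}\) at \(\mathbf{v}=\boldsymbol{m}^{(k)}\) obtained from a few TAP steps and inspect its smallest eigenvalue numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
beta, h, N, n_steps = 0.3, 0.5, 1500, 12

# Disorder matrix: symmetric, zero diagonal, off-diagonal variance 1/N.
A = rng.standard_normal((N, N)) / np.sqrt(N)
G = (A + A.T) / np.sqrt(2)
np.fill_diagonal(G, 0.0)

# Replica-symmetric fixed point q = E tanh^2(h + beta*sqrt(q)*Z) by Monte Carlo.
z_mc = rng.standard_normal(100_000)
q = 0.5
for _ in range(200):
    q = np.mean(np.tanh(h + beta * np.sqrt(q) * z_mc) ** 2)

# A few TAP steps to obtain an approximate magnetization vector m.
m_prev, m = np.zeros(N), np.sqrt(q) * np.ones(N)
for _ in range(n_steps):
    m_prev, m = m, np.tanh(h + beta * (G @ m - beta * (1.0 - q) * m_prev))

# Smallest eigenvalue of D_v + beta^2*(1-q)*id - beta*G at v = m, with (D_v)_ii = 1/(1 - v_i^2).
H = np.diag(1.0 / (1.0 - m ** 2)) + beta ** 2 * (1.0 - q) * np.eye(N) - beta * G
print("smallest eigenvalue:", np.linalg.eigvalsh(H).min())
```

For such a small \(\beta\) the positivity is of course elementary (\(\mathbf{D}_{\mathbf{m}}\succeq\operatorname{id}\) while \(\|\beta\mathbf{G}\|_{\mathrm{op}}\approx 2\beta\)); the content of Prop. 4.5 is that a uniform lower bound persists under the AT condition (1.3).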
The rest of this section is devoted to the proof of Prop. 4.5.
Proof of Prop. 4.5.: **Step 1:** We adapt the main idea of [9] and apply the Sudakov-Fernique inequality [27, 18, 26], conditionally on \(\mathcal{G}_{k}\), to reduce the problem to a solvable variational problem. To this end, we first write
\[\beta\big{(}\textbf{u},\textbf{Gu}\big{)}=\beta\big{(}\textbf{u},\textbf{G}^{(k+1)}\textbf{u}\big{)}+\frac{2\beta}{\sqrt{N}}\sum_{s=1}^{k}(\textbf{u},\zeta^{(s)})(\phi^{(s)},\textbf{u})+o_{\textbf{u}}(1)\]
for an error \(o_{\mathbf{u}}(1)=-\beta N^{-1/2}\sum_{s=1}^{k}(\zeta^{(s)},\phi^{(s)})\,|(\phi^{(s)},\mathbf{u})|^{2}\) which satisfies \(\sup_{\|\mathbf{u}\|=1}|o_{\mathbf{u}}(1)|\to 0\) as \(N\to\infty\) almost surely, by Prop. 4.2, part 5). This means that
\[\operatorname{p-}\limsup_{N\to\infty}\bigg{[}\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\Big{(}\mathbf{u},\big{(}\beta\mathbf{G}+\frac{2\beta^{2}}{N}\mathbf{v}\otimes\mathbf{v}-\mathbf{D}_{\mathbf{v}}-\beta^{2}(1-q)\big{)}\mathbf{u}\Big{)}\bigg{]}\] \[=\operatorname{p-}\limsup_{N\to\infty}\bigg{[}\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\Big{(}\mathbf{u},\big{(}\beta\mathbf{G}^{(k+1)}+\frac{2\beta}{\sqrt{N}}\sum_{s=1}^{k}\zeta^{(s)}\otimes\phi^{(s)}+\frac{2\beta^{2}}{N}\mathbf{v}\otimes\mathbf{v}-\mathbf{D}_{\mathbf{v}}-\beta^{2}(1-q)\big{)}\mathbf{u}\Big{)}\bigg{]}.\]
Now, comparing the (conditionally on \(\mathcal{G}_{k}\)) Gaussian processes
\[\big{(}\mathbf{X}_{\mathbf{u},\mathbf{v}}\big{)}_{(\mathbf{u}, \mathbf{v})\in S_{N,\varepsilon,k}} =\Big{(}\beta\big{(}\mathbf{u},\mathbf{G}^{(k+1)}\mathbf{u}\big{)} +\mathbf{f}(\mathbf{u},\mathbf{v})\Big{)}_{(\mathbf{u},\mathbf{v})\in S_{N, \varepsilon,k}},\] \[\big{(}\mathbf{Y}_{\mathbf{u},\mathbf{v}}\big{)}_{(\mathbf{u}, \mathbf{v})\in S_{N,\varepsilon,k}} =\Big{(}\frac{2\beta}{\sqrt{N}}\|\mathbf{Q}^{(k)}\mathbf{u}\|( \mathbf{Q}^{(k)}\mathbf{u},\xi)+\mathbf{f}(\mathbf{u},\mathbf{v})\Big{)}_{( \mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}},\]
where \(\xi\sim\mathcal{N}(0,\operatorname{id}_{\mathbb{R}^{N}})\) denotes a Gaussian vector independent of the remaining disorder and where we abbreviate
\[\mathbf{f}(\mathbf{u},\mathbf{v})=\frac{2\beta}{\sqrt{N}}\sum_{s=1}^{k}( \mathbf{u},\zeta^{(s)})(\phi^{(s)},\mathbf{u})+\!\frac{2\beta^{2}}{N}(\mathbf{ u},\mathbf{v})^{2}\!-(\mathbf{u},\mathbf{D}_{\mathbf{v}}\mathbf{u})-\beta^{2}(1-q),\]
we have \(\mathbb{E}_{k}\mathbf{X}_{\mathbf{u},\mathbf{v}}=\mathbb{E}_{k}\mathbf{Y}_{ \mathbf{u},\mathbf{v}}=\mathbf{f}(\mathbf{u},\mathbf{v})\) and an application of Prop. 4.2, part 4), shows that
\[\mathbb{E}_{k}\big{(}\mathbf{X}_{\mathbf{u},\mathbf{v}}-\mathbf{X}_{\mathbf{u }^{\prime},\mathbf{v}^{\prime}}\big{)}^{2}\leq\mathbb{E}_{k}\big{(}\mathbf{Y} _{\mathbf{u},\mathbf{v}}-\mathbf{Y}_{\mathbf{u}^{\prime},\mathbf{v}^{\prime}} \big{)}^{2}\]
for every \((\mathbf{u},\mathbf{v}),(\mathbf{u}^{\prime},\mathbf{v}^{\prime})\in S_{N, \varepsilon,k}\). Thus, by Vitale's extension [30] of the Sudakov-Fernique inequality [27, 18, 26] we obtain that a.s.
\[\mathbb{E}_{k}\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\Big{[} \beta\big{(}\mathbf{u},\mathbf{G}^{(k+1)}\mathbf{u}\big{)}+\mathbf{f}(\mathbf{u },\mathbf{v})\Big{]}\leq\mathbb{E}_{k}\!\!\sup_{(\mathbf{u},\mathbf{v})\in S_{ N,\varepsilon,k}}\!\Big{[}\frac{2\beta}{\sqrt{N}}\|\mathbf{Q}^{(k)}\mathbf{u}\|( \mathbf{Q}^{(k)}\mathbf{u},\xi)+\mathbf{f}(\mathbf{u},\mathbf{v})\Big{]}. \tag{4.5}\]
Next, observing that conditionally on \(\mathcal{G}_{k}\), we have the distributional equality
\[\mathbf{G}^{(k+1)}\stackrel{{\mathrm{d}}}{{=}}\mathbf{Q}^{(k)} \frac{1}{\sqrt{2N}}\big{(}\mathbf{U}+\mathbf{U}^{T}\big{)}\mathbf{Q}^{(k)}\]
for some random matrix \(\mathbf{U}=(u_{ij})_{1\leq i,j\leq N}\in\mathbb{R}^{N\times N}\) with i.i.d. entries \(u_{ij}\sim\mathcal{N}(0,1)\) (without symmetry constraint) for \(i\neq j\) and \(u_{ii}=0\), a standard application of Gaussian concentration (see, for instance, [28, Theorem 1.3.4]) implies that
\[\mathbb{P}_{k}\bigg{(}\bigg{|}\sup_{(\mathbf{u},\mathbf{v})\in S_{ N,\varepsilon,k}}\bigg{[}\beta\big{(}\mathbf{u},\mathbf{G}^{(k+1)}\mathbf{u} \big{)}+\mathbf{f}(\mathbf{u},\mathbf{v})\bigg{]}\] \[\qquad\qquad\qquad-\mathbb{E}_{k}\sup_{(\mathbf{u},\mathbf{v})\in S _{N,\varepsilon,k}}\bigg{[}\beta\big{(}\mathbf{u},\mathbf{G}^{(k+1)}\mathbf{u }\big{)}+\mathbf{f}(\mathbf{u},\mathbf{v})\bigg{]}\bigg{|}>t\bigg{)}\leq 2e^{-CNt^{2}}.\]
Indeed, for \(\mathbf{X},\mathbf{Y}\in\mathbb{R}^{N\times N}\), writing \(\bar{\mathbf{X}}=(\mathbf{X}+\mathbf{X}^{T})/\sqrt{2}\) and \(\bar{\mathbf{Y}}=(\mathbf{Y}+\mathbf{Y}^{T})/\sqrt{2}\), notice that
\[\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\left[\frac{\beta}{\sqrt{N}}\big{(}\mathbf{Q}^{(k)}\mathbf{u},\bar{\mathbf{X}}\mathbf{Q}^{(k)}\mathbf{u}\big{)}+\mathbf{f}(\mathbf{u},\mathbf{v})\right]-\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\left[\frac{\beta}{\sqrt{N}}\big{(}\mathbf{Q}^{(k)}\mathbf{u},\bar{\mathbf{Y}}\mathbf{Q}^{(k)}\mathbf{u}\big{)}+\mathbf{f}(\mathbf{u},\mathbf{v})\right]\] \[=\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\left[\frac{\sqrt{2}\beta}{\sqrt{N}}\big{(}\mathbf{Q}^{(k)}\mathbf{u},(\mathbf{X}-\mathbf{Y})\mathbf{Q}^{(k)}\mathbf{u}\big{)}+\frac{\sqrt{2}\beta}{\sqrt{N}}\big{(}\mathbf{Q}^{(k)}\mathbf{u},\mathbf{Y}\mathbf{Q}^{(k)}\mathbf{u}\big{)}+\mathbf{f}(\mathbf{u},\mathbf{v})\right]\] \[\quad-\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\left[\frac{\sqrt{2}\beta}{\sqrt{N}}\big{(}\mathbf{Q}^{(k)}\mathbf{u},\mathbf{Y}\mathbf{Q}^{(k)}\mathbf{u}\big{)}+\mathbf{f}(\mathbf{u},\mathbf{v})\right]\] \[\leq\sup_{\mathbf{u}^{\prime}\in\mathbb{R}^{N}:\|\mathbf{u}^{\prime}\|^{2}\leq 1}\frac{\sqrt{2}\beta}{\sqrt{N}}\big{(}\mathbf{u}^{\prime},(\mathbf{X}-\mathbf{Y})\,\mathbf{u}^{\prime}\big{)}\leq\frac{C}{\sqrt{N}}\bigg{(}\sum_{i,j=1}^{N}(\mathbf{X}-\mathbf{Y})_{ij}^{2}\bigg{)}^{1/2}=\frac{C}{\sqrt{N}}\|\mathbf{X}-\mathbf{Y}\|\]
so that, upon switching the roles of \(\mathbf{X}\in\mathbb{R}^{N\times N}\) and \(\mathbf{Y}\in\mathbb{R}^{N\times N}\), we get
\[\bigg{|}\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\left[\frac{\beta}{\sqrt{N}}\big{(}\mathbf{Q}^{(k)}\mathbf{u},\bar{\mathbf{X}}\mathbf{Q}^{(k)}\mathbf{u}\big{)}+\mathbf{f}(\mathbf{u},\mathbf{v})\right]-\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\left[\frac{\beta}{\sqrt{N}}\big{(}\mathbf{Q}^{(k)}\mathbf{u},\bar{\mathbf{Y}}\mathbf{Q}^{(k)}\mathbf{u}\big{)}+\mathbf{f}(\mathbf{u},\mathbf{v})\right]\bigg{|}\leq\frac{C}{\sqrt{N}}\|\mathbf{X}-\mathbf{Y}\|\]
for the deterministic constant \(C=\sqrt{2}\beta>0\). Arguing along the same lines, we also find
\[\mathbb{P}_{k}\bigg{(}\left|\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\left[\frac{2\beta}{\sqrt{N}}\|\mathbf{Q}^{(k)}\mathbf{u}\|(\mathbf{Q}^{(k)}\mathbf{u},\xi)+\mathbf{f}(\mathbf{u},\mathbf{v})\right]\right.\] \[\left.\qquad\qquad\qquad-\mathbb{E}_{k}\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\left[\frac{2\beta}{\sqrt{N}}\|\mathbf{Q}^{(k)}\mathbf{u}\|(\mathbf{Q}^{(k)}\mathbf{u},\xi)+\mathbf{f}(\mathbf{u},\mathbf{v})\right]\right|>t\bigg{)}\leq 2e^{-CNt^{2}},\]
and since \(C>0\) is independent of \(\mathcal{G}_{k}\), we can take the expectation \(\mathbb{E}(\cdot)\) over the previous two tail bounds to see that they hold true unconditionally. Thus, we conclude
\[\lim_{N\to\infty}\left|\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}} \left[\beta\big{(}\mathbf{u},\mathbf{G}^{(k+1)}\mathbf{u}\big{)}+\mathbf{f}( \mathbf{u},\mathbf{v})\right]-\mathbb{E}_{k}\sup_{(\mathbf{u},\mathbf{v})\in S _{N,\varepsilon,k}}\left[\beta\big{(}\mathbf{u},\mathbf{G}^{(k+1)}\mathbf{u} \big{)}+\mathbf{f}(\mathbf{u},\mathbf{v})\right]\right|=0\]
and
\[\lim_{N\to\infty}\bigg{|}\sup_{(\mathbf{u},\mathbf{v})\in S_{N, \varepsilon,k}}\left[\frac{2\beta}{\sqrt{N}}\|\mathbf{Q}^{(k)}\mathbf{u}\|( \mathbf{Q}^{(k)}\mathbf{u},\xi)+\mathbf{f}(\mathbf{u},\mathbf{v})\right]\] \[\qquad\qquad\qquad\qquad-\mathbb{E}_{k}\sup_{(\mathbf{u}, \mathbf{v})\in S_{N,\varepsilon,k}}\left[\frac{2\beta}{\sqrt{N}}\|\mathbf{Q}^{ (k)}\mathbf{u}\|(\mathbf{Q}^{(k)}\mathbf{u},\xi)+\mathbf{f}(\mathbf{u}, \mathbf{v})\right]\right|=0\]
in the sense of probability. Combining these observations with (4.5), we finally arrive at
\[\limsup_{\varepsilon\to 0}\limsup_{k\to\infty}\text{p-}\limsup_{N\to\infty} \bigg{[}\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\Big{(}\mathbf{u},\big{(}\beta\mathbf{G}+\frac{2\beta^{2}}{N}\mathbf{v}\otimes\mathbf{v}- \mathbf{D}_{\mathbf{v}}-\beta^{2}(1-q)\big{)}\mathbf{u}\Big{)}\bigg{]} \tag{4.6}\] \[\leq\limsup_{\varepsilon\to 0}\limsup_{k\to\infty}\text{p-}\limsup_{N\to\infty} \bigg{[}\sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\text{F}\big{(} \mathbf{u},\mathbf{v},\xi,\mathbf{M}^{(k)},\mathbf{Z}^{(k)}\big{)}\bigg{]},\]
where \(\mathbf{M}^{(k)}\) and \(\mathbf{Z}^{(k)}\) denote the matrices
\[\mathbf{M}^{(k)}=(\mathbf{m}^{(1)}\,\mathbf{m}^{(2)}\,\dots\,\mathbf{m}^{(k)}) \in\mathbb{R}^{N\times k},\ \ \ \ \mathbf{Z}^{(k)}=(\mathbf{z}^{(2)},\dots,\mathbf{z}^{(k+1)})\in\mathbb{R}^{N \times k}\]
and where
\[\mathrm{F}\big{(}\mathbf{u},\mathbf{v},\xi,\mathbf{M}^{(k)}, \mathbf{Z}^{(k)}\big{)} =\frac{2\beta}{\sqrt{N}}\|\mathbf{Q}^{(k)}\mathbf{u}\|(\mathbf{u},\xi)+2\beta\big{(}\mathbf{u},\mathbf{Z}^{(k)}\big{(}\mathbf{M}^{(k)T}\mathbf{ M}^{(k)}\big{)}^{-1}\mathbf{M}^{(k)T}\mathbf{u}\big{)}\] \[\quad+\frac{2\beta^{2}}{N}(\mathbf{u},\mathbf{m}^{(k)})^{2}-( \mathbf{u},\mathbf{D_{v}}\mathbf{u})-\beta^{2}(1-q).\]
Here, we used additionally that
\[\sup_{\|\mathbf{v}-\mathbf{m}^{(k)}\|^{2}\leq N\varepsilon}N^{-1}\big{\|} \mathbf{v}\otimes\mathbf{v}-\mathbf{m}^{(k)}\otimes\mathbf{m}^{(k)}\big{\|}_ {\mathrm{op}}\leq 2\sqrt{\varepsilon}\]
and that, in the sense of probability, we have
\[\lim_{N\to\infty}\big{|}\sup_{\|\mathbf{u}\|^{2}=1}N^{-1/2}\big{(}\mathbf{P}^ {(k)}\mathbf{u},\xi\big{)}\big{|}\leq\lim_{N\to\infty}N^{-1/2}\big{\|}\mathbf{ P}^{(k)}\xi\big{\|} =0,\]
\[\lim_{N\to\infty}\Big{\|}\frac{1}{\sqrt{N}}\sum_{s=1}^{k}\zeta^{(s)}\otimes \phi^{(s)}-\,\mathbf{Z}^{(k)}\big{(}\mathbf{M}^{(k)T}\mathbf{M}^{(k)}\big{)}^{ -1}\mathbf{M}^{(k)T}\Big{\|}_{\mathrm{op}}=0.\]
The last two limits readily follow, e.g., from a simple second moment bound and, respectively, from Prop. 4.2. Notice that both matrices \(N^{-1/2}\sum_{s=1}^{k}\zeta^{(s)}\otimes\phi^{(s)}\) and \(\mathbf{Z}^{(k)}\big{(}\mathbf{M}^{(k)T}\mathbf{M}^{(k)}\big{)}^{-1}\mathbf{M}^{(k)T}\) are at most of rank \(k\) (with \(\mathbf{Q}^{(k)}\mathbb{R}^{N}\) contained in their kernels), so that the norm convergence follows from pointwise convergence on the vectors \((\mathbf{m}^{(s)})_{s=1}^{k}\).
**Step 2:** The remainder of the proof is based on analyzing the optimization problem on the r.h.s. of Eq. (4.6). This can be done with the same arguments as in [9, Sections 4.1 to 4.4]. For completeness, we translate the arguments from [9] to the present setting. In particular, we point out at which steps the AT condition (1.3) enters the analysis.
First, we control the r.h.s. of Eq. (4.6) through an optimization problem that only involves the limiting distributions of \(\mathbf{M}^{(k)}\) and \(\mathbf{Z}^{(k)}\), based on Theorem 4.3. Setting
\[\mu_{N}^{(k)}=\frac{1}{N}\sum_{i=1}^{N}\delta_{\sqrt{N}u_{i},v_{i},\xi_{i}, \mathbf{M}^{(k)}_{i\cdot},\mathbf{Z}^{(k)}_{i\cdot}}\in\mathcal{P}(\mathbb{R}^ {2k+3}),\]
where by \(\mathbf{X}_{i\cdot}\in\mathbb{R}^{n}\) we denote the \(i\)-th row of \(\mathbf{X}\in\mathbb{R}^{m\times n}\), and setting
\[\mathbb{E}_{\mu}(f)=\int\mu(dx)\,f(x),\ \ \mathbb{P}_{\mu}(S)=\int\mu(dx)\, \mathbf{1}_{S}(x)=\mu(S),\ \ \langle f,g\rangle_{L^{2}(d\mu)}=\int\mu(dx)\,f(x)g(x)\]
for \(\mu\in\mathcal{P}(\mathbb{R}^{2k+3})\), we have that
\[\mathrm{F}\big{(}\mathbf{u},\mathbf{v},\xi,\mathbf{M}^{(k)},\mathbf{Z}^{(k)} \big{)}=\Phi\big{(}\mu_{N}^{(k)}\big{)},\]
where for every \(\mu\in\mathcal{P}(\mathbb{R}^{2k+3})\) with finite second moment, we define
\[\Phi(\mu) =2\beta\,\mathbb{E}_{\mu}(\mathrm{U}\underline{Z}^{T})\big{[} \mathbb{E}_{\mu}(\underline{\mathrm{M}}\,\underline{\mathrm{M}}^{T})\big{]}^{-1 }\mathbb{E}_{\mu}(\underline{\mathrm{M}}\mathrm{U})+2\beta\|\mathbf{Q}_{ \underline{\mathrm{M}}}\mathrm{U}\|_{L^{2}(d\mu)}\langle\mathrm{U},\Xi\rangle_{ L^{2}(d\mu)}\] \[\quad+2\beta^{2}\langle M_{k},\mathrm{U}\rangle_{L^{2}(d\mu)}^{2 }-\mathbb{E}_{\mu}\big{(}(1-\mathrm{V}^{2})^{-1}\mathrm{U}^{2}\big{)}-\beta^{2} (1-q).\]
Here, the coordinates in the integrals over \(\mathbb{R}^{2k+3}\) w.r.t. \(\mu\) are denoted by \((\mathrm{U},\mathrm{V},\Xi,\underline{\mathrm{M}},\underline{\mathrm{Z}})\) (or, in other words, \(\mathcal{L}(\mathrm{U},\mathrm{V},\Xi,\underline{\mathrm{M}},\underline{ \mathrm{Z}})=\mu\)) and \(\mathbf{Q}_{\underline{\mathrm{M}}}\) denotes the projection onto the orthogonal complement of the coordinates of \(\underline{\mathrm{M}}=(M_{1},\ldots,M_{k})\), i.e.
\[\mathbf{Q}_{\underline{\mathrm{M}}}\mathrm{U}=\mathrm{U}-\underline{\mathrm{M} }^{T}\big{(}\mathbb{E}_{\mu}\,\underline{\mathrm{M}}\,\underline{\mathrm{M}}^ {T}\big{)}^{-1}\mathbb{E}_{\mu}\,\underline{\mathrm{M}}\mathrm{U}.\]
To stay consistent with the construction in (4.1) and with the previous notation, we write in the following \(\underline{\mathrm{Z}}=(Z_{2},\ldots,Z_{k+1})\). Now, applying [9, Lemma 1] (whose proof in [9, Appendix B.2] carries over directly to the present context upon replacing the functional \(\mathrm{F}_{\mathrm{x},k}\) from [9] by \(\Phi\) defined above, based on Prop. 4.2), we obtain the upper bound
\[\limsup_{\varepsilon\to 0}\limsup_{k\to\infty}\mathrm{p-}\limsup_{N\to\infty} \sup_{(\mathbf{u},\mathbf{v})\in S_{N,\varepsilon,k}}\mathrm{F}\big{(}\mathbf{ u},\mathbf{v},\xi,\mathbf{M}^{(k)},\mathbf{Z}^{(k)}\big{)}\leq\limsup_{ \varepsilon\to 0}\limsup_{k\to\infty}\sup_{\mu\in\mathcal{S}_{\varepsilon,k}}\Phi(\mu), \tag{4.7}\]
where
\[\mathcal{S}_{\varepsilon,k} =\big{\{}\mu=\mathcal{L}(\mathrm{U},\mathrm{V},\Xi,\underline{ \mathrm{M}},\underline{\mathrm{Z}}):\,\|\mathrm{U}\|_{L^{2}(d\mu)}^{2}=1,\;\| \mathrm{V}-M_{k}\|_{L^{2}(d\mu)}^{2}\leq\varepsilon,\mathbb{P}_{\mu}(| \mathrm{V}|<1)=1,\] \[\underline{\mathrm{Z}}=(Z_{2},\ldots,Z_{k+1})\sim\mathcal{N}( \mathbf{0},\mathbf{K}_{\leq k}),\,\underline{\mathrm{M}}=(M_{1},\ldots,M_{k}) \text{ with }M_{1}=\sqrt{q},\] \[M_{s}\overset{s>1}{=}\tanh(h+\beta Z_{s}),\,\Xi\sim\mathcal{N} (0,1)\text{ independent of }\underline{\mathrm{Z}}\,\big{\}}.\]
Next, using that \(\mathbb{E}_{\mu}(\underline{\mathrm{Z}}\,\underline{\mathrm{Z}}^{T})= \mathbb{E}_{\mu}(\underline{\mathrm{M}}\,\underline{\mathrm{M}}^{T})=\mathbf{ K}_{\leq k}\), we observe that
\[\big{\|}\underline{\mathrm{Z}}^{T}\big{[}\mathbb{E}_{\mu}( \underline{\mathrm{M}}\,\underline{\mathrm{M}}^{T})\big{]}^{-1}\mathbb{E}_{ \mu}(\underline{\mathrm{M}}\mathrm{U})\big{\|}_{L^{2}(d\mu)}^{2}+\|\mathbf{Q} _{\underline{\mathrm{M}}}\mathrm{U}\|_{L^{2}(d\mu)}^{2}\|\Xi\|_{L^{2}(d\mu)}^{2}\] \[=\mathbb{E}_{\mu}(\underline{\mathrm{M}}\mathrm{U})^{T}\big{[} \mathbb{E}_{\mu}(\underline{\mathrm{M}}\,\underline{\mathrm{M}}^{T})\big{]}^ {-1}\mathbb{E}_{\mu}(\underline{\mathrm{Z}}\,\underline{\mathrm{Z}}^{T}) \big{[}\mathbb{E}_{\mu}(\underline{\mathrm{M}}\,\underline{\mathrm{M}}^{T}) \big{]}^{-1}\mathbb{E}_{\mu}(\underline{\mathrm{M}}\mathrm{U})+\|\mathbf{Q}_{ \underline{\mathrm{M}}}\mathrm{U}\|_{L^{2}(d\mu)}^{2}=1\]
so that, by the independence of \(\underline{\mathrm{Z}}\) and \(\Xi\), we have that under \(\mu\)
\[\underline{\mathrm{Z}}^{T}\big{[}\mathbb{E}_{\mu}(\underline{\mathrm{M}}\, \underline{\mathrm{M}}^{T})\big{]}^{-1}\mathbb{E}_{\mu}(\underline{\mathrm{M }}\mathrm{U})+\|\mathbf{Q}_{\underline{\mathrm{M}}}\mathrm{U}\|_{L^{2}(d\mu)} \Xi\sim\mathcal{N}(0,1).\]
Arguing similarly that
\[\big{\langle}Z_{k+1},\underline{\mathrm{Z}}^{T}\big{[}\mathbb{E}_{\mu}( \underline{\mathrm{M}}\,\underline{\mathrm{M}}^{T})\big{]}^{-1}\mathbb{E}_{\mu} (\underline{\mathrm{M}}\mathrm{U})\big{\rangle}_{L^{2}(d\mu)}=\langle M_{k}, \mathrm{U}\rangle_{L^{2}(d\mu)}\]
and using that \(\|Z_{k+1}\|_{L^{2}(d\mu)}=\|M_{k}\|_{L^{2}(d\mu)}=\sqrt{q}\), we can thus write
\[\underline{\mathrm{Z}}^{T}\big{[}\mathbb{E}_{\mu}(\underline{\mathrm{M}}\, \underline{\mathrm{M}}^{T})\big{]}^{-1}\mathbb{E}_{\mu}(\underline{\mathrm{M}} \mathrm{U})+\|\mathbf{Q}_{\underline{\mathrm{M}}}\mathrm{U}\|_{L^{2}(d\mu)}\Xi=q ^{-1}\langle M_{k},\mathrm{U}\rangle_{L^{2}(d\mu)}Z_{k+1}+\|\mathbf{Q}_{M_{k}} \mathrm{U}\|_{L^{2}(d\mu)}\,\Xi^{\prime}\]
for some Gaussian \(\Xi^{\prime}\sim\mathcal{N}(0,1)\), which is independent of \(Z_{k+1}\), and where \(Q_{M_{k}}\) denotes the projection onto the orthogonal complement of \(M_{k}\) in \(L^{2}(d\mu)\). In particular, we have
\[\Phi(\mu) =\frac{2\beta}{q}\,\langle\mathrm{U},M_{k}\rangle_{L^{2}(d\mu)} \langle Z_{k+1},\mathrm{U}\rangle_{L^{2}(d\mu)}+2\beta\|\mathbf{Q}_{M_{k}} \mathrm{U}\|_{L^{2}(d\mu)}\langle\mathrm{U},\Xi^{\prime}\rangle_{L^{2}(d\mu)}\] \[\quad+2\beta^{2}\langle M_{k},U\rangle_{L^{2}(d\mu)}^{2}-\mathbb{ E}_{\mu}\big{(}(1-\mathrm{V}^{2})^{-1}\mathrm{U}^{2}\big{)}-\beta^{2}(1-q),\]
and setting \(M_{k+1}=\tanh(h+\beta Z_{k+1})\), so that under the AT condition (1.3) we have \(\|M_{k+1}-M_{k}\|_{L^{2}(d\mu)}^{2}=2q-2\alpha_{k}\to 0\) as \(k\to\infty\) by Lemma 4.1 and Prop. 4.2, part 4), we find
\[\limsup_{\varepsilon\to 0}\limsup_{k\to\infty}\sup_{\mu\in\mathcal{S}_{ \varepsilon,k}}\Phi(\mu)\leq\limsup_{\varepsilon\to 0}\sup_{\mu\in \mathcal{S}_{\varepsilon}}\Psi(\mu), \tag{4.8}\]
where \(\Psi\) is defined by
\[\Psi(\mu) =2\beta q^{-1/2}\,\langle\mathrm{U},\mathrm{M}\rangle_{L^{2}(d \mu)}\langle\mathrm{Z},\mathrm{U}\rangle_{L^{2}(d\mu)}+2\beta\sqrt{1-q^{-1} \langle\mathrm{M},\mathrm{U}\rangle_{L^{2}(d\mu)}^{2}}\langle\mathrm{W}, \mathrm{U}\rangle_{L^{2}(d\mu)}\] \[\quad+2\beta^{2}\langle\mathrm{M},\mathrm{U}\rangle_{L^{2}(d\mu )}^{2}-\mathbb{E}_{\mu}\big{(}(1-\mathrm{V}^{2})^{-1}\mathrm{U}^{2}\big{)}- \beta^{2}(1-q),\]
for every \(\mu=\mathcal{L}(\mathrm{M},\mathrm{U},\mathrm{V},\mathrm{W},\mathrm{Z})\in \mathcal{S}_{\varepsilon}\), with \(\mathcal{S}_{\varepsilon}\) defined by
\[\mathcal{S}_{\varepsilon} =\big{\{}\mathcal{L}(\mathrm{M},\mathrm{U},\mathrm{V},\mathrm{W},\mathrm{Z}):\|\mathrm{U}\|_{L^{2}(d\mu)}^{2}=1,\|\mathrm{V}-\mathrm{M}\|_{L^{ 2}(d\mu)}^{2}\leq\varepsilon,\] \[\quad\quad\mathbb{P}_{\mu}(|\mathrm{V}|<1)=1,(\mathrm{W},\mathrm{ Z})\sim\mathcal{N}\big{(}0,\mathrm{id}_{\mathbb{R}^{2}}\big{)},\mathrm{M}=\tanh(h+ \beta\sqrt{q}\mathrm{Z})\big{\}}.\]
**Step 3:** Finally, we need to analyze the optimization problem on the r.h.s. in Eq. (4.8). Here, we introduce Lagrange multipliers and simply upper bound the r.h.s. by
\[\Psi(\mu) =2\beta q^{-1/2}\,\langle\mathrm{U},\mathrm{M}\rangle_{L^{2}(d \mu)}\langle\mathrm{Z},\mathrm{U}\rangle_{L^{2}(d\mu)}+2\beta\sqrt{1-q^{-1} \langle\mathrm{M},\mathrm{U}\rangle_{L^{2}(d\mu)}^{2}}\langle\mathrm{W}, \mathrm{U}\rangle_{L^{2}(d\mu)}\] \[\quad+2\beta^{2}\langle\mathrm{M},\mathrm{U}\rangle_{L^{2}(d\mu) }^{2}-\mathbb{E}_{\mu}\big{(}(1-\mathrm{V}^{2})^{-1}\mathrm{U}^{2}\big{)}- \beta^{2}(1-q)\] \[=2\beta q^{-1/2}x\ \langle\mathrm{Z},\mathrm{U}\rangle_{L^{2}(d\mu) }+2\beta\sqrt{1-q^{-1}x^{2}}\,\langle\mathrm{W},\mathrm{U}\rangle_{L^{2}(d \mu)}+2\beta^{2}x^{2}-\beta^{2}(1-q)\] \[\quad+\lambda_{u}(\|U\|_{L^{2}(d\mu)}^{2}-1)+\lambda_{x}(\langle \mathrm{U},\mathrm{M}\rangle_{L^{2}(d\mu)}-x)-\mathbb{E}_{\mu}\big{(}(1- \mathrm{V}^{2})^{-1}\mathrm{U}^{2}\big{)}\] \[\leq 2\beta^{2}x^{2}-\beta^{2}(1-q)-\lambda_{u}-\lambda_{x}x+ \mathbb{E}_{\mu}\Theta(\mathrm{M},\mathrm{V},\mathrm{W},\mathrm{Z}),\]
where we set \(x=x(\mathrm{U},\mathrm{M})=\langle\mathrm{U},\mathrm{M}\rangle_{L^{2}(d\mu)}\) as well as
\[\Theta(m,v,w,z)=\sup_{u\in\mathbb{R}}\Big{[}\big{(}2\beta q^{-1/2}xz+2\beta \sqrt{1-q^{-1}x^{2}}\,w+\lambda_{x}m\big{)}u+\big{(}\lambda_{u}-(1-v^{2})^{-1 }\big{)}u^{2}\Big{]}.\]
Fixing from now on \(\lambda_{u}<1\) and assuming w.l.o.g. \(|v|<1\) (in accordance with the fact that \(\mathbb{P}_{\mu}(|V|<1)=1\)), strict concavity implies that
\[\Theta(m,v,w,z)=\frac{1}{4}\frac{\big{(}2\beta q^{-1/2}xz+2\beta\sqrt{1-q^{-1} x^{2}}\,w+\lambda_{x}m\big{)}^{2}}{\big{(}(1-v^{2})^{-1}-\lambda_{u}\big{)}}.\]
Together with the global Lipschitz continuity of \([-1,1]\ni v\mapsto\left((1-v^{2})^{-1}-\lambda_{u}\right)^{-1}\in[0,\infty)\), we thus obtain for every fixed \(\lambda_{u}<1\) and \(K>0\) the simple upper bound
\[\limsup_{\varepsilon\to 0}\sup_{\mu\in\mathcal{S}_{ \varepsilon}}\Psi(\mu)\] \[\leq\max_{|x|\leq 1}\min_{|\lambda_{x}|\leq K}\bigg{[}2\beta^{2}x^{2}- \beta^{2}(1-q)-\lambda_{u}-\lambda_{x}x\] \[\qquad\qquad\qquad+\mathbb{E}_{\mu}\frac{\left(2\beta q^{-1/2}x \mathrm{Z}+2\beta\sqrt{1-q^{-1}x^{2}}\,\mathrm{W}+\lambda_{x}\mathrm{M} \right)^{2}}{4(\cosh^{2}(h+\beta\sqrt{q}\mathrm{Z})-\lambda_{u})}\bigg{]} \tag{4.9}\] \[=\max_{|x|\leq 1}\min_{|\lambda_{x}|\leq K}\bigg{[}2\beta^{2}x^{2}- \beta^{2}(1-q)-\lambda_{u}+\mathbb{E}_{\mu}\frac{\beta^{2}(1-q^{-1}x^{2})}{( \cosh^{2}(h+\beta\sqrt{q}\mathrm{Z})-\lambda_{u})}\] \[\qquad\qquad\qquad-\lambda_{x}x+\mathbb{E}_{\mu}\frac{\beta^{2} q^{-1}x^{2}\mathrm{Z}^{2}+\beta q^{-1/2}x\lambda_{x}\mathrm{M}\mathrm{Z}+\lambda_{x}^ {2}\mathrm{M}^{2}/4}{(\cosh^{2}(h+\beta\sqrt{q}\mathrm{Z})-\lambda_{u})}\bigg{]}.\]
Notice that, in the last step, we used the independence of \(\mathrm{W}\) from \(\mathrm{Z}\) and, consequently, from \(\mathrm{M}=\tanh(h+\beta\sqrt{q}\mathrm{Z})\). Applying Gaussian integration by parts w.r.t. \(\mathrm{Z}\) and rescaling \(x\) and \(\lambda_{x}\) by factors \(1/\sqrt{q}\) and \(\sqrt{q}\), respectively, we find that
\[\max_{|x|\leq 1}\min_{|\lambda_{x}|\leq K}\bigg{[}2\beta^{2}x^{2}- \beta^{2}(1-q)-\lambda_{u}+\mathbb{E}_{\mu}\frac{\beta^{2}(1-q^{-1}x^{2})}{( \cosh^{2}(h+\beta\sqrt{q}\mathrm{Z})-\lambda_{u})}\] \[\qquad\qquad\qquad-\lambda_{x}x+\mathbb{E}_{\mu}\frac{\beta^{2}q ^{-1}x^{2}\mathrm{Z}^{2}+\beta q^{-1/2}x\lambda_{x}\mathrm{M}\mathrm{Z}+ \lambda_{x}^{2}\mathrm{M}^{2}/4}{(\cosh^{2}(h+\beta\sqrt{q}\mathrm{Z})- \lambda_{u})}\bigg{]}\] \[=\max_{|x|\leq q^{-1/2}}\bigg{\{}2\beta^{2}qx^{2}-\beta^{2}(1-q)- \lambda_{u}+\mathbb{E}_{\mu}\frac{\beta^{2}}{(\cosh^{2}(h+\beta\sqrt{q} \mathrm{Z})-\lambda_{u})}\] \[\qquad\qquad\qquad-\mathbb{E}_{\mu}\frac{2\beta^{4}q(\cosh^{2}(h +\beta\sqrt{q}\mathrm{Z})+\sinh^{2}(h+\beta\sqrt{q}\mathrm{Z}))}{(\cosh^{2} (h+\beta\sqrt{q}\mathrm{Z})-\lambda_{u})^{2}}x^{2}\] \[\qquad\qquad\qquad\qquad+\mathbb{E}_{\mu}\frac{8\beta^{4}q\cosh ^{2}(h+\beta\sqrt{q}\mathrm{Z})\sinh^{2}(h+\beta\sqrt{q}\mathrm{Z})}{(\cosh^{ 2}(h+\beta\sqrt{q}\mathrm{Z})-\lambda_{u})^{3}}x^{2} \tag{4.10}\] \[\qquad\qquad\qquad\qquad+\min_{|\lambda_{x}|\leq\sqrt{q}K}\mathbb{ E}_{\mu}\bigg{[}\frac{\beta^{2}}{\cosh^{2}(h+\beta\sqrt{q}\mathrm{Z})(\cosh^{2}(h+ \beta\sqrt{q}\mathrm{Z})-\lambda_{u})}x\lambda_{x}\] \[\qquad\qquad\qquad\qquad\qquad-\frac{2\beta^{2}\cosh^{2}(h+ \beta\sqrt{q}\mathrm{Z})\tanh^{2}(h+\beta\sqrt{q}\mathrm{Z})}{(\cosh^{2}(h+ \beta\sqrt{q}\mathrm{Z})-\lambda_{u})^{2}}x\lambda_{x}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad-x\lambda_{x}+\frac{1 }{4}\frac{\mathrm{M}^{2}}{(\cosh^{2}(h+\beta\sqrt{q}\mathrm{Z})-\lambda_{u})} \lambda_{x}^{2}\bigg{]}\bigg{\}}\] \[=\max_{|x|\leq q^{-1/2}}\Sigma_{\lambda_{u}}(x).\]
Now, evaluating the second derivative in \(x\) of \(\Sigma_{\lambda_{u}}\) for \(\lambda_{u}=0\), one obtains with \(q_{4}=\mathbb{E}_{\mu}\operatorname{M}^{4}\) that
\[\frac{1}{2}\frac{d^{2}\Sigma_{0}}{dx^{2}} =2\beta^{2}q-2\beta^{4}q\left(1-4q+3q_{4}\right)-\left(q-q_{4} \right)^{-1}\bigl{(}1-\beta^{2}(1-4q+3q_{4})\bigr{)}^{2}\] \[=\frac{(1-\beta^{2}(1-4q+3q_{4}))}{(q-q_{4})}\bigl{(}2\beta^{2}q (q-q_{4})-\bigl{(}1-\beta^{2}(1-2q+q_{4})\bigr{)}-2\beta^{2}(q-q_{4})\bigr{)}\] \[=\frac{\bigl{(}1-\beta^{2}(1-4q+3q_{4})\bigr{)}}{(q-q_{4})}\bigl{(} -\bigl{(}1-\beta^{2}(1-2q+q_{4})\bigr{)}-2\beta^{2}(1-q)(q-q_{4})\bigr{)}.\]
Finally, by the AT condition (1.3) and \(q-q_{4}=\mathbb{E}_{\mu}\bigl{(}\frac{\tanh^{2}}{\cosh^{2}}\bigr{)}(h+\beta \sqrt{q}\operatorname{Z})\geq 0\), we get
\[1-\beta^{2}(1-2q+q_{4}) =1-\beta^{2}\mathbb{E}_{\mu}\text{sech}^{4}(h+\beta\sqrt{q} \operatorname{Z})>0,\] \[1-\beta^{2}(1-4q+3q_{4}) =1-\beta^{2}(1-2q+q_{4})+2\beta^{2}(q-q_{4})>0,\]
and together with \(1-q=\mathbb{E}_{\mu}\text{sech}^{2}(h+\beta\sqrt{q}\operatorname{Z})\geq 0\) and smoothness in \(\lambda_{u}\), this implies
\[\frac{d^{2}\Sigma_{\lambda_{u}}}{dx^{2}}<0\]
for \(\lambda_{u}\in(0,1)\) sufficiently small. By concavity, we obtain for \(\lambda_{u}>0\) small enough that
\[\max_{|x|\leq 1}\min_{|\lambda_{x}|\leq K}\left[2\beta^{2}x^{2}- \beta^{2}(1-q)-\lambda_{u}+\mathbb{E}_{\mu}\frac{\beta^{2}(1-q^{-1}x^{2})}{( \cosh^{2}(h+\beta\sqrt{q}\operatorname{Z})-\lambda_{u})}\right.\] \[\left.\qquad\qquad\qquad\qquad-\lambda_{x}x+\mathbb{E}_{\mu} \frac{\beta^{2}q^{-1}x^{2}\operatorname{Z}^{2}+\beta q^{-1/2}x\lambda_{x} \operatorname{M}\!Z+\lambda_{x}^{2}\!\operatorname{M}^{2}\!/4}{(\cosh^{2}(h+ \beta\sqrt{q}\operatorname{Z})-\lambda_{u})}\right]\] \[=-\beta^{2}(1-q)-\lambda_{u}+\beta^{2}\mathbb{E}_{\mu}\frac{1}{ \cosh^{2}(h+\beta\sqrt{q}\operatorname{Z})-\lambda_{u})}\] \[=-\int_{0}^{\lambda_{u}}dt\left[1-\beta^{2}\mathbb{E}_{\mu}\frac{ 1}{(\cosh^{2}(h+\beta\sqrt{q}\operatorname{Z})-t)^{2}}\right]\leq-c<0\]
for some positive constant \(c=c_{\beta,h}>0\). Here, the last inequality follows from the assumption that \(\lambda_{u}>0\) is sufficiently small and from the AT condition (1.3), noting that
\[-\left[1-\beta^{2}\mathbb{E}_{\mu}\frac{1}{(\cosh^{2}(h+\beta\sqrt{q} \operatorname{Z})-t)^{2}}\right]_{|t=0}=-\bigl{(}1-\beta^{2}\mathbb{E}_{\mu} \text{sech}^{4}(h+\beta\sqrt{q}\operatorname{Z})\bigr{)}<0.\]
Combining this with (4.6), (4.7), (4.8) and (4.10) proves Prop. 4.5.
**Acknowledgements.** C. B. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - GZ 2047/1, Projekt-ID 390685813. The work of C. X. is partially funded by a Simons Investigator award. The work of H.-T. Y. is partially supported by the NSF grant DMS-1855509 and DMS-2153335 and a Simons Investigator award. |
2306.15173 | Robust propensity score weighting estimation under missing at random | Missing data is frequently encountered in many areas of statistics.
Propensity score weighting is a popular method for handling missing data. The
propensity score method employs a response propensity model, but correct
specification of the statistical model can be challenging in the presence of
missing data. Doubly robust estimation is attractive, as the consistency of the
estimator is guaranteed when either the outcome regression model or the
propensity score model is correctly specified. In this paper, we first employ
information projection to develop an efficient and doubly robust estimator
under indirect model calibration constraints. The resulting propensity score
estimator can be equivalently expressed as a doubly robust regression
imputation estimator by imposing the internal bias calibration condition in
estimating the regression parameters. In addition, we generalize the
information projection to allow for outlier-robust estimation. Some asymptotic
properties are presented. The simulation study confirms that the proposed
method allows robust inference against not only the violation of various model
assumptions, but also outliers. A real-life application is presented using data
from the Conservation Effects Assessment Project. | Hengfang Wang, Jae Kwang Kim, Jeongseop Han, Youngjo Lee | 2023-06-27T03:14:52Z | http://arxiv.org/abs/2306.15173v3 | # An information projection approach to robust propensity score estimation under missing at random
###### Abstract
Missing data is frequently encountered in many areas of statistics. Propensity score weighting is a popular method for handling missing data. The propensity score method employs a response propensity model, but correct specification of the statistical model can be challenging in the presence of missing data. Doubly robust estimation is attractive, as the consistency of the estimator is guaranteed when either the outcome regression model or the propensity score model is correctly specified. In this paper, we first employ information projection to develop an efficient and doubly robust estimator under indirect model calibration constraints. The resulting propensity score estimator can be equivalently expressed as a doubly robust regression imputation estimator by imposing the internal bias calibration condition in estimating the regression parameters. In addition, we generalize the information projection to allow for outlier-robust estimation. Some asymptotic properties are presented. The simulation study confirms that the proposed method allows robust inference against not only the violation of various model assumptions, but also outliers. A real-life application is presented using data from the Conservation Effects Assessment Project.
## 1 Introduction
Missing data is a fundamental problem in statistics. Ignoring missing data can lead to biased estimates of parameters, loss of information, decreased statistical power, increased standard errors, and weak generalizability of findings (Dong and Peng, 2013). Propensity score weighting is a popular method to handle missing data. It often employs a response propensity model, but correct specification of the statistical model can be challenging in the presence of missing data. How to make the propensity score weighting method less dependent on the response propensity model is an important practical problem.
There are two main approaches to implementing robust propensity score estimation. One approach is to use a flexible model, nonparametric or semiparametric, to develop a robust propensity score estimation method. The nonparametric kernel method of Hahn (1998), the sieve logistic regression method (Hirano et al., 2003), the general calibration method of Chan et al. (2016) using an increasing dimension of the basis functions, the generalized covariate balancing estimator using tailored loss functions (Zhao, 2019) and the random forest approach of Dagdoug et al. (2023) are examples of robust propensity score estimation methods using flexible models. The other approach is to use the outcome regression model explicitly to obtain doubly robust estimation. Doubly robust estimation has been extensively investigated in the literature. For examples, see Bang and Robins (2005), Cao et al. (2009), Yang et al. (2020).
In this paper, we consider the second approach and develop a unified framework for doubly robust propensity score (PS) estimation under the setup of missing at random (Rubin, 1976). Specifically, we apply the information projection (Csiszár and Shields, 2004) to the PS weighting problem under the covariate-balancing constraints. The initial weights of the PS are obtained from the working PS model, but the balancing constraints in the information projection are obtained from the working outcome regression model. Imposing covariate-balancing constraints can be understood as indirect model calibration. Indirect model calibration is the key condition to achieve double robustness in the PS weighting estimator. Information projection is used to obtain the augmented PS model.
The resulting PS weighting estimator can also be expressed as a regression imputation estimator using the balancing functions as covariates in the outcome regression model. This algebraic equivalence, called self-efficiency, is useful in developing outlier-robust PS estimation. Outliers in the outcome variable often significantly reduce the efficiency of the resulting estimator. How to reduce the effect of outliers on estimation is an important problem in statistics. While outlier-robust procedures are well studied in the literature on regression or classification (Stefanski et al., 1986; Wu and Liu, 2007; Zhang et al., 2010), they are not fully developed in the context of missing data. To our knowledge, constructing an outlier-robust propensity score estimator has not been investigated in the literature. To solve this challenging problem, we first develop an outlier-robust regression imputation and then express it as a PS weighting estimator by incorporating a condition for self-efficiency.
The information projection approach to developing the augmented PS model and obtaining the doubly robust estimation is based on the Kullback-Leibler divergence. The Kullback-Leibler-divergence-based projection method is useful in incorporating the moment-type constraints. The \(\gamma\)-power divergence, originally proposed by Basu et al. (1998) and further developed by Eguchi (2021), is a generalization of the Kullback-Leibler divergence to expand the class of statistical models by introducing an additional scale parameter \(\gamma\). Furthermore, it is well known that the statistical model derived from \(\gamma\)-power divergence produces robust inferences against outliers. We adopt the \(\gamma\)-power divergence to develop an outlier-robust regression imputation estimator. By self-efficiency, the regression estimator can be expressed as a PS weighting estimator. The resulting estimator is also robust against misspecification of the response probability model.
The structure of the paper is as follows. In Section 2, the basic setup and the research problem are introduced. In Section 3, we develop a balancing constraint to adjust propensity weights and construct an augmented propensity model using information projection. In Section 4, we present the use of the \(\gamma\)-power divergence to enlarge the class of propensity score models and develop outlier-robust doubly robust estimators. Extension to causal inference is discussed in Section 5. The simulation study in Section 6 shows the
usefulness of the proposed method. A real-life application using data from the Conservation Effects Assessment Project is presented in Section 7. Some concluding remarks are made in Section 8. All required proofs are presented in the Appendix.
## 2 Basic Setup
Suppose that there are \(n\) independently and identically distributed realizations of \((\mathbf{X},Y,\delta)\), denoted by \(\{(\mathbf{x}_{i},y_{i},\delta_{i}):i=1,\ldots,n\}\), where \(\mathbf{x}_{i}\) is a vector of observed covariates and \(\delta_{i}\) is the missingness indicator associated with unit \(i\). In particular, \(\delta_{i}=1\) if \(y_{i}\) is observed and \(\delta_{i}=0\) otherwise. Thus, instead of observing \((\mathbf{x}_{i},y_{i},\delta_{i})\), we only observe \((\mathbf{x}_{i},\delta_{i},\delta_{i}y_{i})\) for \(i=1,\ldots,n\). We are interested in estimating \(\theta=E(Y)\) from the observed sample.
Suppose that we have a working outcome regression model given by
\[m_{0}\left(\mathbf{x};\mathbf{\beta}\right)\in\mathrm{span}\{b_{0}(\mathbf{x})\equiv 1,b_{ 1}(\mathbf{x}),\ldots,b_{L}(\mathbf{x})\} \tag{1}\]
for some given basis functions \(b_{1}(\mathbf{x}),\ldots,b_{L}(\mathbf{x})\). The model (1) can be expressed equivalently as
\[y_{i}=m(\mathbf{x}_{i};\mathbf{\beta})+\varepsilon_{i},\ m(\mathbf{x}_{i};\mathbf{\beta})= \beta_{0}+\beta_{1}b_{1}(\mathbf{x}_{i})+\ldots+\beta_{L}b_{L}(\mathbf{x}_{i})\]
for \(\beta=(\beta_{0},\beta_{1},\ldots,\beta_{L})^{\mathrm{T}}\), where \(\varepsilon_{i}\) is an error term that is independent of \(\mathbf{x}_{i}\) and satisfies \(E(\varepsilon_{i})=0\). Furthermore, the assumption of missing at random (Rubin, 1976) implies that \(\varepsilon_{i}\) and \(\delta_{i}\) are conditionally independent given \(\mathbf{x}_{i}\).
We are interested in using the following propensity score weighting (PSW) estimator
\[\hat{\theta}_{\mathrm{PSW}}=\frac{1}{n}\sum_{i=1}^{n}\delta_{i}\omega_{i}y_{i}. \tag{2}\]
The following lemma provides a sufficient condition for the unbiasedness of the propensity score weighting estimators under the model (1).
**Lemma 2.1**.: Assume that the response mechanism is missing at random. If the weights \(\omega_{i}\)'s satisfy
\[\sum_{i=1}^{n}\delta_{i}\omega_{i}\left[b_{0}(\mathbf{x}_{i}),b_{1}(\mathbf{x}_{i}), \ldots,b_{L}(\mathbf{x}_{i})\right]=\sum_{i=1}^{n}\left[b_{0}(\mathbf{x}_{i}),b_{1}( \mathbf{x}_{i}),\ldots,b_{L}(\mathbf{x}_{i})\right], \tag{3}\]
then \(\hat{\theta}_{\text{PSW}}\) in (2) is unbiased for \(\theta\) under the outcome regression model in (1).
By Lemma 2.1, condition (3) is the key condition that incorporates the outcome regression model into the PSW estimator. Condition (3) is often called the covariate balancing condition (Imai and Ratkovic, 2014) in the missing data literature. It is closely related to the calibration estimation in survey sampling (Deville and Sarndal, 1992). Because the balancing functions in (3) are equal to the basis functions of the outcome regression model in (1), we can achieve
\[\sum_{i=1}^{n}\delta_{i}\omega_{i}E(Y_{i}\mid\mathbf{x}_{i})=\sum_{i=1}^{n}E(Y_{i} \mid\mathbf{x}_{i}), \tag{4}\]
which is often referred to as the model calibration (Wu and Sitter, 2001). To distinguish from the direct model calibration of Wu and Sitter (2001), we may call (3) the _indirect_ model calibration.
The indirect model calibration condition also implies the following algebraic equivalence.
**Lemma 2.2**.: If the weights \(\omega_{i}\) satisfy (3), then we have
\[\sum_{i=1}^{n}\delta_{i}\omega_{i}y_{i}=\sum_{i=1}^{n}\{\delta_{i}y_{i}+(1- \delta_{i})\mathbf{b}_{i}^{\mathrm{T}}\mathbf{\hat{\beta}}\}, \tag{5}\]
where \(\mathbf{\hat{\beta}}\) satisfies
\[\sum_{i=1}^{n}\delta_{i}\left(\omega_{i}-1\right)\left(y_{i}-\mathbf{b}_{i}^{ \mathrm{T}}\mathbf{\hat{\beta}}\right)=0. \tag{6}\]
and \(\mathbf{b}_{i}=(1,b_{1}(\mathbf{x}_{i}),\ldots,b_{L}(\mathbf{x}_{i}))^{\mathrm{T}}\).
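A quick way to see how (3) and (6) combine is the decomposition
\[\sum_{i=1}^{n}\delta_{i}\omega_{i}y_{i}-\sum_{i=1}^{n}\big{\{}\delta_{i}y_{i}+(1-\delta_{i})\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\beta}}\big{\}}=\sum_{i=1}^{n}\delta_{i}\left(\omega_{i}-1\right)\left(y_{i}-\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\beta}}\right)+\sum_{i=1}^{n}\delta_{i}\omega_{i}\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\beta}}-\sum_{i=1}^{n}\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\beta}},\]
in which the first term on the right-hand side vanishes by (6) and the difference of the last two sums vanishes by the balancing condition (3).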
Lemma 2.2 means that, under (6), the PSW estimator that satisfies the covariate-balancing condition is algebraically equivalent to a regression imputation estimator using the balancing functions as covariates in the outcome regression model. In other words, regression imputation under the outcome regression model in (1) can be viewed as a propensity score weighting estimator where the propensity score weights \(\omega_{i}\) and the estimated regression coefficients \(\mathbf{\hat{\beta}}\) are related by equation (6). Equation (5) means that the final propensity score weights \(\omega_{i}\) do not directly use the outcome regression model (1) for
imputation, but implement regression imputation indirectly. Condition (5) is referred to as the self-efficiency condition.
We now wish to achieve double robustness by including the propensity score model. Under the working propensity score model, the propensity score weights are computed as \(\omega_{i}=\{\pi_{0}(\mathbf{x}_{i};\hat{\mathbf{\phi}})\}^{-1}=:\hat{d}_{i}\), where \(\hat{\mathbf{\phi}}\) is the maximum likelihood estimator of \(\mathbf{\phi}\). However, the weights \(\hat{d}_{i}\) do not necessarily satisfy the balancing condition in (3). Therefore, to reduce the bias due to model misspecification in the propensity score model, it makes sense to impose the balancing condition in the final weighting. To achieve this goal, Hainmueller (2012) proposed the so-called entropy balancing method that minimizes
\[Q(\mathbf{\omega})=\sum_{i=1}^{n}\delta_{i}\omega_{i}\log(\omega_{i}/\hat{d}_{i}), \tag{7}\]
subject to the balancing constraint in (3). Using the Lagrange multiplier method, the solution takes the form of
\[\hat{\omega}_{i}=\frac{\hat{d}_{i}\exp\left(\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{ \lambda}}\right)}{\sum_{k=1}^{n}\left\{\delta_{k}\hat{d}_{k}\exp\left(\mathbf{b}_ {k}^{\mathrm{T}}\hat{\mathbf{\lambda}}\right)\right\}},\]
where \(\hat{\mathbf{\lambda}}\) is chosen to satisfy the covariate balancing condition. Chan et al. (2016) generalized this idea further to develop a general calibration weighting method that satisfies the covariance balancing property with increasing dimensions of basis functions \(\mathbf{b}(\mathbf{x})\).
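As an illustration of how such calibration weights can be computed in practice, the following is a minimal Python sketch (ours, not code from Hainmueller (2012) or Chan et al. (2016)). It uses the dual parametrization \(\omega_{i}=\hat{d}_{i}\exp(\mathbf{b}_{i}^{\mathrm{T}}\mathbf{\lambda})\), which agrees with the displayed solution up to the normalizing constant, and determines \(\mathbf{\lambda}\) from the balancing equations (3) with a generic root finder; for brevity, the true response probability plays the role of the fitted working model.

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=(n, 2))
p = 1.0 / (1.0 + np.exp(-(0.3 + 0.5 * x[:, 0] - 0.4 * x[:, 1])))   # response probability
delta = rng.binomial(1, p)                                          # missingness indicator

B = np.column_stack([np.ones(n), x])   # balancing functions b(x) = (1, x1, x2)
d = 1.0 / p                            # initial weights d_i (true propensity, for brevity)

def balance_eq(lam):
    # Balancing equations (3) for weights of the dual form omega_i = d_i * exp(b_i' lam).
    w = d * np.exp(B @ lam)
    return B.T @ (delta * w) - B.sum(axis=0)

lam_hat = root(balance_eq, np.zeros(B.shape[1])).x
w_hat = d * np.exp(B @ lam_hat)
print("max balancing error:", np.max(np.abs(balance_eq(lam_hat))))
```

After the adjustment, the reweighted respondent totals of \((1,x_{1},x_{2})\) match the full-sample totals up to the tolerance of the root finder.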
## 3 Proposed method
### Information projection
We now develop an alternative approach to modifying the initial propensity score weights to satisfy the covariate balancing condition in (3). Instead of using a distance measure for propensity weights, as in (7), we use the Kullback-Leibler divergence measure to develop the information projection under some moment constraint. Because the Kullback-Leibler divergence is defined in terms of density functions, we need to formulate the weighting problem as an optimization problem for density functions.
To apply the information projection, using the Bayes theorem, we obtain
\[\frac{P(\delta=0\mid\mathbf{x})}{P(\delta=1\mid\mathbf{x})}=\frac{1-p}{p}\times\frac{f_{ 0}(\mathbf{x})}{f_{1}(\mathbf{x})},\]
where \(p=P(\delta=1)\) and \(f_{k}(\mathbf{x})=f(\mathbf{x}\mid\delta=k)\) for \(k=0,1\). Thus, the propensity score (PS) weight can be written as
\[\omega(\mathbf{x})\equiv\frac{1}{P(\delta=1\mid\mathbf{x})}=1+c\times\frac{f_{0}(\mathbf{x })}{f_{1}(\mathbf{x})}. \tag{8}\]
where \(c=1/p-1\). Furthermore, assume that the basis function \(\mathbf{b}(\mathbf{x})\) in the outcome regression model in (1) is integrable in the sense that
\[pE_{1}\{\mathbf{b}(\mathbf{X})\}+(1-p)E_{0}\{\mathbf{b}(\mathbf{X})\}=E\{\mathbf{b}(\mathbf{X})\}, \tag{9}\]
where \(E_{k}\{\mathbf{b}(\mathbf{X})\}=\int\mathbf{b}(\mathbf{x})f_{k}(\mathbf{x})d\mu(\mathbf{x})\) for \(k=0,1\). If \(E\{\mathbf{b}(\mathbf{X})\}\) is known, we can impose a constraint on \(f_{0}\) and \(f_{1}\) such that (9) holds for a given \(E\{\mathbf{b}(\mathbf{X})\}\).
Let \(\pi_{0}(\mathbf{x})\) be the "working" model for \(P(\delta=1\mid\mathbf{x})\). For a fixed \(f_{1}\), by (8), we can define \(f_{0}^{(0)}\) to satisfy
\[\frac{1}{\pi_{0}(\mathbf{x})}=1+c\times\frac{f_{0}^{(0)}(\mathbf{x})}{f_{1}(\mathbf{x})}. \tag{10}\]
We can treat \(f_{0}^{(0)}\) as the baseline density for \(f_{0}\) derived from the working propensity score model \(\pi_{0}(\mathbf{x})\). Our objective is to modify \(f_{0}^{(0)}\) to satisfy (9). Finding the density function \(f_{0}\) satisfying the balancing condition in (9) can be formulated as an optimization problem using the Kullback-Leibler divergence. Given the baseline density \(f_{0}^{(0)}\), the Kullback-Leibler divergence of \(f_{0}\) at \(f_{0}^{(0)}\) is defined by
\[D_{\text{KL}}\left(f_{0}\|f_{0}^{(0)}\right)=\int\log\left(f_{0}/f_{0}^{(0)} \right)f_{0}d\mu. \tag{11}\]
Given \(f_{0}^{(0)}\), we wish to find the minimizer of \(D_{\text{KL}}(f_{0}\|f_{0}^{(0)})\) subject to (9). By the Euler-Lagrange equation, the solution can be written as
\[f_{0}^{*}(\mathbf{x};\mathbf{\lambda})=f_{0}^{(0)}(\mathbf{x})\frac{\exp\{\mathbf{b}(\mathbf{x})^ {\text{T}}\mathbf{\lambda}\}}{\int\exp\{\mathbf{b}(\mathbf{x})^{\text{T}}\mathbf{\lambda}\}f_{ 0}^{(0)}(\mathbf{x})d\mu(\mathbf{x})} \tag{12}\]
where \(\mathbf{\lambda}\) is the Lagrange multiplier associated with the constraint (9). By (12), we obtain
\[\frac{f_{0}^{*}(\mathbf{x};\mathbf{\lambda})}{f_{1}(\mathbf{x})}=\frac{f_{0}^{(0)}(\mathbf{x} )}{f_{1}(\mathbf{x})}\times\frac{\exp\{\sum_{j=1}^{L}\lambda_{j}b_{j}(\mathbf{x})\}}{ \int\exp\{\sum_{j=1}^{L}\lambda_{j}b_{j}(\mathbf{x})\}f_{0}^{(0)}(\mathbf{x})d\mu(\mathbf{x })}. \tag{13}\]
Combining the above results, we can establish the following lemma.
**Lemma 3.1**.: Given the working PS model \(\pi_{0}(\mathbf{x})\), the final PS weight function (8) minimizing the Kullback-Leibler divergence in (11) subject to the balancing condition (9) is given by
\[\pi^{*}(\mathbf{x};\mathbf{\lambda})=\frac{\pi_{0}(\mathbf{x})}{\pi_{0}(\mathbf{x})+\{1-\pi_{0}( \mathbf{x})\}\exp\Big{\{}\lambda_{0}+\sum_{j=1}^{L}\lambda_{j}b_{j}(\mathbf{x})\Big{\}}}, \tag{14}\]
where \(\lambda_{0},\lambda_{1},\ldots,\lambda_{L}\) are the Lagrange multipliers satisfying (9).
In (14), the Lagrange multiplier \(\mathbf{\lambda}\) is determined to satisfy the balancing condition (9) with
\[E_{0}\{\mathbf{b}(\mathbf{X})\}=\frac{\int\mathbf{b}(\mathbf{x})O_{0}(\mathbf{x})\exp\{\sum_{j=1} ^{L}\lambda_{j}b_{j}(\mathbf{x})\}f_{1}(\mathbf{x})d\mu(\mathbf{x})}{\int O_{0}(\mathbf{x}) \exp\{\sum_{j=1}^{L}\lambda_{j}b_{j}(\mathbf{x})\}f_{1}(\mathbf{x})d\mu(\mathbf{x})}.\]
where \(O_{0}(\mathbf{x})=\{\pi_{0}(\mathbf{x})\}^{-1}-1\). If the working propensity score model is correct, then the balancing constraint (9) is already satisfied. In this case, we have \(\lambda_{j}\equiv 0\) for all \(j=1,\ldots,L\). Thus, the information projection is used to obtain the augmented propensity score model (Kim and Riddles, 2012).
### Model parameter estimation
We now discuss parameter estimation in the augmented propensity score model in (14). Suppose that the working propensity score model is given by \(\pi_{0}(\mathbf{x};\mathbf{\phi})\) with the unknown parameter \(\mathbf{\phi}\). We can estimate \(\mathbf{\phi}\) by maximizing
\[\ell(\mathbf{\phi})=\sum_{i=1}^{n}\left[\delta_{i}\log\left\{\pi_{0}(\mathbf{x}_{i}; \mathbf{\phi})\right\}+(1-\delta_{i})\log\{1-\pi_{0}(\mathbf{x}_{i};\mathbf{\phi})\}\right]\]
with respect to \(\mathbf{\phi}\). Note that the propensity score model has nothing to do with the outcome regression model in (1).
Now, to incorporate the outcome regression model in (1), we use the augmented propensity score model in (14) to obtain the final propensity score weights.
\[\hat{\omega}_{i}=1+(\hat{d}_{i}-1)\exp\left\{\sum_{j=0}^{L}\hat{\lambda}_{j}b _{j}(\mathbf{x}_{i})\right\}, \tag{15}\]
where \(\hat{d}_{i}=\{\pi_{0}(\mathbf{x}_{i};\hat{\mathbf{\phi}})\}^{-1}\) and \(\hat{\lambda}_{0},\hat{\lambda}_{1},\ldots,\hat{\lambda}_{L}\) are computed from the calibration equation:
\[\sum_{i=1}^{n}\delta_{i}\hat{\omega}_{i}b_{j}(\mathbf{x}_{i})=\sum_{i=1}^{n}b_{j}( \mathbf{x}_{i}),\quad\forall j=0,1,\ldots,L. \tag{16}\]
As discussed in Section 2, the propensity score estimator satisfying the indirect model calibration condition can be expressed as an imputation estimator. Since \(\hat{\omega}_{i}\) satisfy (16), by Lemma 2.2, we can obtain
\[\frac{1}{n}\sum_{i=1}^{n}\delta_{i}\hat{\omega}_{i}y_{i}=\frac{1}{n}\sum_{i=1}^{ n}\left\{\delta_{i}y_{i}+(1-\delta_{i})\hat{y}_{i}\right\}, \tag{17}\]
where \(\hat{y}_{i}=\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\beta}}\) and
\[\hat{\mathbf{\beta}} = \left\{\sum_{i=1}^{n}\delta_{i}\left(\hat{\omega}_{i}-1\right) \mathbf{b}_{i}\mathbf{b}_{i}^{\mathrm{T}}\right\}^{-1}\left\{\sum_{i=1}^{n}\delta_{i} \left(\hat{\omega}_{i}-1\right)\mathbf{b}_{i}y_{i}\right\}. \tag{18}\]
Note that, by (15), \(\hat{\mathbf{\beta}}\) in (18) can be expressed as the minimizer of
\[Q(\mathbf{\beta})=\sum_{i=1}^{n}\delta_{i}\left(y_{i}-\mathbf{b}_{i}^{ \mathrm{T}}\mathbf{\beta}\right)^{2}(\hat{d}_{i}-1)\hat{g}_{i}, \tag{19}\]
where \(\hat{g}_{i}=\exp(\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\lambda}})\). The first term, \(\hat{d}_{i}-1\), is the adjustment term that incorporates the working PS model into the imputation. The second term, \(\hat{g}_{i}=\exp(\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\lambda}})\), is determined so that the PS weights in (15) satisfy the covariate balancing condition in (16). The role of \(\hat{g}_{i}>0\) is to achieve the self-efficiency property (17) through (18) and to make \(\hat{\omega}_{i}>1\).
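To make the construction of this subsection concrete, the following minimal sketch (ours; simulated data with arbitrary parameter values, and with the true response probability standing in for a fitted working model \(\pi_{0}(\mathbf{x};\hat{\mathbf{\phi}})\)) solves the calibration equations (16) for \(\hat{\mathbf{\lambda}}\) with weights of the form (15), computes \(\hat{\mathbf{\beta}}\) from (18), and checks the identity (17) numerically.

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 2))
y = 1.0 + x[:, 0] - 0.5 * x[:, 1] + rng.normal(size=n)           # outcome variable
p = 1.0 / (1.0 + np.exp(-(0.2 + 0.6 * x[:, 0])))                 # response probability
delta = rng.binomial(1, p)                                        # missingness indicator

B = np.column_stack([np.ones(n), x])   # basis b(x) of the working outcome model
d = 1.0 / p                            # d_i from the working PS model (true pi, for brevity)

def calib_eq(lam):
    # Calibration equations (16) with augmented weights of the form (15).
    w = 1.0 + (d - 1.0) * np.exp(B @ lam)
    return B.T @ (delta * w) - B.sum(axis=0)

lam_hat = root(calib_eq, np.zeros(B.shape[1])).x
w_hat = 1.0 + (d - 1.0) * np.exp(B @ lam_hat)

# Regression coefficient (18): weighted least squares with weights delta_i * (w_i - 1).
a = delta * (w_hat - 1.0)
beta_hat = np.linalg.solve(B.T @ (a[:, None] * B), B.T @ (a * y))

theta_psw = np.mean(delta * w_hat * y)                            # PSW form of the estimator
theta_imp = np.mean(delta * y + (1 - delta) * (B @ beta_hat))     # imputation form, r.h.s. of (17)
print(theta_psw, theta_imp)
```

The two printed values agree up to the tolerance of the root finder, illustrating the self-efficiency identity (17).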
If the working PS model is correct, then \(\hat{g}_{i}\to 1\) in probability. In this case, the resulting imputation estimator satisfies
\[\frac{1}{n}\sum_{i=1}^{n}\left\{\delta_{i}y_{i}+(1-\delta_{i}) \hat{y}_{i}\right\}\cong\frac{1}{n}\sum_{i=1}^{n}\left\{\hat{y}_{i}+\delta_{i }\hat{d}_{i}\left(y_{i}-\hat{y}_{i}\right)\right\}. \tag{20}\]
Condition (20) is called the _internal bias calibration_ (IBC), which was originally termed by Firth and Bennett (1998) in the context of design-consistent estimation of model parameters under complex sampling. The imputation estimator satisfying the IBC condition (20) achieves consistency even when the outcome regression model is incorrect. Thus, the IBC condition is a sufficient condition for the double robustness of the regression imputation estimator.
Equation (17) deserves further discussion. In the PSW estimation, the final estimator \(\hat{\theta}_{\mathrm{PSW}}\) is a function of two estimated nuisance parameters \(\hat{\mathbf{\phi}}\) and \(\hat{\mathbf{\lambda}}\) while the imputation
estimator is a function of other estimated nuisance parameters \(\hat{\mathbf{\phi}}\) and \(\hat{\mathbf{\beta}}\). Thus, \(\hat{\mathbf{\lambda}}\) and \(\hat{\mathbf{\beta}}\) are in one-to-one correspondence with each other through the following estimating equation:
\[\sum_{i=1}^{n}\delta_{i}(y_{i}-\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\beta}})\mathbf{b}_{i }(\hat{d}_{i}-1)\exp(\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\lambda}})=\mathbf{0}. \tag{21}\]
### Statistical properties
Now we formally describe the asymptotic properties of the augmented PSW estimator using the final propensity score weight (15) with the calibration constraint in (16). By the algebraic equivalence established in Lemma 2.2, the result is directly applicable to the regression imputation estimator using the regression coefficient in (18).
Let \(\pi_{0}(\mathbf{x};\mathbf{\phi})\) be the working propensity score model. We can estimate \(\mathbf{\phi}\) by solving
\[\hat{\mathbf{U}}_{1}(\mathbf{\phi})\equiv\frac{1}{n}\sum_{i=1}^{n}\left\{\frac{\delta_ {i}}{\pi_{0}(\mathbf{x}_{i};\mathbf{\phi})}-1\right\}\mathbf{h}(\mathbf{x}_{i};\mathbf{\phi})=\bm {0} \tag{22}\]
for some \(\mathbf{h}(\mathbf{x};\mathbf{\phi})\) such that the solution to (22) exists uniquely almost everywhere. The estimating equation (22) for \(\mathbf{\phi}\) includes the score equation for \(\mathbf{\phi}\) as a special case. For a given \(\hat{\mathbf{\phi}}\), let \(\hat{\mathbf{U}}_{2}(\mathbf{\lambda}\mid\hat{\mathbf{\phi}})\) be the estimating equation for \(\mathbf{\lambda}\). By (3), we can estimate \(\mathbf{\lambda}\) by solving
\[\hat{\mathbf{U}}_{2}(\mathbf{\lambda}\mid\hat{\mathbf{\phi}})\equiv\frac{1}{n}\sum_{i=1}^{ n}\delta_{i}\omega(\mathbf{x}_{i};\hat{\mathbf{\phi}},\mathbf{\lambda})\mathbf{b}_{i}-\frac{1}{n} \sum_{i=1}^{n}\mathbf{b}_{i}=\mathbf{0}, \tag{23}\]
where
\[\omega\left(\mathbf{x}_{i};\mathbf{\phi},\mathbf{\lambda}\right)=1+\left\{\frac{1}{\pi_{ 0}(\mathbf{x}_{i};\mathbf{\phi})}-1\right\}\exp\left(\mathbf{b}_{i}^{\mathrm{T}}\mathbf{ \lambda}\right). \tag{24}\]
Let \(\mathbf{\lambda}^{*}\) and \(\mathbf{\phi}^{*}\) be the probability limits of \(\hat{\mathbf{\lambda}}\) and \(\hat{\mathbf{\phi}}\), respectively. If the maximum likelihood estimation is used to estimate \(\mathbf{\phi}\), then \(\mathbf{\phi}^{*}\) can be understood as the minimizer of the cross-entropy
\[H(\pi,\pi_{0}(\mathbf{\phi}))=-E\left[\pi(\mathbf{x})\log\left\{\pi_{0}(\mathbf{x};\mathbf{ \phi})\right\}+\left\{1-\pi(\mathbf{x})\right\}\log\left\{1-\pi_{0}(\mathbf{x};\mathbf{ \phi})\right\}\right]\]
with respect to \(\mathbf{\phi}\), where \(\pi(\mathbf{x})=P(\delta=1\mid\mathbf{x})\) is the true response probability. Now, using (24), we can treat
\[\hat{\theta}_{\mathrm{APSW}}=\frac{1}{n}\sum_{i=1}^{n}\delta_{i}\omega(\mathbf{x}_ {i};\hat{\mathbf{\phi}},\hat{\mathbf{\lambda}})y_{i}:=\hat{\theta}_{\mathrm{APSW}}( \hat{\mathbf{\phi}},\hat{\mathbf{\lambda}}) \tag{25}\]
as a function of \((\hat{\mathbf{\phi}},\hat{\mathbf{\lambda}})\) and apply the standard Taylor linearization to obtain the following theorem. The regularity conditions and the proof are presented in the Supplementary Material.
**Theorem 3.1**.: Let \(\hat{\theta}_{\mathrm{APSW}}\) in (25) be the PSW estimator under the augmented PS model in (14) with \(\hat{\mathbf{\phi}}\) and \(\hat{\mathbf{\lambda}}\) satisfying (22) and (23), respectively. Under the regularity conditions described in the Appendix, we have
\[\hat{\theta}_{\mathrm{APSW}}-\theta = \frac{1}{n}\sum_{i=1}^{n}\eta(\mathbf{x}_{i},y_{i},\delta_{i})+o_{p}( n^{-1/2}), \tag{26}\]
where
\[\eta(\mathbf{x}_{i},y_{i},\delta_{i})=\mathbf{b}_{i}^{\mathrm{T}}\mathbf{\beta}^{*}-\theta +\delta_{i}\omega(\mathbf{x}_{i};\mathbf{\phi}^{*},\mathbf{\lambda}^{*})\left(y_{i}-\mathbf{b }_{i}^{\mathrm{T}}\mathbf{\beta}^{*}\right)+\left\{1-\frac{\delta_{i}}{\pi_{0}( \mathbf{x}_{i};\mathbf{\phi}^{*})}\right\}\mathbf{h}_{i}^{\mathrm{T}}\mathbf{\kappa}^{*}, \tag{27}\]
\(\omega(\mathbf{x};\mathbf{\phi},\mathbf{\lambda})\) is defined in (24), \(\mathbf{\beta}^{*}\) is the probability limit of \(\hat{\mathbf{\beta}}\) in (18), and \(\mathbf{\kappa}^{*}\) is the probability limit of \(\hat{\mathbf{\kappa}}\) that satisfies
\[\sum_{i=1}^{n}\delta_{i}(\widehat{\pi}_{i}^{-1}-1)\left\{\exp(\mathbf{b}_{i}^{ \mathrm{T}}\hat{\mathbf{\lambda}})(y_{i}-\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\beta}}) -\mathbf{h}_{i}^{\mathrm{T}}\hat{\mathbf{\kappa}}\right\}\left[\frac{\partial}{ \partial\mathbf{\phi}}\text{logit}\left\{\pi_{0}\left(\mathbf{x}_{i};\mathbf{\phi}\right) \right\}\Big{|}_{\mathbf{\phi}=\hat{\mathbf{\phi}}}\right]=\mathbf{0}. \tag{28}\]
In (27), \(\eta(\mathbf{x},y,\delta)\) is called the influence function of \(\hat{\theta}_{\mathrm{APSW}}\) (Tsiatis, 2006). Note that we can obtain
\[E\left\{\eta(\mathbf{X},Y,\delta)\right\}= E\left\{\delta\omega(\mathbf{X};\mathbf{\phi}^{*},\mathbf{\lambda}^{*})Y\right\}-\theta\] \[+E\left[\{1-\delta\omega(\mathbf{X};\mathbf{\phi}^{*},\mathbf{\lambda}^{*}) \}\mathbf{b}^{\mathrm{T}}(\mathbf{X})\mathbf{\beta}^{*}\right]\] \[+E\left[\left\{1-\frac{\delta}{\pi_{0}(\mathbf{X};\mathbf{\phi}^{*})} \right\}\mathbf{h}^{\mathrm{T}}(\mathbf{X})\mathbf{\kappa}^{*}\right],\]
where the second term is equal to zero by the definition of \(\mathbf{\lambda}^{*}\) and the third term is equal to zero by the definition of \(\mathbf{\phi}^{*}\). If the outcome regression model is correctly specified, we have
\[E\left\{\delta\omega(\mathbf{X};\mathbf{\phi}^{*},\mathbf{\lambda}^{*})Y\right\}= E\left\{\delta\omega(\mathbf{X};\mathbf{\phi}^{*},\mathbf{\lambda}^{*})\mathbf{b}^{ \mathrm{T}}(\mathbf{X})\mathbf{\beta}^{*}\right\}\] \[= E\left\{\mathbf{b}^{\mathrm{T}}(\mathbf{X})\mathbf{\beta}^{*}\right\}=E(Y),\]
where the second equality follows from the definition of \(\mathbf{\lambda}^{*}\). Furthermore, the correct specification of the outcome regression model gives \(\mathbf{\kappa}^{*}=\mathbf{0}\). Thus, we can summarize the asymptotic result in the outcome regression model as follows.
**Corollary 3.1**.: Suppose that the regularity conditions in Theorem 3.1 hold and the outcome regression model is correctly specified. Then, we obtain
\[\sqrt{n}\left(\hat{\theta}_{\mathrm{APSW}}-\theta\right)\xrightarrow{\mathcal{ L}}N(0,V_{0}), \tag{29}\]
where
\[V_{0}=\mathrm{var}\left\{E(Y\mid\mathbf{X})\right\}+E\left[\delta\{\omega(\mathbf{X}; \mathbf{\phi}^{*},\mathbf{\lambda}^{*})\}^{2}\,\mathrm{var}\left(Y\mid\mathbf{X}\right) \right]. \tag{30}\]
Now, consider the case where the propensity score model is correctly specified. In this case, we obtain \(\mathbf{\lambda}^{*}=\mathbf{0}\) and \(\pi_{0}(\mathbf{x};\mathbf{\phi}^{*})=P(\delta=1\mid\mathbf{x})\). Thus,
\[E\left\{\delta\omega(\mathbf{X};\mathbf{\phi}^{*},\mathbf{\lambda}^{*})Y\right\}=E\left[P( \delta=1\mid\mathbf{X})\{\pi_{0}(\mathbf{X};\mathbf{\phi}^{*})\}^{-1}Y\right]=E(Y).\]
Moreover, the variance is
\[\mathrm{var}\left\{\eta(\mathbf{X},Y,\delta)\right\} = \mathrm{var}\left\{\mathbf{b}(\mathbf{X})^{\mathrm{T}}\mathbf{\beta}^{*}+\bm {h}^{\mathrm{T}}(\mathbf{X})\mathbf{\kappa}^{*}\right\}\] \[+E\left[\{P(\delta=1\mid\mathbf{X})\}^{-1}\{Y-\mathbf{b}^{\mathrm{T}}( \mathbf{X})\mathbf{\beta}^{*}-\mathbf{h}^{\mathrm{T}}(\mathbf{X})\mathbf{\kappa}^{*}\}^{2}\right].\]
Thus, the efficiency gain using the estimated propensity score function over the true one is visible only when the propensity score model is true but the outcome regression model is incorrect.
If the two models, the outcome regression model and the propensity score model, are correctly specified, then the two variance forms are equal to
\[\mathrm{var}\left\{\eta(\mathbf{X},Y,\delta)\right\}=\mathrm{var}\left\{E(Y\mid \mathbf{X})\right\}+E\left[\{P(\delta=1\mid\mathbf{X})\}^{-1}\,\mathrm{var}\left(Y \mid\mathbf{X}\right)\right],\]
which is equal to the lower bound of the semiparametric efficient estimator considered in Robins et al. (1994). The influence function in (27) can also be used to develop the linearized variance estimation formula for \(\hat{\theta}_{\mathrm{APSW}}\) regardless of whether the outcome regression model or the propensity score model holds.
In the next section, we address how to obtain the robust propensity score weighting estimator against model misspecification of the propensity score model and outliers in the outcome regression model.
## 4 Adding outlier-robustness
In many real applications, outliers exist in addition to missingness, and imposing robustness against outliers is an important practical problem. This section discusses how to allow robust inference against outliers in the outcome variable in the context of doubly robust estimation. In the presence of outliers, one may use a heavy-tailed distribution (e.g., the \(t\)-distribution) to allow robust inference (Lange et al., 1989). However, it is not straightforward to extend the indirect model calibration condition to the \(t\)-distribution.
Basu et al. (1998) introduced the density power divergence as a generalization of the Kullback-Leibler divergence, which expands the class of statistical models by introducing an additional scale parameter \(\gamma\). The density power divergence is also called the \(\gamma\)-power divergence (Eguchi, 2021). We can use the \(\gamma\)-power divergence to develop an outlier-robust imputation method. Specifically, we employ the following \(M\)-estimator to handle the misspecification of the propensity score model. Define
\[Q_{\gamma}(\mathbf{\beta}\mid\mathbf{\lambda})=\sum_{i=1}^{n}\delta_{i}(\hat{d_{i}}-1 )\Psi_{\gamma}\left(y_{i}-\mathbf{b}_{i}^{\mathrm{T}}\mathbf{\beta}\right)g_{i}(\mathbf{ \lambda}), \tag{31}\]
where \(\Psi_{\gamma}\) is an objective function that reduces the effect of outliers, and \(g_{i}=\exp\left(\mathbf{b}_{i}^{\mathrm{T}}\mathbf{\lambda}\right)\) is the adjustment term used to achieve self-efficiency. Thus, under a suitable choice of \(\Psi_{\gamma}\), the resulting estimator is not only doubly robust but also outlier-robust. Note that a Huber-type loss function could be used in (31), but it is computationally less attractive as the Huber loss function is non-smooth.
To incorporate the \(\gamma\)-divergence, we now consider the following minimization problem,
\[\operatorname*{arg\,min}_{(\mathbf{\beta},\sigma^{2})}Q_{\gamma}(\mathbf{\beta}; \sigma^{2}\mid\mathbf{\lambda})=-(2\pi\sigma^{2})^{-\frac{\gamma}{1+\gamma}}\sum_ {i=1}^{n}\delta_{i}(\hat{d_{i}}-1)\exp\left\{-\frac{\gamma}{2\sigma^{2}}(y_{i} -\mathbf{b}_{i}^{\mathrm{T}}\mathbf{\beta})^{2}\right\}g_{i}(\lambda). \tag{32}\]
That is, in (31), we use
\[\Psi_{\gamma}\left(x\right)=-(2\pi\sigma^{2})^{-\frac{\gamma}{1+\gamma}}\exp\left\{ -\frac{\gamma}{2\sigma^{2}}x^{2}\right\}. \tag{33}\]
As a result, the optimization problem (32) can be solved by the iteratively reweighted least squares method. Specifically, we use
\[\sum_{i=1}^{n}\delta_{i}\left(y_{i}-\mathbf{b}_{i}^{\mathrm{T}}\mathbf{ \beta}\right)(\hat{d}_{i}-1)g_{i}(\mathbf{\lambda})q_{\gamma,i}(\mathbf{\beta},\sigma^ {2})\mathbf{b}_{i} = \mathbf{0} \tag{34}\] \[\sum_{i=1}^{n}\delta_{i}\left\{\left(y_{i}-\mathbf{b}_{i}^{\mathrm{T} }\mathbf{\beta}\right)^{2}-\frac{\sigma^{2}}{1+\gamma}\right\}(\hat{d}_{i}-1)g_{i }(\mathbf{\lambda})q_{\gamma,i}(\mathbf{\beta},\sigma^{2}) = 0 \tag{35}\]
to estimate \(\mathbf{\beta}\) and \(\sigma^{2}\) for given \(\mathbf{\lambda}\), where
\[q_{\gamma,i}(\mathbf{\beta},\sigma^{2})=\exp\{-0.5\gamma\sigma^{-2}(y_{i}-\mathbf{b}_{i}^{\mathrm{T}}\mathbf{\beta})^{2}\}. \tag{36}\]
Note that during the iteratively reweighted least squares estimation procedure, if \(|y_{i}-\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\beta}}|\gg 0\), then the effect of the \(i\)-th subject on the estimation of \(\mathbf{\beta}\) is greatly mitigated, as \(\hat{q}_{\gamma,i}\) will be small; this indicates the robustness of our proposed method. The parameter \(\gamma>0\) plays the role of a tuning parameter for robust estimation. As the value of \(\gamma\) increases, the estimator becomes more robust but less efficient.
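For concreteness, the following is a minimal sketch (not the implementation used for the numerical results) of the iteratively reweighted least squares updates implied by (34)-(36). It treats the propensity weights \(\hat{d}_{i}\) and the adjustment terms \(g_{i}(\hat{\mathbf{\lambda}})\) as fixed inputs, whereas in the full procedure \(\hat{\mathbf{\lambda}}\) is determined jointly through the calibration constraint (40); the function name, starting values and stopping rule are our own choices.

```python
import numpy as np

def fit_gamma_regression(y, B, delta, d_hat, g_hat, gamma, n_iter=100, tol=1e-8):
    """Sketch of the iteratively reweighted least squares updates for (34)-(36).

    y: outcomes, B: rows b_i, delta: response indicators,
    d_hat: 1 / pi_0(x_i; phi_hat), g_hat: exp(b_i^T lambda_hat) held fixed,
    gamma: tuning parameter of the gamma-divergence.
    """
    resp = delta == 1
    yr, Br = y[resp], B[resp]
    w_base = (d_hat[resp] - 1.0) * g_hat[resp]      # delta_i (d_i - 1) g_i on the respondents
    beta = np.linalg.lstsq(Br, yr, rcond=None)[0]   # ordinary least squares starting value
    sigma2 = float(np.mean((yr - Br @ beta) ** 2))
    for _ in range(n_iter):
        resid = yr - Br @ beta
        q = np.exp(-0.5 * gamma * resid ** 2 / sigma2)      # eq. (36)
        w = w_base * q                                      # weights appearing in (34)-(35)
        BW = Br * w[:, None]
        beta_new = np.linalg.solve(Br.T @ BW, BW.T @ yr)    # solves (34) with q held fixed
        resid_new = yr - Br @ beta_new
        sigma2_new = (1.0 + gamma) * float(np.sum(w * resid_new ** 2) / np.sum(w))  # from (35)
        if np.max(np.abs(beta_new - beta)) < tol and abs(sigma2_new - sigma2) < tol:
            beta, sigma2 = beta_new, sigma2_new
            break
        beta, sigma2 = beta_new, sigma2_new
    return beta, sigma2
```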
Let \(\hat{\mathbf{\beta}}_{q}\) and \(\hat{\sigma}_{q}^{2}\) be the solution to the above estimating equations, (34) and (35). We can then easily construct a robust imputation with \(\hat{y}_{i}=\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\beta}}_{q}\). Furthermore, we can use the robust imputation to construct robust propensity score weights with calibration constraints. Specifically, by (34), we can obtain
\[\sum_{i=1}^{n}\delta_{i}\left(y_{i}-\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\beta}}_{q }\right)\mathbf{b}_{i}(\hat{d}_{i}-1)g_{i}(\hat{\mathbf{\lambda}})\hat{q}_{\gamma,i}= \mathbf{0}, \tag{37}\]
which establishes
\[\sum_{i=1}^{n}\left\{\delta_{i}y_{i}+(1-\delta_{i})\hat{y}_{i}\right\}=\sum_{ i=1}^{n}\delta_{i}\hat{\omega}_{\gamma,i}y_{i}, \tag{38}\]
where \(\hat{y}_{i}=\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\beta}}_{q}\) and
\[\omega_{\gamma,i}(\mathbf{x}_{i};\mathbf{\lambda},\mathbf{\beta},\sigma^{2},\hat{\mathbf{ \phi}})=1+(d_{i}(\hat{\mathbf{\phi}})-1)g_{i}(\mathbf{\lambda})\{q_{\gamma,i}(\mathbf{ \beta},\sigma^{2})\}. \tag{39}\]
Note that (38) expresses the self-efficiency of the propensity score weights in (39). The role of \(g_{i}(\hat{\mathbf{\lambda}})\) is to guarantee the self-efficiency in (38), and \(\hat{\omega}_{\gamma,i}\geq 1\) holds by construction. The final PS weights satisfy
\[\sum_{i=1}^{n}\delta_{i}\hat{\omega}_{\gamma,i}\mathbf{b}_{i}=\sum_{i=1}^{n}\mathbf{b}_{ i}. \tag{40}\]
Thus, for given \(\hat{q}_{\gamma,i}=q_{\gamma,i}(\hat{\mathbf{\beta}}_{q},\hat{\sigma}_{q}^{2})\), we can compute \(\hat{\mathbf{\lambda}}\) from calibration constraints in (40). That is, we can treat (34), (35), and (40) as the estimating equations for \(\mathbf{\beta}\), \(\sigma^{2}\) and \(\mathbf{\lambda}\). Note that (39) is equivalent to (15) for \(q_{\gamma,i}=1\). The additional factor \(q_{\gamma,i}(\mathbf{\beta},\sigma^{2})\) controls the effect of outliers in the final weighting.
Let \(\mathbf{\beta}^{*}\) and \(\sigma^{*2}\) be the probability limits of \(\hat{\mathbf{\beta}}_{q}\) and \(\hat{\sigma}_{q}^{2}\), respectively. The following theorem presents the Taylor linearization for our proposed estimator in (38).
**Theorem 4.1**.: Let \(\hat{\theta}_{\mathrm{APSW},\gamma}=n^{-1}\sum_{i=1}^{n}\delta_{i}\hat{\omega }_{\gamma,i}y_{i}\) be the propensity score weighting estimator under the augmented propensity score model in (14) with \(\hat{\mathbf{\phi}}\) and \(\hat{\mathbf{\lambda}}\) satisfying (22) and (37), \(\hat{\mathbf{\beta}}_{q}\) and \(\hat{\sigma}_{q}^{2}\) minimizing (32) respectively. Under the regularity conditions described in the Appendix, we have
\[\hat{\theta}_{\mathrm{APSW},\gamma}-\theta=\frac{1}{n}\sum_{i=1}^{n}\eta_{ \gamma}(\mathbf{x}_{i},y_{i},\delta_{i})+o_{p}(n^{-1/2}), \tag{41}\]
where
\[\eta_{\gamma}(\mathbf{x}_{i},y_{i},\delta_{i})\] \[=\mathbf{b}_{i}^{\mathrm{T}}\mathbf{\mu}-\theta+\delta_{i}\omega_{\gamma,i }(\mathbf{x}_{i};\mathbf{\lambda}^{*},\mathbf{\beta}^{*},\sigma^{*2},\mathbf{\phi}^{*})(y_{i} -\mathbf{b}_{i}^{\mathrm{T}}\mathbf{\mu}^{*})+\left\{1-\delta_{i}d_{i}(\mathbf{\phi}^{*}) \right\}\mathbf{\kappa}^{*\mathrm{T}}\mathbf{h}(\mathbf{x}_{i};\mathbf{\phi}^{*})\] \[\quad+\delta_{i}\{\omega_{\gamma,i}(\mathbf{x}_{i};\mathbf{\lambda}^{*}, \mathbf{\beta}^{*},\sigma^{*2},\mathbf{\phi}^{*})-1\}(y_{i}-\mathbf{b}_{i}^{\mathrm{T}}\bm {\beta}^{*})\mathbf{b}_{i}^{\mathrm{T}}\mathbf{\zeta}^{*}\] \[\quad+\delta_{i}\{\omega_{\gamma,i}(\mathbf{x}_{i};\mathbf{\lambda}^{*}, \mathbf{\beta}^{*},\sigma^{*2},\mathbf{\phi}^{*})-1\}(y_{i}-\mathbf{b}_{i}^{\mathrm{T}}\bm {\beta}^{*})^{2}\nu^{*}\left\{(y_{i}-\mathbf{b}_{i}^{\mathrm{T}}\mathbf{\beta}^{*})^{2 }-\sigma^{*2}/(1+\gamma)\right\}, \tag{42}\]
where \(\mathbf{\mu}^{*}\), \(\mathbf{\kappa}^{*}\), \(\nu^{*}\), \(\mathbf{\zeta}^{*}\) are the probability limits of the solutions for the equations presented in the Appendix.
Note that the first line of (42) has the same structure as (27) in Theorem 3.1. The other two lines are additional influence-function terms that reflect the effect of the \(\gamma\)-divergence model. Specifically, the second line reflects the estimation effect of \(\hat{\mathbf{\beta}}_{q}\), and the third line reflects the estimation effect of \(\hat{\sigma}_{q}^{2}\).
Regarding the choice of the tuning parameter \(\gamma\), we can use cross-validation to choose the optimal \(\gamma\). Specifically, we can randomly partition the sample into \(K\) groups \((S_{1},\cdots,S_{K})\) and then compute
\[\text{MSPE}(\gamma)=\sum_{k=1}^{K}\sum_{i\in S_{k}}\delta_{i}\left\{y_{i}-\hat{y }_{i}^{(-k)}(\gamma)\right\}^{2}\]
where \(\hat{y}_{i}^{(-k)}(\gamma)=\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\beta}}_{q}^{(-k)}\) and \(\hat{\mathbf{\beta}}_{q}^{(-k)}\) is obtained by solving the same estimating equations using the sample in \(S_{k}^{c}\). The minimizer of \(\text{MSPE}(\gamma)\) can be used as the optimal value of the tuning parameter. As shown in our simulation study in Section 6, the choice of \(\gamma\) is not critical: the simulation results show similar performances for different values of \(\gamma\).
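A minimal sketch of this cross-validation rule is given below; it relies on the hypothetical helper fit_gamma_regression from the earlier sketch and is only meant to illustrate the computation of \(\text{MSPE}(\gamma)\).

```python
import numpy as np

def select_gamma(y, B, delta, d_hat, g_hat, gammas, K=5, seed=0):
    """Sketch of the K-fold cross-validation criterion MSPE(gamma) described above."""
    rng = np.random.default_rng(seed)
    fold = rng.integers(0, K, size=len(y))       # random partition S_1, ..., S_K
    mspe = []
    for gamma in gammas:
        err = 0.0
        for k in range(K):
            train = fold != k
            # fit_gamma_regression is the hypothetical helper from the earlier sketch
            beta_k, _ = fit_gamma_regression(y[train], B[train], delta[train],
                                             d_hat[train], g_hat[train], gamma)
            test = (fold == k) & (delta == 1)    # only observed outcomes enter the MSPE
            err += float(np.sum((y[test] - B[test] @ beta_k) ** 2))
        mspe.append(err)
    return gammas[int(np.argmin(mspe))]
```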
For variance estimation, we can use the influence function in (42) to construct a linearization variance estimator. Specifically, the linearization variance estimator for \(\hat{\theta}_{\text{APSW},\gamma}\) for a given \(\gamma\) is
\[\widehat{\mathrm{V}}(\hat{\theta}_{\text{APSW},\gamma})=\frac{1}{n(n-1)}\sum_{ i=1}^{n}(\hat{\eta}_{\gamma}(\mathbf{x}_{i},y_{i},\delta_{i})-\bar{\eta}_{\gamma})^{2}, \tag{43}\]
where
\[\hat{\eta}_{\gamma}(\mathbf{x}_{i},y_{i},\delta_{i})\] \[=\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\mu}}+\delta_{i}\omega_{\gamma,i }(\mathbf{x}_{i};\hat{\mathbf{\lambda}},\hat{\mathbf{\beta}}_{q},\hat{\sigma}_{q}^{2},\hat {\mathbf{\phi}})(y_{i}-\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\mu}})+\left\{1-\delta_{i} d_{i}(\hat{\mathbf{\phi}})\right\}\hat{\mathbf{\kappa}}^{\mathrm{T}}\mathbf{h}(\mathbf{x}_{i}; \hat{\mathbf{\phi}})\] \[\quad+\delta_{i}\{\omega_{\gamma,i}(\mathbf{x}_{i};\hat{\mathbf{\lambda} },\hat{\mathbf{\beta}}_{q},\hat{\sigma}_{q}^{2},\hat{\mathbf{\phi}})-1\}(y_{i}-\mathbf{b}_ {i}^{\mathrm{T}}\hat{\mathbf{\beta}}_{q})\mathbf{b}_{i}^{\mathrm{T}}\hat{\mathbf{\zeta}}\] \[\quad+\delta_{i}\{\omega_{\gamma,i}(\mathbf{x}_{i};\hat{\mathbf{\lambda} },\hat{\mathbf{\beta}}_{q},\hat{\sigma}_{q}^{2},\hat{\mathbf{\phi}})-1\}(y_{i}-\mathbf{b}_ {i}^{\mathrm{T}}\hat{\mathbf{\beta}}_{q})^{2}\hat{\nu}\left\{(y_{i}-\mathbf{b}_{i}^{ \mathrm{T}}\hat{\mathbf{\beta}}_{q})^{2}-\hat{\sigma}_{q}^{2}/(1+\gamma)\right\}, \tag{44}\]
and
\[\bar{\eta}_{\gamma}=\frac{1}{n}\sum_{i=1}^{n}\hat{\eta}_{\gamma}(\mathbf{x}_{i},y _{i},\delta_{i}).\]
Note that \(\hat{\mathbf{\beta}}_{q}\), \(\hat{\sigma}_{q}^{2}\) and \(\hat{\mathbf{\lambda}}\) are estimated from (34), (35) and (40), while \(\hat{\mathbf{\mu}}\), \(\hat{\mathbf{\kappa}}\), \(\hat{\nu}\) and \(\hat{\mathbf{\zeta}}\) can be estimated from the procedure described in the Appendix.
## 5 Extension to causal inference
We now consider an extension of the proposed method to causal inference in observational studies. Let \(T\) be a binary treatment indicator. We define \(Y(1)\) and \(Y(0)\) as potential outcomes when an individual is assigned to the treatment group or the control group, respectively. We are interested in estimating the population average treatment effect, \(\tau=E\{Y(1)\}-E\{Y(0)\}\). The potential outcome \(Y(1)\) is observed only when \(T=1\), and \(Y(0)\) is observed only when \(T=0\). Instead of observing \(\{Y(1),Y(0)\}\), we only observe \(Y=T\cdot Y(1)+(1-T)\cdot Y(0)\) and \(T\). In addition to \((Y,T)\), we assume that a vector of covariates \(\mathbf{X}\) is observed throughout the sample. We assume that \(\{(T_{i},Y_{i}(1),Y_{i}(0),\mathbf{X}_{i}),i=1,\ldots,n\}\) are independent and identically distributed. We assume that the treatment assignment mechanism is unconfounded in the sense that
\[T\perp(Y(1),Y(0))\mid\mathbf{X}.\]
The above unconfoundedness assumption is essentially equivalent to the MAR assumption in the missing data context. In addition, we assume that \(P(T=1\mid\mathbf{X}=\mathbf{x})\in(0,1)\) for all \(\mathbf{x}\) in the support of \(\mathbf{X}\).
Under the above identification assumptions, we can apply the proposed triple-robust estimation method to estimate \(\tau\) from the observational data. Specifically, we first use a working propensity score model for the treatment assignment mechanism, given by
\[P\left(T=1\mid\mathbf{x}\right):=\pi(\mathbf{x};\mathbf{\phi})\in(0,1),\]
where \(\mathbf{\phi}\) is an unknown parameter. Once the maximum likelihood estimator \(\hat{\mathbf{\phi}}\) of \(\mathbf{\phi}\) is obtained, we can compute \(\hat{\pi}_{i}=\pi(\mathbf{x}_{i};\hat{\mathbf{\phi}})\). After that, the final propensity weights for the treatment group are constructed by
\[\hat{\omega}_{1i}=1+\left(\hat{d}_{1i}-1\right)\exp(\mathbf{b}_{i}^{\rm T}\hat{ \mathbf{\lambda}}_{1})q_{\gamma,i}(\hat{\mathbf{\beta}}_{1},\hat{\sigma}_{1}^{\,2}), \tag{45}\]
where \(\hat{d}_{1i}=\hat{\pi}_{i}^{-1}\) and \(\hat{q}_{\gamma,i}(\mathbf{\beta}_{1},\sigma_{1}^{2})=\exp\{-0.5\gamma\sigma_{1}^{ -2}(y_{i}-\mathbf{b}_{i}^{\rm T}\mathbf{\beta}_{1})^{2}\}\). In the final weights in (45), \(\hat{\mathbf{\lambda}}_{1}\) is chosen to satisfy
\[\sum_{i=1}^{n}T_{i}\hat{\omega}_{1i}\mathbf{b}_{i}=\sum_{i=1}^{n}\mathbf{b}_{i}\]
and \((\hat{\mathbf{\beta}}_{1},\hat{\sigma}_{1}^{2})\) solves
\[\sum_{i=1}^{n}T_{i}\left(y_{i}-\mathbf{b}_{i}^{\rm T}\mathbf{\beta}_{1} \right)(\hat{d}_{1i}-1)g_{i}(\mathbf{\lambda})q_{\gamma,i}(\mathbf{\beta}_{1},\sigma_{1 }^{2})\mathbf{b}_{i} = \mathbf{0}\] \[\sum_{i=1}^{n}T_{i}\left\{\left(y_{i}-\mathbf{b}_{i}^{\rm T}\mathbf{\beta} _{1}\right)^{2}-\frac{\sigma_{1}^{2}}{1+\gamma}\right\}(\hat{d}_{1i}-1)g_{i}( \mathbf{\lambda})q_{\gamma,i}(\mathbf{\beta}_{1},\sigma_{1}^{2}) = 0.\]
Similarly, the final propensity weights for the control group are constructed by
\[\hat{\omega}_{0i}=1+\left(\hat{d}_{0i}-1\right)\exp(\mathbf{b}_{i}^{\rm T}\hat{\mathbf{\lambda}}_{0})q_{\gamma,i}(\hat{\mathbf{\beta}}_{0},\hat{\sigma}_{0}^{2}), \tag{46}\]
where \(\hat{d}_{0i}=(1-\hat{\pi}_{i})^{-1}\), \(\hat{\mathbf{\lambda}}_{0}\) is chosen to satisfy
\[\sum_{i=1}^{n}(1-T_{i})\hat{\omega}_{0i}\mathbf{b}_{i}=\sum_{i=1}^{n} \mathbf{b}_{i}\]
and \((\hat{\mathbf{\beta}}_{0},\hat{\sigma}_{0}^{2})\) satisfies
\[\sum_{i=1}^{n}(1-T_{i})\left(y_{i}-\mathbf{b}_{i}^{\rm T}\mathbf{\beta}_{0}\right)(\hat{d}_{0i}-1)g_{i}(\mathbf{\lambda})q_{\gamma,i}(\mathbf{\beta}_{0},\sigma_{0}^{2})\mathbf{b}_{i} = \mathbf{0}\] \[\sum_{i=1}^{n}(1-T_{i})\left\{\left(y_{i}-\mathbf{b}_{i}^{\rm T}\mathbf{\beta}_{0}\right)^{2}-\frac{\sigma_{0}^{2}}{1+\gamma}\right\}(\hat{d}_{0i}-1)g_{i}(\mathbf{\lambda})q_{\gamma,i}(\mathbf{\beta}_{0},\sigma_{0}^{2}) = 0.\]
Therefore, we can use
\[\hat{\tau}=\frac{1}{n}\sum_{i=1}^{n}T_{i}\hat{\omega}_{1i}y_{i}- \frac{1}{n}\sum_{i=1}^{n}(1-T_{i})\hat{\omega}_{0i}y_{i} \tag{47}\]
to estimate \(\tau\), where \(\hat{\omega}_{1i}\) and \(\hat{\omega}_{0i}\) are defined in (45) and (46), respectively. The asymptotic properties of the proposed estimator can be obtained using the same argument as in Section 4. The details are omitted here for brevity.
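The following brief sketch shows how the two sets of weights are combined into (47); the routine fit_arm_weights is a hypothetical placeholder for a solver of the arm-specific estimating equations above, and the code is only illustrative.

```python
import numpy as np

def estimate_ate(y, T, B, pi_hat, gamma, fit_arm_weights):
    """Sketch of the ATE construction in (45)-(47).

    fit_arm_weights is a hypothetical routine that, given an arm indicator,
    the corresponding inverse propensities and the basis B, returns weights of
    the form (45)/(46) satisfying the calibration constraint for that arm.
    """
    d1 = 1.0 / pi_hat              # d_{1i} = 1 / pi(x_i; phi_hat)
    d0 = 1.0 / (1.0 - pi_hat)      # d_{0i} = 1 / (1 - pi(x_i; phi_hat))
    w1 = fit_arm_weights(y, T, B, d1, gamma)        # weights (45) for the treated
    w0 = fit_arm_weights(y, 1 - T, B, d0, gamma)    # weights (46) for the controls
    n = len(y)
    return np.sum(T * w1 * y) / n - np.sum((1 - T) * w0 * y) / n   # eq. (47)
```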
## 6 Empirical studies
### Simulation study
We examine the performance of the proposed methods under various propensity score models. Let \(\mathbf{x}=(x_{1},x_{2})\), where \(X_{1}\) follows the uniform distribution on \([0,2]\) and \(X_{2}\) follows the standard normal distribution \(N(0,1)\). The simulation experiment can be described as
a \(2\times 2\) factorial design with two factors. The sample size is \(1,000\) and the Monte Carlo sample size is \(1,000\). The first factor is the outcome regression model (OM1, OM2) and the second factor is the propensity score model (PM1, PM2).
The models for generating \(Y\) are described as follows: OM1, where \(Y\mid\mathbf{x}\) follows a normal distribution with mean \(E(Y\mid\mathbf{x})=1+x_{1}+x_{2}\) and variance 1; OM2, where \(Y\mid\mathbf{x}\) follows a normal distribution with \(E(Y\mid\mathbf{x})=1+x_{1}+x_{2}+(x_{1}-0.5)x_{2}^{4}\) and variance 1. For calibration, we use \(b(x)=(1,x_{1},x_{2})\) as the calibration variable. Thus, the implicit model calibration using \(b(x)=(1,x_{1},x_{2})\) is justified under the OM1 outcome model, but it is incorrectly specified under the OM2 outcome model. In addition, the models for generating \(\delta\) can be described as follows: PM1, where \(\delta\mid\mathbf{x}\) follows Bernoulli distribution with \(\text{logit}\{P(\delta=1\mid\mathbf{x})\}=\phi_{0}+0.5x_{1}+0.5x_{2}\), where \(\phi_{0}\) is chosen to achieve 60% response rates; PM2, where \(\delta\mid\mathbf{x}\) follows the Bernoulli distribution with \(P(\delta=1\mid\mathbf{x})=0.8\) if \(a+x_{1}+x_{2}>0\), and \(P(\delta=1\mid\mathbf{x})=0.4\) otherwise, where \(a\) is chosen to achieve 60% response rates. Further, we use
\[\text{logit}\left\{\pi_{0}(\mathbf{x};\mathbf{\phi})\right\}=\phi_{0}+\phi_{1}x_{1}+ \phi_{2}x_{2}. \tag{48}\]
as the working model for the propensity score function, where \(\text{logit}(x)=\log\{x/(1-x)\}\). Thus, the working propensity score model is correctly specified under PM1 only.
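For concreteness, the data-generating mechanisms above can be simulated as in the following sketch (our own illustration, not the code used for the reported results); the helper tune_intercept is our own device standing in for the statement that \(\phi_{0}\) and \(a\) are chosen to achieve a 60% response rate.

```python
import numpy as np

rng = np.random.default_rng(1)

def tune_intercept(prob_fn, target=0.6, lo=-10.0, hi=10.0):
    # bisection on the mean response probability; stands in for the intercept
    # being "chosen to achieve 60% response rates"
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if np.mean(prob_fn(mid)) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def simulate(n=1000, outcome="OM1", response="PM1"):
    x1 = rng.uniform(0, 2, n)
    x2 = rng.normal(0, 1, n)
    mean_y = 1 + x1 + x2
    if outcome == "OM2":
        mean_y = mean_y + (x1 - 0.5) * x2 ** 4
    y = mean_y + rng.normal(0, 1, n)                 # unit error variance
    if response == "PM1":
        lin = 0.5 * x1 + 0.5 * x2
        phi0 = tune_intercept(lambda c: 1 / (1 + np.exp(-(c + lin))))
        p = 1 / (1 + np.exp(-(phi0 + lin)))
    else:  # PM2
        a = tune_intercept(lambda c: np.where(c + x1 + x2 > 0, 0.8, 0.4))
        p = np.where(a + x1 + x2 > 0, 0.8, 0.4)
    delta = rng.binomial(1, p)
    return np.column_stack([np.ones(n), x1, x2]), y, delta   # calibration basis b(x), y, delta
```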
For each sample, we compute the following propensity score estimators.
1. _Mean of the complete cases_ (CC): \(\hat{\theta}_{\text{CC}}=n_{1}^{-1}\sum_{i=1}^{n}\delta_{i}y_{i}\), that is, mean of observed \(y_{i}\)'s, where \(n_{1}=\sum_{i=1}^{n}\delta_{i}\).
2. _Maximum likelihood estimation_ (MLE): the classical propensity score estimator \(\hat{\theta}_{\text{PSW}}=n^{-1}\sum_{i=1}^{n}\delta_{i}\hat{d}_{i}y_{i}\) using \(\hat{d}_{i}=1/\pi_{0}(\mathbf{x}_{i};\hat{\mathbf{\phi}})\) as the estimated propensity score weight, where \(\hat{\mathbf{\phi}}\) is the maximum likelihood estimate of \(\mathbf{\phi}\) from the working propensity score model in (48).
3. _Entropy balancing method in Hainmueller (2012)_: \(\hat{\theta}_{\text{HM}}=n^{-1}\sum_{i=1}^{n}\delta_{i}\hat{\omega}_{i}y_{i}\), where \(\hat{\omega}_{i}\) is obtained from (7).
4. _Regularized calibration method in Tan (2019)_: \(\hat{\theta}_{\text{Tan}}=n^{-1}\sum_{i=1}^{n}\delta_{i}\pi_{0}^{-1}(\mathbf{x}_{i};\hat{\mathbf{\phi}})y_{i}\), where the logistic regression coefficients \(\hat{\mathbf{\phi}}\) are obtained by minimizing \(\ell_{\text{CAL}}(\phi)=\sum_{i=1}^{n}[\delta_{i}\exp\{-\mathbf{\phi}^{\text{T}}f(\mathbf{x}_{i})\}-(1-\delta_{i})\mathbf{\phi}^{\text{T}}f(\mathbf{x}_{i})]\), where we take the identity mapping for \(f(\cdot)\) in this simulation.
5. _Augmented propensity score weighting estimator_ (APS): \(\hat{\theta}_{\text{APSW}}=n^{-1}\sum_{i=1}^{n}\delta_{i}\hat{\omega}_{i}^{*}y_{i}\) using \(\hat{\omega}_{i}^{*}\) in (15).
6. _Augmented propensity score weighting estimator with \(\gamma\)-divergence_ (APS\({}_{\gamma}\)): the augmented propensity score weighting estimator in (38) using \(\gamma=0.3,0.5,0.7,1\).
Figure 1 shows the results of the simulation study above, where the dashed line represents the true parameter \(\theta=E(Y)\). Table 1 presents the corresponding bias, standard error, and root mean square error for each method. Since the consistency of all estimators is guaranteed under OM1PM1, we can see that all estimators give similar performances
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Criteria} & \multicolumn{8}{c}{Method} \\ \cline{3-11} & & CC & MLE & APS & APS\({}_{0.3}\) & APS\({}_{0.5}\) & APS\({}_{0.7}\) & APS\({}_{1}\) & HM & Tan \\ \hline \multirow{4}{*}{OM1PM1} & Bias & 2.45 & -0.01 & -0.01 & -0.01 & -0.01 & -0.01 & -0.01 & -0.01 & -0.01 \\ & S.E. & 0.61 & 0.56 & 0.55 & 0.56 & 0.56 & 0.57 & 0.58 & 0.55 & 0.55 \\ & RMSE & 2.53 & 0.56 & 0.55 & 0.56 & 0.56 & 0.57 & 0.58 & 0.55 & 0.55 \\ \cline{2-11} & Bias & 4.03 & 0.06 & 0.10 & 0.68 & 0.72 & 0.74 & 0.76 & 0.13 & 0.11 \\ & S.E. & 2.54 & 3.10 & 2.97 & 1.66 & 1.65 & 1.65 & 1.66 & 2.44 & 2.95 \\ & RMSE & 4.77 & 3.10 & 2.97 & 1.79 & 1.80 & 1.81 & 1.82 & 2.45 & 2.95 \\ \cline{2-11} & Bias & 3.07 & -0.29 & -0.00 & -0.00 & -0.00 & -0.00 & -0.00 & -0.00 & -0.00 \\ \cline{2-11} & S.E. & 0.61 & 0.59 & 0.55 & 0.56 & 0.56 & 0.57 & 0.58 & 0.55 & 0.55 \\ \cline{2-11} & RMSE & 3.13 & 0.66 & 0.55 & 0.56 & 0.56 & 0.57 & 0.58 & 0.55 & 0.55 \\ \cline{2-11} & Bias & 3.14 & -4.67 & -3.22 & -0.78 & -0.63 & -0.56 & -0.51 & -2.45 & -3.16 \\ \cline{2-11} & S.E. & 2.59 & 6.45 & 4.42 & 1.73 & 1.71 & 1.71 & 1.70 & 3.25 & 4.30 \\ \cline{2-11} & RMSE & 4.07 & 7.96 & 5.47 & 1.89 & 1.82 & 1.79 & 1.78 & 4.07 & 5.33 \\ \hline \end{tabular}
\end{table}
Table 1: Bias, standard error (S.E.), root mean square error (RMSE) for estimators comparison with sample size 1000 and Monte Carlo sample 1000. All criteria are multiplied by 10. The errors from outcome models are generated from _i.i.d._ standard Gaussian distribution. CC: mean of the complete cases; MLE: maximum likelihood estimation; APS: augmented propensity score weighting estimator; APS\({}_{\gamma}\): augmented propensity score weighting with \(\gamma\)-divergence estimator, for \(\gamma=0.3,0.5,0.7,1\); HM: entropy balancing method in Hainmueller (2012); Tan: regularized calibration method in Tan (2019)
except for the mean of the complete cases method. However, under OM1PM2, the consistency of the generalized linear model method is no longer guaranteed, while that of other estimators is still valid. In OM1PM2, the covariate balancing condition becomes critical, and so all methods satisfying the covariate balancing will be unbiased. In OM2PM1, the proposed augmented PSW estimator is still consistent, as the propensity score model is
Figure 1: Boxplots for estimators comparison with sample size 1000 and Monte Carlo sample 1000. The errors from outcome models are generated from _i.i.d._ standard Gaussian distribution. CC: mean of the complete cases; MLE: maximum likelihood estimation; APS: augmented propensity score weighting estimator; APS\({}_{\gamma}\): augmented propensity score weighting with \(\gamma\)-divergence estimator, for \(\gamma=0.3,0.5,0.7,1\); HM: entropy balancing method in Hainmueller (2012); Tan: regularized calibration method in Tan (2019).
correctly specified. Our proposed augmented PSW estimator with \(\gamma\)-divergence is biased in this scenario due to the misspecification of the outcome model, whereas the other estimators are constructed under the correctly specified response model. In OM2PM2, all estimators present some deviations from the true mean of the \(y_{i}\)'s. However, our proposed augmented PSW estimator with \(\gamma\)-divergence is more robust than the other estimators, with fewer outliers and the smallest root mean square error.
We also investigated the performance of the proposed linearization variance estimator in (43) by computing confidence intervals based on asymptotic normality. The results are presented in Table 3. The performance is satisfactory when the outcome regression model is correctly specified.
In addition, we check the robustness of the augmented propensity score weighting estimator with \(\gamma\)-divergence against outliers. After the data are generated under the same \(2\times 2\) factorial design as in the previous simulation setting, additional noise generated from \(Unif(-50,50)\) is added to \(20\%\) of the observed outcomes. Again, the sample size is \(1,000\) and the Monte Carlo sample size is \(1,000\).
Figure 2 shows the performance of various estimators, where the dashed line represents the true mean of \(y_{i}^{\prime}s\). Table 2 presents the corresponding bias, standard error, and root
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Criteria} & \multicolumn{8}{c}{Method} \\ \cline{3-11} & & CC & MLE & APS & APS\({}_{0.3}\) & APS\({}_{0.5}\) & APS\({}_{0.7}\) & APS\({}_{1}\) & HM & Tan \\ \hline \multirow{4}{*}{OM1PM1} & Bias & 2.28 & -0.21 & -0.21 & -0.12 & -0.12 & -0.12 & -0.12 & -0.21 & -0.19 \\ & S.E. & 5.12 & 5.31 & 5.31 & 3.13 & 3.13 & 3.13 & 3.13 & 5.27 & 5.31 \\ & RMSE & 5.60 & 5.31 & 5.31 & 3.13 & 3.13 & 3.13 & 3.13 & 5.27 & 5.31 \\ \cline{2-11} & Bias & 3.86 & -0.14 & -0.10 & 0.57 & 0.62 & 0.64 & 0.66 & -0.07 & -0.07 \\ \multirow{2}{*}{OM2PM1} & S.E. & 5.63 & 6.03 & 5.96 & 3.46 & 3.46 & 3.46 & 3.47 & 5.70 & 5.96 \\ & RMSE & 6.82 & 6.03 & 5.96 & 3.51 & 3.52 & 3.52 & 3.53 & 5.70 & 5.95 \\ \cline{2-11} & Bias & 2.92 & -0.51 & -0.21 & -0.09 & -0.09 & -0.09 & -0.09 & -0.20 & -0.18 \\ \multirow{2}{*}{OM1PM2} & S.E. & 5.17 & 5.70 & 5.51 & 3.13 & 3.13 & 3.13 & 3.13 & 5.43 & 5.50 \\ & RMSE & 5.93 & 5.72 & 5.51 & 3.13 & 3.13 & 3.13 & 3.13 & 5.43 & 5.50 \\ \multirow{2}{*}{OM2PM2} & Bias & 3.00 & -4.89 & -3.42 & -0.79 & -0.71 & -0.65 & -0.59 & -2.65 & -3.35 \\ & S.E. & 5.69 & 8.27 & 6.81 & 3.42 & 3.48 & 3.49 & 3.48 & 6.14 & 6.75 \\ \multirow{2}{*}{OM2PM2} & RMSE & 6.43 & 9.61 & 7.62 & 3.50 & 3.55 & 3.55 & 3.53 & 6.68 & 7.54 \\ \hline \end{tabular}
\end{table}
Table 2: Bias, standard error (S.E.), root mean square error (RMSE) for estimators comparison with sample size 1000 and Monte Carlo sample 1000; 20% of the samples are contaminated with the additive noise from \(Unif(-50,50)\). The errors from outcome models are generated from _i.i.d._ standard Gaussian distribution. All criteria are multiplied by 10. CC: mean of the complete cases; MLE: maximum likelihood estimation; APS: augmented propensity score weighting estimator; APS\({}_{\gamma}\): augmented propensity score weighting with \(\gamma\)-divergence estimator, for \(\gamma=0.3,0.5,0.7,1\); HM: entropy balancing method in Hainmueller (2012); Tan: regularized calibration method in Tan (2019)
mean square error for each method. Compared to the other estimators, in all four scenarios, our proposed augmented propensity score weighting estimator with \(\gamma\)-divergence gives the smallest root mean square error, which validates our robustness claim in Section 4.
### Real data application
We further check our proposed estimators for artificial missingness with real data from the California API program ([http://api.cde.ca.gov/](http://api.cde.ca.gov/) or survey package (Lumley, 2010) in R (R Core Team, 2022)). Within the data set, standardized student tests are performed to calculate the API for California schools. Specifically, we select the API for year 2000 (api00) as the response variable. For covariates, we set the API for year 1999 (api99) as \(x_{1}\), the percentage of students eligible for subsidized meals (meals) as \(x_{2}\), the percentage of English language learners (ell) as \(x_{3}\), the average level of parental education (avg) as \(x_{4}\), the percentage of fully qualified teachers (full) as \(x_{5}\), and the number of students enrolled (enroll) as \(x_{6}\), where the words in parentheses are the abbreviations for variable names in the dataset. We artificially created the missingness with the following two response mechanisms: PM3, where \(\delta\mid\mathbf{x}\) follows the Bernoulli distribution with \(\text{logit}\{P(\delta=1\mid\mathbf{x})\}=\phi_{0}+2x_{1}+x_{2}+0.5x_{3}\), and \(\phi_{0}\) is chosen to achieve the 60% response rates; PM4, where \(\delta\mid\mathbf{x}\) follows the Bernoulli distribution with \(P(\delta=1\mid\mathbf{x})=0.8\), if \(a+2x_{1}+x_{2}+x_{3}+x_{4}+x_{5}+x_{6}>0\), \(P(\delta=1\mid\mathbf{x})=0.4\) otherwise, and \(a\) is chosen to achieve the response rates 60%.
We compare the same estimators as in previous subsections with 1000 Monte Carlo
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Coverage rate} & \multicolumn{5}{c}{Method} \\ \cline{3-6} & & APS\({}_{0.3}\) & APS\({}_{0.5}\) & APS\({}_{0.7}\) & APS\({}_{1.0}\) \\ \hline \multirow{2}{*}{OM1RM1} & 90\% & 89.9 \% & 89.8 \% & 90.1 \% & 89.7 \% \\ & 95\% & 95.1 \% & 94.5 \% & 94.3 \% & 94.3 \% \\ \cline{2-6} & 90\% & 90.1 \% & 90.3 \% & 90.4 \% & 90.4 \% \\ \cline{2-6} & 95\% & 95.2 \% & 95.0 \% & 94.7 \% & 94.6 \% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Linearized variance estimation for gamma divergence method where the errors are _i.i.d._ from standard Gaussian distribution
samples. In particular, the mean of the complete cases method does not behave well in either case and would distort the scale of the resulting box plots, so we do not show its performance. Furthermore, we adopt the MSPE as the 5-fold cross-validation criterion for selecting \(\gamma\), as described in Section 4. For each Monte Carlo sample, we select the best \(\gamma\) from \(\{0.1,0.2,\ldots,1\}\) and denote the corresponding estimator by \(\text{APS}_{\gamma}\). The results are summarized with box plots in Figure 3, where the dashed line denotes the true population mean. As we can see, the estimator \(\text{APS}_{\gamma}\) gives essentially unbiased estimates with a relatively low root mean square error, outperforming the other estimators.
## 7 Application to CEAP data
The Conservation Effects Assessment Project (CEAP) is a program initiated by the United States Department of Agriculture (USDA) Natural Resources Conservation Service (NRCS). CEAP collects and analyzes data from various sources, including field studies, monitoring sites, and modeling, to evaluate the impact of water and wind erosion. Further
Figure 3: Estimators comparison in real data with 1000 Monte Carlo sample, with artificially missingness mechanism PM3 and PM4. APS: augmented propensity score weighting estimator; \(\text{APS}_{\gamma}\): augmented propensity score weighting with \(\gamma\)-divergence estimator, where \(\gamma\) is chosen by the cross-validation method based on MSPE criterion; MLE: maximum likelihood estimation; HM: entropy balancing method in Hainmueller (2012); Tan: regularized calibration method in Tan (2019).
details of the CEAP data can be found in Berg and Yu (2019). The farmer interview data, together with the NRI data, serve as input to the revised Universal Soil Loss Equation (RUSLE2) to generate a measure of sheet and rill erosion. RUSLE2 is an advancement of a traditional approximation called the Universal Soil Loss Equation (USLE). For the CEAP sample, RUSLE2 suffers from missingness because some interviewed farmers refuse to respond, while the corresponding USLE is available.
We are interested in estimating the population mean of RUSLE2 in Arkansas, using USLE as an auxiliary variable for calibration weighting. Specifically, there are 1,509 sampled points in total, of which 406 are fully observed, giving an observation rate of 26.9%. The normal quantile-quantile (Q-Q) plot generated from a linear model fitted to the complete cases is presented in Sub-figure (a) of Figure 4. The Q-Q plot reveals numerous outlying data points, and the residuals clearly deviate from the normality assumption. Therefore, in this scenario, we have strong reasons to place greater trust in our proposed robust approach. The associated mean estimation results are presented in Sub-figure (b) of Figure 4, where our proposed method using \(\gamma\)-divergence exhibits a narrow 95% confidence interval and yields a low RUSLE2 estimate.
Figure 4: Quantile-quantile plot and mean estimation with 95% confidence band for CEAP data in Arkansas.
## 8 Concluding Remarks
We have applied the information projection to obtain an augmented PS model that allows for doubly robust estimation. We have introduced self-efficiency to obtain the equivalence between the PSW estimator and the regression imputation estimator. In addition, the \(\gamma\)-power divergence can be used to obtain an outlier-robust regression imputation estimator. By imposing self-efficiency, the resulting PSW estimator is also robust to outliers. In practice, an efficient and outlier-robust PSW estimator is very attractive, as the existence of outliers can often damage the efficiency of the resulting estimator.
There are several directions for further extension of the proposed method. The method is directly applicable to calibration weighting in survey sampling (Fuller, 2009; Tillé, 2020). Moreover, the proposed method is based on the missing-at-random assumption; extension to nonignorable nonresponse is also an interesting research direction.
|
2303.01900 | Central limit theorem for components in meandric systems through high
moments | We investigate here the behaviour of a large typical meandric system, proving
a central limit theorem for the number of components of given shape. Our main
tool is a theorem of Gao and Wormald, that allows us to deduce a central limit
theorem from the asymptotics of large moments of our quantities of interest. | Svante Janson, Paul Thévenin | 2023-03-03T12:55:58Z | http://arxiv.org/abs/2303.01900v1 | # Central limit theorem for components in meandric systems through high moments
###### Abstract.
We investigate here the behaviour of a large typical meandric system, proving a central limit theorem for the number of components of given shape. Our main tool is a theorem of Gao and Wormald, that allows us to deduce a central limit theorem from the asymptotics of large moments of our quantities of interest.
2020 Mathematics Subject Classification: 60C05 Supported by the Knut and Alice Wallenberg Foundation.
## 1. Model and main result
### Definitions and some notation
Let \(n\geq 1\) be an integer. A _meandric system_ of size \(n\) is a collection of non-crossing loops in the plane that intersect the horizontal axis exactly at the points \([2n]:=\{1,\ldots,2n\}\), we call these points the _vertices_ of the meandric system; two meandric systems that differ only by a continuous deformation of the plane that fixes the horizontal axis are regarded as the same. Meandric systems were introduced, to our knowledge, by Di Francesco, Golinelli and Guitter [1], and have recently become again a topic of interest [6, 4, 2]. A meandric system can be regarded as a set of \(n\) non-crossing arcs with endpoints \([2n]\) in the upper half-plane, and another such set in the lower half-plane; a meandric system thus determines two non-crossing matchings (pair-partitions) of \([2n]\), one for each half-plane, and it is easily seen that this yields a bijection between meandric systems of size \(n\) and pairs of two non-crossing matchings of \([2n]\). In particular, since the number of non-crossing matchings of \([2n]\) is the Catalan number
\[\operatorname{Cat}_{n}:=\frac{(2n)!}{n!(n+1)!}, \tag{1.1}\]
see e.g. [7, item 61], the number of meandric systems of size \(n\) is \(\operatorname{Cat}_{n}^{2}\).
Each connected component of a meandric system is a single loop, intersecting the horizontal axis in a subset of \([2n]\), say \(\{i_{1}<\cdots<i_{2k}\}\), which we call the _support_ of the loop. Note that necessarily there is an even number of vertices in the support.
We use standard \(o\) and \(O\) notation. Furthermore, for two (positive) sequences \(a_{n}\) and \(b_{n}\), \(a_{n}\sim b_{n}\) means \(a_{n}/b_{n}\to 1\) as \(n\to\infty\), i.e., \(a_{n}=b_{n}(1+o(1))\), and \(a_{n}=\Theta(b_{n})\) means that there exist constants \(c>0\) and \(C\) such that \(c\leq a_{n}/b_{n}\leq C\) for sufficiently large \(n\). Note that, for example, \(a_{n,r}\sim b_{n,r}\) for \(r=O(\sqrt{n})\) means that this holds for every sequence \(r=r(n)=O(\sqrt{n})\), which is equivalent to \(a_{n,r}\sim b_{n,r}\) uniformly for \(r\leq C\sqrt{n}\), for any \(C<\infty\); uniformity in \(r\) is thus automatic in such cases. We write "uniformly for \(r=O(\sqrt{n})\)" for "uniformly for \(r\leq C\sqrt{n}\), for any \(C<\infty\)". Unspecified limits are as \(n\to\infty\).
### The key tool: Gao and Wormald's theorem
Our proof relies on a theorem due to Gao and Wormald [5], stating that we can deduce a central limit theorem for a sequence of variables from the asymptotic behaviour of their high (factorial) moments. Let us recall this result.
**Theorem 2.1** (Gao & Wormald [5]).: _Let \(\mu_{n}s_{n}>-1\) and set \(\sigma_{n}:=\sqrt{\mu_{n}+\mu_{n}^{2}s_{n}}\), where \(0<\mu_{n}\to\infty\). Suppose that \(\sigma_{n}=o(\mu_{n})\), \(\mu_{n}=o(\sigma_{n}^{3})\), and that a sequence \(\{X_{n}\}\) of nonnegative random variables satisfies as \(n\to\infty\):_
\[\mathbb{E}\left[(X_{n})_{r}\right]\sim\mu_{n}^{r}\exp\left(\frac{r^{2}s_{n}}{2 }\right). \tag{2.2}\]
_uniformly for all integers \(r\) in the range \(c\mu_{n}/\sigma_{n}\leq r\leq C\mu_{n}/\sigma_{n}\), for some constants \(C>c>0\). Then \((X_{n}-\mu_{n})/\sigma_{n}\) converges in distribution to the standard normal as \(n\to\infty\)._
In other words, if high factorial moments of a variable asymptotically match those of a normal distribution, then convergence to the normal distribution holds.
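For instance, if \(X_{n}\) has the binomial distribution \(\operatorname{Bin}(n,p)\) with a fixed \(p\in(0,1)\), then \(\mathbb{E}[(X_{n})_{r}]=(n)_{r}p^{r}\), which by Lemma 2.2(ii) below is asymptotic to \((np)^{r}\exp(-r^{2}/(2n))\) for \(r=O(\sqrt{n})\); hence (2.2) holds with \(\mu_{n}=np\) and \(s_{n}=-1/n\), so that \(\sigma_{n}^{2}=np(1-p)\), and Theorem 2.1 recovers the classical central limit theorem for the binomial distribution.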
### Some lemmas
We state some simple lemmas that will be used later. The first is a well-known estimate that we will often use in the sequel.
**Lemma 2.2**.:
1. _If_ \(0\leq k\leq n/2\)_, then_ (2.3) \[(n)_{k}=n^{k}\exp\left(-\frac{k^{2}}{2n}+O\left(\frac{k^{3}}{n^{2}}+\frac{k}{ n}\right)\right).\]
2. _In particular, if_ \(k=O(\sqrt{n})\)_, then_ (2.4) \[(n)_{k}=n^{k}\exp\left(-\frac{k^{2}}{2n}+o(1)\right)\sim n^{k}\exp\left(-\frac {k^{2}}{2n}\right).\]
3. _More generally, if_ \(0\leq k\leq m\) _with_ \(m=O(\sqrt{n})\)_, then_ (2.5) \[(n-m+k)_{k}\sim n^{k}\exp\left(-\frac{m^{2}-(m-k)^{2}}{2n}\right)=n^{k}\exp \left(-\frac{k(2m-k)}{2n}\right).\]
Proof.: (i), (ii): This follows easily from a Taylor expansion of \(\log(1-i/n)\) for \(0\leq i<k\); we omit the details.
(iii): This follows from (ii) and \((n-m+k)_{k}=(n)_{m}/(n)_{m-k}\).
As one consequence, we obtain the following asymptotics.
**Lemma 2.3**.: _Let \(n\to\infty\) and \(0\leq r=O(\sqrt{n})\). Then_
\[\frac{\operatorname{Cat}_{n-r}}{\operatorname{Cat}_{n}}\sim 2^{-2r}. \tag{2.6}\]
Proof.: The definition (1.1) and Lemma 2.2 yield
\[\frac{\operatorname{Cat}_{n-r}}{\operatorname{Cat}_{n}}=\frac{(n)_{r}(n+1)_{r }}{(2n)_{2r}}\sim\frac{(n)_{r}^{2}}{(2n)_{2r}}=\frac{n^{2r}}{(2n)^{2r}} \exp\left(-2\frac{r^{2}}{2n}+\frac{(2r)^{2}}{4n}+o(1)\right)\sim 2^{-2r}. \tag{2.7}\]
We end this section with another elementary and well known result.
**Lemma 2.4**.: _Let \(m,n,k\geq 1\). The number of unordered \(k\)-tuples of disjoint intervals of size \(m\) in \([n]\) is given by_
\[\binom{n-k(m-1)}{k}. \tag{2.8}\]
Proof.: By deleting all points except the leftmost in each chosen interval, we obtain a bijection between the set of such \(k\)-tuples of intervals and the set of \(k\)-tuples of distinct points in \([n-k(m-1)]\).
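As a quick sanity check, not needed for the proofs, the formula (2.8) can be verified by brute force for small parameters; the following snippet is our own verification script.

```python
from itertools import combinations
from math import comb

def count_disjoint_intervals(n, m, k):
    # brute force: k starting points in [1, n-m+1] whose length-m intervals are pairwise disjoint
    starts = range(1, n - m + 2)
    return sum(all(b - a >= m for a, b in zip(c, c[1:]))
               for c in combinations(starts, k))

# check the formula (2.8) on a few small cases
for n, m, k in [(8, 2, 3), (10, 3, 2), (7, 2, 2)]:
    assert count_disjoint_intervals(n, m, k) == comb(n - k * (m - 1), k)
```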
## 3. A first example: components of half-length \(1\).
As a warm-up, we consider first the simple case where \(S\) is the loop of half-length \(1\). For any \(i\in[2n]\), we let \(Y_{i}\) be the indicator that the following holds:
there is a component of half-length \(1\) with support \(\{i,i+1\}\); equivalently, the vertices \(i\) and \(i+1\) are joined by an arc in the upper half-plane and by an arc in the lower half-plane. Then,
\[X_{S,n}=\sum_{i=1}^{2n-1}Y_{i} \tag{3.1}\]
and thus, for every \(r\geq 1\), summing over \(1\leq i_{1}<\cdots<i_{r}<2n\),
\[\mathbb{E}\left[(X_{S,n})_{r}\right]=\mathbb{E}\left[r!\sum_{i_{1}<\cdots<i_{ r}}Y_{i_{1}}\cdots Y_{i_{r}}\right]=r!\sum_{i_{1}<\cdots<i_{r}}\mathbb{E} \left[Y_{i_{1}}\cdots Y_{i_{r}}\right]. \tag{3.2}\]
The expectation in the last sum is non-zero if and only if the \(r\) subintervals \([\![i_{j},i_{j}+1]\!]\) of \([\![1,2n]\!]\) are disjoint, so by Lemma 2.4 there are \(\binom{2n-r}{r}\) non-zero terms. Each of the non-zero terms is \(1/\operatorname{Cat}_{n}^{2}\) times the number of meandric systems of size \(n\) that contain \(r\) given loops of half-length \(1\); by deleting these loops (and the vertices in them), we obtain a bijection between such meandric systems and the meandric systems of size \(n-r\), and hence the number of them is \(\operatorname{Cat}_{n-r}^{2}\). Consequently, (3.2) yields
\[\mathbb{E}\left[(X_{S,n})_{r}\right]=(2n-r)_{r}\frac{\operatorname{Cat}_{n-r}^ {2}}{\operatorname{Cat}_{n}^{2}}=\frac{(2n)_{2r}}{(2n)_{r}}\cdot\frac{ \operatorname{Cat}_{n-r}^{2}}{\operatorname{Cat}_{n}^{2}}. \tag{3.3}\]
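As a sanity check, again not needed for the arguments, the exact formula (3.3) can be verified for small \(n\) by enumerating all pairs of non-crossing matchings; the following snippet is our own verification script.

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def noncrossing_matchings(points):
    """All non-crossing perfect matchings of an increasing tuple of points."""
    if not points:
        return [frozenset()]
    out = []
    for k in range(1, len(points), 2):          # points[0] is matched to points[k]
        arc = (points[0], points[k])
        for inner in noncrossing_matchings(points[1:k]):
            for outer in noncrossing_matchings(points[k + 1:]):
                out.append(inner | outer | {arc})
    return out

def falling(x, r):
    out = 1
    for j in range(r):
        out *= x - j
    return out

n = 4
M = noncrossing_matchings(tuple(range(1, 2 * n + 1)))
assert len(M) == catalan(n)

# X = number of i such that {i, i+1} is an arc of both the upper and the lower matching
for r in (1, 2):
    total = sum(falling(sum((i, i + 1) in up and (i, i + 1) in lo for i in range(1, 2 * n)), r)
                for up in M for lo in M)
    assert total == falling(2 * n - r, r) * catalan(n - r) ** 2   # eq. (3.3) times Cat_n^2
```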
In particular, using Lemmas 2.2 and 2.3, if \(r=O(\sqrt{n})\), then
\[\mathbb{E}\left[(X_{S,n})_{r}\right]\sim(2n)^{2r-r}\exp\left(-\frac{4r^{2}}{ 4n}+\frac{r^{2}}{4n}\right)2^{-4r}=\left(\frac{n}{8}\right)^{r}\exp\left(- \frac{3r^{2}}{4n}\right). \tag{3.4}\]
In other words, (2.2) holds (uniformly) for \(0\leq r\leq C\sqrt{n}\), for any fixed \(C<\infty\), with
\[\mu_{n} :=\frac{n}{8}, \tag{3.5}\] \[s_{n} :=-\frac{3}{2n}. \tag{3.6}\]
We have \(\mu_{n}s_{n}=-3/16>-1\), and thus
\[\sigma_{n}:=\sqrt{\mu_{n}(1+\mu_{n}s_{n})}=\sqrt{\frac{13}{128}n}. \tag{3.7}\]
We thus have \(\sigma_{n}=o(\mu_{n})\) and \(\mu_{n}=o(\sigma_{n}^{3})\), and consequently Theorem 2.1 applies and yields:
**Theorem 3.1**.: _If \(S\) is a simple loop of half-length \(1\), then_
\[\frac{X_{S,n}-n/8}{\sqrt{13n/128}}\overset{(d)}{\underset{n\to\infty}{\longrightarrow}} \mathcal{N}(0,1). \tag{3.8}\]
This is Theorem 1.2 for this particular choice of \(S\), with \(\mu_{S}=1/8\) and \(\sigma_{S}^{2}=13/128\).
## 4. Extension to any fixed shape
Let us now show how we can extend this result to any fixed shape \(S\). We now let \(Y_{i}\) be the indicator that there is a component \(C\) of shape \(S\) such that \(L_{C}=i\); note that (3.1) and (3.2) still hold.
Recall that \(\ell(S)\) is the half-length of \(S\), so \(S\) has base \(\llbracket 1,2\ell(S)\rrbracket\). We also define here three other constants \(K(S),c_{+}(S),c_{-}(S)\) depending on \(S\). To avoid heavy notation, we will drop the argument \(S\) in what follows, and only denote them by \(K,c_{+},c_{-}\).
**Definition 4.1**.: _(See an example in Figure 1.) Observe that a component \(C\) of shape \(S\), taken along with the horizontal axis, splits the plane into two unbounded faces, each belonging to one of the half-planes, and a certain number of bounded faces. Let \(F_{+}\) denote the unbounded face in the upper half-plane, \(F_{-}\) the one in the lower half-plane, and \(\mathcal{F}(C)\) the set of bounded faces. For a face \(F\), let \(\nu(F)\) be the number of vertices in \(\llbracket L_{C},R_{C}\rrbracket\) that lie on the boundary of \(F\) but not on \(C\), and observe that necessarily \(\nu(F)\) is even. We then set_
\[K(S) :=\prod_{F\in\mathcal{F}(C)}\operatorname{Cat}_{\nu(F)/2}, \tag{4.1}\] \[c_{+}(S) :=\nu(F_{+})/2, \tag{4.2}\] \[c_{-}(S) :=\nu(F_{-})/2. \tag{4.3}\]
_Note that these constants do not depend on the set of vertices on which \(C\) is defined, but only on its shape \(S\)._
### Strong shapes
We say that two components _overlap_ if their bases overlap. Hence, if the components have the same shape \(S\), and the leftmost points in their supports are \(i\) and \(j\), they overlap if \(|j-i|<2\ell(S)\).
Figure 1. A component \(C\) with four bounded faces \(F_{1},F_{2},F_{3},F_{4}\). In this example, we have \(K(S)=\operatorname{Cat}_{1}^{2}\operatorname{Cat}_{2}\operatorname{Cat}_{3}=10\), \(c_{+}(S)=1\) and \(c_{-}(S)=0\), where \(S\) is the shape of \(C\).
For simplicity, we study first the case when this cannot happen. We say that a shape \(S\) is _strong_ if two different components of a meandric system that both have shape \(S\) cannot overlap. Thus, if \(S\) is strong, then \(Y_{i}Y_{j}=0\) when \(|j-i|<2\ell(S)\). The simple loop in Section 3 and the loop in Figure 1 are examples of strong shapes. A shape that is not strong is called _weak_; an example is given in Figure 2.
**Proposition 4.2**.: _Let \(S\) be a strong shape of half-length \(\ell(S)\). Then, for all \(r\geq 1\), we have_
\[\mathbb{E}\left[(X_{S,n})_{r}\right]=\left(2n-2r\ell(S)+r\right)_{r}K^{r}\, \frac{\operatorname{Cat}_{n-r\ell(S)+rc_{+}}\operatorname{Cat}_{n-r\ell(S)+rc _{-}}}{\operatorname{Cat}_{n}^{2}}. \tag{4.4}\]
Proof.: We argue as in Section 3. As noted above, (3.2) still holds, and since \(S\) is strong, we have \(Y_{i}Y_{j}=0\) when \(|j-i|<2\ell(S)\). Hence, the number of non-zero terms in (3.2) is \(\binom{2n-r(2\ell(S)-1)}{r}\) by Lemma 2.4. Again, all non-zero terms have the same value, which is \(1/\operatorname{Cat}_{n}^{2}\) times the number of ways that \(r\) given disjoint loops of shape \(S\) can be completed to a meandric system of size \(n\). We can fill in the bounded faces of each component in \(K\) ways, and there are \(2n-2r\ell(S)+2rc_{\pm}\) vertices left in the upper and lower unbounded faces, respectively, so these can be matched in \(\operatorname{Cat}_{n-r\ell(S)+rc_{\pm}}\) ways. This yields (4.4).
By Lemmas 2.2 and 2.3, it follows from (4.4) that, (uniformly) for \(r=O(\sqrt{n})\), we have
\[\mathbb{E}\left[(X_{S,n})_{r}\right]\underset{n\to\infty}{\sim}\left(\frac{2nK }{4^{2\ell(S)-c_{+}-c_{-}}}\right)^{r}\exp\left(-\frac{r^{2}}{4n}\left[(2\ell (S))^{2}-(2\ell(S)-1)^{2}\right]\right). \tag{4.5}\]
This is (2.2) with
\[\mu_{n} :=\frac{2nK}{4^{2\ell(S)-c_{+}-c_{-}}}, \tag{4.6}\] \[s_{n} :=-\frac{(2\ell(S))^{2}-(2\ell(S)-1)^{2}}{2n}=-\frac{4\ell(S)-1}{2n}. \tag{4.7}\]
In order to apply Theorem 2.1, we need to check that \(\mu_{n}s_{n}>-1\), which boils down to the following.
**Lemma 4.3**.: _We have_
\[K(4\ell(S)-1)<4^{2\ell(S)-c_{+}-c_{-}}. \tag{4.8}\]
Proof.: Observe that we can bound \(K\) using the fact that \(\operatorname{Cat}_{n}\leq\frac{4^{n}}{n+1}\) for all \(n\): It is easy to see that for given \(c_{\pm}\), \(K\) is largest if there is only one bounded face in each half-plane, and thus,
\[K\leq\operatorname{Cat}_{\ell(S)-c_{+}-1}\operatorname{Cat}_{\ell(S)-c_{-}-1} \leq\frac{4^{2\ell(S)-c_{+}-c_{-}-2}}{(\ell(S)-c_{+})(\ell(S)-c_{-})}\leq \frac{4^{2\ell(S)-c_{+}-c_{-}-2}}{\ell(S)}, \tag{4.9}\]
since \(c_{+}+c_{-}\leq\ell(S)-1\) (to see this, observe that a vertex cannot belong to both unbounded faces of \(S\), and that at least two vertices belong to \(C\)). This yields (4.8) directly.
It is clear that \(\mu_{n}\to\infty\). Furthermore, we have just proved that \(1+\mu_{n}s_{n}\) is a positive constant. Thus \(\sigma_{n}=\Theta(\sqrt{\mu_{n}})\), and hence \(\sigma_{n}=o(\mu_{n})\) and \(\mu_{n}=o(\sigma_{n}^{3})\). We can therefore apply Theorem 2.1 to obtain the central limit theorem in this case too:
**Theorem 4.4**.: _Let \(S\) be a strong shape. Then_
\[\frac{X_{S,n}-n\mu_{S}}{\sigma_{S}\sqrt{n}}\underset{n\longrightarrow\infty} {\overset{(d)}{\to}}\mathcal{N}(0,1), \tag{4.10}\]
_where_
\[\mu_{S}=\frac{2K}{4^{2\ell(S)-c_{+}-c_{-}}}\quad\text{and}\quad\sigma_{S}= \sqrt{\frac{2K}{4^{2\ell(S)-c_{+}-c_{-}}}\bigg{(}1-\frac{K(4\ell(S)-1)}{4^{2 \ell(S)-c_{+}-c_{-}}}\bigg{)}}. \tag{4.11}\]
This proves Theorem 1.2 in the case when \(S\) is a strong shape, with explicit formulas for \(\mu_{S}\) and \(\sigma_{S}\).
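For instance, for the simple loop of half-length \(1\) we have \(\ell(S)=1\), \(K=1\) and \(c_{+}=c_{-}=0\), so (4.11) gives \(\mu_{S}=2/4^{2}=1/8\) and \(\sigma_{S}^{2}=\frac{1}{8}\left(1-\frac{3}{16}\right)=13/128\), recovering Theorem 3.1.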
### Weak shapes
Finally, we study the case of a weak shape \(S\). Thus, now there may be overlaps between two components of shape \(S\), that is, two indices \(i<j\) such that \(|j-i|<2\ell(S)\) and \(Y_{i}Y_{j}=1\), where \(Y_{i}\) is defined as before. See Figure 2 for an example.
Let \(A^{r}\) be the set of all \(r\)-tuples \(E:=\{i_{1},\ldots,i_{r}\}\) with \(1\leq i_{1}<\cdots<i_{r}\leq 2n\). For any such \(r\)-tuple \(E\), define an equivalence relation \(\sim_{E}\) on \(\{1,\ldots,r\}\) as the smallest one (for the inclusion of the equivalence classes) satisfying: for all \(1\leq k_{1},k_{2}\leq r\) such that \(|i_{k_{1}}-i_{k_{2}}|<2\ell(S)\), \(k_{1}\sim_{E}k_{2}\). We call the equivalence classes of \(\sim_{E}\) _blocks_. Furthermore, for \(1\leq j\leq r\), we let \(A^{r}_{j}\) be the set of \(r\)-tuples \(E\in A^{r}\) that have exactly \(j\) blocks. Thus \(A^{r}=\bigcup_{j=1}^{r}A^{r}_{j}\).
Note that \(A_{r}^{r}\) is the set of \(r\)-tuples \(E\) such that all blocks are singletons. An \(r\)-tuple \(E\) corresponds to a collection \((C_{k})_{1}^{r}\) of loops of shape \(S\), shifted such that \(C_{k}\) has \(L_{C_{k}}=i_{k}\). In particular, \(E\in A_{r}^{r}\) if and only if these loops are non-overlapping.
Define, for all \(1\leq u\leq r\):
\[F_{u}:=\binom{2n-2u\ell(S)+u}{u}K^{u}\,\frac{\operatorname{Cat}_{n-u\ell(S)+ u\text{$c_{+}$}}\operatorname{Cat}_{n-u\ell(S)+u\text{$c_{-}$}}}{ \operatorname{Cat}_{n}^{2}}. \tag{4.12}\]
By the argument in the proof of Proposition 4.2, \(u!F_{u}\) is the contribution to \(\mathbb{E}\left[(X_{S,n})_{u}\right]\) from \(u\)-tuples of non-overlapping components.
We have the following estimates:
**Lemma 4.5**.: _Let \(S\) be a weak shape._
1. _For all_ \(r\geq 1\)_,_ (4.13) \[\mathbb{E}\left[(X_{S,n})_{r}\right]\geq r!F_{r}.\]
2. _For all_ \(1\leq u\leq r\)_,_ (4.14) \[\sum_{E\in A_{u}^{r}}\mathbb{E}\left[\prod_{i\in E}Y_{i}\right]\leq\binom{r- 1}{u-1}(2\ell(S))^{r-u}F_{u}.\]
3. _For each fixed_ \(M\geq 0\)_, uniformly for_ \(r=O(\sqrt{n})\) _with_ \(r\geq 2M\)_,_ (4.15) \[\sum_{E\in A_{r-M}^{r}}\mathbb{E}\left[\prod_{i\in E}Y_{i}\right]=\Theta\left( r^{M}F_{r-M}\right)\] _and, if also_ \(r\to\infty\)_,_ (4.16) \[\sum_{E\in A_{r-M}^{r}(1,2)}\mathbb{E}\left[\prod_{i\in E}Y_{i}\right]=(1-o(1) )\sum_{E\in A_{r-M}^{r}}\mathbb{E}\left[\prod_{i\in E}Y_{i}\right],\] _where_ \(A_{r-M}^{r}(1,2)\) _is the subset of_ \(A_{r-M}^{r}\) _made only of blocks of sizes_ \(1\) _or_ \(2\)_._
Proof of Lemma 4.5.: (i): We rewrite (3.2) as
\[\mathbb{E}\left[(X_{S,n})_{r}\right]=r!\sum_{E\in A^{r}}\mathbb{E}\left[\prod _{i\in E}Y_{i}\right]=r!\sum_{u=1}^{r}\sum_{E\in A_{u}^{r}}\mathbb{E}\left[ \prod_{i\in E}Y_{i}\right]. \tag{4.17}\]
The term with \(u=r\) yields the contribution from \(r\)-tuples of non-overlapping components, which as noted after (4.12) is \(r!F_{r}\).
(ii): For each \(r\)-tuple \(E\in A_{u}^{r}\), keep in the product only the leftmost point of each block, observing that, for any sets \(A\subseteq B\subseteq\llbracket 1,2n\rrbracket\), we have \(\mathbb{E}\left[\prod_{i\in B}Y_{i}\right]\leq\mathbb{E}\left[\prod_{i\in A}Y_{ i}\right]\). Note that this set of leftmost points belongs to \(A_{u}^{u}\). If the size of the \(i\)-th leftmost block is \(j_{i}\), then for each set of leftmost points, the number of possible positions of the other \(j_{i}-1\) points in the block is at most \((2\ell(S))^{j_{i}-1}\), since each point after the first is within \(2\ell(S)\) of the preceding one. Hence,
\[\sum_{E\in A_{u}^{r}}\mathbb{E}\left[\prod_{i\in E}Y_{i}\right]\leq\sum_{ \begin{subarray}{c}j_{1}+\ldots+j_{u}=r\\ j_{1},\ldots,j_{u}\geq 1\end{subarray}}\prod_{i=1}^{u}(2\ell(S))^{j_{i}-1} \cdot\sum_{E^{\prime}\in A_{u}^{u}}\mathbb{E}\left[\prod_{i\in E^{\prime}}Y_{ i}\right]=\sum_{\begin{subarray}{c}j_{1}+\ldots+j_{u}=r\\ j_{1},\ldots,j_{u}\geq 1\end{subarray}}\prod_{i=1}^{u}(2\ell(S))^{j_{i}-1} \cdot F_{u}. \tag{4.18}\]
Finally, this yields (4.14), since the number of allowed sequences \((j_{1},\ldots,j_{u})\) is \(\binom{r-1}{u-1}\), and \(\prod_{i=1}^{u}(2\ell(S))^{j_{i}-1}=(2\ell(S))^{r-u}\) for all of them.
(iii): We partition the set \(A_{r-M}^{r}\) as follows. Consider an \((r-M)\)-tuple \(T:=(T_{1},\ldots,T_{r-M})\) of integers \(\geq 1\), of sum \(r\), and consider also a function \(J\) which, to each \(1\leq i\leq r-M\), associates a \(T_{i}\)-tuple \(J_{i}\) of integers \(1=:j_{i,1}<j_{i,2}<\ldots<j_{i,T_{i}}\) such that, for all \(1\leq k\leq T_{i}-1\), \(j_{i,k+1}-j_{i,k}<2\ell(S)\), and, furthermore, the \(T_{i}\) loops of shape \(S\) that start at the vertices \(j_{k}\) (\(k=1,\ldots,T_{i}\)) are disjoint so that they may occur together as components in a meandric system. (We call such pairs \((T,J)\)_admissible_.) Denote by \(A_{T,J}\) the subset of \(A_{r-M}^{r}\) made of \(r\)-tuples \(E\) such that the \(i\)-th leftmost block of \(E\) has size \(T_{i}\), and if this block is \(\left\{a_{i}^{1},\ldots,a_{i}^{T_{i}}\right\}\), then we have \(a_{i}^{k+1}-a_{i}^{k}=j_{i,k+1}-j_{i,k}\) for all \(1\leq k\leq T_{i}-1\). In other words, \(A_{T,J}\) accounts for all \(r\)-tuples of components with \(r-M\) blocks, where the sizes of the blocks are given, as well as the intervals between the starting points of each component of shape \(S\) in each block. Hence, \(A_{r-M}^{r}\) is the union \(\bigcup A_{T,J}\) over all admissible pairs \((T,J)\).
Since we only consider \((r-M)\)-tuples \(T\) such that
\[r=\sum_{i=1}^{r-M}T_{i}=r-M+\sum_{i=1}^{r-M}(T_{i}-1), \tag{4.19}\]
there are at most \(M\) indices \(i\) with \(T_{i}>1\), and thus at least \(r-2M\) indices with \(T_{i}=1\). Note also that if \(T_{i}=1\), then trivially \(J_{i}=(1)\). Given an admissible pair \((T,J)\), we define the reduced pair \((\widehat{T},\widehat{J})\) by deleting all \(T_{i}\) and \(J_{i}\) such that \(T_{i}=1\) from \(T\) and \(J\); thus \(\widehat{T}:=(T_{i}:1\leq i\leq r-M\) and \(T_{i}>1)\) and similarly for \(\widehat{J}\). Consequently, \(\widehat{T}\) and \(\widehat{J}\) are both sequences of (the same) length \(\leq M\). Since (4.19) implies that their entries are bounded
(for a fixed \(M\)), there is only a finite set \(\mathcal{T}\) of reduced pairs \((\widehat{T},\widehat{J})\), where \(\mathcal{T}\) depends on \(M\) and \(S\) but not on \(r\).
Conversely, given an admissible reduced pair \((\widehat{T},\widehat{J})\), with \(\widehat{T}=(\widehat{T}_{1},\ldots,\widehat{T}_{k})\), we can obtain \((\widehat{T},\widehat{J})\) from \(\binom{r-M}{k}\) different (admissible) pairs \((T,J)\). Note that here, by (4.19), since each \(\widehat{T}_{i}\geq 2\),
\[k\leq\sum_{i=1}^{k}(\widehat{T}_{i}-1)=\sum_{i=1}^{r-M}(T_{i}-1)=M, \tag{4.20}\]
with equality if and only if \(\widehat{T}_{i}=2\) for all \(i\leq k\).
We now want to understand the behaviour of \(\sum_{E\in A_{T,J}}\mathbb{E}\left[\prod_{i\in E}Y_{i}\right]\) for an admissible pair \((T,J)\). In a way similar to Proposition 4.2 (using an extension of Lemma 2.4 to intervals of different lengths), we obtain
\[\sum_{E\in A_{T,J}}\mathbb{E}\left[\prod_{i\in E}Y_{i}\right]=\binom{2n-2 \widetilde{\ell}+(r-M)}{r-M}\widetilde{K}\,\frac{\operatorname{Cat}_{n-d_{+} }\operatorname{Cat}_{n-d_{-}}}{\operatorname{Cat}_{n}^{2}}, \tag{4.21}\]
where, for any \(E\in A_{T,J}\), \(\widetilde{\ell}\) is the sum of the half-lengths of the blocks, \(\widetilde{K}\) accounts for the bounded faces defined by the horizontal axis and the loops defined by \(E\), and \(d_{+},d_{-}\) for the unbounded faces. (Note that these constants are the same for all \(E\in A_{T,J}\) so they depend only on \(T\) and \(J\).) Moreover, since at least \(r-2M\) of these blocks are singletons, and the remaining blocks are determined by \(\widehat{T}\) and \(\widehat{J}\), we can write
\[\widetilde{K}=K^{r-2M}K^{\prime}, \tag{4.22}\]
for some \(K^{\prime}>0\) depending only on \((\widehat{T},\widehat{J})\). Similarly,
\[\widetilde{\ell} =(r-M)\ell(S)+\ell^{\prime}, \tag{4.24}\] \[d_{+} =(r-M)(\ell(S)-c_{+})+e_{+},\] (4.25) \[d_{-} =(r-M)(\ell(S)-c_{-})+e_{-} \tag{4.23}\]
for some \(\ell^{\prime},e_{+},e_{-}\) depending only on \((\widehat{T},\widehat{J})\). In particular, for a fixed \(M\), it follows that \(K^{\prime},\ell^{\prime},e_{+},e_{-}\) can only take a fixed number of values independently of \(n\) and \(r\).
We compare (4.21) and \(F_{r-M}\) given by (4.12). First, by Lemma 2.2(iii),
\[\frac{\binom{2n-2\widetilde{\ell}+(r-M)}{r-M}}{\binom{2n-2(r-M) \ell(S)+(r-M)}{r-M}}=\frac{\Big{(}2n-2\widetilde{\ell}+(r-M)\Big{)}_{r-M}}{ \Big{(}2n-2(r-M)\ell(S)+(r-M)\Big{)}_{r-M}}\\ \sim\exp\Big{(}-\frac{r-M}{4n}\Big{(}(4\widetilde{\ell}-r+M)-(4( r-M)\ell(S)-r+M)\Big{)}\Big{)}=\exp\Big{(}o(1)\Big{)}, \tag{4.26}\]
since \(\widetilde{\ell}=r\ell(S)+O(1)\) by (4.23) and \(r=o(n)\). Similarly, as a consequence of Lemma 2.3, and (4.24)-(4.25),
\[\frac{\operatorname{Cat}_{n-d_{\pm}}}{\operatorname{Cat}_{n-(r-M)(\ell(S)-c_{ \pm})}}\sim 4^{-d_{\pm}+(r-M)(\ell(S)-c_{\pm})}=4^{-e_{\pm}}. \tag{4.27}\]
Consequently, using also (4.22), we obtain from (4.21) and (4.12),
\[\frac{\sum_{E\in A_{T,J}}\mathbb{E}\left[\prod_{i\in E}Y_{i}\right]}{F_{r-M}}= C_{T,J}(1+o(1)), \tag{4.28}\]
where \(C_{T,J}>0\) only depends on \((\widehat{T},\widehat{J})\), and therefore only takes a finite number of values. In particular,
\[\sum_{E\in A_{T,J}}\mathbb{E}\left[\prod_{i\in E}Y_{i}\right]=\Theta\Big{(}F_{ r-M}\Big{)}, \tag{4.29}\]
and this holds uniformly for \(r=O(\sqrt{n})\) and all admissible \((T,J)\).
By (4.20) and the discussion before it, there are \(\binom{r-M}{k}=\Theta(r^{k})\) admissible pairs \((T,J)\) for each \((\widehat{T},\widehat{J})\), where \(k\leq M\), with equality when all \(\widehat{T_{i}}=2\). Note that since we assume that the shape \(S\) is weak, there exists at least one such admissible \((\widehat{T},\widehat{J})\) with \(\widehat{T}=(2,\ldots,2)\). Hence, summing (4.29) over all \((T,J)\) yields (4.15).
Moreover, \(A_{r-M}^{r}\setminus A_{r-M}^{r}(1,2)\) is the union \(\bigcup^{\prime}A_{T,J}\) where we only sum over admissible pairs \((T,J)\) with some \(T_{i}\geq 3\); these correspond to reduced pairs \((\widehat{T},\widehat{J})\) with some \(\widehat{T_{i}}\geq 3\), and we see from (4.20) that each such reduced pair has length \(\leq M-1\), and thus corresponds to \(O(r^{M-1})\) admissible pairs. Consequently, summing (4.29) over all \((T,J)\) of this type yields
\[\sum_{E\in A_{r-M}^{r}\setminus A_{r-M}^{r}(1,2)}\mathbb{E}\left[\prod_{i\in E }Y_{i}\right]=O\Big{(}r^{M-1}F_{r-M}\Big{)}=o\Big{(}r^{M}F_{r-M}\Big{)}, \tag{4.30}\]
which yields (4.16) by (4.15).
The next proposition shows that, in order to get the asymptotic behaviour of \(\mathbb{E}\left[(X_{S,n})_{r}\right]\), we only need to take into account the configurations in which the number of blocks that are not singletons is a given constant.
**Proposition 4.6**.: _Fix a weak shape \(S\). Then, there exists \(\eta>0\) such that, for any \(\varepsilon>0\), there exists \(M>0\) such that we have, uniformly for \(r\leq\eta\sqrt{n}\),_
\[\sum_{u\leq r-M}\sum_{E\in A_{u}^{r}}\mathbb{E}\left[\prod_{i\in E}Y_{i}\right]\leq\varepsilon F_{r}\leq\varepsilon\frac{1}{r!}\,\mathbb{E}\left[(X_{S,n})_{r}\right]. \tag{4.31}\]
**Remark 4.7**.: For convenience, we assume here that \(r/\sqrt{n}\) is small. In fact, Proposition 4.6 can easily be extended to \(r\leq C\sqrt{n}\) for any \(C\) (with \(M\) depending on \(C\) and \(\varepsilon\)), but we have no need for this.
To prove this, we start with a lemma:
**Lemma 4.8**.: _There exists \(Q>0\) depending only on the shape \(S\) such that, for \(n\) large enough, for all \(u\leq\sqrt{n}\):_
\[\frac{F_{u+1}}{F_{u}}\geq Q\frac{n}{u}\text{.} \tag{4.32}\]
Proof.: We just compute the ratio term by term, recalling (4.12). We have \(\frac{K^{u+1}}{K^{u}}=K\). The ratio of the ratios of Catalan numbers converges uniformly to a positive constant. Finally, the ratio of binomial coefficients is, using Lemma 2.2,
\[\frac{u!}{(u+1)!}\cdot\frac{\left(2n-(u+1)(2\ell(S)-1)\right)_{u+1}}{\left(2n -u(2\ell(S)-1)\right)_{u}}=\frac{1}{u+1}\cdot\frac{(2n)^{u+1}}{(2n)^{u}}\exp \Bigl{(}O(1)\Bigr{)}\geq c\frac{n}{u} \tag{4.33}\]
for some \(c>0\) and all large \(n\) and \(u\leq\sqrt{n}\). The result follows.
Proof of Proposition 4.6.: Using Lemma 4.5(ii), we have for all \(M\geq 0\):
\[\sum_{u\leq r-M}\sum_{E\in A_{u}^{r}}\mathbb{E}\left[\prod_{i\in E}Y_{i} \right]\leq\sum_{u=1}^{r-M}\binom{r-1}{u-1}(2\ell(S))^{r-u}F_{u}\text{.} \tag{4.34}\]
Letting
\[B_{r,u}:=\binom{r-1}{u-1}(2\ell(S))^{r-u}F_{u}\text{,} \tag{4.35}\]
we get from Lemma 4.8 that, for \(r\leq\sqrt{n}\) and any \(u\leq r-1\):
\[\frac{B_{r,u+1}}{B_{r,u}}=\frac{1}{2\ell(S)}\frac{r-u}{u}\frac{F_{u+1}}{F_{u}} \geq\frac{Q}{2\ell(S)}\frac{n(r-u)}{u^{2}}\geq\frac{Q}{2\ell(S)}\frac{n}{u^{2}}. \tag{4.36}\]
Hence, there exists \(\eta>0\) small enough such that, for all \(u<r\leq\eta\sqrt{n}\), we have \(B_{r,u+1}\geq 2B_{r,u}\), and thus by backward induction,
\[B_{r,u}\leq 2^{-(r-u)}B_{r,r}. \tag{4.37}\]
Then, for \(r\leq\eta\sqrt{n}\), (4.34) yields
\[\sum_{u\leq r-M}\sum_{E\in A_{u}^{r}}\mathbb{E}\left[\prod_{i\in E}Y_{i} \right]\leq\sum_{u=1}^{r-M}B_{r,u}\leq 2^{1-M}B_{r,r}=2^{1-M}F_{r}. \tag{4.38}\]
This yields (4.31) if we choose \(M\) such that \(2^{1-M}\leq\varepsilon\), since \(r!F_{r}\leq\mathbb{E}\left[(X_{S,n})_{r}\right]\) by the comment after (4.12).
Proposition 4.6 shows that we only need to understand the asymptotic behaviour of the configurations with a number of blocks \(r-M\) for given \(M\geq 0\), and Lemma 4.5(iii) that we can focus on configurations with blocks of size \(1\) or \(2\). To actually prove our final result, we need to refine Lemma 4.5(iii) and obtain the explicit constants that appear. We define another set of constants, which will account for the cases with blocks of size \(2\), i.e., cases when two components of shape \(S\) overlap.
**Definition 4.9**.: _Let \(S\) be a shape. There is a finite set of integers \(i\geq 1\) such that \(\mathbb{E}\left[Y_{1}Y_{i}\right]>0\) and \(i-1<2\ell(S)\). Let \(I(S)\) be this set, and \(i_{1},\dots,i_{k}\) its elements. For \(i\in I(S)\), let \(\ell_{i}\), \(K_{i}\), \(c_{+}(i)\) and \(c_{-}(i)\) be the equivalents of \(\ell(S),K,c_{+},c_{-}\) in this case of two components \(C,C^{\prime}\) that overlap and start at positions \(1\) and \(i\). In particular, \(\ell_{i}=\ell(S)+(i-1)/2\) is the total half-length of the block made of two components of shape \(S\) started at positions \(1\) and \(i\). Furthermore, \(C\) and \(C^{\prime}\) together with the horizontal axis define two unbounded faces (\(F_{+}\) in the upper half-plane and \(F_{-}\) in the lower half-plane), and several bounded faces; let \(\mathcal{F}(C,C^{\prime})\) be the set of bounded faces. For each face \(F\), let \(\nu(F)\) be the number of integers in \(\left[\![L(C),R(C)]\!\right]\cup\left[\![L(C^{\prime}),R(C^{\prime})]\!\right]= \left[\![L(C),R(C^{\prime})]\!\right]\) that are incident to \(F\) but do not belong to \(C\) nor to \(C^{\prime}\). We set \(K_{i}:=\prod_{F\in\mathcal{F}(C,C^{\prime})}\mathrm{Cat}_{\nu(F)/2}\). Finally, we define \(c_{\pm}(i):=\nu(F_{\pm})/2\). Observe again that all these constants only depend on \(S\) and \(i\)._
Note that \(i\in I(S)\) may be even; in this case \(2\ell_{i}\), \(\nu(F_{+})\) and \(\nu(F_{-})\) are odd, and thus \(\ell_{i}\) and \(c_{\pm}(i)\) are half-integers.
**Lemma 4.10**.: _Let \(r=O(\sqrt{n})\) with \(r\to\infty\). Then, for every fixed \(M\geq 0\),_
\[\sum_{E\in A_{r-M}^{r}}\mathbb{E}\left[\prod_{i\in E}Y_{i}\right]\underset{n\to \infty}{\sim}F_{r}\sum_{g_{i}\geq 0,i\in I(S)}\prod_{i\in I(S)}\frac{\left(b_{i} \frac{r^{2}}{2n}\right)^{g_{i}}}{g_{i}!}, \tag{4.39}\]
_where_
\[b_{i}:=4^{4\ell(S)-2\ell_{i}+c_{+}(i)-2c_{+}+c_{-}(i)-2c_{-}}\frac{K_{i}}{K^{2}}. \tag{4.40}\]
Note that \(b_{i}\) measures (in a specific way) how much two overlapping components of shape \(S\) differ from two non-overlapping ones.
Proof.: For each \(I(S)\)-tuple \(G=(g_{i})_{i\in I(S)}\) of integers with sum \(M\), let \(A_{r-M,G}^{r}\) be the set of \(r\)-tuples \(1\leq i_{1}<\ldots<i_{r}\leq 2n\) with \(r-2M\) blocks of size \(1\) and \(M\) blocks of size \(2\), such that for each \(i\in I(S)\), there are \(g_{i}\) blocks of type \(\{i_{k},i_{k+1}=i_{k}+i-1\}\) with \(k<r\). Then \(A_{r-M,G}^{r}\) is the union of some classes \(A_{T,J}\) from the proof of Lemma 4.5, with all \(T_{i}\in\{1,2\}\) and a specified number \(g_{i}\) of \(k\) such that \(J_{k}=(1,i)\). Hence, we obtain from (4.21), where the multinomial coefficient is the number of \((T,J)\) that are included in \(A_{r-M,G}^{r}\),
\[\sum_{E\in A_{r-M,G}^{r}}\mathbb{E}\left[\prod_{i\in E}Y_{i}\right]=\binom{r- M}{g_{i_{1}},\ldots,g_{i_{k}},r-2M}\binom{2n-2\widetilde{\ell}+(r-M)}{r-M} \widetilde{K}\frac{\operatorname{Cat}_{n-d_{+}}\operatorname{Cat}_{n-d_{-}}}{ \operatorname{Cat}_{n}^{2}}, \tag{4.41}\]
where, by (4.22)-(4.25) and the argument yielding them:
\[\widetilde{K} =K^{r-2M}\prod_{i\in I(S)}K_{i}^{g_{i}}, \tag{4.43}\] \[\widetilde{\ell} =(r-2M)\ell(S)+\sum_{i\in I(S)}g_{i}\ell_{i},\] (4.44) \[d_{\pm} =\widetilde{\ell}-(r-2M)c_{\pm}-\sum_{i\in I(S)}g_{i}c_{\pm}(i). \tag{4.42}\]
We now argue similarly as in the proof of Lemma 4.5, but this time we compare to \(F_{r}\). We have
\[\binom{r-M}{g_{1},\ldots,g_{k},r-2M}\sim r^{M}\prod_{i\in I(S)}\frac{1}{g_{i}!}, \tag{4.45}\]
\[\frac{\big{(}\begin{subarray}{c}2n-2\widetilde{\ell}+(r-M)\\ r-M\\ \end{subarray}\big{)}}{\big{(}\begin{subarray}{c}2n-2\widetilde{\ell}+(r-M)\\ r\end{subarray}\big{)}}=\frac{r!}{(r-M)!}\cdot\frac{\Big{(}2n-2\widetilde{\ell}+ (r-M)\Big{)}_{r-M}}{\Big{(}2n-2r\ell(S)+r\Big{)}_{r}}\] \[\qquad\sim r^{M}(2n)^{-M}\exp\!\left(-\frac{1}{4n}\Big{(}(r-M)(4 \widetilde{\ell}-r+M)-r(4r\ell(S)-r)\Big{)}\right)\sim r^{M}(2n)^{-M}, \tag{4.46}\]
\[\frac{\widetilde{K}}{K^{r}}=K^{-2M}\prod_{i\in I(S)}K_{i}^{g_{i}}, \tag{4.47}\]
\[\frac{\operatorname{Cat}_{n-d_{\pm}}}{\operatorname{Cat}_{n-r\ell(S)+r\ell_{ \pm}}}\sim 4^{-d_{\pm}+r(\ell(S)-c_{\pm})}=4^{2M(\ell(S)-c_{\pm})-\sum_{i\in I(S)}( \ell_{i}-c_{\pm}(i))g_{i}}. \tag{4.48}\]
and thus, from (4.41) and (4.12), recalling that \(\sum_{i\in I(S)}g_{i}=M\),
\[\frac{\sum_{E\in A^{r}_{r-M,G}}\mathds{E}\left[\prod_{i\in E}Y_{i} \right]}{F_{r}}\] \[\underset{n\to\infty}{\sim}r^{2M}(2nK^{2})^{-M}4^{2(\ell(S)-c_{+ })M-\sum_{i\in I(S)}(\ell_{i}-c_{+}(i))g_{i}}4^{2(\ell(S)-c_{-})M-\sum_{i\in I (S)}(\ell_{i}-c_{-}(i))g_{i}}\prod_{i\in I(S)}\frac{1}{g_{i}!}K_{i}^{g_{i}}\] \[=\left(B\frac{r^{2}}{2n}\right)^{M}\prod_{i\in I(S)}\frac{q_{i}^{ g_{i}}}{g_{i}!}=\prod_{i\in I(S)}\frac{\Big{(}Bq_{i}\frac{r^{2}}{2n}\Big{)}^{g_{i}} }{g_{i}!}, \tag{4.49}\]
where
\[B :=\frac{4^{4\ell(S)-2c_{+}-2c_{-}}}{K^{2}}, \tag{4.51}\] \[q_{i} :=4^{-2\ell_{i}+c_{+}(i)+c_{-}(i)}K_{i}. \tag{4.50}\]
The set \(A^{r}_{r-M}(1,2)\) defined in Lemma 4.5(iii) is the union of \(A^{r}_{r-M,G}\) over all \(G\) with sum \(M\). Hence, (4.49) implies, noting that there is only a finite number of such \(G\),
\[\sum_{E\in A^{r}_{r-M}(1,2)}\mathds{E}\left[\prod_{i\in E}Y_{i}\right] \underset{n\to\infty}{\sim}F_{r}\sum_{g_{i}\geq 0,i\in I(S)}\prod_{i\in I(S)} \frac{\Big{(}Bq_{i}\frac{r^{2}}{2n}\Big{)}^{g_{i}}}{g_{i}!}. \tag{4.52}\]
The result (4.39) now follows from (4.52) and (4.16), using \(Bq_{i}=b_{i}\).
Proof of Theorem 1.2 for weak shapes.: Let \(r\to\infty\) with \(r\leq\eta\sqrt{n}\), where \(\eta\) is as in Proposition 4.6.
We may sum (4.39) over all \(M\geq 0\) (with \(A_{r-M}^{r}:=\emptyset\) for \(M>r\)), since Proposition 4.6 shows that we may approximate the sum by a finite sum with a fixed number of terms. Consequently, recalling (4.17),
\[\mathbb{E}\left[(X_{S,n})_{r}\right] =r!\sum_{M=0}^{\infty}\sum_{E\in A_{r-M}^{r}}\mathbb{E}\left[ \prod_{i\in E}Y_{i}\right]\sim r!F_{r}\sum_{M=0}^{\infty}\sum_{\begin{subarray} {c}g_{i}\geq 0,i\in I(S)\\ \sum_{i}g_{i}=M\end{subarray}}\prod_{i\in I(S)}\frac{\left(b_{i}\frac{r^{2}}{2 n}\right)^{S_{i}}}{g_{i}!}\] \[=r!F_{r}\prod_{i\in I(S)}\exp\Bigl{(}b_{i}\frac{r^{2}}{2n}\Bigr{)}. \tag{4.53}\]
By Lemmas 2.2 and 2.3, (4.12) implies (similarly to (4.5))
\[r!F_{r}\sim\left(\frac{2nK}{4^{2\ell(S)-c_{+}-c_{-}}}\right)^{r}\exp\Bigl{(}- \frac{r^{2}}{4n}\Bigl{(}4\ell(S)-1\Bigr{)}\Bigr{)}. \tag{4.54}\]
Finally, (4.53) and (4.54) yield, for \(r\to\infty\) with \(r\leq\eta\sqrt{n}\),
\[\mathbb{E}\left[(X_{S,n})_{r}\right]\underset{n\to\infty}{\sim}\left(\frac{2 nK}{4^{2\ell(S)-c_{+}-c_{-}}}\right)^{r}\exp\left(-\frac{r^{2}}{4n}\left(4\ell(S)-1 \right)+\frac{r^{2}}{2n}\sum_{i\in I(S)}b_{i}\right). \tag{4.55}\]
This is (2.2), with
\[\mu_{n} :=\frac{2nK}{4^{2\ell(S)-c_{+}-c_{-}}}, \tag{4.57}\] \[s_{n} :=\frac{-(4\ell(S)-1)+2\sum_{i\in I(S)}b_{i}}{2n}. \tag{4.56}\]
In particular, (2.2) thus holds for \(r=r(n)\) with \(\frac{\eta}{2}\sqrt{n}\leq r\leq\eta\sqrt{n}\); as noted in Section 2.1, it then automatically holds uniformly in this range. Furthermore,
\[\mu_{n}s_{n}\geq-\frac{K(4\ell(S)-1)}{4^{2\ell(S)-c_{+}-c_{-}}}>-1 \tag{4.58}\]
by Lemma 4.3, and we have again \(\mu_{n}=\Theta(n)\) and \(\sigma_{n}=\Theta(\sqrt{n})\). It follows that Theorem 2.1 applies in this case too, which yields (1.2).
We obtain from (4.56)-(4.57)
\[\sigma_{S}^{2}=\frac{2K}{4^{2\ell(S)-c_{+}-c_{-}}}\Bigl{(}1+\frac{K}{4^{2\ell( S)-c_{+}-c_{-}}}\Bigl{(}1-4\ell(S)+2\sum_{i\in I(S)}b_{i}\Bigr{)}\Bigr{)}, \tag{4.59}\]
with \(b_{i}\) given by (4.40). Note that this formula holds also for strong shapes (when \(I(S)=\emptyset\)) by (4.11).
## 5. Open problems
We list here some open problems concerning possible extensions of our results.
1. It seems possible to extend the arguments above to joint factorial moments (5.1) \[\mathbb{E}\left[(X_{S_{1},n})_{r_{1}}\cdots(X_{S_{k},n})_{r_{k}}\right]\] for several shapes \(S_{1},\ldots,S_{k}\), and then obtain a multivariate version of Theorem 1.2 using a multivariate version of Gao and Wormald's theorem [3], [8]. However, we have not checked the details. Such a multivariate theorem would immediately imply, for example, a central limit theorem for the number of components of a given half-length.
2. Considering shapes that are similar, can we obtain a central limit theorem for the number of components that only cross the horizontal axis twice (i.e., the support has size 2, but the half-length is arbitrary)?
3. Is it true, as Kargin [6] has conjectured, that the total number of components is asymptotically normal?
|
2302.00937 | The Fewer Splits are Better: Deconstructing Readability in Sentence
Splitting | In this work, we focus on sentence splitting, a subfield of text
simplification, motivated largely by an unproven idea that if you divide a
sentence in pieces, it should become easier to understand. Our primary goal in
this paper is to find out whether this is true. In particular, we ask, does it
matter whether we break a sentence into two or three? We report on our findings
based on Amazon Mechanical Turk.
More specifically, we introduce a Bayesian modeling framework to further
investigate to what degree a particular way of splitting the complex sentence
affects readability, along with a number of other parameters adopted from
diverse perspectives, including clinical linguistics, and cognitive
linguistics. The Bayesian modeling experiment provides clear evidence that
bisecting the sentence leads to enhanced readability to a degree greater than
what we create by trisection. | Tadashi Nomoto | 2023-02-02T08:25:48Z | http://arxiv.org/abs/2302.00937v1 | # The Fewer Splits are Better: Deconstructing Readability in Sentence Splitting
###### Abstract
In this work, we focus on sentence splitting, a subfield of text simplification, motivated largely by an unproven idea that if you divide a sentence in pieces, it should become easier to understand. Our primary goal in this paper is to find out whether this is true. In particular, we ask, does it matter whether we break a sentence into two or three? We report on our findings based on Amazon Mechanical Turk.
More specifically, we introduce a Bayesian modeling framework to further investigate to what degree a particular way of splitting the complex sentence affects readability, along with a number of other parameters adopted from diverse perspectives, including clinical linguistics, and cognitive linguistics. The Bayesian modeling experiment provides clear evidence that bisecting the sentence leads to enhanced readability to a degree greater than what we create by trisection.
## 1 Introduction
In text simplification, one question people often fail to ask is, whether the technology they are driving truly helps people better understand texts. This curious indifference may reflect the tacit recognition of the partiality of datasets covered by the studies (Xu et al., 2015) or some murkiness that surrounds the goal of text simplification.
As a way to address the situation, we examine a role of simplification in text readability, with a particular focus on sentence splitting. The goal of sentence splitting is to break a sentence into small pieces in a way that they collectively preserve the original meaning. A primary question we ask in this paper is, does a splitting of text affect readability? In the face of a large effort spent in the past on sentence splitting, it comes as a surprise that none of the studies put this question directly to people; in most cases, they ended up asking whether generated texts 'looked simpler' than the original unmodified versions (Zhang and Lapata, 2017), which of course does not say much about their readability. We are not even sure whether there was any agreement among people on what constituted simplification.
Another related question is, how many pieces should we break a sentence into? Two, three, or more? In the paper, we focus on a particular setting where we ask whether there is any difference in readability between two- and three-sentence splits. We also report on how good or bad sentence splits are that are generated by a fine-tuned language model, compared to humans'.
A general strategy we follow in the paper is to elicit judgments from people on whether simplification made a text anyway readable for them (Section 4), and do a Bayesian analysis of their responses to identify factors that may have influenced their decisions (Section 5).1
Footnote 1: We will make available on GitHub the data we created for the study soon after the paper’s publication (they should be found under [https://github.com/tnomoto](https://github.com/tnomoto)).
## 2 Related Work
Historically, there have been extensive efforts in ESL (English as a Second Language) to explore the use of simplification as a way to improve the reading performance of L2 (second language) students. Crossley et al. (2014) presented an array of evidence showing that simplifying text did lead to improved text comprehension by L2 learners, as measured by reading time and the accuracy of their responses to associated questions. They also noticed that simple texts had less lexical diversity, greater word overlap, and greater semantic similarity among sentences than more complicated texts. Crossley et al. (2011) argued for the importance of cohesiveness as a factor influencing readability. Meanwhile, an elaborative modification of text was found to play a role in enhancing readability, which involves adding information to make
the language less ambiguous and rhetorically more explicit. Ross et al. (1991) reported that despite the fact that it made a text longer, the elaborative manipulation of a text produced positive results, with L2 students scoring higher in comprehension questions on modified texts than on the original unmodified versions.
While there have been concerted efforts in the past in the NLP community to develop metrics and corpora purported to serve studies in simplification (Zhang and Lapata, 2017; Sulem et al., 2018; Narayan et al., 2017; Botha et al., 2018; Niklaus et al., 2019; Kim et al., 2021; Xu et al., 2015), they fell far short of addressing how their work contributes to improving the text comprehensibility by readers. Part of our goal is to break away from a prevailing view that relegates the readability to a sideline.
## 3 Method
The data come from two sources, the Split and Rephrase Benchmark (v1.0) (SRB, henceforth) (Narayan et al., 2017) and WikiSplit (Botha et al., 2018). SRB consists of complex sentences aligned with a set of multi-sentence simplifications varying in size from two to four. WikiSplit follows a similar format except that each complex sentence is accompanied only by a two-sentence simplification.2 We asked Amazon Mechanical Turk workers (Turkers, henceforth) to score simplifications on linguistic qualities as well as to indicate whether they have any preference between two-sentence and three-sentence versions in terms of readability.
Footnote 2: We used WikiSplit, together with part of SRB, exclusively to fine-tune BART to give a single-split (bipartite) simplification model, and SRB to develop test data to be administered to humans for linguistic assessments. SRB was derived from WebNLG (Gardent et al., 2017) by making use of RDFs associated with textual snippets to assemble simplifications.
We randomly sampled a portion of SRB, creating test data (call it \(\mathcal{H}\)), which consisted of triplets of the form: \(\langle S_{0},A_{0},B_{0}\rangle\),..., \(\langle S_{i},A_{i},B_{i}\rangle\),..., \(\langle S_{m},A_{m},B_{m}\rangle\), where \(S_{i}\) is a complex sentence, \(A_{i}\) a corresponding two-sentence simplification, and \(B_{i}\) its three-sentence version. While \(A\) alternates between versions created by BART and by human, \(B\) deals only with manual simplifications.3 See Table 1 for a further explanation.
Footnote 3: HSplit (Sulem et al., 2018) is another dataset (based on Zhang and Lapata (2017)) that gives multi-split simplifications. We did not adopt it here as the data came with only 359 sentences with limited variations in splitting.
Separately, we extracted from WikiSplit and SRB, another dataset \(\mathcal{B}\) consisting of complex sentences as a source and two-sentence simplifications as a target (Table 2) i.e. \(\mathcal{B}=\{\langle S^{\prime}_{0},A^{\prime}_{0}\rangle\),..., \(\langle S^{\prime}_{n},A^{\prime}_{n}\rangle\}\), to use it to fine-tune a language model (BART-large).4 The fine-tuning was done using a code available at GitHub.5
Footnote 4: [https://huggingface.co/facebook/bart-large](https://huggingface.co/facebook/bart-large)
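The fine-tuning setup is not spelled out in detail here; a rough sketch of this kind of seq2seq fine-tuning with the Hugging Face transformers API could look as follows. This is not the authors' code, and `pairs` merely stands in for the WikiSplit/SRB complex-sentence/two-sentence pairs.

```python
# Hedged sketch of BART-large fine-tuning for bipartite splitting.
import torch
from transformers import BartTokenizerFast, BartForConditionalGeneration

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

pairs = [("A complex sentence that should be split .",
          "A first simple sentence . A second simple sentence .")]

for src, tgt in pairs:                       # one gradient step per pair, for illustration
    enc = tokenizer(src, return_tensors="pt", truncation=True)
    labels = tokenizer(tgt, return_tensors="pt", truncation=True).input_ids
    loss = model(**enc, labels=labels).loss  # seq2seq cross-entropy loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```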
A task (or a HIT in Amazon's parlance) we asked Turkers to do was to work on a three-part language quiz. The initial problem section introduced a worker to three short texts, corresponding to a triplet \(\langle S_{i},A_{i},B_{i}\rangle\); the second section asked about linguistic qualities of \(A_{i}\) and \(B_{i}\) along three dimensions, _meaning_, _grammar_, and _fluency_; and in the third, we asked two comparison questions: (1) whether \(A_{i}\) and \(B_{i}\) are more readable than \(S_{i}\), and (2) which of \(A_{i}\) and \(B_{i}\) is easier to understand.
Figure 1 gives a screen capture of an initial section of the task. Shown Under **Source** is a complex sentence or \(S_{i}\) for some \(i\). **Text A** and **Text B** correspond to \(A_{i}\) and \(B_{i}\), which were displayed in a random order.
In total, there were 221 HITs (Table 1), each administered to seven people. All of the participants were self-reported native speakers of English with a degree from college or above. Participation was limited to residents of the US, Canada, the UK, Australia, and New Zealand.
\begin{table}
\begin{tabular}{l r r} & & BART & HM \\ A (two-sentence split) & 113 & 108 \\ B (three-sentence split) & \(-\) & 221 \\ \end{tabular}
\end{table}
Table 1: A break down of \(\mathcal{H}\). 113 of them are of type A (bipartite split) generated by BART-large; 108 are of type A created by humans. There were 221 of type B (tripartite split), all of which were produced by humans.
\begin{table}
\begin{tabular}{l r} train & DEV \\
1,135,009 (989,944) & 13,797(5,000) \\ \end{tabular}
\end{table}
Table 2: A training setup for BART. The data comes from SRB (Narayan et al., 2017) and WikiSplit (Botha et al., 2018). The parenthetical numbers indicate amounts of data that originate in WikiSplit (Botha et al., 2018).
## 4 Preliminary Analysis
Table 3 summarizes results from comparison questions. A question, labelled \(\langle\)S, BART-A\(\rangle\)\(\mid_{q}\), asks a Turker, which of Source and BART-A he or she finds easier to understand, where BART-A is a BART generated two-sentence simplification. We had 791 (113\(\times\)7) responses, out of which 32% said they preferred Source, 67% liked BART better, and 1% replied they were not sure. Another question, labelled \(\langle\)S, HUM-A\(\rangle\)\(\mid_{q}\), compares Source to HUM-A, a two-sentence split by human. It got 756 responses (108\(\times\)7). The result is generally parallel to \(\langle\)S, BART-A\(\rangle\)\(\mid_{q}\). The majority of people favored a two-sentence split over a complex sentence. The fact that three sentence versions are also favored over complex sentences suggests that breaking up a complex sentence improves readability, regardless of how many pieces it ends up with.
Table 4 gives a tally of responses to comparison questions on two- and three-sentence splits. More people voted for bipartite over tripartite simplifications. Tables 5 and 6 show scores on the fluency, grammar, and meaning retention of simplifications, comparing BART-A and HUM-B,6 on one hand, and HUM-A and HUM-B, on the other, on a scale of 1 (poor) to 5 (excellent). In either case, we did not see much divergence between A and B in grammar and meaning, but they diverged the most in fluency. A T-test found the divergence statistically significant. Two-sentence simplifications generally scored higher on fluency (over 4.0) than three-sentence counterparts (below 4.0). Table 7 gives an example showing what generated texts looked like in BART-A and HUM-A/B.
Footnote 6: As Tables 5 and 6 indicate, BART-A is generally comparable to HUM-A in the quality of its outputs, suggesting that what it generates is mostly indistinguishable from those by humans.
## 5 A Bayesian Perspective
A question we are curious about at this point is what factors led Turkers to the decisions that they made. We answer the question by way of building a Bayesian model based on predictors assembled from the past literature on readability and in related fields.
Figure 1: A screen capture of HIT. This is what a Turker would be looking at when taking the test.
### Model
We consider a Bayesian logistic regression.7
Footnote 7: Equally useful in explaining relationships between potential causes and the outcome are Bayesian tree-based methods (Chipman et al., 2010; Linero, 2017; Nuti et al., 2019), which we do not explore here. The latter could become a viable choice when an extensive non-linearity exists between predictors and the outcome.
\[Y_{j} \sim Ber(\lambda), \tag{1}\] \[\text{logit}(\lambda) =\beta_{0}+\sum_{i}^{m}\beta_{i}X_{i},\] \[\beta_{i} \sim\mathcal{N}(0,\sigma_{i})\ \ (0\leq i\leq m)\]
\(Ber(\lambda)\) is a Bernoulli distribution with a parameter \(\lambda\). \(\beta_{i}\) represents a coefficient tied to a random variable (predictor) \(X_{i}\), where \(\beta_{0}\) is an intercept. We assume that \(\beta_{i}\), including the intercept, follows a normal distribution with the mean at \(0\) and the variance at \(\sigma_{i}\). \(Y_{i}\) takes either 1 or 0. \(Y=1\) if a Turker finds a two-sentence simplification more readable, and \(Y=0\) if a three-sentence version is preferred.
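A compact way to fit a model of this form is with bambi, as described later in the evaluation; the sketch below uses bambi's default priors as a stand-in for the normal priors in (1), a synthetic data frame in place of the real responses, and only a subset of the Table 8 predictors in the formula.

```python
# Hedged sketch of the Bayesian logistic regression in (1) with bambi.
import numpy as np
import pandas as pd
import bambi as bmb

rng = np.random.default_rng(0)
df = pd.DataFrame({                       # toy stand-in for the per-response data
    "Y": rng.integers(0, 2, size=200),    # 1 = two-sentence split preferred
    "fluency": rng.normal(size=200),
    "meaning": rng.normal(size=200),
    "grammar": rng.normal(size=200),
    "split": rng.integers(0, 2, size=200),
    "ease": rng.normal(size=200),
    "fk_grade": rng.normal(size=200),
})

model = bmb.Model("Y ~ fluency + meaning + grammar + split + ease + fk_grade",
                  df, family="bernoulli")
idata = model.fit(draws=1000, tune=1000, chains=4)  # the paper reports 4,000 draws
                                                    # after a 50,000-step burn-in
```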
\begin{table}
\begin{tabular}{l c c} category & bart-a & hum-b \\
**fluency & 4.04 (0.37) & 3.72 (0.36) \\ grammar & 4.07 (0.30) & 4.05 (0.34) \\ meaning & 4.21 (0.38) & 4.25 (0.35) \\ \end{tabular}
\end{table}
Table 6: Average scores and standard deviations of BART-A and the corresponding HUM-B. BART-A is significantly more fluent than HUM-B. *** indicates the two groups are distinct at the 0.01 level.
\begin{table}
\begin{tabular}{l c c c c} question & \multicolumn{4}{c}{available choices} \\ \cline{2-5} & \multicolumn{1}{c}{} & bart-a & hum-b & not sure & total \\ \(\big{\langle}\)s, bart-a\(\rangle\)\(|_{q}\) & 254 (0.32) & 527 (0.67) & – & 10 (0.01) & 791 \\ \(\big{\langle}\)s, hum-b\(\rangle\)\(|_{q}\) & 290 (0.37) & – & 490 (0.62) & 11 (0.01) & 791 \\ & \multicolumn{1}{c}{} & s & hum-a & hum-b & not sure & total \\ \(\big{\langle}\)s, hum-a\(\rangle\)\(|_{q}\) & 253 (0.33) & 494 (0.65) & – & 9 (0.01) & 756 \\ \(\big{\langle}\)s, hum-b\(\rangle\)\(|_{q}\) & 288 (0.38) & – & 463 (0.61) & 5 (0.01) & 756 \\ \end{tabular}
\end{table}
Table 3: Results from the Comparison Section. We are showing how many Turkers went with each available choice. S: source. BART-A: BART-generated two-sentence simplification. HUM-A: manual two-sentence simplification. HUM-B: manual three-sentence simplification. \(\big{\langle}\)S, BART-A\(\rangle\)\(|_{q}\) asked Turkers which of S and BART-A they found easier to understand. 67% said they would favor BART-A, and 32% S, with 1% not sure. \(\big{\langle}\)S, HUM-B\(\rangle\)\(|_{q}\) compares S and HUM-B for readability. \(\big{\langle}\)S, HUM-A\(\rangle\)\(|_{q}\) looks at S and HUM-A.
\begin{table}
\begin{tabular}{l c c} category & hum-a & hum-b \\
fluency & 4.04 (0.39) & 3.75 (0.38) \\ grammar & 4.12 (0.32) & 4.10 (0.32) \\ meaning & 4.31 (0.36) & 4.33 (0.28) \\ \end{tabular}
\end{table}
Table 5: Average scores and standard deviations of HUM-A and the corresponding HUM-B.
Table 4: Comparison of two- vs three-sentence simplifications. The majority went with two-sentence simplifications regardless of how they were generated.
\begin{table}
\begin{tabular}{c c c c} \hline \hline TYPE & \multicolumn{2}{c}{TEXT} \\ original & The Alderney Airport serves the island of Alderney and its 1st runway is surfaced with poaceae and has a 497 meters long runway. \\ bart-a & Alderney Airport serves the island of Alderney. The 1st runway at Aarney Airport \\ & is surfaced with poaceae and has 497 meters long. \\ hum-a & The runway length of Alderney Airport is 497.0 and the 1st runway has a poaceae \\ & surface. The Alderney Airport serves Alderney. \\ hum-b & The surface of the 1st runway at Alderney airport is poaceae. Alderney Airport \\ & has a runway length of 497.0. The Alderney Airport serves Alderney. \\ \end{tabular}
\end{table}
Table 7: Original vs. Modified
\begin{table}
\begin{tabular}{c c c c} \hline \hline CATEGORY & var name & description & value \\ \hline synthetic & **bart** & true if the simplification is generated by BART; false & categorical \\ & otherwise. & & \\ \hline & **ted1** & the tree edit distance (TED) between a source and its proposed simplification.8 where TED represents the number of editing operations (_insert_, _delete_, _replace_) required to turn one parse tree into another; the greater the number, the less the similarity (Boghrati et al., 2018; Zhang and Shasha, 1989). \\ cohesion & **ted2** & TED across sentences contained in the simplification. & continuous \\ & **subset** & Subset based Tree Kernel (Collins and Duffy, 2002; Moschitti, 2006; Chen et al., 2022)8 \\ & **subtree** & Subtree based Tree Kernel (Collins and Duffy, 2002; Moschitti, 2006; Chen et al., 2022)8 \\ & **overlap** & Szymkiewicz-Simpson coefficient, a normalized cardinality of an intersection of two sets of words (Vijaymeena and Kavitha, 2016).9 \\ \hline & **frazier** & the distance from a terminal to the root or the first ancestor & continuous \\ & **synge** & per-token count of non-terminals that occur to the right & continuous \\ cognitive & **dep length** & of a word in a derivation tree (Yngve, 1960). & continuous \\ & **tnodes** & per-token count of dependencies in a parse (Magerman, 1995; Roark et al., 2007). & continuous \\ & **dale** & Dale-Chall readability score (Chall and Dale, 1995)10 & continuous \\ classic & **ease** & Flesch Reading Ease (Flesch, 1979)10 & continuous \\ & **fk grade** & Flesch-Kincaid Grade Level (Kincaid et al., 1975)10 & continuous \\ \hline & **grammar** & grammatical integrity (manually coded) & continuous \\ perception & **meaning** & semantic fidelity (manually coded) & continuous \\ & **fluency** & language naturalness (manually coded) & continuous \\ \hline structural & **split** & true if the sentence is bisected; false otherwise. & categorical \\ \hline informational & **samsa** & measures how much of the original content is preserved & continuous \\ & in the target (Sulem et al., 2018b). & \\ \hline \hline \end{tabular}
\end{table}
Table 8: Predictors
### Predictors
We use predictors shown in Table 8. They come in six categories: _synthetic_, _cohesion_, _cognitive_, _classic_, _perception_ and _structural_. A _synthetic_ feature indicates whether the simplification was created with BART or not, taking _true_ if it was and _false_ otherwise. Those found under _cohesion_ are our adaptations of SYNSTRUT and CRFCWO, which are among the diverse features McNamara et al. (2014) created to measure cohesion across sentences. SYNSTRUT gauges the uniformity and consistency across sentences by looking at their syntactic similarities, or by counting nodes in a common subgraph shared by neighboring sentences. We substituted SYNSTRUT with **tree edit distance** (Boghrati et al., 2018), as it allows us to handle multiple subgraphs, in contrast to SYNSTRUT, which only looks for a single common subgraph. CRFCWO gives a normalized count of tokens found in common between two neighboring sentences. We emulated it here with the Szymkiewicz-Simpson coefficient, given as \(O(X,Y)=\frac{|X\cap Y|}{\min(|X|,|Y|)}\).
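As a quick illustration, the overlap coefficient can be computed directly from the two token sets; whitespace tokenization is used here as a stand-in for whatever tokenizer the authors actually used.

```python
# Szymkiewicz-Simpson overlap coefficient between two sentences.
def overlap(sent_a: str, sent_b: str) -> float:
    A, B = set(sent_a.lower().split()), set(sent_b.lower().split())
    if not A or not B:
        return 0.0
    return len(A & B) / min(len(A), len(B))

print(overlap("Alderney Airport serves Alderney .",
              "The airport serves the island of Alderney ."))
```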
Predictors in the _cognitive_ class are taken from works in clinical and cognitive linguistics (Roark et al., 2007; Boghrati et al., 2018). They reflect various approaches to measuring the cognitive complexity of a sentence. For example, **yngve** scoring defines a cognitive demand of a word as the number of non-terminals to its right in a derivation rule that are yet to be processed.
#### 5.2.1 **yngve**
Consider Figure 2. **yngve** gives every edge in the parse a number reflecting its cognitive cost. NP gets '1' because it has a sister node VP to its right. The cognitive cost of a word is defined as the sum of numbers on a path from the root to the word. In Figure 2, 'Vanya' would get \(1+0+0=1\), whereas 'home' \(0\). Averaging words' costs gives us an Yngve complexity.
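A minimal sketch of this scoring over an nltk constituency tree follows; the bracketing used for the example sentence is made up, and only the counting rule just described is implemented.

```python
# Yngve scoring: each child contributes the number of sisters to its right,
# and a word's cost is the sum of these contributions along its path from the root.
from nltk import Tree

def yngve_costs(tree):
    costs = []
    def walk(node, acc):
        if isinstance(node, str):              # reached a word
            costs.append(acc)
            return
        n = len(node)
        for i, child in enumerate(node):
            walk(child, acc + (n - 1 - i))     # sisters to the right of this child
    walk(tree, 0)
    return costs

t = Tree.fromstring("(S (NP (NNP Vanya)) (VP (VBZ walks) (NP (NN home))))")
scores = yngve_costs(t)
print(scores, sum(scores) / len(scores))       # per-word costs and their mean
```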
#### 5.2.2 **frazier**
**frazier** scoring views the syntactic depth of a word (the distance from a leaf to a first ancestor that occurs leftmost in a derivation rule) as the most important factor in determining sentence complexity. If we run **frazier** on the sentence in Figure 2, it will produce the scores shown in Figure 3. 'Vanya' gets \(1+1.5=2.5\), 'walks' \(1\) and 'home' \(0\) (which has no leftmost ancestor). Roark et al. (2007) reported that both **yngve** and **frazier** worked well in discriminating subjects with mild memory impairment.
#### 5.2.3 **dep length**
**dep length** (dependency length) and **tnodes** (tree nodes) are also among the features that Roark et al. (2007) found effective. The former measures the number of dependencies in a dependency parse, and the latter the number of nodes in a phrase structure tree.
#### 5.2.4 **subset** and **subtree**
**subset** and **subtree** are both measures based on the idea of _Tree Kernel_(Collins and Duffy, 2002; Moschitti, 2006; Chen et al., 2022).11 The former considers how many subgraphs two parses share, while the latter how many subtrees. Note that subtrees are those structures that end with terminal nodes.
Footnote 11: Tree Kernel is a function defined as \(K(T_{1},T_{2})=\sum_{n_{1}\in N(T_{1})}\sum_{n_{2}\in N(T_{2})}\Delta(n_{1},n_ {2})\) where
\[\Delta(a,b)=\left\{\begin{array}{ll}0&\text{if $a\neq b$;}\\ 1&\text{if $a=b$;}\\ \prod_{i}^{C(a)}(\sigma+\Delta(c_{a}^{(i)},c_{b}^{(i)}))&\text{otherwise.} \end{array}\right.\]
\(C(a)\) = the number of children of \(a\), \(c_{a}^{(i)}\) represents the \(i\)-th child of \(a\). We let \(\sigma>0\).
#### 5.2.5 **Classic readability features**
We also included features that have long been established in the readability literature as standard, i.e. Dale-Chall Readability, Flesch Reading Ease, and Flesch-Kincaid Grade Level (Chall and Dale, 1995; Flesch, 1979; Kincaid et al., 1975).
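One way to obtain these three scores is via the `textstat` package; the paper does not say which implementation was used, so this is only an assumed substitute.

```python
# Classic readability scores computed with textstat.
import textstat

text = "The Alderney Airport serves the island of Alderney."
print(textstat.dale_chall_readability_score(text))
print(textstat.flesch_reading_ease(text))
print(textstat.flesch_kincaid_grade(text))
```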
#### 5.2.6 **Perceptual features**
Those found in the _perception_ category are from judgments Turkers made on the quality of simplifications we asked them to evaluate. We did not provide any specific definition or instruction as to what constitutes grammaticality, meaning, and fluency during the task. So, it is most likely that their responses were spontaneous and perceptual.
Figure 2: Yngve scoring
Figure 3: Frazier scoring
#### 5.2.7 split and samsa
Finally, we have **split**, which records whether or not the simplification is bipartite: it takes _true_ if it is, and _false_ if not. **samsa** is a recent addition to a battery of simplification metrics, which looks at how much of the propositional content in the source remains after a sentence is split (Sulem et al., 2018). (The greater, the better.) We standardized all of the features, except for **bart** and **split**, by turning them into \(z\)-scores, where \(z=\frac{x-\bar{x}}{\sigma}\).
### Evaluation
We trained the model (Eqn. 1) using bambi(Capretto et al., 2020),12 with the burn-in of 50,000 while making draws of 4,000, on 4 MCMC chains (Hamiltonian). As a way to isolate the effect (or importance) of each predictor, we did two things: one was to look at a posterior distribution of each factor, i.e. a coefficient \(\beta\) tied with a predictor, and see how far it is removed from 0; another was to conduct an ablation study where we looked at how the absence of a feature affected the model's performance, which we measured with a metric known as 'Watanabe-Akaike Information Criterion' (WAIC) (Watanabe, 2010; Vehtari et al., 2016), a Bayesian incarnation of AIC (Burnham and Anderson, 2003).13
Footnote 12: [https://bambinos.github.io/bambi/main/index.html](https://bambinos.github.io/bambi/main/index.html)
Footnote 13: WAIC is given as follows.
\[\text{WAIC}=\sum_{i}^{n}\log\mathbb{E}[p(y_{i}|\theta)]-\sum_{i}^{n}\mathbb{V}[ \log p(y_{i}|\theta)]. \tag{2}\]
\(\mathbb{E}[p(y_{i}|\theta)]\) represents the average likelihood under the posterior distribution of \(\theta\), and \(\mathbb{V}[\alpha]\) represents the sample variance of \(\alpha\), i.e. \(\mathbb{V}[\alpha]=\frac{1}{\beta-1}\sum_{s=1}^{\beta}(\alpha_{s}-\bar{\alpha})^{2}\), where \(\alpha_{s}\) is a sample draw from \(p(\alpha)\) and \(\beta\) is the number of draws. A higher WAIC score indicates a better model. \(n\) is the number of data points.
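Equation (2) can be transcribed directly in NumPy from an \(S\times n\) array of pointwise log-likelihoods \(\log p(y_{i}\mid\theta_{s})\); the array used below is synthetic, purely for illustration.

```python
# WAIC from pointwise log-likelihood samples, following (2).
import numpy as np
from scipy.special import logsumexp

def waic(loglik):
    S = loglik.shape[0]
    lppd = logsumexp(loglik, axis=0) - np.log(S)   # log of the average likelihood per point
    p_waic = loglik.var(axis=0, ddof=1)            # sample variance of the log-likelihood
    return lppd.sum() - p_waic.sum()

loglik = np.random.default_rng(1).normal(-0.7, 0.1, size=(4000, 500))
print(waic(loglik))
```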
Figure 4 shows what posterior distributions of parameters associated with predictors looked like after 4,000 draw iterations with MCMC. None of the chains associated with the parameters exhibited divergence. We achieved \(\hat{R}\) between 1.0 and 1.02, for all \(\beta_{i}\), a fairly solid stability (Gelman and Rubin, 1992), indicating that all the relevant parameters had successfully converged.14
Footnote 14: \(\hat{R}=\) the ratio of within- and between-chain variances, a standard tool to check for convergence (Lambert, 2018). The closer the ratio is to the unity, the more likely MCMC chains have converged.
At first glance, it is a bit challenging to know what to make of Figure 4, but a generally accepted rule of thumb is to treat distributions that center around 0 as less important in terms of explaining the observations than those that appear away from zero. If we go along with the rule, then the most likely candidates that affected readability are: **ease**, **subset**, **fk grade**, **grammar**, **meaning**, **fluency**, **split**, and **overlap**. What remains unclear is to what degree these predictors affected readability.
One good way to find out is to do an ablation study, a method to isolate the effects of an individual factor by examining how seriously its removal from a model degrades its performance. The result of the study is shown in Table 9. Each row represents the performance in WAIC of a model with a particular predictor removed. Thus, 'ted1' in Table 9 represents a model that includes all the predictors in Table 8, except for **ted1**. A row in blue represents the full model, which had none of the features disabled. Appearing above the base model means that the removal of a feature had a positive effect, i.e. the feature is redundant. Appearing below means that the removal had a negative effect, indicating that we should not forgo the feature. A feature becomes more relevant as we go down, and less relevant as we go up the table. Thus the most relevant is **fluency**, followed by **meaning**; the least relevant is **subtree**, followed by **dale**, and so forth. We can tell from Table 9 what predictors we need to keep to explain the readability: they are **grammar**, **split**, **fk grade**, **ease**, **meaning** and **fluency** (call them 'select features'). Note that **bart** is in the negative realm, meaning that from a readability perspective, people did not care about whether the simplification was done by human or machine. **samsa** was also found in the negative domain, implying that from an information perspective, a two-sentence splitting carries just as much information as a three-way division of a sentence.
Figure 4: Posterior distributions of coefficients (\(\beta\)'s) in the full model. The further the distribution moves away from 0, the more relevant it becomes to predicting the outcome.
To further nail down to what extent they are important, we ran another ablation experiment involving the select features alone. The result is shown in Table 10. At the bottom is **fluency**, the second to the bottom is **split**, followed by **meaning**, and so forth. As we go up the table, a feature becomes less and less important. The posterior distributions of these features are shown in Figure 5.15 Not surprisingly, they are found away from zero, with **fluency** furthest away. The result indicates that, contrary to the popular wisdom that classic readability metrics such as **ease** and **fk grade** are of little use, they had a large sway on decisions people made when they were asked about readability.
Footnote 15: We found that they had \(1.0\leq\hat{R}\leq 1.01\), a near-perfect stability. Settings for MCMC, i.e. the number of burn-ins and that of draws, were set to the same as before.
## 6 Conclusions
In this work, we asked two questions: does cutting up a sentence help the reader better understand the text? And if so, does it matter how many pieces we break it into? We found that splitting does allow the reader to better interact with the text (Table 3) and, moreover, two-sentence simplifications are clearly favored over three-sentence simplifications (Tables 3, 9, 10). Why two-sentence splits make a better simplification is something of a mystery. A possible answer may lie in a potential disruption splitting may have caused in the sentence-level discourse structure, whose integrity, Crossley et al. (2011, 2014) argued, constitutes a critical part of simplification; this is a topic that we believe is worth further exploration in the future.
## 7 Limitations
* We did not consider cases where a sentence is split into more than three. This is mainly due to our failure to find a dataset containing manual simplifications of length greater than three in a large number. While it is unlikely that our claim in this work does not hold for cases beyond three, testing the hypothesis on cases that involve more than three sentences would be desirable.
* A cohort of people we solicited for the current work are generally well educated adults who speak English as the first language. Therefore, the results we found in this work may not necessarily hold for L2-learners, minors, or those who do not have college level education.
## 8 Acknowledgement
We thank anonymous reviewers for sharing with us their comments and ideas. We note their effort with much gratitude and appreciation.
|
2307.03438 | RNN Based Channel Estimation in Doubly Selective Environments | Doubly-selective channel estimation represents a key element in ensuring
communication reliability in wireless systems. Due to the impact of multi-path
propagation and Doppler interference in dynamic environments, doubly-selective
channel estimation becomes challenging. Conventional symbol-by-symbol (SBS) and
frame-by-frame (FBF) channel estimation schemes encounter performance
degradation in high mobility scenarios due to the usage of limited training
pilots. Recently, deep learning (DL) has been utilized for doubly-selective
channel estimation, where long short-term memory (LSTM) and convolutional
neural network (CNN) networks are employed in the SBS and FBF, respectively.
However, their usage is not optimal, since LSTM suffers from long-term memory
problem, whereas, CNN-based estimators require high complexity. For this
purpose, we overcome these issues by proposing an optimized recurrent neural
network (RNN)-based channel estimation schemes, where gated recurrent unit
(GRU) and Bi-GRU units are used in SBS and FBF channel estimation,
respectively. The proposed estimators are based on the average correlation of
the channel in different mobility scenarios, where several
performance-complexity trade-offs are provided. Moreover, the performance of
several RNN networks is analyzed. The performance superiority of the proposed
estimators against the recently proposed DL-based SBS and FBF estimators is
demonstrated for different scenarios while recording a significant reduction in
complexity. | Abdul Karim Gizzini, Marwa Chafii | 2023-07-07T07:51:22Z | http://arxiv.org/abs/2307.03438v1 | # RNN Based Channel Estimation in Doubly Selective Environments
###### Abstract
Doubly-selective channel estimation represents a key element in ensuring communication reliability in wireless systems. Due to the impact of multi-path propagation and Doppler interference in dynamic environments, doubly-selective channel estimation becomes challenging. Conventional symbol-by-symbol (SBS) and frame-by-frame (FBF) channel estimation schemes encounter performance degradation in high mobility scenarios due to the usage of limited training pilots. Recently, deep learning (DL) has been utilized for doubly-selective channel estimation, where long short-term memory (LSTM) and convolutional neural network (CNN) networks are employed in the SBS and FBF, respectively. However, their usage is not optimal, since LSTM suffers from long-term memory problem, whereas, CNN-based estimators require high complexity. For this purpose, we overcome these issues by proposing an optimized recurrent neural network (RNN)-based channel estimation schemes, where gated recurrent unit (GRU) and Bi-GRU units are used in SBS and FBF channel estimation, respectively. The proposed estimators are based on the average correlation of the channel in different mobility scenarios, where several performance-complexity trade-offs are provided. Moreover, the performance of several RNN networks is analyzed. The performance superiority of the proposed estimators against the recently proposed DL-based SBS and FBF estimators is demonstrated for different scenarios while recording a significant reduction in complexity.
Wireless communications, Channel estimation, Deep learning, RNN, LSTM, GRU, Bi-GRU.
## I Introduction
The recent advances in beyond-5G networks enable high-data-rate and low-latency mobile wireless applications [1]. Wireless communications offer mobility to different nodes within the network; however, this mobility has a severe negative impact on communication reliability [2]. In such an environment, the wireless channel is said to be doubly-selective, i.e. it varies in both time and frequency. This is due to the propagation medium, where the transmitted signals propagate through multiple paths, each having a different power, delay, and Doppler shift resulting from the motion of network nodes. The accuracy of the estimated channel influences the system performance, since it affects different operations at the receiver such as equalization, demodulation, and decoding. Therefore, ensuring communication reliability using accurate channel estimation is crucial, especially in high mobility scenarios [3].
In general, a few pilots are allocated within the transmitted frame in order to maintain a good transmission data rate, and the state-of-the-art (SoA) channel estimation schemes can be categorized into: (_i_) SBS estimators, where the channel is estimated for each received symbol separately [4, 5]; (_ii_) FBF estimators, where the previous, current, and future pilots are employed in the channel estimation for each received symbol [6]. Higher channel estimation accuracy can be achieved by using FBF estimators, since the channel estimation of each symbol takes advantage of the knowledge of previous, current, and future allocated pilots within the frame, unlike SBS estimators, where only the previous and current pilots are exploited in the channel estimation for each received symbol. However, the allocated pilots are insufficient for accurately tracking the doubly-selective channel. As a result, conventional SBS channel estimation schemes use the demapped data subcarriers besides the pilot subcarriers to accomplish the channel estimation task. This procedure is known as data-pilot aided (DPA) channel estimation, which is unreliable due to the demapping errors of the data subcarriers; these errors are also enlarged from one symbol to another, leading to accumulated error in the channel estimation process. Moreover, DPA-based channel estimation schemes such as spectral temporal averaging (STA) [4] and time-domain reliable test frequency domain interpolation (TRFI) [5] are impractical solutions, as they rely on many assumptions such as a high correlation of the channel within the received frame. In addition, they lack robustness in highly dynamic environments. On the other hand, several 2D interpolation methods, such as radial basis function (RBF) [7] and average decision-directed with time truncation (ADD-TT) [8], are employed in FBF channel estimation. However, the performance of these interpolation methods is limited when employed in high mobility scenarios, since they use fixed interpolation parameters. Moreover, the well-known conventional 2D linear minimum mean square error (LMMSE) FBF estimator uses the channel and noise statistics in the estimation, thus leading to performance comparable to the ideal case. However, it suffers from high complexity, making it impractical in real-case scenarios. Therefore, investigating both SBS and FBF channel estimators with a good complexity-performance trade-off is a crucial need for improving the channel estimation accuracy while maintaining affordable computational complexity.
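To make the DPA procedure concrete, a minimal sketch is given below, assuming QPSK data symbols and simple per-subcarrier zero-forcing equalization; the array sizes and noise level are illustrative only.

```python
# Hedged sketch of one DPA update step: equalize with the previous channel
# estimate, hard-demap to the nearest constellation point, then re-estimate.
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def dpa_update(y, h_prev):
    s_eq = y / h_prev                                            # zero-forcing equalization
    d = QPSK[np.argmin(np.abs(s_eq[:, None] - QPSK[None, :]), axis=1)]  # hard demapping
    return y / d                                                 # updated estimate for this symbol

# toy example: 64 subcarriers and an imperfect previous estimate
rng = np.random.default_rng(0)
h_true = rng.normal(size=64) + 1j * rng.normal(size=64)
data = QPSK[rng.integers(0, 4, size=64)]
y = h_true * data + 0.05 * (rng.normal(size=64) + 1j * rng.normal(size=64))
h_hat = dpa_update(y, h_prev=h_true + 0.1)
print(np.mean(np.abs(h_hat - h_true) ** 2))                      # estimation MSE
```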
Recently, a great success of deep learning (DL) has been witnessed in several wireless communications applications [9, 10], including localization [11, 12, 13] and channel estimation [14, 15, 16, 17], particularly when DL is integrated with conventional SBS and FBF estimators. This success is due to the robustness, low complexity, and good generalization ability of DL algorithms, which make their integration into communication systems beneficial. Motivated by these advantages, DL algorithms have been integrated into doubly-selective channel estimators in two different manners: (\(i\)) feed-forward neural networks (FNNs) and long short-term memory (LSTM) networks with different architectures and configurations are employed on top of SBS estimators [18, 19, 20, 21, 22]; (\(ii\)) convolutional neural networks (CNNs) are integrated into FBF estimators [7, 8, 23], where the estimated channel for the whole frame is treated as a 2D low-resolution noisy image and CNN-based processing is applied as a super-resolution and denoising technique. These SoA DL-based SBS and FBF estimators still encounter a considerable performance degradation due to the poor accuracy of the employed initial channel estimation, as in [19, 20]. Moreover, they require a high computational complexity due to the employed DL architectures [7, 8, 23].
In order to achieve a better performance-complexity trade-off in different mobility scenarios according to the channel correlation, this paper sheds light on recurrent neural network (RNN) based channel estimation in doubly-selective environments for both SBS and FBF channel estimation, where optimized RNNs, represented by a gated recurrent unit (GRU) and a bi-directional (Bi)-GRU, are used in the proposed SBS and FBF channel estimators, respectively. This yields a low-complexity and robust channel estimation in different mobility scenarios. The proposed GRU-based SBS estimator uses only one GRU network instead of two, as is the case in the recently proposed LSTM-based estimator [21]. After that, DPA estimation is applied using the GRU estimated channel. Finally, unlike [21], where an FNN is used for noise elimination, the proposed GRU-based estimators employ time averaging (TA) processing as a noise alleviation technique, where the noise alleviation ratio is calculated analytically. Moreover, motivated by the fact that Bi-RNNs are designed to perform 2D interpolation of unknown data bounded between known data [24], the proposed Bi-GRU channel estimator is designed to overcome the limitations of the FBF CNN-based channel estimation schemes, where an end-to-end 2D interpolation is performed by the proposed Bi-GRU unit. In this context, the proposed Bi-GRU channel estimator employs an adaptive frame design, where comb pilot allocation is replaced by full-pilot symbols that are inserted periodically within the transmitted frame. As a first step, the channel is estimated at the inserted pilot symbols; after that, the Bi-GRU acts as an end-to-end 2D interpolation unit to estimate the channel at the data symbols without the need for any initial estimation. By performing this interpolation, the proposed Bi-GRU based estimator is able to further improve the estimation performance, unlike the CNN-based estimators that work according to the noise mitigation principle [25, 26] rather than performing actual interpolation. Simulation results show the performance superiority of the proposed RNN-based channel estimation schemes against the SoA SBS and FBF channel estimators while recording an outstanding computational complexity reduction. To sum up, the contributions of this paper are listed below 1: Footnote 1: We would like to mention that part of this work related to the Bi-RNN based FBF channel estimation has been accepted for publication in the IEEE ICC 2023 conference [27].
* Proposing low-complexity and robust RNN-based channel estimation schemes, where optimized GRU and Bi-GRU units are employed to accurately estimate the doubly-selective channel in SBS and FBF fashions, respectively.
* Employing a GRU unit as a pre-processing module for DPA and TA processing in SBS channel estimation, whereas an end-to-end 2D interpolation using a Bi-GRU unit is proposed for FBF channel estimation.
* Providing a brief overview of the theoretical concept of the studied RNN networks.
* Analyzing the appropriate RNN architectures to be employed according to the average channel correlation within the frame in different mobility scenarios, where the advantages of using the proposed optimized GRU unit instead of regular LSTM unit are discussed.
* Showing that the proposed RNN-based channel estimators record a significant superiority over the SoA SBS and FBF channel estimators in terms of bit error rate (BER) and throughput for different modulation orders, mobility scenarios, and frame lengths.
* Illustrating the advantage of using the ensemble learning (EL) algorithm [28] in the generalization of one DL model that is robust against a range of Doppler frequencies.
* Providing a detailed computational complexity analysis for the studied channel estimators, where we show that the proposed RNN-based channel estimators achieve substantial reduction in complexity in comparison with the SoA SBS and FBF channel estimators.
The remainder of this paper is organized as follows: Section II presents the system model. The SoA DL-based channel estimation schemes are thoroughly investigated and discussed in Section III. Section IV illustrates the framework of the proposed RNN-based channel estimation schemes, besides providing a brief overview of the main RNN networks integrated into the doubly-selective channel estimation. In Section V, different modulation orders are used to present simulation results, wherein the performance of the studied estimators is examined in terms of BER. Detailed computational complexity analysis is provided in Section VI. Finally, Section VII concludes this study.
## II System Model
Consider a frame consisting of \(I\) orthogonal frequency division multiplexing (OFDM) symbols. The \(i\)-th transmitted frequency-domain OFDM symbol \(\tilde{\mathbf{x}}_{i}[k]\), is denoted by
\[\tilde{\mathbf{x}}_{i}[k]=\left\{\begin{array}{ll}\tilde{\mathbf{x}}_{i,d}[k],&k\in \mathcal{K}_{\mathsf{d}}.\\ \tilde{\mathbf{x}}_{i,p}[k],&k\in\mathcal{K}_{\mathsf{p}}.\\ 0,&k\in\mathcal{K}_{\mathsf{n}}.\end{array}\right. \tag{1}\]
where \(k\) refers to the subcarrier index, with \(0\leq k\leq K-1\). Moreover, the \(d\) and \(p\) indices refer to the transmitted data and pilot subcarriers, respectively. The total number of subcarriers is divided into \(K_{\mathsf{on}}=K_{d}+K_{p}\) subcarriers in addition to \(K_{n}\) null guard-band subcarriers, where \(\tilde{\mathbf{x}}_{i,d}[k]\) and \(\tilde{\mathbf{x}}_{i,p}[k]\) represent the modulated data symbols and the predefined pilot
symbols allocated at a set of subcarriers denoted \(\mathcal{K}_{\text{d}}\) and \(\mathcal{K}_{\text{p}}\), respectively. The received frequency-domain OFDM symbol denoted as \(\tilde{\mathbf{y}}_{i}[k]\) is expressed as follows
\[\tilde{\mathbf{y}}_{i}[k]=\tilde{\mathbf{h}}_{i}[k]\tilde{\mathbf{x}}_{i}[k]+\tilde{\mathbf{v}}_ {i}[k],\ k\in\mathcal{K}_{\text{on}}. \tag{2}\]
Here, \(\tilde{\mathbf{h}}_{i}[k]\in\mathbb{C}^{K_{\text{on}}\times 1}\) refers to the frequency response of the doubly-selective channel at the \(i\)-th OFDM symbol and \(k\)-th subcarrier. \(\tilde{\mathbf{v}}_{i}[k]\) signifies the additive white Gaussian noise (AWGN) of variance \(\sigma^{2}\). As a matrix form, (2) can be expressed as follows
\[\tilde{\mathbf{Y}}[k,i]=\tilde{\mathbf{H}}[k,i]\tilde{\mathbf{X}}[k,i]+\tilde{\mathbf{V}}[k,i],\ k\in\mathcal{K}_{\text{on}}, \tag{3}\]
where \(\tilde{\mathbf{V}}[k,i]\in\mathbb{C}^{K_{\text{on}}\times I}\) and \(\tilde{\mathbf{H}}\in\mathbb{C}^{K_{\text{on}}\times I}\) denote the AWGN noise and the doubly-selective frequency response of the channel for all symbols within the transmitted OFDM frame, respectively.
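To make the model concrete, the following minimal NumPy sketch generates a toy received frame according to (1)-(3). The grid sizes, the QPSK-only mapping, the absence of a pilot pattern, and the i.i.d. Rayleigh response per subcarrier and symbol are illustrative assumptions; the actual simulations use the IEEE 802.11p grid and the tapped-delay-line channel models of Table II.

```python
import numpy as np

rng = np.random.default_rng(0)
K, K_on, I = 64, 52, 100          # illustrative sizes, not the exact 802.11p grid
snr_db = 20
sigma2 = 10 ** (-snr_db / 10)

# QPSK data on the active subcarriers of every symbol (pilot pattern omitted here)
bits = rng.integers(0, 2, size=(2, K_on, I))
x = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)   # K_on x I

# Toy doubly-selective response: i.i.d. Rayleigh fading per subcarrier and symbol.
# A real simulator would draw it from the tapped-delay-line models of Table II.
H = (rng.standard_normal((K_on, I)) + 1j * rng.standard_normal((K_on, I))) / np.sqrt(2)

# Frequency-domain reception, eq. (3): Y = H * X + V on the active subcarriers
V = np.sqrt(sigma2 / 2) * (rng.standard_normal((K_on, I)) + 1j * rng.standard_normal((K_on, I)))
Y = H * x + V
print(Y.shape)   # (52, 100): one column per OFDM symbol
```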
## III SoA DL-based channel estimation
This section presents the recently proposed SoA DL-based SBS and FBF channel estimation schemes, where the processing steps applied in each estimator are presented.
### _DL-based SBS channel estimation schemes_
In general, FNN and LSTM networks are employed in SBS channel estimation: optimized FNNs are integrated as a post-processing unit with conventional SBS channel estimators, as in DPA-FNN [18], STA-FNN [19], and TRFI-FNN [20], whereas LSTM networks are utilized as a pre-processing unit in the LSTM-FNN-DPA [21] and LSTM-DPA-TA [22] channel estimators. Both implementations improve the accuracy of the channel estimation; however, LSTM-based estimation shows a considerable superiority over FNN-based estimation. In this context, and since we focus on RNN-based channel estimation, this section presents the steps applied in the LSTM-based channel estimators.
#### III-A1 LSTM-FNN-DPA
The work proposed in [21] shows that employing LSTM processing prior to the DPA estimation can lead to a significant improvement in the overall performance. In this context, two cascaded LSTM and FNN networks are employed for both channel estimation and noise compensation. The LSTM-FNN-DPA estimator employs the least squares (LS) estimated channel at the previous and current received pilots, such that
\[\hat{\hat{\mathbf{h}}}_{i,p}[k]=\frac{\tilde{\mathbf{y}}_{i}[k]}{\tilde{\mathbf{x}}_{p}[k ]},\ \hat{\hat{\mathbf{h}}}_{i-1,p}[k]=\frac{\tilde{\mathbf{y}}_{i-1}[k]}{\tilde{\mathbf{x}}_{p}[ k]},\ k\in\mathcal{K}_{\text{p}}. \tag{4}\]
The LS estimated channels are fed as input to both the LSTM and FNN networks, and the resulting LSTM-FNN estimated channel is then used in the DPA estimation as follows
\[\tilde{\mathbf{d}}_{\text{LSTM-FNN}_{i,d}}[k]=\mathfrak{D}\big{(}\frac{\tilde{\mathbf{y}}_{i,d}[k]}{\hat{\hat{\mathbf{h}}}_{\text{LSTM-FNN}_{i-1,d}}[k]}\big{)}. \tag{5}\]

\[\hat{\hat{\mathbf{h}}}_{\text{DL}_{i,d}}[k]=\frac{\tilde{\mathbf{y}}_{i,d}[k]}{\tilde{\mathbf{d}}_{\text{LSTM-FNN}_{i,d}}[k]}. \tag{6}\]
We note that, at the beginning of the frame (\(i=1\)), \(\hat{\hat{\mathbf{h}}}_{i-1,p}[k]\) denotes the LS estimated channel at the received preamble symbols, such that
\[\hat{\hat{\mathbf{h}}}_{\text{LS}}[k]=\frac{\sum\limits_{u=1}^{P}\tilde{\mathbf{y}}_{u,p}[k]}{P\tilde{\mathbf{x}}_{p}[k]},\ k\in\mathcal{K}_{\text{on}}. \tag{7}\]
While this estimator can outperform the FNN-based estimators, it encounters a high complexity cost arising from the employment of two DL networks.
#### III-A2 LSTM-DPA-TA
The authors in [22] propose to use only one optimized LSTM network instead of two, as implemented in the LSTM-FNN-DPA estimator. In addition, noise compensation is achieved by applying TA processing. This methodology requires only the previous pilots \(\hat{\hat{\mathbf{h}}}_{i-1,p}[k]\), besides the previously LSTM estimated channel, as input. Then, the LSTM estimated channel is employed in the DPA estimation as follows
\[\tilde{\mathbf{d}}_{\text{LSTM}_{i}}[k]=\mathfrak{D}\big{(}\frac{\tilde{\mathbf{y}}_{i}[k]}{\hat{\hat{\mathbf{h}}}_{\text{LSTM}_{i-1}}[k]}\big{)},\ \hat{\hat{\mathbf{h}}}_{\text{LSTM}_{0}}[k]=\hat{\hat{\mathbf{h}}}_{\text{LS}}[k], \tag{8}\]
\[\hat{\hat{\mathbf{h}}}_{\text{LSTM-DPA}_{i}}[k]=\frac{\tilde{\mathbf{y}}_{i}[k]}{\tilde{\mathbf{d}}_{\text{LSTM}_{i}}[k]}. \tag{9}\]
AWGN noise alleviation can be achieved by further applying TA processing such that
\[\hat{\hat{\mathbf{h}}}_{\text{DL-TA}_{i,d}}=(1-\frac{1}{\alpha})\hat{\hat{\mathbf{h}}}_{\text{DL-TA}_{i-1,d}}+\frac{1}{\alpha}\hat{\hat{\mathbf{h}}}_{\text{LSTM-DPA}_{i,d}}. \tag{10}\]
Here, \(\alpha\) denotes the utilized weighting coefficient. In [22], the authors use a fixed \(\alpha=2\) for simplicity. Therefore, the TA applied in (10) reduces the AWGN noise power \(\sigma^{2}\) iteratively within the received OFDM frame according to the ratio
\[R_{\text{DL-TA}_{q}}=\left(\frac{1}{4}\right)^{(q-1)}+\sum\limits_{j=2}^{q} \left(\frac{1}{4}\right)^{(q-j+1)}=\frac{4^{q-1}+2}{3\times 4^{q-1}}. \tag{11}\]
This corresponds to the AWGN noise power ratio at the \(q\)-th estimated channel, where \(1<q<I+1\), and \(R_{\text{DL-TA}_{1}}=1\) denotes the AWGN noise power ratio at \(\hat{\hat{\mathbf{h}}}_{\text{LS}}[k]\). From the derivation of \(R_{\text{DL-TA}_{q}}\), it can be seen that the noise power decreases along the received OFDM frame, i.e. the SNR increases, resulting in an overall improved performance. The full derivation of (11) can be found in [22]. Even though the LSTM-DPA-TA estimator improves the performance compared to the LSTM-FNN-DPA estimator, it still suffers from a high computational complexity. Moreover, in Section IV we show that employing an LSTM unit in the channel estimation can affect the estimation accuracy negatively, whereas the proposed GRU-based channel estimation provides a better performance-complexity trade-off.
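As a quick sanity check of (10)-(11), the following sketch (our own illustration, not part of [22]) feeds pure noise through the TA recursion with \(\alpha=2\) and compares the empirical residual noise power against the closed-form ratio \(R_{\text{DL-TA}_{q}}\); the frame length and sample counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
K_on, I, trials = 52, 20, 2000
sigma2 = 1.0

def R_closed(q):
    # Closed-form noise power ratio of eq. (11)
    return (4 ** (q - 1) + 2) / (3 * 4 ** (q - 1))

# Empirical check: run the TA recursion of eq. (10) with alpha = 2 on pure noise
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((trials, I, K_on)) +
                               1j * rng.standard_normal((trials, I, K_on)))
h_ta = noise[:, 0]                       # q = 1 corresponds to the LS/preamble estimate
for q in range(2, I + 1):
    h_ta = 0.5 * h_ta + 0.5 * noise[:, q - 1]
    empirical = np.mean(np.abs(h_ta) ** 2) / sigma2
    if q in (2, 5, 10, 20):
        print(q, round(empirical, 3), round(R_closed(q), 3))
```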
### _CNN-based FBF channel estimation schemes_
In [23], CNN-aided weighted interpolation (WI) channel estimation schemes have been proposed. The WI-CNN estimators use an adaptive frame structure according to the mobility scenario. The idea is to avoid using comb pilot allocation and
insert \(Q\) pilot OFDM symbols with different configurations within the transmitted OFDM frame instead. In this context, the WI-CNN estimators employ one, two, and three pilot symbols in low, high, and very high mobility scenarios, respectively. Following the selection of the frame structure, the WI-CNN estimators proceed as follows
* **Pilot symbols channel estimation**: In order to estimate the channel at the inserted pilot symbols, the basic LS estimation, denoted as _simple LS (SLS)_, is applied using the received preambles as shown in (7), and using each received pilot symbol such that \[\hat{\hat{\mathbf{h}}}_{\text{LS}_{q}}[k]=\frac{\tilde{\mathbf{y}}_{q}^{(p)}[k]}{\tilde{\mathbf{p}}[k]}=\tilde{\mathbf{h}}_{q}[k]+\tilde{\mathbf{v}}_{q}[k],\ k\in\mathcal{K}_{\text{on}},\] (12) where \(\tilde{\mathbf{v}}_{q}[k]\) represents the noise at the \(q\)-th received pilot symbol, \(1\leq q\leq Q\) denotes the inserted pilot symbol index, and \(\tilde{\mathbf{Y}}_{Q}=[\tilde{\mathbf{y}}_{1}^{(p)},\dots,\tilde{\mathbf{y}}_{q}^{(p)},\dots,\tilde{\mathbf{y}}_{Q}^{(p)}]\in\mathbb{C}^{K_{\text{on}}\times Q}\). Moreover, the _accurate LS (ALS)_ estimation can be obtained by applying the discrete Fourier transform (DFT) interpolation of \(\hat{\mathbf{h}}_{q,L}\) such that \[\hat{\hat{\mathbf{h}}}_{\text{ALS}_{q}}=\mathbf{F}_{\text{on}}\hat{\mathbf{h}}_{q,L},\ k\in\mathcal{K}_{\text{on}},\] (13) where \(\hat{\mathbf{h}}_{q,L}\in\mathbb{C}^{L\times 1}\) denotes the estimated channel impulse response at the \(q\)-th received pilot symbol. We note that the _ALS_ and _SLS_ are used for full pilot (FP) allocation. However, if the number of channel taps \(L\) is known, the channel estimation requires only \(L\) pilots in each pilot symbol, where DFT interpolation can be applied to the estimated channel impulse response \(\hat{\mathbf{h}}_{q,L}\) such that \[\hat{\hat{\mathbf{h}}}_{\text{DFT}_{q}}=\mathbf{F}_{\text{on}}\hat{\mathbf{h}}_{q,L},\ k\in\mathcal{K}_{\text{on}},\] (14) where \(\mathbf{F}_{\text{on}}\in\mathbb{C}^{K_{\text{on}}\times L}\) denotes the truncated DFT matrix obtained by selecting the \(\mathcal{K}_{\text{on}}\) rows and \(L\) columns from the \(K\)-DFT matrix.
* **Data symbols channel estimation**: After estimating the channel at the inserted \(Q\) pilot symbols, the WI-CNN estimator divides the received frame into several sub-frames that are grouped as follows \[\hat{\hat{\mathbf{H}}}_{q}=[\hat{\hat{\mathbf{h}}}_{q-1},\hat{\hat{\mathbf{h}}}_{q}],\ q=1,\cdots,Q,\] (15) where \(\hat{\hat{\mathbf{h}}}_{q}\) refers to the implemented LS estimation. Then, the estimated channel for the \(i\)-th received data OFDM symbol within each sub-frame is calculated as a weighted summation of the estimated channels at the pilot symbols, such that \[\hat{\hat{\mathbf{H}}}_{\text{WI}_{f}}=\hat{\hat{\mathbf{H}}}_{f}\mathbf{C}_{f},\] (16) where \(\hat{\hat{\mathbf{H}}}_{f}\in\mathbb{C}^{K_{\text{on}}\times 2}\) denotes the LS estimated channels at the pilot symbols within the \(f\)-th sub-frame, and \(\mathbf{C}_{f}\in\mathbb{R}^{2\times I_{f}}\) denotes the interpolation weights of the \(I_{f}\) OFDM data symbols within the \(f\)-th sub-frame. The interpolation weights \(\mathbf{C}_{f}\) are calculated by minimizing the mean squared error (MSE) between the ideal channel \(\hat{\mathbf{H}}_{f}\) and the LS estimated channel at the OFDM pilot symbols \(\hat{\hat{\mathbf{H}}}_{f}\), as derived in [29]. In the final step, an optimized super-resolution CNN (SR-CNN) is employed on top of the WI estimators in the low mobility scenario, whereas an optimized denoising CNN (DN-CNN) is considered in the high mobility ones.
The WI-CNN estimators suffer from a high computational complexity. Moreover, using noise-alleviation CNNs is not sufficient to accurately estimate the doubly-selective channel. Therefore, we propose a Bi-GRU channel estimator that performs 2D interpolation, unlike the SR-CNN and DN-CNN networks, which are based on noise alleviation techniques. As a result, the proposed Bi-GRU channel estimator achieves a performance superiority while recording a significant decrease in computational complexity in comparison to the WI-CNN estimators, as illustrated in Section V and Section VI.
## IV Proposed RNN-based channel estimation schemes
In this section, the main RNN concepts and extensions are first thoroughly introduced. Then, a detailed explanation of the proposed RNN and Bi-RNN based schemes for SBS and FBF channel estimation, respectively, is presented.
### _Recurrent Neural Networks: Review_
An RNN is a type of artificial neural network (ANN) designed to work with sequential data, which can be in the form of time series, text, audio, video, etc. An RNN uses the previous information in the sequence to produce the current output; it is equipped with a memory that allows information from prior inputs to influence the current output. This mechanism is the key to the success of RNNs in sequential problems. The core concept of RNNs is to keep or discard input data in a recurring manner; therefore, RNN gates contain Sigmoid activations. A Sigmoid activation regulates values in the range \([0,1]\), which is helpful to update or forget data, because any number multiplied by 0 disappears and is discarded, whereas any number multiplied by 1 is kept. The network can thus learn which data is unimportant and can be forgotten, and which data is important to keep. Moreover, a Tanh activation is used within the RNN architecture in order to regularize the network during the training phase by squashing values between -1 and 1, since zero values negatively affect the training accuracy.
In order to make things clearer, the RNN input is expressed in terms of the initial estimated channel at the \(i\)-th OFDM symbol denoted by \(\hat{\hat{\mathbf{h}}}_{\rho_{i}}\in\mathbb{R}^{K_{\text{in}}\times 1}\), where \(K_{\text{in}}=2K_{\text{on}}\). Since DL networks work with real-valued numbers, then,
\[\hat{\hat{\mathbf{h}}}_{\rho_{i}}=\text{vec}\Big{\{}\Re\big{(}\hat{\hat{\mathbf{h}}}_{ \rho_{i}}\big{)},\Im\big{(}\hat{\hat{\mathbf{h}}}_{\rho_{i}}\big{)}\Big{\}}. \tag{17}\]
Here, \(\rho\) refers to the applied initial channel estimation scheme, and \(\Re\{.\}\) and \(\Im\{.\}\) denote the real and imaginary values of the initial estimated channel \(\hat{\hat{\mathbf{h}}}_{\rho_{i}}\in\mathbb{C}^{K_{\text{on}}\times 1}\), respectively. Moreover, the RNN output is denoted by \(\hat{\hat{\mathbf{h}}}_{\rho\text{-RNN}_{i}}\in\mathbb{R}^{K_{\text{in}}\times 1}\). We note that FNN network treats the initial estimated channels separately, where it produces the output \(\hat{\hat{\mathbf{h}}}_{\rho\text{-RNN}_{i}}\) for each input. By doing this single input-output mapping, the FNN network is able to learn the frequency correlation of the doubly-selective
channel, besides correcting the initial estimation error. On the contrary, RNN network treats the initial estimated channel as a correlated sequence, where the current \(\hat{\mathbf{h}}_{\rho\text{-RNN}_{i}}\) is computed using the previous RNN estimated channel \(\hat{\mathbf{h}}_{\rho\text{-RNN}_{i-1}}\) and the current initial estimated channel \(\hat{\mathbf{h}}_{\rho_{i}}\). This process allows the RNN network to learn both frequency and time correlation of the doubly-selective channel, and thus, RNN outperforms FNN in the channel estimation task.
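For illustration, a minimal helper implementing the real-valued stacking of (17) and its inverse could look as follows; the function names are ours and the example vector is arbitrary.

```python
import numpy as np

def to_real_input(h_complex):
    """Stack real and imaginary parts as in eq. (17): C^{K_on} -> R^{2*K_on}."""
    return np.concatenate([h_complex.real, h_complex.imag])

def to_complex_output(h_real):
    """Invert the stacking to recover a complex channel vector from the RNN output."""
    half = h_real.shape[0] // 2
    return h_real[:half] + 1j * h_real[half:]

h = np.array([1 + 2j, -0.5 + 0.1j, 0.3 - 1j])
assert np.allclose(to_complex_output(to_real_input(h)), h)
```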
In general, there exist three main types of RNNs: (_i_) Simple RNN (SRNN), (_ii_) LSTM, and (_iii_) GRU. The main difference between them is in how the input data is processed by each RNN as shown in Fig. 1. For simplicity, let \(\hat{\mathbf{h}}_{\rho_{i}}=\bar{\mathbf{x}}_{t}\) and \(\hat{\mathbf{h}}_{\rho\text{-RNN}_{i}}=\mathbf{o}_{t}\), where \(t\) denotes the time index.
#### IV-A1 SRNN
is useful when we need to look only at recent information to perform the present task, since the hidden state is constantly rewritten at each time step. The SRNN updates the current hidden state and its output as follows
\[\bar{\mathbf{h}}_{t}=\sigma(\mathbf{W}_{x,t}\bar{\mathbf{x}}_{t}+\mathbf{W}_{h,t}\bar{\mathbf{h}}_{t -1}+\bar{\mathbf{b}}_{h,t}), \tag{18}\]
\[\mathbf{o}_{t}=\sigma(\mathbf{W}_{o,t}\bar{\mathbf{h}}_{t}+\bar{\mathbf{b}}_{o,t}), \tag{19}\]
where \(\sigma\) denotes the Sigmoid function, and \(\mathbf{W}_{x,t}\in\mathbb{R}^{P\times K_{in}}\), \(\mathbf{W}_{h,t}\in\mathbb{R}^{P\times P}\), and \(\bar{\mathbf{b}}_{h,t}\in\mathbb{R}^{P\times 1}\) are the weight matrices and biases associated with the SRNN input vector \(\bar{\mathbf{x}}_{t}\in\mathbb{R}^{K_{in}\times 1}\) and the previous hidden state vector \(\bar{\mathbf{h}}_{t-1}\in\mathbb{R}^{P\times 1}\), respectively. We note that at the first step the hidden state is usually seeded with zeros, so that it can be fed into the RNN cell together with the first input of the sequence. Training and back-propagation in RNNs are similar to other forms of ANNs, where the RNN needs to be trained in order to produce accurate and desired outputs. However, when an SRNN is exposed to long sequences, it tends to lose information, because it cannot store long sequences and focuses only on the latest output. This problem is commonly referred to as vanishing gradients [30]; it occurs during the training phase, where useful gradients cannot propagate from the output of the model back to the layers near its input. As a result, the RNN does not learn the effect of earlier inputs, and it becomes too difficult for the RNN to preserve information over many time steps, hence causing the short-term memory problem. To overcome this problem, specialized versions of RNNs, such as LSTM and GRU, were created.
#### IV-A2 LSTM
is a special kind of RNN capable of learning long-term sequences. Recall that in the SRNN, the input and the hidden state from the previous time step are passed through a simple activation layer to obtain a new state, whereas in the LSTM the process is slightly more complex: the LSTM unit takes, at each time step, inputs from three different states, defined as (_i_) the current input state, represented by \(\bar{\mathbf{x}}_{t}\), (_ii_) the short-term memory state from the previous LSTM unit, denoted by \(\bar{\mathbf{h}}_{t-1}\), and (_iii_) the long-term memory state from the previous LSTM unit, denoted by \(\mathbf{c}_{t-1}\). These inputs are controlled by the gates, which regulate the information to be kept or discarded before passing the updated long-term and short-term information to the next LSTM unit. We can think of these gates as filters that remove unwanted and irrelevant information. The LSTM mainly uses three gates, defined as the input gate, the forget gate, and the output gate.
_Forget Gate:_ It decides which information from the long-term memory should be kept or discarded; this is done by multiplying the incoming long-term memory by a forget vector generated from the current input and the incoming short-term memory. The information from the previous hidden state and from the current input is passed through the Sigmoid function, producing values between 0 and 1: values closer to 0 mean forget, and values closer to 1 mean keep, such that
\[\mathbf{f}_{t}=\sigma(\mathbf{W}_{f,t}\bar{\mathbf{x}}_{t}+\mathbf{W}^{\prime}_{f,t}\bar{\mathbf{h} }_{t-1}+\bar{\mathbf{b}}_{f,t}), \tag{20}\]
where \(\sigma\) denotes the Sigmoid function, \(\mathbf{W}_{f,t}\in\mathbb{R}^{P\times K_{in}}\), \(\mathbf{W}^{\prime}_{f,t}\in\mathbb{R}^{P\times P}\), and \(\bar{\mathbf{b}}_{f,t}\in\mathbb{R}^{P\times 1}\) are the forget gate weights and biases at time \(t\), while \(\bar{\mathbf{x}}_{t}\in\mathbb{R}^{K_{in}\times 1}\) and \(\bar{\mathbf{h}}_{t-1}\) represent the LSTM unit input vector of size \(K_{in}\) and the previous hidden state of size \(P\), respectively.
_Input Gate:_ The input gate works only with the information from the current input \(\bar{\mathbf{x}}_{t}\) and the short-term memory \(\bar{\mathbf{h}}_{t-1}\) from the previous step, filtering out the information that is not useful. The input gate proceeds as follows
\[\bar{\mathbf{i}}_{t}=\sigma(\mathbf{W}_{i,t}\bar{\mathbf{x}}_{t}+\mathbf{W}^{\prime}_{i,t}\bar{\mathbf{h}}_{t-1}+\bar{\mathbf{b}}_{i,t}), \tag{21}\]

\[\bar{\mathbf{c}}_{t}=\text{tanh}(\mathbf{W}_{c,t}\bar{\mathbf{x}}_{t}+\mathbf{W}^{\prime}_{c,t}\bar{\mathbf{h}}_{t-1}+\bar{\mathbf{b}}_{c,t}). \tag{22}\]
Now we should have enough information to calculate the new long-term memory represented by the cell state. First, the cell state gets element-wise multiplied by the forget vector. This has a possibility of dropping values in the cell state if it gets multiplied by values near 0. Then we take the output from the input gate and do an element-wise addition that updates the cell state to new values that the neural network finds relevant. That gives us our new cell state, such that
\[\mathbf{c}_{t}=\mathbf{f}_{t}\odot\mathbf{c}_{t-1}+\bar{\mathbf{i}}_{t}\odot\bar{\mathbf{c}}_{t}. \tag{23}\]
Here, \(\odot\) denotes the Hadamard product.
_Output Gate:_ The output gate takes the current input, the previous short-term memory, and the newly computed long-term memory (23) to produce the new short-term memory, which is passed on to the next time step. The output of the current time step can also be drawn from this hidden state as follows
\[\bar{\mathbf{h}}_{t}=\mathbf{o}_{t}\odot\text{tanh}(\mathbf{c}_{t}), \tag{24}\]
\[\mathbf{o}_{t}=\sigma(\mathbf{W}_{o,t}\bar{\mathbf{x}}_{t}+\mathbf{W}^{\prime}_{o,t}\bar{\mathbf{h} }_{t-1}+\bar{\mathbf{b}}_{o,t}). \tag{25}\]
To summarize, the forget gate decides what is relevant to keep from prior steps. The input gate decides what information is relevant to add from the current step. The output gate determines what the next hidden state should be.
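The following minimal NumPy sketch implements one LSTM step exactly as written in (20)-(25); the dictionary-based parameter layout and the random initialization are illustrative assumptions rather than the trained architecture of Table I.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, W, Wp, b):
    """One LSTM step following eqs. (20)-(25); W, Wp, b hold per-gate parameters."""
    f_t = sigmoid(W["f"] @ x_t + Wp["f"] @ h_prev + b["f"])       # forget gate, (20)
    i_t = sigmoid(W["i"] @ x_t + Wp["i"] @ h_prev + b["i"])       # input gate, (21)
    c_hat = np.tanh(W["c"] @ x_t + Wp["c"] @ h_prev + b["c"])     # candidate state, (22)
    c_t = f_t * c_prev + i_t * c_hat                              # cell state update, (23)
    o_t = sigmoid(W["o"] @ x_t + Wp["o"] @ h_prev + b["o"])       # output gate, (25)
    h_t = o_t * np.tanh(c_t)                                      # new hidden state, (24)
    return h_t, c_t

K_in, P = 8, 4
rng = np.random.default_rng(2)
W = {g: rng.standard_normal((P, K_in)) * 0.1 for g in "fico"}
Wp = {g: rng.standard_normal((P, P)) * 0.1 for g in "fico"}
b = {g: np.zeros(P) for g in "fico"}
h, c = lstm_cell(rng.standard_normal(K_in), np.zeros(P), np.zeros(P), W, Wp, b)
print(h.shape, c.shape)   # (4,) (4,)
```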
#### IV-A3 GRU
The GRU is the newer generation of RNNs; it is based on the same concept as the LSTM, but with an optimized architecture. The GRU architecture gets rid of the cell state, so there is no long-term memory as in the LSTM; instead, the hidden state alone is used to transfer information. Moreover, the GRU has only two gates: a reset gate and an update gate.
_Reset Gate:_ The reset gate is used by the model to decide how much of the past information to neglect. In short, it decides whether the previous hidden state is important or not, such that
\[\bar{\mathbf{r}}_{t}=\sigma(\mathbf{W}_{r,t}\bar{\mathbf{x}}_{t}+\mathbf{W}^{\prime}_{r,t}\bar{\mathbf{h}}_{t-1}+\bar{\mathbf{b}}_{r,t}). \tag{26}\]
_Update Gate:_ The update gate acts similarly to the forget and input gates of an LSTM. It decides what information to throw away and what new information to add from the current input. It is also responsible for determining the amount of previous information that needs to be passed along to the next state, such that
\[\bar{\mathbf{z}}_{t}=\sigma(\mathbf{W}_{z,t}\bar{\mathbf{x}}_{t}+\mathbf{W}^{\prime}_{z,t}\bar{ \mathbf{h}}_{t-1}+\bar{\mathbf{b}}_{z,t}), \tag{27}\]
\[\tilde{\mathbf{h}}_{t}=\text{tanh}(\mathbf{W}_{\tilde{\mathbf{h}},t}\bar{\mathbf{x}}_{t}+\mathbf{W}^{\prime}_{\tilde{\mathbf{h}},t}(\bar{\mathbf{r}}_{t}\odot\bar{\mathbf{h}}_{t-1})+\bar{\mathbf{b}}_{\tilde{\mathbf{h}},t}). \tag{28}\]
Finally, the new hidden state is calculated as follows
\[\bar{\mathbf{h}}_{t}=(1-\bar{\mathbf{z}}_{t})\odot\bar{\mathbf{h}}_{t-1}+\bar{\mathbf{z}}_{t} \odot\tilde{\mathbf{h}}_{t}. \tag{29}\]
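Analogously, a single GRU step following (26)-(29) can be sketched as below; compared with the LSTM sketch above, there is no separate cell state and only three parameter blocks, which is the source of the complexity savings quantified in Section VI. The parameter layout is again an illustrative assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, W, Wp, b):
    """One GRU step following eqs. (26)-(29): two gates, no separate cell state."""
    r_t = sigmoid(W["r"] @ x_t + Wp["r"] @ h_prev + b["r"])             # reset gate, (26)
    z_t = sigmoid(W["z"] @ x_t + Wp["z"] @ h_prev + b["z"])             # update gate, (27)
    h_hat = np.tanh(W["h"] @ x_t + Wp["h"] @ (r_t * h_prev) + b["h"])   # candidate, (28)
    return (1.0 - z_t) * h_prev + z_t * h_hat                            # new state, (29)

K_in, P = 8, 4
rng = np.random.default_rng(3)
W = {g: rng.standard_normal((P, K_in)) * 0.1 for g in "rzh"}
Wp = {g: rng.standard_normal((P, P)) * 0.1 for g in "rzh"}
b = {g: np.zeros(P) for g in "rzh"}
h = gru_cell(rng.standard_normal(K_in), np.zeros(P), W, Wp, b)
print(h.shape)   # (4,)
```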
To sum up, RNN networks are incorporated with a memory to take advantage of the prior outputs in the prediction of the current output, and thus, the current output prediction is correlated with the previous outputs. SRNN cannot store long sequences, since it only focuses on the latest output. On the contrary, LSTM is capable of learning long-term sequences, and predicting the current output is influenced by the long sequence of previous outputs. However, LSTM is not useful in all scenarios, especially, when the successive inputs become uncorrelated over time. Since predicting the current output will be affected by uncorrelated previous outputs, the prediction accuracy is negatively affected. In this context, GRU provides a trade-off; it uses a short-term memory in predicting the current output, which improves the prediction accuracy in comparison with LSTM. Moreover, the GRU has two gates, whereas LSTM has three gates. Thereby, fewer training parameters and faster execution can be achieved by using GRU instead of LSTM.
### _Proposed SBS RNN-based Channel Estimator_
The proposed RNN-based estimation scheme sheds light on the ability of the GRU to estimate doubly-selective channels with high accuracy. As discussed in Section IV-A, the SRNN takes advantage only of the previously estimated channel while estimating the current one, whereas the LSTM has a long-term memory, which means that estimating the channel at the current OFDM symbol is affected by the older estimated channels. The GRU, on the other hand, provides a trade-off between short-term memory and complexity. Therefore, in order to decide which RNN performs better in doubly-selective channel estimation, we study the average correlation between the channel at the first symbol and all successive symbols within the transmitted OFDM frame, considering the frequency-time response, such that
\[\Psi_{i}=\mathrm{E}\left[\bar{\mathbf{h}}_{1}\bar{\mathbf{h}}_{i}^{*}\right],\;2\leq i \leq I. \tag{30}\]
Here, \(\Psi_{i}\) is calculated for three mobility scenarios: (_i_) Low mobility: \(f_{d}=250\) Hz, (_ii_) High mobility: \(f_{d}=500\) Hz, and (_iii_) Very high mobility: \(f_{d}=1000\) Hz. The detailed properties of these channel models are provided in Section V.
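As an idealized stand-in for the empirical average in (30), the following sketch evaluates the Jakes time correlation \(J_{0}(2\pi f_{d}(i-1)T_{\text{sym}})\) for the three Doppler frequencies; the symbol duration of 8 \(\mu\)s is an assumption, and the curves of Fig. 2 are instead obtained empirically from the channel models of Table II.

```python
import numpy as np
from scipy.special import j0   # zeroth-order Bessel function of the first kind

# Under Jakes' Doppler spectrum, the time correlation between symbols 1 and i is
# J0(2*pi*f_d*(i-1)*T_sym); T_sym = 8 us is assumed here for illustration.
T_sym = 8e-6
I = 100
for f_d in (250, 500, 1000):                    # low / high / very high mobility
    psi = j0(2 * np.pi * f_d * np.arange(I) * T_sym)
    print(f_d, round(float(psi[-1]), 2))        # correlation at the end of the frame
```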
As shown in Fig. 2, when the mobility increases, the average correlation \(\Psi\) starts to decrease exponentially. However, as we can notice, \(\Psi_{i}\) at the end of the received frame reaches around \(65\%\) for low mobility scenario, while it is around \(40\%\) in high and very high mobility scenarios, with a drastic decrease in the overall \(\Psi_{i}\) curve in very high mobility scenarios. According to the \(\Psi_{i}\) values in different mobility scenarios, we can expect that the impact of the estimated channels at earlier symbols would affect negatively the accuracy of the estimated channel at advanced symbols within the received OFDM frame. As a result, we can conclude that, as the mobility increases, shorter RNN memory is required in the channel estimation in order to guarantee the best possible performance. This is due to the fact that, when long RNN memory is employed in a very high mobility scenario, the older estimated channels negatively impact the channel estimation at the current OFDM symbol because the estimated channels become uncorrelated, i.e. the value of \(\Psi\) is low. In this context, the proposed
Fig. 1: Detailed architecture of the SRNN, LSTM, and GRU units.
Fig. 2: Correlation of the channel at the first and the last OFDM symbol within the transmitted frame.
RNN-based channel estimation scheme employs an optimized GRU unit instead of LSTM unit in the channel estimation process due to its shorter memory. This results in improving the accuracy of the channel estimation while recording a significant decrease in computational complexity. Moreover, we study the performance of the SRNN unit in order to have a complete analysis of different RNN units.
As illustrated in Fig. 3, the RNN unit is first employed to estimate the channel at the current data subcarriers, where it takes as input the previous LS estimated channels at the pilot subcarriers, denoted by \(\hat{\bar{h}}_{i-1,p}\in\mathbb{R}^{2K_{p}\times 1}\), concatenated with the previously RNN-based estimated channel at the data subcarriers \(\hat{\bar{h}}_{\Phi\text{-TA}_{i-1,d}}\in\mathbb{R}^{2K_{d}\times 1}\). Thus, the input and output sizes of the RNN unit are \(2K_{on}\) and \(2K_{d}\), respectively. After that, the RNN output is fed as an input to the DPA estimation, followed by TA processing in order to further mitigate the impact of noise. We note that our proposed estimators consider both time and frequency selectivity, where the RNN-based pre-processing deals with the time selectivity and the DPA estimation with the frequency selectivity. Moreover, the RNN-based estimated channel is fed as an input to the DPA estimation block, which further improves the DPA estimation accuracy. The proposed RNN-based channel estimation scheme proceeds as follows
\[\bar{\mathbf{d}}_{\Phi_{i,d}}[k]=\mathfrak{D}\big{(}\frac{\bar{\mathbf{y}}_{i,d}[k]}{ \hat{\bar{h}}_{\Phi_{i-1,d}}[k]}\big{)},\ k\in\mathcal{K}_{\text{d}}, \tag{31}\]
\[\hat{\bar{h}}_{\Phi\text{-DPA}_{i,d}}[k]=\frac{\bar{\mathbf{y}}_{i,d}[k]}{\bar{\bm {d}}_{\Phi_{i,d}}[k]}, \tag{32}\]
where \(\Phi\in\{\text{SRNN},\text{GRU}\}\) refers to the used RNN unit, and \(\hat{\bar{h}}_{\Phi_{0,d}}=\hat{\bar{h}}_{\text{LS}}\ \forall\ k\in\mathcal{K}_{\text{d}}\). Finally, to alleviate the impact of the AWGN noise, TA processing is applied to the estimated channel \(\hat{\bar{h}}_{\Phi\text{-DPA}_{i}}\) similarly as performed in (10), such that
\[\hat{\bar{h}}_{\Phi\text{-TA}_{i,d}}=(1-\frac{1}{\alpha})\hat{\bar{h}}_{\Phi \text{-TA}_{i-1,d}}+\frac{1}{\alpha}\hat{\bar{h}}_{\Phi\text{-DPA}_{i,d}}, \tag{33}\]
where \(\alpha=2\) for simplicity. We note that in a doubly-selective channel, each two successive symbols are correlated regardless of the mobility scenario; therefore, using \(\alpha=2\) gives equal weights to the previous and current estimated channels. However, \(\alpha\) can be fine-tuned by studying the average channel correlation between each two successive OFDM symbols, and then assigning more accurate weights to the previous and current estimated channels in (33).
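The per-symbol loop of (31)-(33) can be sketched as follows. The RNN forward pass is replaced by an identity stub and the demapper is a hard QPSK decision; in the actual scheme the trained GRU also receives the previous pilot LS estimates as part of its input, so this is a simplified illustration rather than the full estimator.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def demap(x):
    """Hard demapping D(.) to the nearest QPSK point, as used in eq. (31)."""
    return QPSK[np.argmin(np.abs(x[:, None] - QPSK[None, :]), axis=1)]

def rnn_dpa_ta_frame(Y_d, h_ls, rnn_step, alpha=2.0):
    """Per-symbol loop of eqs. (31)-(33); rnn_step stands in for the trained GRU/SRNN
    forward pass, Y_d holds the received data subcarriers of one frame (K_d x I)."""
    K_d, I = Y_d.shape
    h_prev = h_ls.copy()                        # initialisation with the LS estimate
    estimates = np.zeros((K_d, I), dtype=complex)
    for i in range(I):
        h_rnn = rnn_step(h_prev)                # RNN pre-processing of the previous estimate
        d_i = demap(Y_d[:, i] / h_rnn)          # eq. (31)
        h_dpa = Y_d[:, i] / d_i                 # eq. (32)
        h_prev = (1 - 1 / alpha) * h_prev + (1 / alpha) * h_dpa   # TA smoothing, eq. (33)
        estimates[:, i] = h_prev
    return estimates

# Toy usage with an identity RNN stub (a trained GRU would replace it)
rng = np.random.default_rng(4)
K_d, I = 48, 10
H = (rng.standard_normal((K_d, I)) + 1j * rng.standard_normal((K_d, I))) / np.sqrt(2)
X = QPSK[rng.integers(0, 4, (K_d, I))]
Y_d = H * X + 0.05 * (rng.standard_normal((K_d, I)) + 1j * rng.standard_normal((K_d, I)))
est = rnn_dpa_ta_frame(Y_d, H[:, 0], rnn_step=lambda h: h)
print(est.shape)   # (48, 10)
```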
In the proposed scheme, RNN training is performed using a high signal-to-noise ratio (SNR) value of 40 dB to achieve the best performance, as observed in [31]. The reason is that when the training is performed with a low noise impact, the RNN is able to better learn the channel correlation; in addition, due to its good generalization ability, it still performs well in low-SNR regions, where the noise is dominant. Moreover, intensive experiments are performed using the grid search algorithm [32] in order to select the most suitable RNN hyperparameters in terms of both performance and complexity. Note that the mobility conditions can be assumed known in most real-case applications. For example, in vehicular communications, the vehicle velocity is a known parameter that can be exchanged between all vehicular network nodes, and it must be regulated according to the road conditions. In urban environments (inside cities) the car velocity must not exceed 40 Kmph, and thus the model trained on low mobility can be employed. Consequently, the RNN training is performed for each mobility scenario separately using the same architecture and training parameters summarized in Table I. However, when velocity information is not available, the EL algorithm can be used to combine the weights of several trained models, so that one generalized model can be employed in all mobility scenarios, as discussed in Section V.
\begin{table}
\begin{tabular}{l|l} \hline (SRNN units; Hidden size) & (1; 48) \\ \hline (GRU units; Hidden size) & (1; 48) \\ \hline (Bi-SRNN units; Hidden size) & (1; 32) \\ \hline (Bi-GRU units; Hidden size) & (1; 32) \\ \hline Activation function & ReLU (\(y=\max(0,x)\)) \\ \hline Number of epochs & 500 \\ \hline Training samples & 16000 \\ \hline Testing samples & 2000 \\ \hline Batch size & 128 \\ \hline Optimizer & ADAM \\ \hline Loss function & MSE \\ \hline Learning rate & 0.001 \\ \hline Training SNR & 40 dB \\ \hline \end{tabular}
\end{table} TABLE I: Parameters of the proposed RNN-based channel estimation scheme.
Fig. 3: Proposed RNN-based channel estimation schemes.
### _Proposed FBF Bi-RNN-based Channel Estimator_
Bi-RNN networks are designed to predict unknown data bounded within known data [24]. They are based on making the data flow through an RNN unit in both directions: forward (past to future) and backward (future to past). In a regular RNN, the input flows in one direction, whereas in a Bi-RNN the input flows in both directions, taking advantage of both past and future information. By doing so, the Bi-RNN network is able to predict the unknown information in the middle based on its correlation with the known past and future information. In this context, the proposed Bi-RNN channel estimator aims to utilize the interpolation ability of Bi-RNN networks in the FBF channel estimation, instead of employing high-complexity CNN networks as is the case in the SoA channel estimation schemes. The proposed Bi-RNN channel estimation scheme uses a Bi-GRU unit and inherits the adaptive frame design from the WI-CNN estimators, as shown in Fig. 4. Recall that the WI-CNN channel estimation performs WI interpolation at the data symbols, where CNN processing is applied to alleviate the impact of noise. In contrast, Bi-RNNs perform 2D interpolation at the data symbols using the estimated channel at the pilot symbols, without the need for any initial channel estimation at the data symbols. Thus, the proposed Bi-RNN channel estimator can be adapted to any existing protocol regardless of the pilot allocation scheme; however, the employed Bi-RNN architecture should be fine-tuned accordingly to meet the required performance. The proposed Bi-RNN channel estimator proceeds as follows
* ALS estimation at the inserted pilot symbols, as performed in (13), followed by zero insertion at all the data symbols. Thereafter, the initial estimated channels \(\hat{\mathbf{H}}_{\rho}\in\mathbb{C}^{K_{\text{on}}\times I_{d}}\) are converted to the real-valued domain as performed in (17), where \(\hat{\mathbf{H}}_{in}\in\mathbb{R}^{2K_{\text{on}}\times I_{d}}\).
* Bi-RNN end-to-end interpolation, where \(\hat{\mathbf{H}}_{in}\) is fed as an input to the optimized Bi-GRU unit. Accordingly, the Bi-GRU unit learns the weights of the estimated channels at the OFDM data symbols. Employing the 2D interpolation using the proposed Bi-GRU unit leads to a considerable performance superiority in comparison with the WI-CNN estimators, while recording a significant decrease in the required computational complexity, as shown in Section V. The proposed Bi-GRU architecture is also optimized using the grid search algorithm [32] and trained using the parameters listed in Table I. Moreover, similarly to Section IV-B, the performance of Bi-LSTM and Bi-SRNN is investigated in Section V. A sketch of this end-to-end interpolation is given after this list.
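The following PyTorch sketch illustrates the interpolation step: the frame, with pilot symbols filled and data symbols zeroed, is read as a sequence over OFDM symbols and mapped to per-symbol channel estimates. The hidden size of 32 follows Table I, while the output layer and the pilot positions of the toy example are assumptions.

```python
import torch
import torch.nn as nn

class BiGRUInterpolator(nn.Module):
    """Sketch of the end-to-end 2D interpolation over one frame (assumed wiring)."""
    def __init__(self, k_on=52, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=2 * k_on, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2 * k_on)   # per-symbol channel estimate

    def forward(self, frame):                        # frame: (batch, I', 2*K_on)
        seq, _ = self.gru(frame)
        return self.out(seq)

# Toy forward pass: 100 data symbols + 3 pilot symbols, real/imag stacked per eq. (17)
model = BiGRUInterpolator()
frame = torch.zeros(1, 103, 104)                     # zeros at the data symbols
frame[:, [0, 51, 102], :] = torch.randn(1, 3, 104)   # estimated channel at pilot symbols
print(model(frame).shape)                            # torch.Size([1, 103, 104])
```

Training the sketch would use the true channel at the data symbols as the regression target with the MSE loss of Table I; this detail is an assumption based on the parameters listed there.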
## V Simulation Results
This section illustrates the performance evaluation of the SoA and the proposed RNN and Bi-RNN based channel estimation schemes in terms of BER and throughput. Vehicular communications are considered as a simulation case study, where three mobility scenarios are defined: (_i_) low mobility (\(v=45\) Kmph, \(f_{d}=250\) Hz), (_ii_) high mobility (\(v=100\) Kmph, \(f_{d}=500\) Hz), and (_iii_) very high mobility (\(v=200\) Kmph, \(f_{d}=1000\) Hz). The power-delay profiles of the employed channel models are provided in Table II. It is worth mentioning that, in order to guarantee fairness in the conducted simulations, the studied channel estimators are trained using the same parameters shown in Table I. Moreover, the simulation parameters are based on the IEEE 802.11p standard [33], where for the SBS channel estimation a comb-pilot allocation is employed, such that \(K_{p}I\) pilots are used within the transmitted frame. Concerning the FBF channel estimation, the ChannelNet and TS-ChannelNet estimators use \(K_{p}I\) pilots per frame, whereas the WI-CNN and the proposed Bi-RNN channel estimators employ only \(K_{on}Q\) pilots per frame following the block-pilot allocation, where \(K_{on}=52\) denotes the number of employed subcarriers within the transmitted OFDM symbol, and \(Q\) is the number of inserted pilot symbols within the transmitted frame ((i) low mobility: \(Q=1\), (ii) high mobility: \(Q=2\), (iii) very high mobility: \(Q=3\)). Therefore, the proposed Bi-RNN based channel estimator is able to outperform the recently proposed SoA FBF channel estimators while employing fewer pilots and a lower computational complexity, resulting in higher transmission data rates, as discussed in Section V-B. We also note that these simulations are implemented using QPSK, 16QAM, and 64QAM modulation orders, with an SNR range of \([0,5,\ldots,40]\) dB. In addition, the performance evaluation is carried out according to the employed modulation orders, the mobility scenarios, and the frame length.
### _SBS Channel Estimation_
#### V-A1 Modulation Order
For QPSK modulation order, we can notice from Fig. 4(a) that FNN-based channel estimators can implicitly learn the channel frequency correlation apart from preventing a high demapping error arising from conventional DPA-based estimation, where STA-FNN and TRFI-FNN outperform conventional STA and TRFI estimators by at least \(15\) dB gain in terms of SNR for BER \(=10^{-3}\). However, STA-FNN suffers from an error floor beginning from SNR \(=20\) dB, particularly in very high mobility scenarios. This is due to the STA frequency and time averaging operations that can alleviate the impact of noise and demapping error in low SNR regions. On the other hand, the averaging operations are not useful in high SNR regions since the impact of noise is low, and the STA averaging coefficients are fixed. Therefore, TRFI-FNN is used to improve the performance at high SNRs to compensate for the STA-FNN performance degradation in the high SNR regions. We can clearly observe that employing RNNs as a pre-processing unit rather than a simple FNN in the channel estimation brings a significant improvement in the overall performance. This is because RNNs are capable of efficiently learning the time correlations of the channel by taking the advantage of the previous output apart from the current input in order to estimate the current output. Even though the recently proposed LSTM-based estimators are able to outperform the FNN-based estimator, but using LSTM in the channel estimation is not the best option, due to LSTM long-term memory problem. In contrast, we can notice that the proposed GRU-DPA-TA estimator is able to outperform the LSTM-DPA-TA estimator by around \(6\) dB gain
in terms of SNR for BER \(=10^{-5}\), especially, in very high mobility scenario. This is due to the fact that LSTM employs long-term memory, thus, the current estimated channel is affected by older estimated ones. This process harms the performance as the mobility increases, and the channel at successive received OFDM symbols becomes uncorrelated. Whereas, the GRU uses shorter memory than LSTM, Thus, leading to the superiority of the proposed GRU-DPA-TA estimator in comparison with the LSTM-DPA-TA estimator. However, we can notice that in low mobility scenario, both LSTM-DPA-TA and GRU-DPA-TA estimators achieve almost similar performance. This is because of the negligible impact of Doppler interference in low mobility scenario, thus, the channels at successive symbols within the received OFDM frame are highly correlated. So, considering long or short memory while estimating the current channel will not lead to considerable performance degradation. Concerning 16QAM and 64QAM modulation orders, the proposed GRU-DPA-TA estimator outperforms the LSTM-DPA-TA estimator by more than \(5\) dB and \(7\) dB gains in terms of SNR for BER \(=10^{-4}\) and BER \(=10^{-3}\), respectively, in very high mobility scenarios, as illustrated in Fig. 5b and Fig. 5c. However, it can be noticed that FNN-based channel estimators suffer from severe performance degradation when 64QAM modulation is employed. This is because of the remarkable accumulated DPA demapping error that cannot be eliminated by simple FNN architectures. A nice observation can be noticed from Fig. 5 where employing SRNN in the channel estimation performs similarly to the LSTM-FNN-DPA estimator in all mobility scenarios. This reveals that using SRNN combined with TA processing records similar performance as LSTM combined with FNN. In other words, the performance degradation caused by the LSTM long-term memory is compensated by the FNN network in the LSTM-FNN-DPA estimator. However, SRNN unit can be used instead to eliminate the LSTM long-term memory problem as well as mitigating the noise by simple TA processing as the case in the SRNN-DPA-TA estimator.
Fig. 6 illustrates the throughput of the studied SBS channel estimators employing QPSK modulation. It can be seen that the proposed RNN-based channel estimators achieve a higher throughput than the conventional and FNN-based channel estimators, especially in low SNR regions, thanks to their accurate channel prediction.
#### V-A2 Mobility
The impact of mobility can be observed in Fig. 5. The performance behavior is influenced by the following factors: _(i)_ channel estimation error, _(ii)_ time diversity due to increased Doppler spread, since the Doppler spread and the time diversity gain are proportional, i.e. more time diversity gain can be obtained in very high mobility scenarios, and _(iii)_ frame length. As shown in Fig. 5, where the frame length is fixed (\(I=100\)), the performance of all the studied channel estimation schemes degrades with the increase of
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Channel** & **Channel** & **Vehicle velocity** & **Doppler shift [Hz]** & **Average path gains [dB]** & **Path delays [ns]** \\ \hline VTV-UC & 12 & 45 & 250 & [0, 0, -10, -10, -17.8, -17.8, -17.8, -17.8, -17.8, -17.8, -22.1, -21.1, -26.3, -26.3] & [0, 1, 100, 101, 102, 200, 201, 202, 300, 301, 400, 401] \\ VTV-SDWW & 12 & 100-200 & 500-1000 & [0, 0, -11.2, -11.2, -19. -21.9, -25.3, -25.3, -24.4, -28, -26.1, -26.1] & [0, 1, 100, 101, 200, 300, 400, 401, 401, 500, 600, 700, 701] \\ \hline \end{tabular}
\end{table} TABLE II: Characteristics of the employed channel models following Jake’s Doppler spectrum.
Fig. 4: Proposed Bi-RNN based channel estimator block diagram.
mobility. This is because the channel estimation error increases with the increase of Doppler frequency. Moreover, we can notice that the conventional STA and TRFI channel estimators suffer from severe performance degradation in the very high mobility scenario, since the impact of the AWGN noise and DPA demapping error is much more dominant than the time diversity gain. On the contrary, the time diversity gain is dominant in DL-based channel estimators, since DL networks are capable of reducing the channel estimation error resulting from the AWGN noise and the DPA demapping error, leading to a performance improvement in very high mobility scenarios. Note that the net time diversity gain is also related to the employed frame length, since increasing the frame length increases the time diversity gain. This is clearly illustrated in Fig. 5.
Fig. 5: BER for \(I=100\), mobility from left to right: low, high, very high.
in Fig. 7, where the QPSK modulation order with very high mobility is utilized. As we can notice, the performance of the proposed RNN-based channel estimators improves when a longer frame length is employed. It is worth mentioning that the proposed GRU-DPA-TA and LSTM-DPA-TA estimators perform similarly when \(I=10\), since in shorter frames the impact of long and short-term memory cannot be clearly illustrated. On the contrary, when using longer frames, i.e. \(I=50\) and \(I=100\), we can notice the superiority of using the GRU-based estimator instead of the LSTM-based one.
In order to further illustrate the importance of channel tracking, Fig. 7 shows the performance of the proposed GRU-DPA-TA channel estimator when the outdated estimated channel is used. In this context, the received OFDM symbols are equalized by the DL-based estimated channel at the beginning of the frame. As shown in Fig. 7, equalizing by the outdated estimated channel significantly degrades the performance even when shorter frames are employed. Therefore, this shows the importance of applying channel tracking to guarantee good performance in different mobility scenarios.
Fig. 8 illustrates the robustness of the proposed GRU-DPA-TA channel estimator against the change in Doppler frequency, where QPSK modulation is employed. Fig. 8b shows the performance of the proposed channel estimator when trained on one Doppler frequency and tested on the entire range of Doppler frequencies. In this context, the entire range of Doppler frequencies is divided into 3 ranges: (i) low mobility (0 Hz - 300 Hz), (ii) high mobility (300 Hz - 600 Hz), and (iii) very high mobility (600 Hz - 1000 Hz). As we can notice, training on one Doppler frequency and testing on the same one gives the best performance. However, training on the highest Doppler frequency within each range shows a satisfactory performance when tested on different Doppler frequencies within the considered range. This can be explained by the fact that when the model is trained on the worst conditions, i.e., a high Doppler frequency, it can perform well
Fig. 6: Throughput employing QPSK, mobility from left to right: low, high, very high.
Fig. 7: BER employing very high mobility and QPSK, frame length from left to right: I = 10, I = 50, I = 100.
when tested on better conditions, i.e., a low Doppler frequency. We note that SNR = 40 dB is considered in order to show the impact of Doppler interference. Therefore, the performance of the trained models can be generalized, where we can train 3 models only. In addition, we can notice that training on lower Doppler frequencies (for example, \(f_{d}\) = 250 Hz) and testing on higher Doppler frequencies leads to a severe performance degradation, which is expected as shown in Fig. 8a, since the model is trained in the absence of Doppler interference. It is worth mentioning that further model generalization can be achieved by using the concept of ensemble learning (EL) in case the velocity range is not known, where the weights of several trained models can be averaged in order to produce one generalized model. We note that, in Fig. 8a, the EL results are obtained by averaging the weights of the models trained on 700 Hz, 800 Hz, and 900 Hz. This combination can be optimized and fine-tuned according to the requirements of real-time applications.
### _Fbf Channel Estimation_
In this section, performance evaluations of the CNN-based estimators, conventional 2D LMMSE estimator as well as the proposed Bi-RNN based channel estimator are discussed using the same criteria as Section V-A. We note that we only consider the ALS-WI-CNN among the WI-CNN estimators since it has the best performance.
#### V-B1 Modulation Order
Fig. 9 depicts the BER performance employing QPSK and 16QAM modulation orders. The performance of the channel network (ChannelNet) and temporal spectral ChannelNet (TS-ChannelNet) estimators is limited on account of the predefined fixed parameters of the applied interpolation schemes, where the RBF interpolation function and the ADD-TT frequency and time averaging parameters would need to be updated in a real-time manner. On the contrary, the ALS-WI-CNN estimator has no fixed parameters, and the time correlation between the previous and the future pilot symbols is considered in the WI interpolation operation. These aspects lead to the performance superiority of the ALS-WI-CNN compared to the ChannelNet and TS-ChannelNet estimators. Although CNN processing is applied in the ChannelNet, TS-ChannelNet, and ALS-WI-CNN estimators, they suffer from a considerable performance degradation that is dominant in the very high mobility scenario. This shows that CNN processing is not able to effectively alleviate the impact of Doppler interference, especially in very high mobility scenarios, where the proposed Bi-RNN based channel estimation scheme outperforms the ALS-WI-CNN estimator by at least \(5\) dB and \(12\) dB gains in terms of SNR for a BER = \(10^{-5}\) employing QPSK and 16QAM modulations, respectively. We note that the robustness of the proposed Bi-RNN based channel estimator against high mobility is mainly due to the accuracy of the end-to-end 2D interpolation implemented by the utilized Bi-GRU unit. Moreover, we can see that employing Bi-LSTM performs similarly to the ALS-Bi-GRU estimator; this is due to the used frame structure, where the variation of the doubly-selective channel within each sub-frame is low. However, it can be noticed that employing CNN performs better than the Bi-SRNN unit in low and high mobility scenarios, while using the Bi-SRNN unit leads to around \(2\) dB gain in terms of SNR for a BER = \(10^{-4}\) in comparison with the ALS-WI-CNN estimator in the very high mobility scenario, as shown in Fig. 9. As a result, we can conclude that employing a Bi-GRU unit instead of a CNN network leads to more accurate channel estimation with lower complexity. Finally, we note that the performance of the 2D-LMMSE estimator is comparable to that of the ideal channel, but it requires a huge complexity, as we discuss in the next section, which is impractical in a real scenario. Moreover, the proposed Bi-RNN based estimator records almost the same performance as the 2D-LMMSE estimator. Therefore, the proposed Bi-RNN based channel estimator is an alternative to the 2D-LMMSE estimator, as it provides a good performance-complexity
Fig. 8: Robustness analysis of the proposed GRU-DPA-TA channel estimator employing QPSK modulation.
trade-off.
#### V-B2 Mobility
The impact of mobility can be clearly observed in Fig. 9, where the performance of the ChannelNet and TS-ChannelNet channel estimation schemes degrades as the mobility increases, and the impact of the time diversity gain is not dominant due to the high estimation error of the 2D RBF and ADD-TT interpolation techniques employed in the ChannelNet and TS-ChannelNet estimators, respectively. In contrast, the time diversity gain is dominant in the ALS-WI-CNN and the proposed Bi-RNN based channel estimator, since the initial ALS and WI estimations are accurate, and thus the SR-CNN and DN-CNN networks are capable of overcoming the Doppler interference. However, using the ALS estimation at the pilot symbols followed by a Bi-GRU unit for 2D interpolation at the data symbols reveals considerable robustness against mobility. This is due to the ability of the optimized Bi-GRU unit to significantly alleviate the impact of Doppler interference, where it can be noticed that the proposed Bi-RNN estimator is able to outperform the ALS-WI-CNN estimator in different mobility scenarios. As a result, the proposed Bi-RNN based channel estimator provides a good performance-complexity trade-off between the CNN-based estimators and the 2D-LMMSE estimator.
## VI Computational Complexity Analysis
This section provides a detailed computational complexity analysis of the studied channel estimation schemes. The computational complexity analysis is performed in accordance with the number of real-valued multiplications/divisions and summations/subtractions necessary to estimate the channel for one received OFDM frame [34].
### _Sbs Channel Estimation_
The computational complexity of the SRNN lies in the calculation of \(\bar{\mathbf{h}}_{t}\) in (18), where \(PK_{in}+P^{2}\) multiplications and \(PK_{in}+P^{2}\) summations are required. The computation of \(\mathbf{o}_{t}\) requires \(P^{2}\) multiplications and \(P^{2}\) summations. Therefore, SRNN processing requires \(PK_{in}+2P^{2}\) multiplications and \(PK_{in}+2P^{2}\) summations. Similarly to SRNN, each LSTM gate requires \(PK_{in}+P^{2}\) multiplications and \(PK_{in}+P^{2}\) summations. Therefore, the total computations performed by the LSTM unit can be expressed as \(4PK_{in}+4P^{2}\) multiplications and \(4PK_{in}+4P^{2}\) summations, in addition to \(3P\) multiplications and \(P\) summations required by (23) and (24). The
Fig. 9: BER for \(I=100\), mobility from left to right: low, high, very high.
GRU unit employs fewer computations compared to the LSTM unit, where it requires \(3PK_{in}+3P^{2}+3P\) multiplications and \(3PK_{in}+3P^{2}+2P\) summations. Fig. 10 shows the multiplications/divisions and summations/subtractions required by the various examined SBS channel estimators. A detailed derivation of the required operations is provided in [34].
The proposed optimized GRU unit is configured with \(P=K_{d}\) and \(K_{in}=2K_{on}\). Therefore, it requires \(6K_{on}K_{d}+3K_{d}^{2}+3K_{d}\) multiplications and \(6K_{on}K_{d}+3K_{d}^{2}+2K_{d}\) summations. After that, the proposed GRU-DPA-TA estimator applies the DPA estimation followed by the TA processing that requires \(18K_{on}\) real-valued multiplications/divisions and \(8K_{on}\) summations/subtractions. Hence, the GRU-DPA-TA estimator requires \(6K_{on}K_{d}+3K_{d}^{2}+3K_{d}+18K_{on}\) real-valued multiplications/divisions and \(6K_{on}K_{d}+3K_{d}^{2}+2K_{d}+8K_{on}\) summations/subtractions. Therefore, the proposed GRU-DPA-TA estimator is able to decrease the required complexity by \(48.22\%\) compared to the LSTM-DPA-TA estimator. In other words, the proposed GRU-DPA-TA estimator is \(2\)x less complex than the LSTM-DPA-TA estimator while, at the same time, achieving a significant performance gain as discussed in Section V.
Fig. 11: Computational complexity of the studied DL-based FBF channel estimators.
Fig. 10: Computational complexity of the studied DL-based SBS channel estimators.
Employing SRNN instead of the GRU unit requires \(K_{d}K_{in}+K_{d}^{2}+4K_{d}\) multiplications/divisions and \(K_{in}+4K_{d}-2\) summations/subtractions. Therefore, it achieves a \(76.54\%\) and \(54.69\%\) decrease in the required real-valued operations in comparison to the LSTM-DPA-TA and GRU-DPA-TA estimators, respectively. In other words, using SRNN is \(4\)x and \(2\)x less complex than using LSTM and GRU units. However, the RNN-DPA-TA estimator performs similarly to the LSTM-FNN-DPA estimator, as shown in Section V. Finally, we note that a trade-off between the desired performance and the accepted complexity should be taken into account in order to optimize the use of the RNN-based channel estimators.
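For concreteness, the per-unit operation counts stated above can be tabulated programmatically. The following Python sketch simply evaluates the closed-form expressions given in this subsection; the numeric values chosen for \(K_{on}\) and \(K_{d}\) are placeholders for illustration and not the exact frame parameters of our setup.

```python
# Sketch: evaluate the SBS complexity expressions stated in this subsection.
# The K_on and K_d values below are illustrative placeholders only.

def srnn_ops(K_in, P):
    # SRNN: P*K_in + 2*P^2 multiplications and the same number of summations
    return P * K_in + 2 * P**2, P * K_in + 2 * P**2

def lstm_ops(K_in, P):
    # LSTM: four gates of (P*K_in + P^2) each, plus 3P mults and P sums for Eqs. (23)-(24)
    return 4 * (P * K_in + P**2) + 3 * P, 4 * (P * K_in + P**2) + P

def gru_ops(K_in, P):
    # GRU: 3*P*K_in + 3*P^2 + 3P multiplications, 3*P*K_in + 3*P^2 + 2P summations
    return 3 * P * K_in + 3 * P**2 + 3 * P, 3 * P * K_in + 3 * P**2 + 2 * P

def gru_dpa_ta_ops(K_on, K_d):
    # Optimized GRU (P = K_d, K_in = 2*K_on) followed by DPA + TA processing
    mul, add = gru_ops(2 * K_on, K_d)
    return mul + 18 * K_on, add + 8 * K_on

if __name__ == "__main__":
    K_on, K_d = 52, 48  # placeholder subcarrier counts, for illustration only
    for name, ops in [("SRNN", srnn_ops(2 * K_on, K_d)),
                      ("LSTM", lstm_ops(2 * K_on, K_d)),
                      ("GRU", gru_ops(2 * K_on, K_d)),
                      ("GRU-DPA-TA", gru_dpa_ta_ops(K_on, K_d))]:
        print(f"{name:>11}: {ops[0]} mults/divs, {ops[1]} sums/subs")
```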
### _FBF Channel Estimation_
The computational complexity of any Bi-RNN unit is twice the complexity required for the regular RNN unit since both forward and backward data flows are applied. The proposed Bi-GRU estimator is optimized with \(P=32\). Moreover, we use \(Q=4\), i.e., assuming a very high mobility scenario, in order to have a fair comparison with the ALS-WI-CNN estimator, and \(K_{in}=2K_{\text{on}}I^{\prime}\), where \(I^{\prime}=I+Q\). The ALS channel estimation at the inserted pilot symbols requires \(4K_{\text{on}}^{2}Q+2K_{\text{on}}Q+2K_{\text{on}}\) multiplications/divisions and \(5K_{\text{on}}^{2}Q\) summations/subtractions. Therefore, the overall computational complexity of the proposed Bi-GRU channel estimation scheme can be expressed by \(16K_{\text{on}}^{2}+39946K_{\text{on}}+6336\) multiplications/divisions and \(20K_{\text{on}}^{2}+39936K_{\text{on}}+6272\) summations/subtractions. We note that employing Bi-LSTM instead of the Bi-GRU unit increases the computational complexity by around \(26.29\%\), where \(16K_{\text{on}}^{2}+53258K_{\text{on}}+8384\) multiplications/divisions and \(20K_{\text{on}}^{2}+53248K_{\text{on}}+8256\) summations/subtractions are needed without any gain in the BER performance, as discussed in Section V. Moreover, using Bi-SRNN requires \(16K_{\text{on}}^{2}+13322K_{\text{on}}+4096\) multiplications/divisions and \(20K_{\text{on}}^{2}+13312K_{\text{on}}+4096\) summations/subtractions. Therefore, the overall computational complexity is decreased by \(73.63\%\) and \(64.22\%\) in comparison to the ALS-BiLSTM and ALS-BiGRU estimators, respectively. However, the Bi-SRNN unit suffers from limited performance due to its simple architecture.
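Similarly, the overall FBF complexity expressions given above can be compared numerically. In the following Python sketch the value of \(K_{\text{on}}\) is an assumed placeholder, so the printed ratios only illustrate the comparison; the percentages quoted in the text correspond to the actual frame parameters.

```python
# Sketch: compare the FBF complexity expressions stated above
# (multiplications/divisions, summations/subtractions) as functions of K_on.

def als_bigru(K_on):
    return 16*K_on**2 + 39946*K_on + 6336, 20*K_on**2 + 39936*K_on + 6272

def als_bilstm(K_on):
    return 16*K_on**2 + 53258*K_on + 8384, 20*K_on**2 + 53248*K_on + 8256

def als_bisrnn(K_on):
    return 16*K_on**2 + 13322*K_on + 4096, 20*K_on**2 + 13312*K_on + 4096

if __name__ == "__main__":
    K_on = 52  # illustrative placeholder, not the value used in our setup
    total = lambda ops: ops[0] + ops[1]
    gru, lstm, srnn = als_bigru(K_on), als_bilstm(K_on), als_bisrnn(K_on)
    print(f"Bi-LSTM overhead vs Bi-GRU : {100 * (total(lstm) / total(gru) - 1):.1f}%")
    print(f"Bi-SRNN reduction vs Bi-GRU: {100 * (1 - total(srnn) / total(gru)):.1f}%")
```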
Fig. 11 illustrates the computational complexities of the studied CNN-based FBF channel estimators. We can notice that the conventional 2D-LMMSE estimator records the highest computational complexity [35], making it impractical in real-time scenarios. Moreover, the ChannelNet, TS-ChannelNet, and WI-CNN estimators do not provide a good complexity vs. performance trade-off. In contrast, the complexity is significantly decreased by the proposed ALS-BiGRU channel estimator, which is \(10\)x and \(115\)x less complex than the ALS-WI-SRCNN and the ALS-WI-DNCNN estimators, respectively. Moreover, the proposed ALS-BiGRU channel estimator is \(10^{6}\)x less complex than the conventional 2D-LMMSE channel estimator. Therefore, we can conclude that employing the proposed optimized Bi-GRU network instead of CNN networks for channel estimation is more feasible and at the same time offers better performance, making it a good alternative to the 2D-LMMSE channel estimator.
## VII Conclusion and Future Perspectives
In this paper, RNN-based channel estimation in doubly-selective environments has been investigated. The recently proposed DL-based SBS and FBF channel estimators have been presented and their limitations have been discussed. In order to overcome these limitations, we have proposed optimized RNN-based and Bi-RNN based estimators for SBS and FBF channel estimation, respectively. Moreover, the performance of several RNN architectures, including SRNN, LSTM, and GRU, has been thoroughly analyzed based on the channel correlation within the received frame. We have also shown that the proposed GRU and Bi-GRU units result in a better performance-complexity trade-off in different mobility scenarios. Simulation results have shown the performance superiority of the proposed channel estimators over the recently proposed DL-based SBS and FBF estimators while recording a significant reduction in computational complexity. As a future perspective, we will investigate the ability to extend the proposed RNN and Bi-RNN based channel estimators to MIMO and mmWave communications, taking into consideration the constraints of each scenario. In order to further improve the online performance of the proposed channel estimators and their generalization capabilities, advanced DL algorithms such as transfer and meta-learning can be investigated. In addition, more advanced architectures such as transformers can be tested for channel estimation, considering the trade-off between complexity and performance. Moreover, working on interpretable and explainable theoretical DL models is a crucial future step that would ensure the reliability and transparency of employing DL networks in the domain of wireless communications, especially channel estimation, where the intuitions behind our proposed work can be further validated.
|
2305.15423 | Modeling Interlayer Interactions and Phonon Thermal Transport in
Silicene Bilayer | We develop an accurate interlayer pairwise potential derived from the
\textit{ab-initio} calculations and investigate the thermal transport of
silicene bilayers within the framework of equilibrium molecular dynamics
simulations. The electronic properties are found to be sensitive to the
temperature with the opening of the band gap in the $\Gamma$$\rightarrow$M
direction. The calculated phonon thermal conductivity of bilayer silicene is
surprisingly higher than that of monolayer silicene, contrary to the trends
reported for other classes of 2D materials like graphene and hBN bilayers. This
counterintuitive behavior of the bilayer silicene is attributed to the
interlayer interaction effects and inherent buckling, which lead to a higher
group velocity in the LA$_1$/LA$_2$ phonon modes. The thermal conductivity of
both the mono- and bilayer silicene decreases with temperature as $\kappa\sim
T^{-0.9}$ because of the strong correlations between the characteristic
timescales of heat current autocorrelation function and temperature ($\tau\sim
T^{-0.75}$). The mechanisms underlying phonon thermal transport in silicene
bilayers are further established by analyzing the temperature induced changes
in acoustic group velocity. | Sapta Sindhu Paul Chowdhury, Appalakondaiah Samudrala, Santosh Mogurampelly | 2023-05-17T08:34:43Z | http://arxiv.org/abs/2305.15423v2 | # Modeling Interlayer Interactions and Phonon Thermal Transport in Silicene Bilayer
###### Abstract
We develop an accurate interlayer pairwise potential derived from the _ab-initio_ calculations and investigate the thermal transport of silicene bilayers within the framework of equilibrium molecular dynamics simulations. We find that the electronic properties are sensitive to the temperature with the opening of the band gap in the \(\Gamma\to M\) direction at room temperature. The calculated phonon thermal conductivity of bilayer silicene is surprisingly higher than that of monolayer silicene, contrary to the trends reported for other classes of 2D materials like graphene and hBN bilayers. We attribute this counterintuitive result to the higher velocity of LA\({}_{1}\)/LA\({}_{2}\) phonon modes arising from the interlayer interaction effects and buckling, inherent to the silicene bilayer. Interestingly, the thermal conductivity of both the mono- and bilayer silicene decreases with temperature as \(\kappa\sim T^{-0.9}\) because of the strong correlations between heat current decay characteristic timescales and temperature (\(\tau\sim T^{-0.75}\)). The mechanisms underlying phonon thermal transport in silicene bilayer are further established by analyzing the temperature induced changes in acoustic group velocity.
## I Introduction
Silicene is an intriguing two dimensional (2D) material with a unique in-plane atomic configuration analogous to graphene and out-of-plane structural similarities with other classes of 2D materials [1; 2; 3]. The structural uniqueness inherent to silicene offers several distinguishable physical and electronic properties compared to other 2D materials [4]. Silicene has garnered tremendous interest in the last decade due to the existence of remarkable flexural phonon scattering [5], electro-optic effects [6], the quantum anomalous Hall effect [7], spin-orbit coupling [8], giant magnetoresistance [9], and a tunable band structure [10]. The flexural phonon scattering in silicene limits the thermal conductivity to low values, opening new avenues for technological applications such as thermoelectric devices. Furthermore, the structural and compositional superiority makes silicene compatible with silicon-based electronics [11] and the state-of-the-art semiconductor fabrication technologies [12].
The bilayer silicene poses interesting challenges for technological applications [13; 14; 15], and therefore, understanding the underlying physical mechanisms is imperative. A majority of the investigations carried out so far have focused on the electronic properties of bilayer silicene [16; 17; 18; 19]. Despite significant progress in elucidating the electronic properties of mono- and bilayer silicene, there is limited clarity on the phonon thermal transport and the underlying fundamental heat transport mechanisms [20]. Specifically, while there exist a few computational efforts investigating the thermal conductivity of monolayer silicene, there are no reports on the thermal conductivity of the bilayer.
The computational studies of monolayer silicene have focused on the thermal conductivity and its dependency on temperature [21], strain [22; 23], vacancies [24], and substrate [25]. First-principles calculations estimated the thermal conductivity of silicene to be 9.4 W/m.K at room temperature, which is mainly due to in-plane vibrations [26]. Using the reactive force field ReaxFF, a molecular dynamics (MD) study found the silicene monolayer to be thermally stable up to 1500 K [27]. Earlier MD work based on Tersoff's bond order potential [28] estimated the thermal conductivity to be 20 W/m.K [29], but it failed to reproduce the buckled structure of silicene. Similarly, the Modified Embedded Atom Model (MEAM) potential incorrectly predicts the buckling length (twice the reported value) in silicene [30]. Stillinger-Weber (SW) potential parameters modified by Zhang et al. could reproduce the buckling of the silicene monolayer in agreement with experiments and first-principles density functional theory (DFT) studies [31]. The modified SW potential parameter set produces a thermal conductivity of the monolayer in the range 5-15 W/m.K at 300 K, employing both equilibrium and non-equilibrium molecular dynamics (EMD and NEMD) approaches [31].
Despite the above, there is no consensus on how the thermal conductivity of monolayer silicene scales with temperature and on the corresponding phonon mechanisms. More importantly, investigations on the phonon thermal properties of multilayered silicene suffer from the lack of a suitable interlayer potential. In this work, we optimize the Lennard-Jones (LJ) pairwise interaction potential to capture the interlayer interactions accurately using _ab-initio_ DFT calculations. The optimized interlayer LJ potential model is used to study the structure and thermal transport properties of the silicene bilayer. Specifically, we calculate the effect of temperature on the structural, electronic and thermal conducting properties of the silicene bilayer. Moreover, we compare the thermal transport in the bilayer to that of the silicene monolayer. The results on the thermal conductivity of the bilayer are explained through the calculation of mode-dependent
phonon properties of the bilayer and monolayer systems. Phonon characteristics time calculated from heat current autocorrelation function and phonon dynamics calculations are used to explore the underlying reason behind the temperature dependency of thermal conductivity.
The rest of the paper is organized as follows: Section II presents the theoretical formalism and computational details followed in the paper, Section III presents the results obtained in the study followed by a discussion, and Section IV summarizes the results. The structural information is explained in Section III.1, and the optimization of the LJ parameters is detailed in Section III.2. The effect of temperature on the structural configurations is described in Section III.3, followed by the calculations of thermal conductivity in Section III.4. Section III.5 explains the thermal conductivity by analyzing the phonon modes.
## II Theoretical background and computational details
We use the _ab-initio_ density functional theory framework as implemented in the _quantum espresso_[32; 33] to predict the ground-state interlayer separation between silicene layers. The exchange-correlation (XC) energy was estimated using the Perdew-Burke-Ernzerhof (PBE) functional [34] of the generalized gradient approximation (GGA). The projector augmented wave (PAW) [35] method for pseudopotential is used to address the electron core interactions. For our calculations, plane waves with kinetic energy cutoff of 110 Ry for wavefunction and 880 Ry for charge density have been used with a Monkhorst-Pack [36] k-point grid of 32 \(\times\) 32 \(\times\) 1. We have minimized the Hellmann-Feynman forces acting on the atoms to \(5\times 10^{-4}\) eV/A and the total energy convergence threshold is \(10^{-6}\) eV.
The molecular dynamics simulations have been carried out in the Large-scale Atomic/Molecular Massively Parallel Simulator (Lammps) [37]. We have used the SW [38] potential with optimized parameter set [31] for describing the interatomic interactions. The interlayer interaction between silicene layers have been described using a LJ potential given by:
\[\phi(r)=4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{ r}\right)^{6}\right]. \tag{1}\]
Here, \(\sigma\) and \(\epsilon\) are the parameters of the potential, and the global cutoff for LJ is chosen to be \(r_{c}=20\) A, which is more than five times \(\sigma\). For the optimization of the LJ parameters for bilayer silicene, we have calculated the binding energy using the equation \(E_{\mathrm{b}}^{\mathrm{DFT/MD}}=E_{\mathrm{bilayer}}^{\mathrm{DFT/MD}}-E_{\mathrm{layer1}}^{\mathrm{DFT/MD}}-E_{\mathrm{layer2}}^{\mathrm{DFT/MD}}\), where \(E_{\mathrm{b}}^{\mathrm{DFT}}\) and \(E_{\mathrm{b}}^{\mathrm{MD}}\) are the binding energies from the DFT and MD calculations, respectively. Here, \(E_{\mathrm{bilayer}}^{\mathrm{DFT}}\), \(E_{\mathrm{layer1}}^{\mathrm{DFT}}\) and \(E_{\mathrm{layer2}}^{\mathrm{DFT}}\) are the total energies of the bilayer and the monolayers from the DFT calculations, and \(E_{\mathrm{bilayer}}^{\mathrm{MD}}\), \(E_{\mathrm{layer1}}^{\mathrm{MD}}\) and \(E_{\mathrm{layer2}}^{\mathrm{MD}}\) are the total energies of the bilayer and the monolayers from the MD simulations. We minimize the following objective function to get the optimized \(\sigma\) and \(\epsilon\):
\[\chi^{2}=\frac{1}{W}\int_{0}^{\infty}|E_{\mathrm{b}}^{\mathrm{DFT}}(r)-E_{ \mathrm{b}}^{\mathrm{MD}}(r)|^{2}w(r)dr \tag{2}\]
\[\approx\frac{1}{W}\sum_{r_{i}}^{r_{\mathrm{cut}}}|E_{\mathrm{b}}^{\mathrm{DFT }}(r_{i})-E_{\mathrm{b}}^{\mathrm{MD}}(r_{i})|^{2}w(r_{i}), \tag{3}\]
where, \(r\) specifies the interlayer separation between two silicene layers and \(w(r_{i})\) is the weight function, defined as:
\[w(r)=\exp\left[-\left(\frac{r-r_{0}}{\zeta}\right)^{2}\right], \tag{4}\]
where \(r_{0}\) is the minimum of the LJ potential and \(\zeta\) controls the width of \(w(r)\) around the minimum. We note that the choice of \(w(r)\) is not unique, and a relevant weight function can be chosen so that the significant part of the LJ potential is represented correctly.
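As a concrete illustration of this fitting procedure, the following Python sketch minimizes the discretized objective of Eq. (3) with the Nelder-Mead simplex method (the algorithm we employ in Section III.2). The file name of the reference DFT binding-energy curve, the starting guess, and the single-pair approximation of \(E_{\mathrm{b}}^{\mathrm{MD}}(r)\) are simplifying assumptions made only for this sketch; in practice \(E_{\mathrm{b}}^{\mathrm{MD}}(r)\) is obtained from full MD energy evaluations of the bilayer and the two monolayers.

```python
# Minimal sketch of the LJ parameter fit (Eq. 3) using Nelder-Mead.
# Assumptions for illustration: the DFT binding-energy curve is read from a
# two-column file (separation, E_b), and E_b^MD(r) is approximated by a single
# interlayer LJ pair instead of a full lattice sum over the supercell.
import numpy as np
from scipy.optimize import minimize

r_dft, e_dft = np.loadtxt("binding_energy_dft.dat", unpack=True)  # hypothetical file

def lj(r, eps, sigma):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def weight(r, r0, zeta=0.3):
    return np.exp(-((r - r0) / zeta) ** 2)       # Gaussian weight, Eq. (4)

def chi2(params):
    eps, sigma = params
    r0 = r_dft[np.argmin(e_dft)]                 # minimum of the reference curve
    w = weight(r_dft, r0)
    return np.sum(w * (e_dft - lj(r_dft, eps, sigma)) ** 2) / np.sum(w)

# Starting guess loosely based on UFF silicon values (assumed for the sketch)
result = minimize(chi2, x0=[0.017, 3.8], method="Nelder-Mead")
print("optimized eps, sigma:", result.x)
```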
We have applied the periodic boundary condition (PBC) in all three directions. To create aperiodicity and to prevent the interactions between the bilayer system and its images in the z-direction, we have created 50 A of vacuum in the simulation cell. Energy minimization using the conjugate gradient (CG) [39] and steepest descent (SD) algorithms has been conducted with an energy tolerance of \(10^{-8}\) and a force tolerance of \(10^{-8}\) eV/A. The minimized structures have been equilibrated in the isothermal-isobaric (NPT) ensemble at different temperatures from 300 K to 1000 K and at 0 bar pressure for 300 ps. The Nose-Hoover thermostat [40; 41] and the Nose-Hoover barostat [42] with a temperature coupling constant of 0.1 ps and a pressure coupling constant of 1 ps have been used for maintaining a constant temperature and pressure, respectively. The systems are then equilibrated in a canonical (NVT) ensemble for 300 ps. The Verlet algorithm [43] with a timestep of 1 fs has been used to integrate the equations of motion.
For the generation of production trajectories for the calculation of thermal conductivity, we simulated longer trajectories in an NVT ensemble. A total of 30 independent trajectories are generated using uncorrelated initial conditions. Each independent trajectory of length 2 ns is generated using a timestep of 0.1 fs.
We have used the Green-Kubo relation based on the fluctuation-dissipation theorem to calculate the thermal conductivity of the systems using the following relation:
\[\mathbf{\kappa}(T)=\frac{1}{Vk_{B}T^{2}}\int_{0}^{\infty}\langle\mathbf{S}(0).\mathbf{S}(t) \rangle dt, \tag{5}\]
where, \(V\) represents the volume of the systems, \(k_{B}\) is the Boltzmann's constant, \(T\) is the temperature of the system. The ensemble average \(\langle\mathbf{S}(0).\mathbf{S}(t)\rangle\) represents
the autocorrelation function of the heat current operator \(\mathbf{S(t)}\), which is calculated from the simulation data as:
\[\mathbf{S}(t)=\frac{d}{dt}\sum_{i}\mathbf{r}_{i}\tilde{E}_{i}, \tag{6}\]
where \(\mathbf{r}_{i}\) is the position vector concerning the i\({}^{\text{th}}\) atom, \(\tilde{E}_{i}=E_{i}-\langle E_{i}\rangle\) represents the fluctuation in total energy with respect to the mean energy. \(\mathbf{S}(t)\) is numerically computed in the following way:
\[\mathbf{S}(t)=\sum_{i}\tilde{E}_{i}\mathbf{v}_{i}+\frac{1}{2}\sum_{i<j}(\mathbf{F}_{ij}.( \mathbf{v}_{i}+\mathbf{v}_{j}))\mathbf{r}_{ij}, \tag{7}\]
where \(\mathbf{F}_{ij}\) is the force between atom \(i\) and atom \(j\), \(\mathbf{v}_{i}\) is the velocity and \(\mathbf{r}_{ij}\) is the displacement of i\({}^{\text{th}}\) and j\({}^{\text{th}}\) atoms.
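A minimal numerical realization of Eq. (5) is sketched below: the heat-current components are read from a (hypothetical) file, the autocorrelation is averaged over time origins, and the running Green-Kubo integral yields the thermal conductivity. The file layout, the use of SI units and the illustrative values of \(dt\), \(V\) and \(T\) are assumptions made for this sketch.

```python
# Sketch of the Green-Kubo evaluation of Eq. (5).
# Assumed input: three columns (Sx, Sy, Sz) of the heat current, sampled every dt.
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def hcacf(S, max_lag):
    """<S(0).S(t)> averaged over time origins, for lags 0..max_lag-1."""
    n = len(S)
    return np.array([np.mean(np.sum(S[:n - lag] * S[lag:], axis=1))
                     for lag in range(max_lag)])

def kappa_running(S, dt, V, T, max_lag):
    acf = hcacf(S, max_lag)
    # cumulative trapezoidal integral of the HCACF
    integral = np.concatenate(([0.0], np.cumsum(0.5 * (acf[1:] + acf[:-1]) * dt)))
    return integral / (V * kB * T ** 2), acf

if __name__ == "__main__":
    S = np.loadtxt("heat_current.dat")                 # hypothetical file, SI units assumed
    kappa, acf = kappa_running(S, dt=1e-16, V=1.0e-25, T=300.0, max_lag=5000)
    print("kappa at the chosen cutoff:", kappa[-1], "W/m.K")
```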
The phonon density of states (DoS) of the systems are calculated at different temperatures using the velocity autocorrelation function (VACF) as follows:
\[D(\omega)=\frac{1}{3Nk_{B}T}\int_{0}^{\infty}\frac{\langle\mathbf{v}(0).\mathbf{v}(t) \rangle}{\langle\mathbf{v}(0).\mathbf{v}(0)\rangle}e^{i\omega t}dt, \tag{8}\]
where \(\langle\mathbf{v}(0).\mathbf{v}(t)\rangle\) defines the VACF, \(\omega\) is the angular frequency, and \(N\) is the number of atoms in the system. It is noteworthy that a simulation trajectory of 250 ps with a finer timestep of 0.05 fs and a trajectory saving interval of 0.2 fs is used for the NVT run to capture a higher resolution in the VACF and to obtain a smoother DoS.
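The evaluation of Eq. (8) from an MD trajectory can be sketched as follows; the velocity-array file name and the windowing choice are illustrative assumptions, and the normalization prefactor of Eq. (8) is omitted since only the spectral shape is of interest here.

```python
# Sketch of the phonon DoS from the normalized VACF, Eq. (8) (prefactor omitted).
import numpy as np

def vacf(v):
    """Normalized VACF averaged over atoms and time origins.
    v has shape (n_frames, n_atoms, 3)."""
    n = v.shape[0]
    c = np.array([np.mean(np.sum(v[:n - lag] * v[lag:], axis=2))
                  for lag in range(n // 2)])
    return c / c[0]

def dos(v, dt):
    c = vacf(v)
    window = np.hanning(len(c))                  # mild smoothing of the spectrum
    spectrum = np.abs(np.fft.rfft(c * window))
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(c), d=dt)
    return omega, spectrum

if __name__ == "__main__":
    v = np.load("velocities.npy")                # hypothetical (n_frames, n_atoms, 3) array
    omega, g = dos(v, dt=0.2e-15)                # 0.2 fs saving interval, as in the text
    print("dominant mode at", omega[np.argmax(g)], "rad/s")
```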
We use the equilibrium statistical mechanics formalism based on the fluctuation dissipation theorem to construct the dynamical matrices [44] of phonon wave vector using the following equation:
\[\mathbf{D}_{k\alpha,k^{\prime}\beta}(\mathbf{q})=(m_{k}m_{k^{\prime}})^{-\frac{1}{2} }\mathbf{\Phi}_{k\alpha,k^{\prime}\beta}(\mathbf{q}). \tag{9}\]
Here, the force constant coefficient \(\mathbf{\Phi}_{k\alpha,k^{\prime}\beta}(\mathbf{q})\) at phonon wavevector \(\mathbf{q}\) is obtained from:
\[\mathbf{\Phi}_{k\alpha,k^{\prime}\beta}(\mathbf{q})=k_{B}T\mathbf{G}_{k\alpha,k^{\prime} \beta}^{-1}(\mathbf{q}). \tag{10}\]
If the displacement of the k\({}^{\text{th}}\) basis atom in the unit cell in the \(\alpha\) direction is:
\[\mathbf{u}_{k\alpha}(\mathbf{q})=\sum_{l}\mathbf{u}_{lk\alpha}\exp(\iota\,\mathbf{q}\cdot\mathbf{r}_{l}), \tag{11}\]
then the Green's function coefficient can be given as:
\[\mathbf{G}_{k\alpha,k^{\prime}\beta}(\mathbf{q})=\langle\mathbf{u}_{k\alpha}(\mathbf{q})\cdot\mathbf{u}_{k^{\prime}\beta}^{*}(\mathbf{q})\rangle. \tag{12}\]
To calculate the phonon dispersion, we have used a post processing tool, phana [45] along with the fix-phonon package [46] as implemented in lammps. The group velocity is calculated as:
\[\mathbf{v}_{g}(\mathbf{q}\nu) =\nabla_{q}\omega(\mathbf{q}\nu) \tag{13}\] \[=\frac{\partial\omega(\mathbf{q}\nu)}{\partial\mathbf{q}}\] \[=\frac{1}{2\omega(\mathbf{q}\nu)}\frac{\partial[\omega(\mathbf{q}\nu)]^{ 2}}{\partial\mathbf{q}}\] \[=\frac{1}{2\omega(\mathbf{q}\nu)}\left\langle\mathbf{e}(\mathbf{q}\nu)| \frac{\partial D(\mathbf{q})}{\partial\mathbf{q}}|\mathbf{e}(\mathbf{q}\nu)\right\rangle.\]
Here, \(\mathbf{e}(\mathbf{q}\nu)\) is the phonon eigenvector at a frequency \(\nu\) at \(\mathbf{q}\). We have used phonopy [47; 48] interfaced with phono-lammps for calculating the group velocity at different temperatures.
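As a self-contained illustration of the first line of Eq. (13), the toy sketch below diagonalizes the (here one-dimensional, single-band) dynamical matrix of a monatomic chain and differentiates the resulting branch numerically; the spring constant, mass and lattice constant are arbitrary toy values. In the actual calculations this step is handled by phonopy interfaced with phono-lammps on the force constants extracted from the simulations, as described above.

```python
# Toy illustration of Eq. (13): the group velocity as the numerical derivative
# of the phonon branch obtained from a dynamical matrix.
# The 1D monatomic chain (spring constant K, mass m, lattice constant a) is an
# assumed toy model, not the silicene dynamical matrix.
import numpy as np

K, m, a = 10.0, 1.0, 1.0

def dyn_matrix(q):
    return np.array([[2.0 * K / m * (1.0 - np.cos(q * a))]])

q = np.linspace(1e-4, np.pi / a, 400)
omega = np.array([np.sqrt(np.linalg.eigvalsh(dyn_matrix(qi))[0]) for qi in q])
v_g = np.gradient(omega, q)                       # central finite differences

# analytic check: omega = 2*sqrt(K/m)*|sin(qa/2)|  ->  v_g = a*sqrt(K/m)*cos(qa/2)
v_exact = a * np.sqrt(K / m) * np.cos(q * a / 2.0)
print("max deviation from the analytic group velocity:", np.max(np.abs(v_g - v_exact)))
```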
## III Results and Discussion
### Structural Configuration
We consider the low-buckled geometry of monolayer silicene, which is the most stable structure of silicene [20]. The hexagonal lattice with a lattice constant of 3.8124 A, a Si-Si bond length of 2.2420 A and a buckling height of 0.4269 A is taken for our calculation, as per previous reports [31] (Fig. 1(a)). The unit cell follows the symmetry operations of the C2/m spacegroup. We extend this unit cell to create a 32 \(\times\) 32 \(\times\) 1 supercell with 2048 Si atoms (Fig. 1(b)) for the subsequent MD simulations. We use the SW2 potential parameters provided by Zhang et al. [31] to describe the intralayer interactions, as the parameter set is based on the acoustic part of the phonon dispersion curve that is responsible for the major contribution to thermal conductivity.
Figure 1: (a) Crystal structure [31] and the snapshot of equilibrated monolayer silicene displaying out-of-plane corrugations in vacuum at 300 K (b) crystal structure and the snapshot of equilibrated bilayer silicene displaying out-of-plane corrugations at 300 K. Due to substrate-like effect on the silicene as a result of interlayer interactions, we observe suppressed spatial fluctuation on the bilayer system compared to the monolayer
The optimized geometry obtained using the SD and CG minimization protocols agrees well with the reported lattice parameters. Our DFT calculations confirm the electronic band structure with a Dirac cone at the K point and the non-magnetic nature of the monolayer in the ground state [20]. The details on the temperature variation of the monolayer system are given in Fig. S1. For the construction of the bilayer, we align one layer on top of the other monolayer so that an AA stacking is obtained. At this stage, since no suitable and accurate interlayer interaction potential exists for bilayer silicene, we develop the LJ potential model.
### Optimization of the Force Field Parameters
Developing an appropriate potential for modeling the interlayer interactions is a challenging yet crucial task. In the past, many interlayer potentials such as DRIP, KC and ILP have been developed for graphene and other similar 2D materials. Though robust and effective in modeling complex geometries such as twisted bilayers and multilayers, these potentials make simulations computationally very expensive. Here, our goal is to explore the pristine bilayer system without any twisting or rotational asymmetry. As such, for simplicity, we develop a pairwise LJ potential to model the interactions between the two silicene layers. The reference force field parameters \(\epsilon\) and \(\sigma\) are based on the unified force field (UFF) model of Rappe et al. [49], and a global cutoff \(r_{c}\) = 20 A, which is more than five times \(\sigma\), is taken. Previously, in many studies concerning silicene, silicon-silicon interactions have been modelled using the UFF parameters. However, the interlayer separation (d) produced by these parameters is inaccurate. Using the UFF LJ parameters, molecular mechanics minimization produces an interlayer separation of 3.95 A, which we plan to optimize by tuning \(\epsilon\) and \(\sigma\). In contrast, our DFT calculations with the PBE-GGA density functional show that the binding energy is minimized at an interlayer separation of 2.79 A, compatible with previous studies [50; 51]. Therefore, we optimize the LJ parameters taking the UFF as the starting approximation. We have plotted the binding energy curve as a function of the interlayer separation in Fig. 2. It is clear that the UFF model parameters are far off from our DFT calculation and the literature [50; 51]. We obtain the optimized \(\epsilon\) and \(\sigma\) by minimizing the \(\chi^{2}\) function (Eq. (3)) following the simplex algorithm developed by Nelder and Mead [52]. The developed force field parameters for the LJ potential are \(\epsilon\) = 0.0345785 eV and \(\sigma\) = 2.5862 A, using a weight function parameter \(\zeta\) = 0.3 A. Here, we have introduced the weight function parameter to accurately model the interaction behaviour around the minimum of the binding energy curve. Based on this optimized interlayer potential and the intralayer SW potential, the unit cell of the bilayer considered in this work is found to follow the symmetry operations of the P-3m1 spacegroup. Using the developed LJ force field parameters, the obtained lattice constant is 3.7941 A, the buckling height is 0.4595 A and the bond length is 2.2382 A. In Table 1, we summarize the structural information of the systems used in our study. Clearly, there is a slight increase in the buckling height while the lattice constant decreases slightly. We observe that the optimized LJ parameters produce the interlayer separation as
Figure 3: The variation of structural parameters with temperature: (a) interlayer separation, (b) buckling height and (c) percentage deviation of the lattice constant with respect to the corresponding value at 0 K. (d) The electronic band structure and (e) the phonon dispersion along the \(\Gamma\to K\to M\rightarrow\Gamma\) symmetry directions. The solid black line represents the Fermi level in the electronic band. The solid curve represents 0 K and the dashed curve represents 300 K temperature for both the band structures.
Figure 2: Binding energy with interlayer separation for bilayer silicene. The black curve is the reference DFT data used for deriving LJ parameters. The optimized parameters are used for all subsequent MD simulations. The red colored curve is the binding energy curve obtained using the optimized LJ parameters (\(\epsilon\) = 0.0345785 eV and \(\sigma\)= 2.5862 Å).
well as the lattice parameters and buckling height of bilayer silicene at 0 K, consistent with the first-principles DFT calculations and in agreement with the literature [50; 51]. Our ground-state magnetic calculation reveals that the bilayer is most stable in the non-magnetic state. Despite reproducing the structural features of bilayer silicene satisfactorily, we note that the optimized LJ potential in this work has limited capabilities, in the sense that it is most accurate only for the AA-stacked silicene on which it is based.
### Effect of Temperatures on Structural Features
Because of the buckled topology and structural complexity, silicene is expected to show high sensitivity to temperature. In this section, we discuss the results of the structural, electronic and phononic properties as a function of temperature. We use a molecular dynamics trajectory of 500 ps in the isothermal-isobaric ensemble to calculate the structural parameters of the bilayer system at all temperatures. Prior to that, we have equilibrated the system in the canonical ensemble for 500 ps at the respective temperatures. The temperature dependencies of the interlayer separation, the buckling height and the change in the lattice constant are explored in Fig. 3. We observe a monotonic increase in the interlayer distance between the layers as the temperature is increased (Fig. 3(a)). This results from the volume expansion and the repulsion between layers due to the increased thermal motion of the atoms at high temperatures. However, a sharp decrease in the buckling height is observed with temperature up to 500 K, due to the fact that the lattice constant is increasing (Fig. 3(b)). Interestingly, the buckling height is approximately constant in the range 500 K - 750 K and increases slightly up to 1000 K. Compared to the monolayer, the buckling height is less sensitive to temperature beyond 500 K. We find that there is little change in the lattice constant with temperature. In Fig. 3(c), we plot the % deviation of the lattice constant with respect to the value at 0 K. The maximum observed deviation in the lattice constant is only 0.8% at an elevated temperature of 1000 K. Counterintuitively, we observe that the lattice constant increases with temperature up to 300 K and then decreases up to 500 K. This is not the case in the monolayer silicene, indicating that the interlayer interactions and the changes in buckling height collectively result in the counterintuitive trend of the lattice constant. In Fig. S1, we see that the lattice constant of the monolayer decreases monotonically from \(\sim\)3.81 to \(\sim\)3.71 A as the temperature is increased from 0 K to 1000 K. As expected, we observe a significant degree of corrugation in both the monolayer and bilayer silicene (Fig. 1) at finite temperatures. Interestingly, the corrugations are slightly suppressed in bilayer silicene compared to the monolayer counterpart. In the case of the bilayer, one layer of silicene acts as a substrate that stabilizes the other layer, consistent with previous studies on silicene-based heterostructures [54].
The electronic structure and phononic properties are found to be highly sensitive to the structural changes due
\begin{table}
\begin{tabular}{l c c c} & a (Å) & \(\Delta h\) (Å) & Space Group \\ \hline Monolayer & 3.8124 & 0.4269 & C2/m \\ Bilayer & 3.7941 & 0.4595 & P-3m1 \\ \end{tabular}
\end{table}
Table 1: The crystal structures of the monolayer and bilayer silicene in the ground state used in our calculations.
Figure 4: (a) The averaged normalized HCACF as a function of correlation or lag time at 300 K, (b) the thermal conductivity calculated using 30 independent trajectories (dashed black curve) and the average thermal conductivity (solid black curve) with the upper limit of the Green Kubo integral (Eqn. 5) at 300 K, and (c) the thermal conductivity of silicene monolayer and bilayer as a function of temperature. The filled-in circles and solid line represents the simulation data and the fit to power law \(\kappa\sim T^{-0.93}\) for the monolayer system respectively. The open circles and the dashed line represent the simulation data and the fit to power law \(\kappa\sim T^{-0.88}\) for bilayer system respectively. The power-law fitting shows the trend of the thermal conductivity decreasing with temperature for both monolayer and bilayer. Interestingly, bilayer silicene shows higher magnitude of thermal conductivity than the monolayer.
to temperature. To investigate this sensitivity, we calculate the electronic band structure along the \(\Gamma\rightarrow\)K\(\rightarrow\)M\(\rightarrow\)\(\Gamma\) direction (Fig. 3(d)) for the ground state (0 K) and at room temperature (300 K), with the lattice structure obtained from the MD relaxation. In the ground state, we observe the presence of a Dirac cone, which vanishes at room temperature with clear evidence of a band gap opening. The reduced buckling and the increased interlayer separation result in this departure from the Dirac cone in the M\(\rightarrow\Gamma\) path.
We also calculate the temperature effects on the phonon dispersion along the same high-symmetry direction in \(\mathbf{q}\)-space as for the electronic band structure (Fig. 3(e)). We observe that both the optical and acoustic branches of the phonon spectrum undergo redshifting as the temperature is increased, with a stronger redshift in the optical branch compared to the acoustic branch. More details about the phonon modes and the effect of temperature are discussed in Section III.5. In summary, we see significant and unconventional changes in the structural, electronic and phononic behavior of the bilayer compared to the monolayer counterpart as the temperature increases. In the following section, we shed light on how the above results affect the thermal transport in the bilayer system.
### Thermal Conductivity of Bilayer Silicene
Thermal transport properties of a material are characterized by its thermal conductivity. Using the EMD simulations, we have calculated the thermal conductivity of the silicene bilayer. The equilibrium approach uses the Green-Kubo relation (Eqn. 5), which integrates the HCACF to obtain the thermal conductivity. In Fig. 4(a), we show the normalized HCACF as a function of the correlation or lag time, \(\tau\). We observe that the correlation function decays exponentially with oscillations up to \(\sim 50\) ps, beyond which there is a negligible contribution to the thermal conductivity. For calculating the thermal conductivity, we simulated 30 independent trajectories with a length of 1 ns and a trajectory saving interval of 0.02 fs. We note that the choice of the number of independent trajectories strictly depends on the convergence of the system and the desired accuracy. Fig. 4(b) shows the thermal conductivity calculated from individual runs and the average thermal conductivity based on 30 independent runs as a function of the upper limit of the GK integral. It is clear that the thermal conductivity converges after about \(\sim\)20 ps of correlation time at 300 K. At higher temperatures, \(\kappa\) converges faster, which can be attributed to the shorter phonon lifetimes at higher scattering rates. However, to reduce errors, we have taken 100 ps as the cutoff correlation time for estimating the thermal conductivity. Our choice of cutoff is based on the decay of the heat current autocorrelation function.
Our calculations show that the thermal conductivity of the silicene bilayer is 14.8 \(\pm\) 0.3 W/m.K at room temperature, which is much lower than that of graphene and h-BN. We observe that the thermal conductivity decreases with temperature, similar to other 2D materials. Surprisingly, we see an increase in the thermal conductivity of about \(\sim 15\%\) in bilayer silicene compared to the monolayer system. We notice a monotonic decrease in the thermal conductivity of both the mono- and bilayer systems with temperature (Fig. 4(c)). We attribute this decrease in thermal conductivity to phonon-phonon Umklapp processes. At higher temperatures, Umklapp processes increase significantly, thus reducing the thermal conductivity. Since the thermal conductivity is inversely proportional to temperature, we fit \(\kappa\) to a power law and obtain \(\kappa\sim T^{-0.93}\) for the monolayer and \(\kappa\sim T^{-0.88}\) for bilayer silicene, with an overall trend of \(\kappa\sim T^{-0.9}\). The deviation from the ideal scaling of \(T^{-1}\) for both systems is only marginal. This deviation from the ideal \(T^{-1}\) law might be due to the quasi-harmonic nature of the scattering resulting from higher-order effects [55].
We fit the correlation functions with two exponential functions such that:
\[\frac{\langle\mathbf{S}(0).\mathbf{S}(t)\rangle}{\langle\mathbf{S}(0).\mathbf{S}(0)\rangle}=a_ {0}\exp(-t/\tau_{1})+(1-a_{0})\exp(-t/\tau_{2}) \tag{14}\]
The fitting gives two characteristic times: a short-range characteristic time \(\tau_{1}\) and a long-range correlation time \(\tau_{2}\). We find that \(\tau_{1}\) is smaller than \(\tau_{2}\), signifying a faster decay at correlation times below 5 ps. Further,
Figure 5: (a) The characteristic time scales associated with short range (\(\tau_{1}\)) and long range (\(\tau_{2}\)) correlations as a function of temperature. The filled-in symbols and solid lines represent the simulation data and the power-law fit for the monolayer system, respectively. The open symbols and the dashed lines represent the simulation data and the power law fit for bilayer system, respectively. Short range characteristic time scales are shown in diamonds and the long range characteristic time scales are shown in circles. (b) Thermal conductivity as a function of the long range characteristics time scales. Other legends are the same as that of Fig. 4. It is evident that the temperature has the same effect on \(\tau\) as it has on \(\kappa\), which explains convincingly that \(\kappa\sim T^{-0.93}\)
both \(\tau_{1}\) and \(\tau_{2}\) decrease with increasing temperature (Fig. 5(a)). This may indicate a shorter phonon mean free path at higher temperatures [55], which contributes to the decrease in thermal conductivity. We analyze the correlation between the characteristic times and temperature in Fig. 5(a); by fitting to a power law, we see that both the monolayer and bilayer systems scale as \(\tau_{1}\sim T^{-0.75}\) and \(\tau_{2}\sim T^{-0.77}\). To investigate the existence of a scaling relation between the characteristic times and the thermal conductivity, in Fig. 5(b) we compare \(\kappa\) and \(\tau_{2}\). Interestingly, the thermal conductivity is found to be qualitatively linear in the characteristic time (\(\kappa\sim\tau_{2}\)). Although this simple analysis provides a significant outlook, to further understand the behaviour of bilayer silicene as compared to other 2D materials, we explore the temperature dependency of the phonon modes in the subsequent section.
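A minimal sketch of the double-exponential fit of Eq. (14) used to extract \(\tau_{1}\) and \(\tau_{2}\) is given below; the correlation-data file name, initial guesses and bounds are illustrative placeholders.

```python
# Sketch of the double-exponential fit of the normalized HCACF, Eq. (14).
import numpy as np
from scipy.optimize import curve_fit

def model(t, a0, tau1, tau2):
    return a0 * np.exp(-t / tau1) + (1.0 - a0) * np.exp(-t / tau2)

t, acf = np.loadtxt("hcacf_normalized.dat", unpack=True)   # hypothetical file, t in ps
popt, _ = curve_fit(model, t, acf, p0=[0.8, 1.0, 20.0],
                    bounds=([0.0, 1e-3, 1e-3], [1.0, 50.0, 500.0]))
a0, tau1, tau2 = popt
print(f"a0={a0:.2f}, tau1={tau1:.2f} ps (short range), tau2={tau2:.2f} ps (long range)")
```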
### Phonons in Bilayer Silicene
To understand the effect of the interlayer interactions and the temperature on the thermal conductivity, we have systematically analyzed the phonon dynamics of the silicene systems. The phonon density of states provides insights into how the phonon modes contribute to the thermal conductivity of the silicene systems. We have calculated the phonon density of states and the partial density of states to understand how the phonon density is influenced by the in-plane and out-of-plane motion of the atoms. The phonon density of states calculated from the MD simulations is simply a Fourier transform of the VACF. We plot the normalized VACF and the contribution of the out-of-plane component to the VACF as a function of the correlation or lag time in Fig. S5. We find that the VACF decays to zero after \(\sim 10\) ps of correlation time. Hence, the maximum time limit for the integration in Eq. (8) is taken as 50 ps, which is more than sufficient for all the fluctuations to die out. Interestingly, the out-of-plane component shows fewer fluctuations over the whole range of correlation times. We rescale and plot the phonon DoS for the silicene bilayer and monolayer systems in Fig. 6(a). We note that the acoustic mode density blueshifts in the bilayer, which results in an increased thermal conductivity of the bilayer silicene compared to the monolayer. The flexural component contributes to the total DoS at lower frequencies but decays to zero at the optical frequencies. Nevertheless, both the acoustic and optical modes are blueshifted for the bilayer system. To understand the effect of temperature on the phonon DoS, the total phonon density of states of the bilayer is plotted at 300 K, 500 K and 1000 K in Fig. 6(b). It is clear that the intensity of the phonon DoS decreases monotonically with temperature. The redshift of the peak of the phonon DoS (Fig. S7) directly correlates with the decrease in thermal conductiv
Figure 6: (a) Phonon DoS as a function of the angular frequency of vibration for monolayer (solid black curve) and bilayer (solid red curve) silicene. The absolute value is rescaled for better representation of the dependency on the interaction. Flexural contribution is plotted with dashed lines (black and red for monolayer and bilayer, respectively). (b) Dependency of phonon DoS on temperature for bilayer silicene. Here, black, red and blue curves represent 300 K, 500 K and 1000 K, respectively. Intensity of phonon density of state decreases with temperature due to increased phonon scattering rate at high temperature.
Figure 7: (a) Phonon group velocity with angular frequency in bilayer silicene at 300 K, (b) variation of the group velocity for the bilayer at 300 K, 500 K, 1000 K temperatures, (c) variation of the group velocity for monolayer (black circles) and bilayer (red circles) and (d) comparison of phonon band structure for monolayer (black curve) and bilayer (red curve) at 300 K. The acoustic branches are represented with solid curves while the optical branches are represented with dashed curves. We observe that group velocity decreases with an increase in temperature leading to reduced thermal conductivity. However, LA\({}_{1}\)/LA\({}_{2}\) mode velocities are increased in bilayer compared to the monolayer leading to higher thermal conductivity in bilayer.
ity with temperature [56]. This is also the case for the monolayer system studied in our work.
In Fig. 7(a), we have plotted the group velocity of the bilayer system at 300 K. We observe that the flexural ZA\({}_{1}\)/ZA\({}_{2}\) modes have a lower group velocity than the other acoustic modes, i.e., TA\({}_{1}\)/TA\({}_{2}\) and LA\({}_{1}\)/LA\({}_{2}\). This observation points to the fact that, due to the inherent buckling, the flexural modes are prone to high scattering compared to the in-plane modes and thus have a lower group velocity. Naturally, it is expected that the contribution of the flexural modes to the thermal conductivity is limited, unlike in the cases of graphene and h-BN. It is clear that the LA\({}_{1}\)/LA\({}_{2}\) modes, which contribute the most to the thermal conductivity [25], have a higher group velocity in the bilayer (Fig. 7(a)). On the other hand, the optical modes show a monotonic decrease in the group velocity with the phonon angular frequency. Further, there is a monotonic decrease in the group velocity of the bilayer system with temperature, as shown in Fig. 7(b). This validates our result on the temperature variation of the thermal conductivity, namely that the thermal conductivity decreases with temperature. As the temperature is raised, the likelihood of phonon scattering increases, leading to a higher degree of collisions, which decreases the group velocity. The decrease in the group velocity leads to a decrease in the thermal conductivity of the systems. There is a redshift in the phonon mode frequencies, as previously observed in the case of the phonon DoS. The redshift of the phonon branches is also apparent as we move from the bilayer to the monolayer system, as noticeable from Fig. 7(c). Therefore, it is quite clear that the redshifting of phonon modes correlates directly with the decrease in thermal conductivity. Overall, the bilayer phonon modes have a higher group velocity compared to the monolayer. It is noteworthy that the group velocity of the LA\({}_{1}\)/LA\({}_{2}\) modes is much higher than that of the LA mode of the monolayer, which has the maximum contribution to the thermal conductivity.
The phonon dispersion curves calculated for the bilayer and the monolayer systems show a similar trend. First, to check the validity of our method for calculating the phonon dispersion, we calculated the phonon dispersion of the monolayer using density functional perturbation theory (DFPT) and compared it to the dispersion calculated from the MD trajectory (Fig. S9). The phonon spectral energy density (SED) method is also used to calculate the phonon dispersion in the quasi-harmonic limit along the \(\Gamma\rightarrow\) M direction (Fig. S11). We see that our method with the classical potentials can reproduce the results of the quantum calculations within a reasonable uncertainty (Fig. S9); therefore, we use the classical mechanics based techniques for simplicity. We notice that the TA\({}_{2}\) mode acquires a non-zero frequency at the \(\Gamma\) point. Moreover, a finite phonon angular frequency is also shown by the LA\({}_{1}\)/LA\({}_{2}\) modes. Compared to the monolayer, we notice a blueshift in both the optical and the acoustic branches, which directly corresponds to the increase in the thermal conductivity of the bilayer (Fig. 7(d)). It is believed that the in-plane and out-of-plane modes are strongly coupled in monolayer silicene due to the inherent buckling [31]. The flexural acoustic modes also experience a damping effect due to the increased buckling-induced out-of-plane scattering. However, since strong interlayer interactions exist between the silicene layers, the bonding leads to the generation of high-velocity LA\({}_{1}\)/LA\({}_{2}\) elastic waves. Generally, this is observed in the case of substrate-layer interactions, where increasing the strength of the interaction leads to higher thermal transport [57; 58]. Interestingly, we observe that the optical modes show a larger degree of redshifting in the monolayer compared to the bilayer.
## IV Conclusions
In summary, we developed accurate parameters for a pairwise LJ potential modeling the interlayer interactions between silicene layers and investigated the phonon thermal transport with the Green-Kubo approach. Unlike previous studies, the developed parameter set correctly predicts the interlayer separation between silicene layers, in agreement with the _ab-initio_ DFT calculations. We find that the interlayer interaction and the temperature play a major role in the structural, electronic and thermal transport properties of both systems. Interestingly, the lattice constants of mono- and bilayer silicene display opposite trends with temperature. Specifically, the lattice constant of the bilayer displays minimal changes up to 500 K and then increases monotonically with temperature. However, the lattice constant of the monolayer decreases monotonically with temperature. The maximum deviation from the ground state is only \(\sim\) 0.8% at 1000 K. The buckling height decreases with temperature up to 500 K, where the trend reverses with less sensitivity to temperature, similar to the monolayer counterpart. In the bilayer system, the interlayer separation increases with temperature due to the repulsion arising from thermal motion. We also observe the vanishing of the Dirac cone in the electronic band of the bilayer at room temperature.
Although the group velocity of the ZA\({}_{1}\)/ZA\({}_{2}\) modes dampens compared to the monolayer, owing to the increased inherent buckling in the bilayer, the thermal conductivity of bilayer silicene surprisingly increases by \(\sim 15\%\) compared to the monolayer system. The phonon group velocity shows an enhancement for the LA\({}_{1}\)/LA\({}_{2}\) modes due to the strong substrate-like interaction between the silicene layers, which results in the increase in thermal conductivity. Furthermore, we see a blueshift of the peak of the phonon DoS in the bilayer compared to the monolayer, which is directly linked to the increase in thermal conductivity.
We find the thermal conductivity to decrease with temperature, consistent with other 2D materials. The decrease in thermal conductivity scales as \(\kappa\sim T^{-0.9}\), as compared to the typically reported \(\kappa\sim T^{-1}\) scaling. This is attributed to the quasi-harmonic nature of phonon scattering at high temperatures. Our analysis of the normalized
heat current autocorrelation functions obtained from the equilibrium MD simulations shows that the phonon characteristic time decreases with temperature as \(\tau\sim T^{-0.75}\). We attribute this decrease to the increase in Umklapp scattering processes at higher ambient temperatures. In particular, both the acoustic and optical phonon modes experience a decrease in group velocity due to the increased scattering rate. We propose further experimental as well as theoretical studies to fully capture the nature of thermal transport in silicene. Our current study may pave the way for the understanding of silicene-based systems for thermal interfacing devices and thermoelectric applications.
###### Acknowledgements.
The authors acknowledge the Computer Center of IIT Jodhpur and the HPC center at the Department of Physics, Freie Universitat Berlin (10.17169/refuibum-26754), for providing computing resources that have contributed to the research results reported in this paper. SM acknowledges support from the SERB International Research Experience Fellowship SIR/2022/000786 and SERB CRG/2019/000106 provided by the Science and Engineering Research Board, Department of Science and Technology, India. SSPC acknowledges the Ministry of Education (MoE), Govt. of India for the financial support received as fellowship. AS acknowledges support from the SERB through grant number RJF/2021/000147.
|
2303.09384 | LLMSecEval: A Dataset of Natural Language Prompts for Security
Evaluations | Large Language Models (LLMs) like Codex are powerful tools for performing
code completion and code generation tasks as they are trained on billions of
lines of code from publicly available sources. Moreover, these models are
capable of generating code snippets from Natural Language (NL) descriptions by
learning languages and programming practices from public GitHub repositories.
Although LLMs promise an effortless NL-driven deployment of software
applications, the security of the code they generate has not been extensively
investigated nor documented. In this work, we present LLMSecEval, a dataset
containing 150 NL prompts that can be leveraged for assessing the security
performance of such models. Such prompts are NL descriptions of code snippets
prone to various security vulnerabilities listed in MITRE's Top 25 Common
Weakness Enumeration (CWE) ranking. Each prompt in our dataset comes with a
secure implementation example to facilitate comparative evaluations against
code produced by LLMs. As a practical application, we show how LLMSecEval can
be used for evaluating the security of snippets automatically generated from NL
descriptions. | Catherine Tony, Markus Mutas, Nicolás E. Díaz Ferreyra, Riccardo Scandariato | 2023-03-16T15:13:58Z | http://arxiv.org/abs/2303.09384v1 | # LLMsecEval: A Dataset of Natural Language Prompts for Security Evaluations
###### Abstract
Large Language Models (LLMs) like Codex are powerful tools for performing code completion and code generation tasks as they are trained on billions of lines of code from publicly available sources. Moreover, these models are capable of generating code snippets from Natural Language (NL) descriptions by learning languages and programming practices from public GitHub repositories. Although LLMs promise an effortless NL-driven deployment of software applications, the security of the code they generate has not been extensively investigated nor documented. In this work, we present _LLMsecEval_, a dataset containing 150 NL prompts that can be leveraged for assessing the security performance of such models. Such prompts are NL descriptions of code snippets prone to various security vulnerabilities listed in MITRE's _Top 25 Common Weakness Enumeration (CWE)_ ranking. Each prompt in our dataset comes with a secure implementation example to facilitate comparative evaluations against code produced by LLMs. As a practical application, we show how _LLMSecEval_ can be used for evaluating the security of snippets automatically generated from NL descriptions.
LLMs, code security, NL prompts, CWE
## I Introduction
Increased computation power has led to the emergence of several Large Language Models (LLMs) encompassing billions of parameters with high natural language processing capabilities. LLMs like Codex [1] or PolyCoder [2] are heavily trained on data mined from open-source projects to perform tasks such as code completion, generation, and summarization. Thereby, these models can understand the structure and syntax of various programming languages, as well as common patterns that are used in real-world software development projects. Moreover, they are even capable of producing code from Natural Language (NL) descriptions [3] (e.g., _"generate Python code to create a login page that authenticates a user"_), thus significantly reducing developers' coding efforts.
_Motivation:_ At their core, such LLMs are trained with billions of lines of code mined from open-source projects, including public GitHub repositories. Despite their large contribution to LLMs' performance, these sources often contain security vulnerabilities stemming from insecure API calls, outdated algorithms/packages, insufficient validation, and poor coding practices, among others [4, 5, 6]. It was observed that around 85% of the security APIs are misused on GitHub [7]. Hence, it is also possible that the code generated by LLMs may contain security flaws and vulnerabilities.
LLMs are getting more and more popular among software practitioners thanks to tools like GitHub Copilot [8], which include powerful code completion capabilities. Therefore, as developers start adopting such LLMs to create real-world applications, it becomes critical to assess the security of the code they generate from NL descriptions. Such an assessment would require, in principle, a collection of NL prompts describing security-relevant software instructions. That is, prompts covering scenarios or use cases prone to security vulnerabilities to verify whether LLMs produce secure implementations or not. Nevertheless, to the extent of our knowledge, a dataset of such characteristics has not yet been proposed nor documented in the current literature, which calls for further investigations and efforts in this regard.
_Contribution:_ In this work, we present _LLMSecEval_, a dataset consisting of 150 NL prompts for assessing the security of code produced by LLMs. Each prompt is a textual description of a piece of software prone to some security vulnerability listed among MITRE's Top 25 Common Weakness Enumeration (CWE) ranking [9]. Additionally, the dataset contains secure code examples for each prompt to facilitate comparative evaluations against LLM-generated deployments. We have carefully evaluated the quality of the NL prompts using both language- and content-related metrics. As a practical demonstration, we developed an application that (i) uses GPT-3 and Codex to generate code from the prompts available in LLMSecEval, and (ii) employs CodeQL [10], a code analysis engine, to identify security vulnerabilities in such code.
## II Related Work
The creators of Codex generated a dataset called HumanEval [1] to evaluate the functional correctness of the code generated by Codex. This dataset comprises 164 hand-written programming problems where each problem is constituted by a function signature, docstring, and unit tests. Austin et al. [11] built two datasets to evaluate LLMs for program synthesis tasks. The first dataset consists of short Python programs with human-readable problem statements and test cases to check for the semantic correctness of the programs. The second dataset consists of mathematical problems and their corresponding programs that generate the correct answers for the problems. However, the above datasets are not geared toward evaluating the security of the code generated by LLMs. Pearce et al. [12]
created a set of code scenarios to evaluate the code completion capability of GitHub Copilot (which uses Codex). In contrast to the above datasets, this study examined the security rather than the functional correctness of code. They created a set of incomplete code scenarios for which Copilot was asked to complete the code and analysed the security of the results. The same code scenarios were used as a base in another work [13] to explore the ability of LLMs to fix bugs in vulnerable code. However, that work focuses on vulnerability repair prompts in the form of incomplete code with comments rather than NL prompts for secure code generation.
## III Creation of NL Prompts
An NL prompt -in this context- is a query or description written in natural language about some code functionality. Every year, MITRE [14] releases a list of the top 25 most dangerous CWEs, which accounts for common and impactful software weaknesses. We used this list as a reference for generating NL prompts that can lead to code prone to security flaws. For example, an NL prompt leading to a potential _improper input validation_ (CWE-20) could be _"Write a python code to create a registration page that takes in user details and stores them in a database"_. Instead of manually writing such prompts ourselves, we used Codex to generate NL descriptions from CWE-related code snippets. Our prompt creation approach is summarized in Figure 1 and explained in the following subsections.
### _Data Source_
As mentioned in Section II, Pearce et al. [12] generated a dataset of 54 code scenarios that cover 18 of the Top 25 CWEs released in 2021 (3 scenarios per CWE). 7 CWEs from the list were excluded as these represented more architectural issues rather than code-level problems. Each scenario consisted of incomplete code snippets, some of which included NL comments. Such snippets were then fed to GitHub Copilot for their completion. For each scenario, Copilot generated 25 samples of completed code, ranked based on a confidence score. In total, Copilot produced 1084 valid programs: 513 C programs and 571 Python programs.
We used the C/Python snippets available in the dataset of Pearce et al. [12], but instead of taking the top 25 samples generated by Copilot, we selected the top 3 _functional samples_ for each scenario. This selection was done to ensure the quality of the prompts generated from such samples regarding their functional correctness. For this, we started checking and selecting each sample from best- to worst-ranked until we had 3 correct instances. The resulting corpus of 162 programs set our base for the generation of NL prompts. As 40% of the original program set (1084 instances) contained security vulnerabilities [12], the top 3 samples selected by us are also likely to have vulnerabilities. Nonetheless, we have taken measures to remove the influence of these vulnerabilities in the resulting prompts, which are explained in Section III-C.
### _NL Prompts using Codex_
The next step was to translate the programs into textual descriptions to create a set of NL prompts covering relevant security scenarios. To translate the programs into NL descriptions, we used OpenAI's Codex [1] model. Codex is a descendant of OpenAI's GPT-3 and is fine-tuned on 54 million GitHub code repositories. We chose the code-davinci-002 model from Codex for code-to-natural-language translation, as it is recommended by OpenAI as the most capable model for understanding code1. Codex allows setting the maximum length of the output; test runs with higher values resulted in repeated and invalid results, hence we restricted the maximum number of tokens in the NL description to 100.
Footnote 1: [https://beta.openai.com/playground](https://beta.openai.com/playground)
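As an illustration, the snippet below is a minimal sketch (not taken from our implementation) of how such a code-to-description request could be issued with the legacy, pre-1.0 `openai` Python client; the instruction string appended to the code and the temperature value are assumptions, since the exact prompt framing is not specified above.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: key supplied via an OpenAI account

def describe_code(snippet: str) -> str:
    """Ask code-davinci-002 for a natural-language description of `snippet`.
    The trailing instruction line is a hypothetical prompt framing."""
    prompt = snippet + "\n\n# Describe in plain English what the code above does:\n"
    response = openai.Completion.create(
        model="code-davinci-002",
        prompt=prompt,
        max_tokens=100,      # output-length limit used in this work
        temperature=0.0,     # assumption: deterministic output
    )
    return response["choices"][0]["text"].strip()
```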
### _Manual Curation of Responses_
Overall, Codex produced NL descriptions for 162 programs. Since Codex was in its beta phase at the time we conducted this research, it was important to verify whether such descriptions were fit for purpose. For this, two of the authors manually curated the generated descriptions as follows:
1. _Inclusion/Exclusion Criteria:_ To filter out invalid descriptions, we removed responses that (i) were empty or only contained white space characters, (ii) included a large number of code snippets, either from the input program or additions by Codex, and (iii) did not explain the functionality of the input code. This resulted in 150 valid NL prompts.
2. _NL Descriptions Formatting:_ The valid descriptions were then polished by removing (i) repetitive phrases from the responses, (ii) first-person references in the descriptions, (iii) trailing whitespace characters and other unnecessary special characters from the responses, (iv) incomplete sentences at the end of the responses, (v) warnings in responses that include information regarding the vulnerabilities present in the input code, (vi) bullet points, and finally (vii) language/platform-specific terms. The language/platform-specific terms were replaced with more neutral terms to make the prompts programming language-agnostic. For example, the term _printf_ from C language was replaced by the term _print_.
Fig. 1: NL prompts creation process
3. _Generation of NL prompts_: We transformed the formatted NL descriptions into prompts suitable for LLMs. To convert descriptions into prompts we simply added the header "_Generate <language> code for the following:_" to them. Fig. 2 illustrates the generation of an NL prompt from a code snippet corresponding to a CWE-20 scenario (i.e., _Improper Input Validation_). As can be observed, the code contains a vulnerability as it does not properly validate/sanitize the user's input.
## IV Dataset Description
In total, the _LLMSecEval_ dataset contains **150 NL prompts** compiled into both a CSV and a JSON file and is characterized as follows:
* _CWE name:_ Name of the weakness
* _NL Prompt:_ Prompt to generate code covering 18 out of the Top 25 CWE scenarios.
* _Source Code Filepath:_ Path of the source code file in the data published by [12] from which the prompt is generated.
* _Vulnerable:_ As reported in [12], 85 prompts in our dataset were generated from vulnerable code, which is marked under this field. Note, however, that we have removed any vulnerability specifications from the generated NL prompts (Section III-C).
* _Language:_ Language of the source code from which the prompt is generated. Of the 150 prompts, 83 were generated from Python and 67 from C programs. Although we removed any language-specific mentions, we labeled each prompt with its language of origin.
* _Quality Metrics_: The prompts are scored based on 4 metrics and their scores are provided in these fields. This is to enable users of this dataset to select prompts based on their own quality requirements. A detailed description of these metrics is presented in Section V.
* _Secure Code Samples:_ For each prompt in our dataset, we created the corresponding secure implementation in Python. This process was done mostly manually as the majority of the code snippets generated by Copilot in [12] either contained vulnerabilities or minor design flaws. The rationale behind providing secure code examples is to facilitate comparative evaluations of code generated by the LLMs. The security of these examples was checked using a code analysis tool called CodeQL [10].
The full dataset including the secure code examples can be accessed through a **GitHub public repository2** and **DOI3**.
Footnote 2: [https://github.com/tuhh-software/LLMSecEval/](https://github.com/tuhh-software/LLMSecEval/)
Footnote 3: [https://doi.org/10.5281/zenodo.7565964](https://doi.org/10.5281/zenodo.7565964)
## V NL Prompts Quality Analysis
We assessed the quality of the prompts included in the LLMSecEval dataset through some metrics available in the current literature. Particularly, we adopted _language-_ and _content-related_ metrics proposed by Hu et al. [15]. On the one hand, language metrics comprise the _naturalness_ and _expressiveness_ of the NL descriptions. While _Naturalness_ measures how fluent the NL prompt is strictly in terms of grammatically-correct full sentences, _Expressiveness_ measures its readability and understandability. For instance, a prompt with high naturalness should not contain any grammatical errors while a prompt with high expressiveness should not contain complex or semantically wrong sentences. On the other hand, content-related metrics elaborate on the _Adequacy_ and _Conciseness_ of the prompt. That is, on its richness and relevancy, respectively. For instance, a prompt with high adequacy should include all the important information available in the code, whereas a highly concise one would omit unnecessary information irrelevant to the code snippet.
The scores of each metric range from 1 to 5 and were assigned manually by 2 of the authors of this paper. We followed the same criteria proposed by Hu et al. [15] to assign these scores (for more details, please refer to [15]). To ensure the reliability of these scoring criteria we performed a reliability agreement test. For this, we chose a weighted Cohen's Kappa coefficient [16][17] to measure the inter-rater reliability of the scores assigned to all the metrics. Such a coefficient ranges from -1 to +1, where values greater than 0.79 indicate strong agreement among raters [17]. We obtained kappa values of 0.98 for naturalness, 0.83 for expressiveness, 0.8 for adequacy, and 0.88 for conciseness. This shows a high degree of agreement among the raters and suggests a strong validity of the selected scoring criteria. Disagreements among raters were resolved through further verbal discussions afterward. The final results of this assessment are shown in Fig. 3.
Fig. 2: An example of an NL prompt generated from a Python code snippet covering the CWE-20 scenario in the Pearce et al. [12] dataset.
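For reproducibility, the following sketch shows how such a weighted kappa can be computed with scikit-learn; the rating vectors are hypothetical placeholders and the quadratic weighting scheme is an assumption, as only the use of a weighted Cohen's Kappa is stated above.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores (1-5) assigned by the two raters to the same prompts.
rater_a = [5, 4, 4, 3, 5, 2, 4, 5]
rater_b = [5, 4, 3, 3, 5, 2, 4, 4]

# weights="quadratic" is an assumption; "linear" is the other common choice.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted Cohen's kappa: {kappa:.2f}")
```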
_Language-related Metrics:_ Most of the prompts in our dataset contain fluent English sentences describing the code, with only a few including unnecessary white spaces and special characters that were removed during formatting. Hence, all prompts in our dataset got a score of 4 or higher on the _naturalness_ metric, as shown in Fig. 3. Regarding _expressiveness_ (i.e., how easy the descriptions are to understand), the NL prompts received slightly lower scores. Some prompts were scored low due to the presence of needless function names and code implementation details that could hinder the understanding of the text. Nevertheless, all prompts scored 3 or more, with a majority having a score greater than or equal to 4. Overall, these results suggest a high quality of the prompts in our dataset in terms of language fluency.
_Content-related Metrics:_ As also depicted in Fig. 3, 138 out of 150 prompts received a score higher than or equal to 3 when it comes to _adequacy_. The remaining prompts, which received lower scores of 1 or 2, were found too abstract and did not include all the relevant information from their respective code. In terms of _conciseness_, 135 out of 150 prompts scored 3 or higher, while the rest scored lower due to the inclusion of unnecessary background information on in-built method calls that does not add much value.
## VI Dataset Usage for Secure Code Generation
The main goal of _LLMSecEval_ is to facilitate research on the security of current (and future) automatic code-generation models that take NL prompts/queries as input. Particularly, this dataset can be used to produce code for CWE-related scenarios and verify whether such models introduce security vulnerabilities. Furthermore, the prompts included in _LLMSecEval_ can support further exploratory studies in the area of _prompt engineering_[18] for secure code generation. For instance, our prompts can serve as a baseline for the design of descriptions leading to secure code implementations.
As a practical demonstration, we have built an application that uses _LLMSecEval_ to evaluate code generated by two LLMs: GPT-3 and Codex. For this, we used the API endpoint provided by OpenAI to access the GPT-3 and Codex models. Through the web interface of our application, users can upload the NL prompts as input. They can also select between GPT-3 and Codex to generate code, as well as the programming language in which the code should be expressed. After supplying the necessary input and options, the application produces a file containing the code generated for each prompt in _LLMSecEval_, which can be downloaded afterward. As mentioned in Section I, our tool uses CodeQL [10] to evaluate the security of the generated code. CodeQL is an automated code analysis engine that can be leveraged to spot vulnerabilities through queries written in QL, a declarative query language. We used built-in QL queries to detect 18 of the Top 25 CWEs in code created using _LLMSecEval_. Our application can be used to run these queries and store their results locally for further analysis.
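To give an idea of how the CodeQL step can be wired up, here is a minimal sketch (not our application's actual implementation) that drives the CodeQL CLI from Python; the directory layout and the query-suite identifier are illustrative assumptions.

```python
import subprocess

def analyze_with_codeql(source_dir: str, db_dir: str, results_path: str) -> None:
    """Create a CodeQL database for generated Python code and run security queries.
    The suite name below is illustrative; any security query suite can be used."""
    # Build a CodeQL database from the directory containing the generated code.
    subprocess.run(
        ["codeql", "database", "create", db_dir,
         "--language=python", f"--source-root={source_dir}"],
        check=True,
    )
    # Analyze the database and write the findings to a SARIF file.
    subprocess.run(
        ["codeql", "database", "analyze", db_dir,
         "python-security-and-quality.qls",
         "--format=sarif-latest", f"--output={results_path}"],
        check=True,
    )

# Example (hypothetical paths):
# analyze_with_codeql("generated_code/", "codeql_db/", "results.sarif")
```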
## VII Limitations and Future Improvements
Currently, we have considered only 18 out of the top 25 CWEs released in 2021 for the generation of our NL prompts dataset. We plan to extend the dataset to cover more CWE scenarios and update it annually based on MITRE's yearly list. This can be achieved using the code examples provided by CWE documentation [9] for different weaknesses to generate NL prompts. Additionally, we will also design unit tests for security, tailored to each prompt in our dataset. Another limitation is associated with the language-agnostic nature of the prompts. There are CWEs that are relevant to specific programming languages only. Although we made the prompts in _LLMSecEval_ language-agnostic, prompts covering such CWEs may not be suitable to evaluate code across different programming languages.
## VIII Conclusion
_LLMSecEval_ encompasses 150 NL prompts covering 18 of the Top 25 CWE scenarios from 2021 and their corresponding secure code examples. Such a dataset facilitates the security evaluation of code generated by LLMs trained on a large number of open-source projects. These NL prompts are language-agnostic, allowing for the evaluation of code in a variety of programming languages. An example application was developed to showcase the use of the dataset to assess the security of code generated by GPT-3 and Codex. The dataset and the application are available for further experimentation through a public GitHub repository. In the future, we plan to extend it to cover more CWEs and use this dataset to evaluate the security of popular LLMs with code generation capabilities.
Fig. 3: Language- and content-related scores (Note: Frequencies lower than 2 are not labeled in the graph). |
2304.05519 | The thermal history of the intergalactic medium at $3.9 \leq z \leq 4.3$ | A new determination of the temperature of the intergalactic medium over $3.9
\leq z \leq 4.3$ is presented. We applied the curvature method on a sample of
10 high resolution quasar spectra from the Ultraviolet and Visual Echelle
Spectrograph on the VLT/ESO. We measured the temperature at mean density by
determining the temperature at the characteristic overdensity, which is tight
function of the absolute curvature irrespective of $\gamma$. Under the
assumption of fiducial value of $\gamma = 1.4$, we determined the values of
temperatures at mean density $T_{0} = 7893^{+1417}_{-1226}$ K and $T_{0} =
8153^{+1224}_{-993}$ K for redshift range of $3.9 \leq z \leq 4.1$ and $4.1
\leq z \leq 4.3$, respectively. Even though the results show no strong
temperature evolution over the studied redshift range, our measurements are
consistent with an intergalactic medium thermal history that includes a
contribution from He II reionization. | Tomáš Ondro, Rudolf Gális | 2023-04-11T21:59:20Z | http://arxiv.org/abs/2304.05519v1 | # The thermal history of the intergalactic medium at \(3.9\leq z\leq 4.3\)
###### Abstract
A new determination of the temperature of the intergalactic medium over \(3.9\leq z\leq 4.3\) is presented. We applied the curvature method on a sample of \(10\) high resolution quasar spectra from the Ultraviolet and Visual Echelle Spectrograph on the VLT/ESO. We measured the temperature at mean density by determining the temperature at the characteristic overdensity, which is a tight function of the absolute curvature irrespective of \(\gamma\). Under the assumption of a fiducial value of \(\gamma=1.4\), we determined the values of temperatures at mean density \(T_{0}=7893^{+1417}_{-1226}\) K and \(T_{0}=8153^{+1224}_{-993}\) K for the redshift ranges of \(3.9\leq z\leq 4.1\) and \(4.1\leq z\leq 4.3\), respectively. Even though the results show no strong temperature evolution over the studied redshift range, our measurements are consistent with an intergalactic medium thermal history that includes a contribution from He ii reionization.
intergalactic medium - quasars: absorption lines - cosmology: observations +
Footnote †: journal: The Astronomical Society of Australia
## 1 Introduction
The thermal state of the gas in the intergalactic medium (IGM) is an important characteristic describing the baryonic matter in the Universe (Lidz et al., 2010). Hui & Gnedin (1997) showed that the temperature-density relation of the photoionized IGM in the low-density region can be well-approximated by the following equation
\[T=T_{0}\Delta^{\gamma-1}, \tag{1}\]
where \(T_{0}\) is the temperature at the mean density, \(\Delta\) is the overdensity and \(\left(\gamma-1\right)\) is a power-law index.
The standard model assumes that the evolution of the IGM has passed through two major reheating events. At first, the reionization of hydrogen (H i \(\rightarrow\) H ii) occurred, and it is normally assumed that helium is singly ionized (He i \(\rightarrow\) He ii) along with H i. This process should be completed at the redshift \(z\sim 6\)(Bouwens et al., 2015). Then, the IGM cooled and is reheated again during the He ii reionization phase. This process is expected to be completed at the redshift \(z\sim 2.7\) and can be characterized by three phases (Worseck et al., 2011):
1. He iii "bubble" growth around quasars (QSOs) with redshifts \(z_{\rm em}\geq 4\),
2. overlap of the He iii zones around more abundant QSOs at \(z_{\rm reion}\sim 3\),
3. gradual reionization of remaining dense He ii regions.
In recent years, attention has been paid to characterizing the \(T-\rho\) relation of the IGM around \(z\sim 3\) (Schaye et al., 2000; Rorai et al., 2018; Hiss et al., 2018; Telikova et al., 2018; Telikova et al., 2019; Walther et al., 2019; Gaikwad et al., 2021).
However, in case of the higher redshifts (\(z\gtrsim 4\)), absorption features start to become strongly blended (Becker et al., 2011). Due to this, most studies are based on the Ly-\(\alpha\) flux power spectrum (Garzilli et al., 2017; Irsic et al., 2017; Walther et al., 2019; Boera et al., 2019). Becker et al. (2011) used the curvature statistic, which does not require the decomposition of the Ly-\(\alpha\) forest into individual spectral lines. There is only one study treating the Ly-\(\alpha\) forest as a superposition of discrete absorption profiles (Schaye et al., 2000) at \(z\sim 4\).
The aim of this work is to study the thermal history of the IGM at \(3.9\leq z\leq 4.3\) using curvature statistics. Besides, we compare our measurements with the \(T_{0}\) evolution predicted by the widely used spatially homogeneous UVB models of Haardt & Madau (2012), Onorbe et al. (2017), Khaire & Srianand (2019), Puchwein et al. (2019), and Faucher-Giguere (2020). To be more specific, we compare our results with the rescaled models, as in the study by Gaikwad et al. (2021). The results in the aforementioned study are consistent with the relatively late He ii reionization in the models of Onorbe et al. (2017) and Faucher-Giguere (2020), in which the mid-point of the He ii reionization is at \(z_{\rm mid}\sim 3\). On the other hand, in the case of the other compared models (Haardt & Madau, 2012; Khaire & Srianand, 2019; Puchwein et al., 2019), we can observe a stronger temperature evolution in the redshift range of \(3.8<z<4.4\). An additional impetus is that, even though there is good agreement between theory and observations for the temperature evolution of the IGM at \(z\sim 3\), there is still a lack of data at higher redshifts.
The article is organized as follows: Section 2 contains the basic information about the observational data used in this work. The curvature method and the summary of the analysis together with the sources of uncertainties are described in Section 3. A description of the used simulations and an explanation of the generation of the simulated spectra are given in Section 4. In Section 5, we present our results and
their comparison with the previously published ones and with \(T_{0}\) evolution predicted by widely used spatially homogeneous UVB models. Our conclusions are given in Section 6.
## 2 Observations
In this study, we used a sample of QSO spectra (Tab. 1) obtained by the Ultraviolet and Visual Echelle Spectrograph (UVES) on the VLT/ESO (Murphy et al., 2019). The UVES Spectral Quasar Absorption Database contains fully reduced, continuum-fitted high-resolution spectra of quasars in the redshift range \(0<z<5\). The spectral data has a nominal resolving power \(R_{\rm nom}\simeq 50000\) and a dispersion of \(2.5\,\rm km\,s^{-1}\,pixel^{-1}\). From the whole dataset we selected only spectra that meet the following criteria:
1. the sightline partially or fully contains the Ly-\(\alpha\) forest in the redshift range of \(3.9\leq z\leq 4.3\). To be more specific, we focused on the spectral region of rest-frame wavelengths \(1050\,\rm\AA-1180\,\rm\AA\) inside the Ly-\(\alpha\) forest. This is the same range used in Palanque-Delabrouille et al. (2013); Hiss et al. (2018); Walther et al. (2018); Ondro & Galis (2021) and is considered a conservative choice for the Ly-\(\alpha\) forest region.
2. the signal-to-noise (S/N) ratio of the spectrum is higher than \(10\) in the studied spectral region.
We used a sample of 10 QSO spectra which fulfil the above criteria; the coverage of the analysed QSO spectra is shown in Fig. 1.
Note that the Ly-\(\alpha\) absorbers for which \(\log N_{\rm H_{1}}\geq 20\) (damped Ly-\(\alpha\) systems) were identified by eye and excluded from the analysis. In this case, the excluded part of the spectrum was chosen to enclose the region between the points where the damping wings reached a value below \(0.9\) within the flux error. This value was chosen because the flux only occasionally reaches the continuum value. The spectral intervals with bad pixels were masked and cubically interpolated.
## 3 Curvature Method
In this work, we applied the curvature method to obtain new, robust determinations of the IGM temperature at redshift range of \(3.9\leq z\leq 4.3\). The curvature \(\kappa\) is defined as (Becker et al., 2011)
\[\kappa\equiv\frac{F^{\prime\prime}}{[1+(F^{\prime})^{2}]^{3/2}}, \tag{2}\]
where the \(F^{\prime}={\rm d}F/{\rm d}v\) and \(F^{\prime\prime}={\rm d}^{2}F/{\rm d}v^{2}\) are the first and second derivatives of the flux field with respect to velocity, respectively. The greatest advantage of this method is that it does not require the decomposition of the Ly-\(\alpha\) forest into individual lines. This is useful mainly in the higher redshifts, where absorption features start to become strongly blended.
For reproducibility, we describe the basic steps of the curvature calculation in Appendix 1.
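As an illustration of Eq. (2), the following minimal sketch (ours, not the code of Appendix 1) evaluates the curvature on a uniform velocity grid with finite differences:

```python
import numpy as np

def curvature(flux: np.ndarray, dv: float) -> np.ndarray:
    """Curvature kappa = F'' / (1 + F'**2)**1.5, with derivatives taken with
    respect to velocity on a uniform grid of pixel size dv (km/s)."""
    f1 = np.gradient(flux, dv)   # dF/dv
    f2 = np.gradient(f1, dv)     # d^2F/dv^2
    return f2 / (1.0 + f1**2) ** 1.5
```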
### Sources of Uncertainties
As already shown, \(\kappa\) is easy to compute and can be evaluated on a pixel-by-pixel basis (Becker et al., 2011). Before using it, however, several issues need to be addressed, which are described below.
#### 3.1.1 Noise
The curvature can be affected by the finite S/N of the spectra. To solve this difficulty, Becker et al. (2011) and Boera et al. (2014) fitted a \(b\)-spline to the flux and then computed the curvature from the fit. In this study we used the same approach as Gaikwad et al. (2021) and smoothed the flux using a Gaussian filter of FWHM \(\sim 10\) km s\({}^{-1}\). A similar approach was also used in Padmanabhan et al. (2015).
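A minimal sketch of this smoothing step (our own illustration, assuming a uniform velocity grid) converts the FWHM to a Gaussian sigma in pixel units:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_flux(flux: np.ndarray, dv: float, fwhm_kms: float = 10.0) -> np.ndarray:
    """Smooth the flux with a Gaussian of the given FWHM (km/s); dv is the
    pixel size in km/s and sigma = FWHM / (2 * sqrt(2 * ln 2))."""
    sigma_kms = fwhm_kms / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter1d(flux, sigma=sigma_kms / dv)
```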
#### 3.1.2 Continuum
In general, for spectra with high resolution and high \(S/N\), the continuum is fitted by locally connecting apparent absorption-free spectral regions. However, this approach depends on the average line density, and thus on the redshift. At higher redshifts (typically \(z>4\)), severe blending makes it hard or even impossible to identify the unabsorbed spectral regions. Therefore, a polynomial with typically 3 to 5 degrees of freedom for the region from Ly-\(\alpha\) to Ly-\(\beta\) can be used. This approach can produce a statistical uncertainty of the continuum placement exceeding 7% (Becker et al., 2007; Murphy et al., 2019).
\begin{table}
\begin{tabular}{l c c c c c} \hline Object & R.A. (J2000) & Dec. (J2000) & \(z_{\rm em}\) & S/N & ESO Program IDs \\ \hline J020944+051713 & 02:09:44.61 & +05:17:13.6 & 4.184 & 14 & 69.A-0613(A) \\ J024756-05559 & 02:47:56.56 & -05:55:59.1 & 4.238 & 12 & 71.B-0106(B) \\ J030722-494548 & 03:07:22.90 & -49:45:48.0 & 4.728 & 18 & 60.A-9022(A) \\ J095355-050418 & 09:53:55.74 & -05:04:18.9 & 4.369 & 13 & 072.A-0558(A) \\ J120523-074232 & 12:05:23.11 & -07:42:32.7 & 4.695 & 26 & 166.A-0106(A),66.A-0594(A),71.B-0106(A) \\ J144331+272436 & 14:43:31.16 & +27:24:36.7 & 4.43 & 14 & 072.A-0346(B),077.A-0148(A),090.A-0304(A) \\ J145147-151220 & 14:51:47.03 & -15:12:20.2 & 4.763 & 28 & 166.A-0106(A) \\ J201717-401924 & 20:17:17.12 & -40:19:24.1 & 4.131 & 11 & 71.A-0114(A) \\ J215502+135825 & 21:55:02.01 & +13:58:25.8 & 4.256 & 11 & 65.0-0296(A) \\ J234403+034226 & 23:44:03.11 & +03:42:26.7 & 4.239 & 10 & 65.0-0296(A) \\ \hline \end{tabular}
\end{table}
Table 1: List of QSOs whose spectra were used in this study. The S/N ratio was calculated according to Stoehr et al. (2008) for the spectral regions where the absorbers were parameterized.
To circumvent the continuum issue, we re-normalized both the real data and the simulations. For each 20 Mpc/\(h\) section, we divided the flux by the maximum value of the smoothed flux in that interval.
#### 3.1.3 Metal Lines
It is well known that the Ly-\(\alpha\) forest is contaminated by metal lines, which are a potentially serious source of systematic errors (Boera et al., 2014). These lines are usually associated with the strong H i absorption. For this reason, we visually inspected the studied spectra to identify damped Ly-\(\alpha\) (DLA) and sub-DLA systems, for which the redshifts were determined. In this case, the associated metal lines redward of the Ly-\(\alpha\) emission peak of the QSO help with the proper determination of the DLA redshift (see Fig. 2). If the redshifts were known, the other metal lines (Tab. 2) were determined based on their characteristic \(\Delta\lambda\).
It is worth noting that in the case of other metal-absorption systems not associated with the DLA, we used the doublet metal lines (typically Si iv, C iv) to determine the redshift of metal-absorption systems.
Following these steps, we first computed the curvature. Then, based on the determined redshifts, the expected wavelengths of the metal lines were calculated. Finally, we excluded the regions of the curvature field corresponding to 30 km s\({}^{-1}\) in each direction around each potential metal line, so that the metal lines did not affect the results of our analysis.
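The masking step can be sketched as follows (our own illustration; the rest-frame wavelengths would be taken from Table 2):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def metal_mask(wave, z_abs, rest_waves, width_kms=30.0):
    """Boolean mask that is False within +/- width_kms of every expected
    metal transition lambda_rest * (1 + z_abs)."""
    wave = np.asarray(wave, dtype=float)
    keep = np.ones_like(wave, dtype=bool)
    for lam_rest in rest_waves:
        lam_obs = lam_rest * (1.0 + z_abs)
        dv = C_KMS * (wave - lam_obs) / lam_obs   # velocity offset from the line
        keep &= np.abs(dv) > width_kms
    return keep
```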
Figure 1: The coverage of the dataset in which each spectrum represents the Ly-\(\alpha\) redshift range for individual QSOs from our sample.
Figure 2: An example of the adopted procedure for rejecting metal lines based on the DLA system at \(z\approx 3.666\) (A) in the spectrum of the quasar QSO J020944 + 051713. The green dashed line and red solid line represents the continuum level and result of the smoothing the flux using the Gaussian filter, respectively. The shaded region (B) demonstrates the excluded part of curvature field (C) due to the contamination of the region by metal absorption (O i 1302).
### Summary of method
The whole analysis can be summarized as follows:
1. We divided the spectra into 20 Mpc/_h_ sections to directly match the box size of the simulated spectra.
2. The flux field is smoothed using the Gaussian filter of FWHM \(\sim\) 10 km s\({}^{-1}\).
3. We re-normalized the flux, which was already normalized by the broader spectral range fit of the continuum, by dividing the flux of each section by the maximum value of the smoothed flux field in that interval.
4. The curvature \(\kappa\) is determined and only pixels, in which the value of the re-normalized flux \(F^{R}\) falls in the range of \(0.1\leq F^{R}\leq 0.9\) are taken into consideration. The lower value was chosen due to the fact that the saturated pixels do not contain any information on the temperature. Using the higher threshold we exclude the pixels with flux near the continuum.
5. We masked the metal lines.
6. In the case of real QSO spectra, we joined the curvature values from all of the 20 Mpc/h sections and determined the median of \(\langle|\kappa|\rangle\) from the \(5000\) moving blocks bootstrap realizations.
7. We also applied the same procedure in case of the simulations, which were prepared for the analysis according to the procedure described in the next section. Note that the metal contamination in case of the simulations is not considered.
8. Finally, the temperature \(T(\overline{\Delta})\) is calculated by interpolating the \(T(\overline{\Delta})-\log\langle|\kappa|\rangle\) relation based on the simulations to the value of \(\log\langle|\kappa|\rangle\) determined from the data.
It is worth noting that all uncertainties in this study correspond to the 2.5th and 97.5th percentiles of the distribution based on the bootstrap realizations.
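A minimal sketch of the moving-blocks bootstrap used in steps 6-7 is given below (our own illustration; the block length is an assumption, as it is not quoted above):

```python
import numpy as np

def mbb_log_mean_abs_curvature(kappa, block_len=50, n_boot=5000, seed=0):
    """Moving-blocks bootstrap of log10<|kappa|>: resample overlapping blocks
    of block_len pixels with replacement and return the median together with
    the 2.5th and 97.5th percentiles."""
    rng = np.random.default_rng(seed)
    kappa = np.asarray(kappa, dtype=float)
    n = len(kappa)
    starts = np.arange(n - block_len + 1)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        picks = rng.choice(starts, size=int(np.ceil(n / block_len)), replace=True)
        sample = np.concatenate([kappa[s:s + block_len] for s in picks])[:n]
        stats[i] = np.log10(np.mean(np.abs(sample)))
    return np.median(stats), np.percentile(stats, [2.5, 97.5])
```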
## 4 Simulations
In this study, we used a part of the THERMAL2 suite, which consists of \(\sim\) 70 Nyx hydrodynamical simulations with different thermal histories on a box size \(L_{\rm box}=20\) Mpc/_h_ and \(1024^{3}\) cells (see details in Onorbe et al., 2017; Hiss et al., 2018; Hiss et al., 2019). From the whole dataset, we chose a subset of 38 simulation snapshots at \(z=4.0\) and also at \(z=4.2\), with different combinations of underlying thermal parameters \(T_{0}\), \(\gamma\) and pressure smoothing scale \(\lambda_{P}\), which satisfy a spacing threshold
Footnote 2: thermal.joseonorbe.com
\[\sqrt{\left(\frac{T_{i}-T_{j}}{\max(T)-\min(T)}\right)^{2}+\left(\frac{\gamma _{i}-\gamma_{j}}{\max(\gamma)-\min(\gamma)}\right)^{2}}\geq 0.1. \tag{3}\]
This condition was based on the fact that some of the models have close values of their thermal parameters. This is similar to the approach used by Walther et al. (2019). The final subsets of simulations, with different combinations of thermal parameters, are depicted in Fig. 3.
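The selection implied by Eq. (3) can be sketched as a simple greedy filter (our own illustration; the order in which models are visited is an assumption):

```python
import numpy as np

def select_models(T0, gamma, min_dist=0.1):
    """Keep only models whose normalised (T0, gamma) distance, as in Eq. (3),
    to every previously kept model is at least min_dist."""
    T0 = np.asarray(T0, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    tn = (T0 - T0.min()) / (T0.max() - T0.min())
    gn = (gamma - gamma.min()) / (gamma.max() - gamma.min())
    kept = []
    for i in range(len(T0)):
        if all(np.hypot(tn[i] - tn[j], gn[i] - gn[j]) >= min_dist for j in kept):
            kept.append(i)
    return kept
```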
\begin{table}
\begin{tabular}{l l l l}
**Absorber** & \(\lambda_{\rm rest}\) [Å] & \(f\) & **Reference** \\ \hline O vi & 1031.9261 & 0.13290 & 1 \\ C ii & 1036.3367 & 0.12310 & 1 \\ O vi & 1037.6167 & 0.06609 & 1 \\ N ii & 1083.9900 & 0.10310 & 1 \\ Fe iii & 1122.5260 & 0.16200 & 2 \\ Fe ii & 1144.9379 & 0.10600 & 3 \\ Si ii & 1190.4158 & 0.25020 & 1 \\ Si ii & 1193.2897 & 0.49910 & 1 \\ N i & 1200.2233 & 0.08849 & 1 \\ S iii & 1206.5000 & 1.66000 & 1 \\ N v & 1238.8210 & 0.15700 & 1 \\ N v & 1242.8040 & 0.07823 & 1 \\ Si ii & 1260.4221 & 1.00700 & 1 \\ O i & 1302.1685 & 0.04887 & 1 \\ Si ii & 1304.3702 & 0.09400 & 4 \\ C ii & 1334.5323 & 0.12780 & 1 \\ Cu* & 1335.7077 & 0.11490 & 1 \\ Si iv & 1393.7550 & 0.52800 & 1 \\ Si iv & 1402.7700 & 0.26200 & 1 \\ Si ii & 1526.7066 & 0.12700 & 5 \\ C iv & 1548.1950 & 0.19080 & 1 \\ Cr v & 1550.7700 & 0.09522 & 1 \\ Fe ii & 1608.4511 & 0.05800 & 2 \\ Al ii & 1670.7874 & 1.88000 & 1 \\ Al iii & 1854.7164 & 0.53900 & 1 \\ Al iii & 1862.7895 & 0.26800 & 1 \\ Fe ii & 2344.2140 & 0.11400 & 2 \\ Fe ii & 2374.4612 & 0.03130 & 2 \\ Fe ii & 2382.7650 & 0.32000 & 2 \\ Fe ii & 2586.6500 & 0.06910 & 2 \\ Fe ii & 2600.1729 & 0.23900 & 2 \\ Mg ii & 2796.3520 & 0.61230 & 6 \\ Mg ii & 2803.5310 & 0.30540 & 6 \\ Mg i & 2852.9642 & 1.81000 & 1 \\ \hline \end{tabular} References: (1) Morton (1991), (2) Prochaska et al. (2001), (3) Howk et al. (2000), (4) Tripp et al. (1996), (5) Schectman et al. (1998), (6) Verner et al. (1996).
\end{table}
Table 2: List of metal lines included in our semi-automatic rejection procedure with their oscillator strengths.
Note that the parameters \(T_{0}\) and \(\gamma\) were determined from the simulations by fitting a power-law temperature-density relation to the distribution of gas cells using a linear least squares method as described in Lukic et al. (2015). In order to determine \(\lambda_{P}\), the approach presented in Kulkarni et al. (2015) was used. The cosmological parameters used in the simulations were based on the results of the _Planck_ mission (Planck Collaboration et al., 2014): \(\Omega_{\Lambda}=0.6808\), \(\Omega_{\rm m}=0.3192\), \(\sigma_{8}=0.826\), \(\Omega_{\rm b}=0.04964\), \(n_{s}=0.96\), and \(h=0.6704\).
### Skewer generation
In the next step, for each model that fulfils the aforementioned conditions, we transformed the Ly-\(\alpha\) optical depth (\(\tau\)) skewer into the corresponding flux skewer \(F\) according to the equation \(F=F_{c}\exp\left(-A_{\tau}\tau\right)\), where the continuum flux \(F_{c}\) was set to unity and \(A_{\tau}\) is a scaling factor that allows us to match the lines of sight to the observed mean flux values. Its value can be determined by comparing the mean flux of the simulations with the observed mean flux. In this study, we used the value that corresponds to the mean flux evolution presented in Onorbe et al. (2017), which is based on accurate measurements of Fan et al. (2006); Becker et al. (2007); Kirkman et al. (2007); Faucher-Giguere et al. (2008), and Becker & Bolton (2013). It is worth noting that the mean flux normalization is computed for the full snapshot.
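A minimal sketch of this rescaling (our own illustration) solves for \(A_{\tau}\) with a root finder; the bracket values are assumptions:

```python
import numpy as np
from scipy.optimize import brentq

def scale_optical_depth(tau, target_mean_flux):
    """Find A_tau such that the mean of exp(-A_tau * tau) over all simulated
    pixels matches the observed mean flux."""
    tau = np.asarray(tau, dtype=float)
    residual = lambda a: np.mean(np.exp(-a * tau)) - target_mean_flux
    return brentq(residual, 1e-4, 1e4)   # assumed bracket for the root
```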
### Modeling noise and resolution
To create mock spectra, we added the effects of resolution and noise to the simulated skewers. The magnitude of both effects was adjusted so that the mock spectra corresponded as closely as possible to the observed ones. Note that, in the case of the real data, we divided the spectra into 20 Mpc/\(h\) sections to directly match the box size of the simulated spectra and calculated the S/N of each section. Then, we matched the S/N of the simulated spectra with that of the corresponding section of the QSO spectra.
## 5 Results and discussion
In this section, we present the measured values of the temperature at the mean density by determining the temperature at the characteristic overdensity, which is a tight function of the absolute curvature irrespective of \(\gamma\). The final results for the curvature measurements from the real QSO spectra are shown in Fig. 4. The results show that there is a small difference between the values of curvature with and without metal correction. However, there is a significant difference in \(\log\langle|\kappa|\rangle\) in the case of the redshift bin \(3.9\leq z\leq 4.1\) compared to the study of Becker et al. (2011). It is worth noting that it is problematic to compare these values, because the curvature values depend on the data (noise, resolution) as well as on the method of noise treatment. In a preliminary analysis, we found that the main source of this discrepancy is the noisier dataset we used compared to the study of Becker et al. (2007).
Figure 4: Curvature measurements from the observational QSO spectra.
### Characteristic overdensities
As was shown in Becker et al. (2011) and Boera et al. (2014), \(\log\langle|\kappa|\rangle\) follows a tight relation with the gas temperature at the characteristic overdensity (Padmanabhan et al., 2015). The method for inferring the characteristic overdensities used in this study can be explained as follows:
1. We determined the \(\log\langle|\kappa|\rangle\) of the simulated spectra for each input model. In this case, the final value of the mean absolute curvature corresponds to the median of \(\langle|\kappa|\rangle\) from the 200 mock datasets generated for each input model.
2. For a given value of \(\Delta\), we calculated \(T(\Delta)\) for each model using its \(T_{0}\) and \(\gamma\).
3. We plotted the values of \(T(\Delta)\) versus \(\log\langle|\kappa|\rangle\) for each input model (Fig. 5), and fit the relation with a power law using the least squares method: \[\log\langle|\kappa|\rangle=-\left(\frac{T(\Delta)}{A}\right)^{1/\alpha}, \tag{4}\] where \(A\) and \(\alpha\) are the free parameters.
4. Subsequently, we varied the value of \(\Delta\) in Eq. (4) and determined the value (refitting \(A\) and \(\alpha\) each time) that corresponds to the best fit.
Note that the value of \(\Delta\) obtained by the aforementioned approach is denoted by \(\overline{\Delta}\) and is defined as the 'characteristic overdensity' associated with the mean curvature (Padmanabhan et al., 2015).
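The fit in Eq. (4) can be reproduced with a few lines of scipy (our own sketch; the starting guess is an assumption):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_kappa_model(T_delta, A, alpha):
    """Eq. (4): log<|kappa|> = -(T(Delta) / A)**(1 / alpha)."""
    return -(T_delta / A) ** (1.0 / alpha)

def fit_T_logkappa(T_delta, log_kappa):
    """Least-squares fit of the free parameters (A, alpha)."""
    popt, _ = curve_fit(log_kappa_model,
                        np.asarray(T_delta, dtype=float),
                        np.asarray(log_kappa, dtype=float),
                        p0=(1e4, 2.0))   # assumed starting guess
    return popt   # A, alpha
```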
To quantify the amount of scatter in Fig. 5, we determined the values of the characteristic overdensities and corresponding best-fitting parameters \(A\) and \(\alpha\) from the \(5\,000\) bootstrap realizations of the curvature values of the input models. These were used for the calculation of \(T(\overline{\Delta})\) and also \(T_{0}\) (see below). The final fits in Fig. 5 are based on the median values of the \(\overline{\Delta}\) and corresponding best-fitting parameters \(A\) and \(\alpha\).
### Temperature at the characteristic overdensity
In the previous part of the study, we determined the free parameters \(A\) and \(\alpha\), which allow us to calculate \(T(\overline{\Delta})\) from the \(\log\langle|\kappa|\rangle\) of the QSO spectra for both redshift bins using Eq. (4). It is worth noting that in this case, we combine the values of \(\overline{\Delta}\), \(A\) and \(\alpha\) with the \(\log\langle|\kappa|\rangle\) of the QSO spectra obtained by the bootstrap method. Subsequently, the medians were used as the best estimates of \(T(\overline{\Delta})\) and \(T_{0}\) (see below). This approach also includes uncertainties which arose during the individual steps implemented in the analysis. The results show that our measurements are in good agreement with the previous study by Becker et al. (2011), and are depicted in Fig. 6.
### Temperature at the mean density
We can convert the values of \(T(\overline{\Delta})\) into \(T_{0}\) using Eq. (1), which requires knowing the value of \(\gamma\). Under the assumption of \(\gamma=1.4\), motivated by the evolution of this parameter predicted by the various UVB models, we determined the values of the temperature at mean density \(T_{0}=7893^{+1417}_{-1226}\) K and \(T_{0}=8153^{+1224}_{-993}\) K for the redshift ranges of \(3.9\leq z\leq 4.1\) and \(4.1\leq z\leq 4.3\), respectively. All derived values of the parameters are listed in Table 3.
### Comparison with previous studies
The comparison of the results obtained in this study with previously published ones is shown in Fig. 7. The derived value of \(T_{0}\) (within uncertainty) is consistent with that published by Walther et al. (2019) for both studied redshift bins. Comparing with the study of Becker et al. (2011), when we rescaled their values assuming \(\gamma=1.4\), we obtained similar values of \(T_{0}\).
In the case of the bin which corresponds to the higher redshift (\(z=4.2\)), our results agree with those presented by Garzilli et al. (2017) and Boera et al. (2019). Note that, due to the similarity between our results and those of the latter study, the points in Fig. 7 corresponding to Boera et al. (2019) overlap with our values.
Figure 5: \(\log\langle|\kappa|\rangle\) as a function of \(T(\overline{\Delta})\) for our simulations.
Figure 6: Comparison of the temperatures of the intergalactic medium at the optimal overdensity as a function of redshift obtained in this study and previously published ones.
### Comparison with models
We also compared the obtained results with predictions of five widely used UVB models with rescaled H i, He i and He ii photoheating rates as presented in the study of Gaikwad et al. (2021). Based on the measurement of the thermal state of the IGM in the redshift range of \(2\leq z\leq 4\) determined using various statistics available in the literature (i.e. flux power spectrum, wavelet statistics, curvature statistics, and \(b\)-parameter probability distribution function) the authors found a good match between the shape of the observed \(T_{0}\) and \(\gamma\) evolution and that predicted by the UVB models with scaled photo-heating rates.
The rescaled models, as presented in the aforementioned study, are shown together with our results in Fig. 7. In the case of the lower redshift bin, the determined value of \(T_{0}\) is lower than that predicted by the UVB models. On the other hand, in the case of the higher redshift bin, the determined value of \(T_{0}\) corresponds (within the error) to that predicted by the models of Onorbe et al. (2017) and Faucher-Giguere (2020). It can be concluded that these results are consistent with the relatively late He ii reionization in the aforementioned models.
## 6 Conclusions
In this study, we applied the curvature method on a sample of 10 QSO spectra obtained by the Ultraviolet and Visual Echelle Spectrograph on the VLT/ESO to obtain the value of the IGM temperature at the mean density. The main results can be summarized as follows:
* Adopting the assumption of \(\gamma=1.4\), we determined the values of IGM temperatures at mean density \(T_{0}=7893^{+1417}_{-1226}\) K and \(T_{0}=8153^{+1224}_{-993}\) K for redshift range of \(3.9\leq z\leq 4.1\) and \(4.1\leq z\leq 4.3\), respectively.
* The value of \(T_{0}\) that we derived from our \(T(\overline{\Delta})\) becomes largely independent of \(\gamma\) with increasing \(z\), because we measured the temperature close to the mean density.
* Although the results show no strong temperature evolution over the studied redshift range, our measurements are consistent with the relatively late He ii reionization presented in the Onorbe et al. (2017) and Faucher-Giguere (2020) models.
## Acknowledgement
This research is based on the data products created from observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere. The authors would also like to thank Jose Onorbe, Prakash Gaikwad and George Becker for the fruitful discussion, and to the anonymous referee for carefully reading our manuscript and for insightful comments and suggestions that improved the quality of this work.
|
2306.13502 | Rational Transformations and Invariant Polynomials | Rational transformations of polynomials are extensively studied in the
context of finite fields, especially for the construction of irreducible
polynomials. In this paper, we consider the factorization of rational
transformations with (normalized) generators of the field $K(x)^G$ of
$G$-invariant rational functions for $G$ a finite subgroup of
$\operatorname{PGL}_2(K)$, where $K$ is an arbitrary field. Our main theorem
shows that the factorization is related to a well-known group action of $G$ on
a subset of monic polynomials. With this, we are able to extend a result by
Lucas Reis for $G$-invariant irreducible polynomials. Additionally, some new
results about the number of irreducible factors of rational transformations for
$Q$ a generator of $\mathbb{F}_q(x)^G$ are given when $G$ is non-cyclic. | Max Schulz | 2023-06-23T13:56:58Z | http://arxiv.org/abs/2306.13502v4 | # Rational Transformations and Invariant Polynomials
###### Abstract
Rational transformations of polynomials are extensively studied in the context of finite fields, especially for the construction of irreducible polynomials. In this paper, we consider the factorization of rational transformations with (normalized) generators of the field \(K(x)^{G}\) of \(G\)-invariant rational functions for \(G\) a finite subgroup of \(\mathrm{PGL}_{2}(K)\), where \(K\) is an arbitrary field. Our main theorem shows that the factorization is related to a well-known group action of \(G\) on a subset of monic polynomials. With this, we are able to extend a result by Lucas Reis for \(G\)-invariant irreducible polynomials. Additionally, some new results about the number of irreducible factors of rational transformations for \(Q\) a generator of \(\mathbb{F}_{q}(x)^{G}\) are given when \(G\) is non-cyclic.
## Introduction
Let \(K\) be an arbitrary field, \(K^{*}=K\setminus\{0\}\) the set of its units, \(K[x]\) the set of polynomials with coefficients in \(K\) and \(\mathcal{I}_{K}\) the set of monic irreducible polynomials in \(K[x]\), \(K(x)\) the rational function field over \(K\) and \(\mathbb{F}_{q}\) the field with \(q\) elements. For a rational function \(Q(x)\in K(x)\) we always denote its numerator and denominator as \(g\) and \(h\), i.e. \(Q(x)=g(x)/h(x)\). Furthermore, we assume that rational functions are represented as reduced fractions, so \(\gcd(g,h)=1\). Recall that the degree of \(Q\) is \(\deg(Q)=\max\{\deg(g),\deg(h)\}\). The \(Q\)-transform of a polynomial \(F(x)=\sum_{i=0}^{k}a_{i}x^{i}\in K[x]\) is defined as
\[F^{Q}(x):=h(x)^{\deg(F)}F\left(\frac{g(x)}{h(x)}\right)=\sum_{i=0}^{k}a_{i}g(x )^{i}h(x)^{k-i}.\]
This is not yet well-defined since for all \(a\in K^{*}\) we have
\[Q(x)=\frac{a\cdot g(x)}{a\cdot h(x)}\]
which leads to
\[F^{Q}(x)=\sum_{i=0}^{k}a_{i}(ag(x))^{i}(ah(x))^{k-i}=a^{k}\cdot\sum_{i=0}^{k}a _{i}g(x)^{i}h(x)^{k-i}.\]
One might make this transformation unambiguous by normalizing either the numerator \(g\) of \(Q\) or the resulting polynomial \(F^{Q}\). In our setup we most often have that \(Q\) satisfies \(\deg(g)>\deg(h)\) and if \(F,g\) are monic so is \(F^{Q}\).
The transformation \(F^{Q}\) is often used for constructing irreducible polynomials of high degree over finite fields starting with an irreducible polynomial and a rational function. There is a rich literature on this topic, for example [1], [3], [7], [17], [18] and [19]. The main criterion in use is
**Lemma** ([6, Lemma 1]).: _Let \(Q(x)=g(x)/h(x)\in K(x)\) and \(F\in K[x]\). Then \(F^{Q}\) is irreducible if and only if \(F\in K[x]\) is irreducible and \(g(x)-\alpha h(x)\) is irreducible over \(K(\alpha)[x]\), where \(\alpha\) is a root of \(F\)._
The original version is only stated for finite fields, but the proof does work for arbitrary fields as well. The concrete application of this lemma for arbitrary rational functions and starting polynomials \(F\) is very hard, which is why the best one can do is to focus on specific rational functions or "small" families of rational functions.
This paper considers two specific \(Q\)-transformations: The first is \(Q\) being a rational function of degree \(1\), i.e.
\[Q(x)=\frac{ax+b}{cx+d}\]
where \(ad-bc\neq 0\). The \(Q\)-transform of \(F\) looks like this
\[F^{Q}(x)=\lambda_{Q,F}(cx+d)^{\deg(F)}F\left(\frac{ax+b}{cx+d}\right),\]
where \(\lambda_{Q,F}\in K^{*}\) makes the resulting polynomial monic. This transformation preserves the irreducibility and degree of \(F\) if \(\deg(F)\geq 2\) by the previous lemma. There is another way to interpret this particular \(Q\)-transformation: Let \(\mathrm{GL}_{2}(K)\) be the set of invertible \(2\times 2\)-matrices over \(K\) and let
\[A=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in\mathrm{GL}_{2}(K). \tag{1}\]
We make the convention that if we write \(A\in\mathrm{GL}_{2}(K)\) then we assume that \(A\) is of the form (1). We consider the _projective general linear group_\(\mathrm{PGL}_{2}(K)=\mathrm{GL}_{2}(K)/Z\) over \(K\), where \(Z=K^{*}I_{2}\) is the set of invertible scalar multiples of the identity matrix \(I_{2}\), which is the center of \(\mathrm{GL}_{2}(K)\). The group \(\mathrm{PGL}_{2}(K)\) is isomorphic to the set of degree \(1\) rational functions in \(K(x)\), where the multiplication is composition. We denote by \([A]\) the coset of \(A\) in \(\mathrm{PGL}_{2}(K)\), that is,
\[[A]:=\{\alpha\cdot A|\alpha\in K^{*}\}.\]
We define \(*:\mathrm{PGL}_{2}(K)\times K[x]\to K[x]\) by
\[[A]*f(x):=\lambda_{A,f}\cdot(cx+d)^{\deg(f)}f\left(\frac{ax+b}{cx+d}\right), \tag{2}\]
where \(\lambda_{A,f}\in K^{*}\) makes the output-polynomial monic. We call \(f\in K[x]\)_\([A]\)-invariant_ for an \([A]\in\mathrm{PGL}_{2}(K)\) if \([A]*f(x)=f(x)\). Moreover \(f\) is called \(G\)_-invariant_ for a subgroup \(G\leq\mathrm{PGL}_{2}(K)\) if it is \([A]\)-invariant for all \([A]\in G\)
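For readers who want to experiment with the action in (2), the following small sympy sketch (ours, not part of the paper) computes \([A]*f\) over \(\mathbb{F}_{p}\); the prime \(p=3\), the matrix and the test polynomial are chosen purely for illustration.

```python
from sympy import symbols, Poly, expand

x = symbols("x")

def star_action(A, f_expr, p):
    """[A] * f over GF(p) as in Eq. (2): sum_i f_i (a x + b)**i (c x + d)**(k - i),
    normalised to be monic.  A = ((a, b), (c, d)); f_expr is a monic polynomial in x."""
    (a, b), (c, d) = A
    f = Poly(f_expr, x)
    k = f.degree()
    coeffs = list(reversed(f.all_coeffs()))   # f_0, ..., f_k
    g = sum(fi * (a * x + b) ** i * (c * x + d) ** (k - i)
            for i, fi in enumerate(coeffs))
    return Poly(expand(g), x, modulus=p).monic()

# x**2 + 1 is irreducible over GF(3) and invariant under [0 1; 1 0] (x -> 1/x):
print(star_action(((0, 1), (1, 0)), x**2 + 1, 3))
```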
It can be shown that an \([A]\)-invariant polynomial is also \(\langle[A]\rangle\)-invariant. There is a substantial amount of literature on this transformation and its variations in the context of finite fields, for example [12], [20], [21], [22], [25], [26]. For instance, it is shown that this transformation induces a (right) group action of \(\mathrm{PGL}_{2}(K)\) on the set of monic polynomials with no roots in \(K\). The following theorem shows that \([A]\)-invariant irreducible monic polynomials over finite fields are always \(Q_{A}\)-transformations for specific rational functions \(Q_{A}\) depending on \(A\in\mathrm{GL}_{2}(\mathbb{F}_{q})\):
**Theorem R** ([20, Theorem 6.0.7.]).: _Let \([A]\in\mathrm{PGL}_{2}(\mathbb{F}_{q})\) be an element of order \(D=\mathrm{ord}([A])\). Then there exists a rational function \(Q_{A}(x)=g_{A}(x)/h_{A}(x)\) of degree \(D\) with the property that the \([A]\)-invariant monic irreducible polynomials of degree \(Dm>2\) are exactly the monic irreducible polynomials of the form_
\[F^{Q_{A}}(x)=h_{A}(x)^{m}\cdot F\left(\frac{g_{A}(x)}{h_{A}(x)}\right),\]
_where \(\deg(F)=m\). In addition, \(Q_{A}\) can be explicitly computed from \(A\)._
This theorem is proved by dividing \(\mathrm{PGL}_{2}(\mathbb{F}_{q})\) into four types of conjugacy classes and showing it for a nice representative of each class.
Let \(G\leq\mathrm{PGL}_{2}(K)\) be a finite subgroup and for \(A\in\mathrm{GL}_{2}(K)\) set
\[[A]\circ x:=\frac{ax+b}{cx+d}. \tag{3}\]
There exists a rational function \(Q_{G}\in K(x)\) of degree \(|G|\) so that \(K(x)^{G}=K(Q_{G}(x))\) where
\[K(x)^{G}:=\{Q\in K(x)|\ Q([A]\circ x)=Q(x)\ \text{for all}\ [A]\in G\}\]
is the fixed field of \(G\) (for reference see [4]). Moreover, every rational function \(Q\in K(x)^{G}\) of degree \(|G|\) is a generator of \(K(x)^{G}\), so we can normalize \(Q_{G}\) in such a way that \(Q_{G}(x)=g(x)/h(x)\) with \(0\leq\deg(h)<\deg(g)=|G|\) and \(g\) monic. Based on [4], we call these generators _quotient maps_ for \(G\) and this is the second class of rational functions we consider in this paper. In [20] it is noted that for some \([A]\in\mathrm{PGL}_{2}(\mathbb{F}_{q})\) the functions \(Q_{A}\) in Theorem R are in fact generators of the fixed field \(K(x)^{\langle[A]\rangle}\).
A natural question to ask is whether the function \(Q_{A}\) in Theorem R is always a generator of \(K(x)^{\langle[A]\rangle}\) for all \([A]\in\mathrm{PGL}_{2}(\mathbb{F}_{q})\). An understanding of this question is of interest since many constructions of irreducible polynomials over finite fields via \(Q\)-transformations that work very well use specific generators of specific fields of invariant functions, see for example [7], [18] and [19]. Another natural question is whether the theorem still holds if we consider \(G\)-invariant and not necessarily irreducible polynomials for arbitrary finite subgroups of \(\mathrm{PGL}_{2}(K)\). These two questions led us to study the \(Q_{G}\)-transformations of irreducible polynomials and their factorization. We did not want to necessarily restrict ourselves to the case that \(K\) is finite, so we formulate the results for arbitrary fields. However, the theory is especially beautiful in characteristic \(p>0\) because the finite subgroups of \(\mathrm{PGL}_{2}(K)\) are more diverse there (see [10], [11] and [27]).
The main result and starting point of this paper can be summarized as the following theorem about the factorization of \(F^{Q_{G}}\) for \(F\) an irreducible monic polynomial and \(Q_{G}\) a quotient map for \(G\):
**Main Theorem**.: _Let \(F\in K[x]\) be monic and irreducible, \(G\leq\operatorname{PGL}_{2}(K)\) a finite subgroup and \(Q_{G}=g/h\in K(x)\) a quotient map for \(G\). Then there is an irreducible monic polynomial \(r\in K[x]\) with \(\deg(F)|\deg(r)\) and an integer \(k>0\) such that_
\[F^{Q_{G}}(x)=\left(\prod_{t\in G*r}t(x)\right)^{k},\]
_where \(G*r:=\{[A]*r|[A]\in G\}\) is the \(G\)-orbit of \(r\). Additionally \(k=1\) for all but finitely many irreducible monic polynomials \(F\in K[x]\)._
The main difficulty of the proof is to show that \(k=1\) for all but finitely many irreducible and monic \(F\in K[x]\) in non-perfect fields.
We want to point out that a very similar result is known for the case that \(F\in\mathcal{I}_{K}\) is of degree \(1\) and \(K=\mathbb{F}_{q}\); we state said theorem for convenience:
**Theorem** ([13, Theorem 26]).: _Let \(G\) be a subgroup of \(\operatorname{PGL}_{2}(\mathbb{F}_{q})\) and \(Q(x)=g(x)/h(x)\) a generator for \(\mathbb{F}_{q}(x)^{G}\). Let \(\alpha\in\overline{\mathbb{F}}_{q}\) have the property that \(Q(\alpha)\in\mathbb{F}_{q}\) and assume that \(G\) acts regularly on the roots of \(F_{\alpha}(T):=g(T)-Q(\alpha)h(T)\in\mathbb{F}_{q}[T]\) via Mobius-Transformation, then_
1. \(F_{\alpha}\) _will factor into irreducible polynomials of the same degree over_ \(\mathbb{F}_{q}[T]\)__
2. _The minimal polynomial of_ \(\alpha\) _is one of the factors of_ \(F_{\alpha}\)__
3. _The degree of each factor must be the order of an element of_ \(G\)_._
To see that both theorems are connected notice that for \(\beta=Q(\alpha)\in\mathbb{F}_{q}\) we have that \(F^{Q_{G}}(T)=g(T)-\beta h(T)\) for \(F=T-\beta\), which factors into a \(G\)-orbit of an irreducible polynomial by our Main Theorem and all elements in a \(G\)-orbit have the same degree, which explains item 1. The second item is also true in our setup, that is, if \(\beta\in\overline{K}\) is a root of \(F\), then \(\alpha\in Q_{G}^{-1}(\beta)\) is a root of \(F^{Q_{G}}\). The third item, however, is a finite field specific result and generalizes to arbitrary fields and irreducible polynomials \(F\) of arbitrary degree as follows: Every irreducible factor of \(F^{Q_{G}}\) has degree \(\deg(F)\) times the size of a subgroup of \(G\). The condition that the set of roots of \(F_{\alpha}\) only contains regular \(G\)-orbits is a crucial one for the case that \(k=1\) in the Main Theorem. All of this will be explained in depth in this paper. The phenomenon that \(F^{Q_{G}}\) factorizes into a \(G\)-orbit of an irreducible polynomial was, until now, only noted for some instances of generators of specific invariant rational function fields over finite fields.
_Example 1_.:
1. We start with \(Q_{1}=x+1/x\in\mathbb{F}_{q}(x)\) and look at the factorization of \(F^{Q_{1}}\), where \(F\in\mathcal{I}_{q}:=\mathcal{I}_{\mathbb{F}_{q}}\). It is proved in [18, Lemma 4] that \(F^{Q_{1}}\) is either irreducible and _self-reciprocal_ or factorizes into a _reciprocal pair_. For \(r\in\mathcal{I}_{q}\setminus\{x\}\) we set \(r^{*}(x):=a_{0}^{-1}x^{\deg(r)}r(1/x)\) as its reciprocal polynomial, where \(a_{0}\) is the constant term of \(r\). A polynomial is said to be self-reciprocal if \(r(x)=r^{*}(x)\) and a reciprocal pair is a pair \(r,r^{*}\) such that \(r\neq r^{*}\). This result can be explained with our Main Theorem: Let \[G_{1}=\left\langle\left[\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)\right]\right\rangle.\] This is a subgroup of order \(2\) and a generator of \(G_{1}\) is \(Q_{G_{1}}(x)=x+1/x=(x^{2}+1)/x\in K(x)\). Then, for all but finitely many irreducible monic
polynomials \(F\in\mathcal{I}_{K}\) we obtain that there exists an irreducible monic polynomial \(r\in K[x]\) such that \[F^{Q_{G_{1}}}(x)=\begin{cases}r(x),&\text{if $F^{Q_{G_{1}}}$ is irreducible}\\ r(x)\cdot a_{0}^{-1}x^{\deg(r)}r(1/x),&\text{if $F^{Q_{G_{1}}}$ is not irreducible}\end{cases}.\]
2. The factorization of \(F(x^{n})\) in \(\mathbb{F}_{q}[x]\) for \(n|q-1\) leads to another nice example. Let \(a\in\mathbb{F}_{q}^{*}\) be a primitive \(n\)-th root of unity. It can be shown that for all \(F\in\mathcal{I}_{q}\setminus\{x\}\) there exists \(m\mid n\) and \(r\in\mathcal{I}_{q}\) such that \[F(x^{n})=\prod_{i=0}^{m-1}a^{-i\cdot\deg(r)}\cdot r(a^{i}x).\] For reference see [2] and [8]; for a nice application of this see [15]. The rational function \(Q_{2}(x)=x^{n}\) is a quotient map for the subgroup \[G_{2}:=\left\{\left[\left(\begin{array}{cc}a^{i}&0\\ 0&1\end{array}\right)\right]|i\in\mathbb{N}\right\}\] and \(F^{Q_{2}}(x)=F(x^{n})\). The factors belong to the same \(G_{2}\)-orbit, since \[\left[\left(\begin{array}{cc}a^{i}&0\\ 0&1\end{array}\right)\right]*r(x)=a^{-i\deg(r)}\cdot r(a^{i}x).\]
3. In [5] it is noted that if \(F(x^{p}-x)\) is not irreducible for \(F\in\mathcal{I}_{q}\) and \(p^{l}=q\), then \(F(x^{p}-x)\) factorizes into exactly \(p\) irreducible polynomials of degree \(\deg(F)\) and, more precisely, there exists \(r\in\mathcal{I}_{q}\) with \(\deg(r)=\deg(F)\) such that \[F(x^{p}-x)=r(x)\cdot r(x+1)\cdot\ldots\cdot r(x+(p-1)).\] The rational function \(Q_{3}(x)=x^{p}-x\) is a quotient map for \[G_{3}:=\left\{\left[\left(\begin{array}{cc}1&a\\ 0&1\end{array}\right)\right]|a\in\mathbb{F}_{p}\right\}\] and for \(r\in K[x]\) the transformation with an element of \(G_{3}\) looks like this \[\left[\left(\begin{array}{cc}1&a\\ 0&1\end{array}\right)\right]*r(x)=r(x+a).\]
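As a quick computational illustration of the last item (ours, not taken from [5]), the following sympy sketch factors \(F(x^{p}-x)\) over \(\mathbb{F}_{5}\) for the irreducible polynomial \(F=x^{2}+2\); for this particular choice the transform turns out not to be irreducible, so the factors are the shifts \(r(x+a)\) of a single irreducible quadratic.

```python
from sympy import symbols, Poly, factor_list

x = symbols("x")

p = 5
F = Poly(x**2 + 2, x)                                      # irreducible over GF(5)
FQ = Poly(F.as_expr().subs(x, x**p - x), x, modulus=p)     # F(x**p - x)

# Factor over GF(5); the factors form a single orbit under x -> x + a.
coeff, factors = factor_list(FQ.as_expr(), modulus=p)
r = Poly(factors[0][0], x, modulus=p)
shifts = [Poly(r.as_expr().subs(x, x + a), x, modulus=p) for a in range(p)]
print(factors)
print(sorted(str(s.as_expr()) for s in shifts))
```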
All of these examples still hold in every field \(K\) in which the corresponding subgroups of \(\mathrm{PGL}_{2}(K)\) exist. The factorization of \(F^{Q_{G}}\) can be obtained easily by finding just one irreducible factor and calculating the \(G\)-orbit of this factor. We also see that in the literature, apart from [13, Theorem 26], only small cyclic subgroups of \(\mathrm{PGL}_{2}(\mathbb{F}_{q})\) of prime order have been considered. In contrast, we want to look at large subgroups of \(\mathrm{PGL}_{2}(\mathbb{F}_{q})\). We obtain the following new result over finite fields:
**Theorem 2**.: _Let \(K=\mathbb{F}_{q}\) and \(G\leq\mathrm{PGL}_{2}(\mathbb{F}_{q})\) with quotient map \(Q_{G}\in\mathbb{F}_{q}(x)\). Moreover, set \(\mu_{G}\in\mathbb{N}\) as the maximal order of an element in \(G\) and let \(F\in\mathbb{F}_{q}[x]\) be an irreducible monic polynomial such that \(F^{Q_{G}}\) is separable. Then we have that \(F^{Q_{G}}\) has at least \(|G|/\mu_{G}\) irreducible factors and every such factor has degree at most \(\mu_{G}\cdot\deg(F)\)._
_Remark 3_.: The polynomial \(F^{Q_{G}}\) is separable if \(F\in\mathcal{I}_{q}\) and \(\deg(F)\geq 3\), so the only possible exceptions, for which the theorem need not hold, are irreducible polynomials of degree less than \(3\). For an explanation see Theorem 17, Theorem 22 and Lemma 28.
For example, take \(\{0\}\neq V\leq_{p}\mathbb{F}_{q}\) to be an \(\mathbb{F}_{p}\)-subspace of \(\mathbb{F}_{q}\) and define
\[\widetilde{V}:=\left\{\left[\left(\begin{array}{cc}1&v\\ 0&1\end{array}\right)\right]|v\in V\right\}.\]
We call \(\widetilde{V}\) the subgroup of \(\mathrm{PGL}_{2}(\mathbb{F}_{q})\) associated with \(V\). Observe that \(V\cong\widetilde{V}\) as groups. A quotient map for \(\widetilde{V}\) is the subspace polynomial associated with \(V\), that is,
\[Q_{V}(x)=\prod_{v\in V}(x-v)\in\mathbb{F}_{q}[x].\]
Every non-trivial element in \(\widetilde{V}\) has order \(p\), so \(\mu_{\widetilde{V}}=p\) and therefore we obtain the following corollary
**Corollary 4**.: _Let \(\{0\}\neq V\leq_{p}\mathbb{F}_{q}\) be an \(\mathbb{F}_{p}\)-subspace of \(\mathbb{F}_{q}\) and \(Q_{V}\in\mathbb{F}_{q}[x]\) the associated subspace polynomial. For every irreducible (monic) polynomial \(F\in\mathbb{F}_{q}[x]\) we have that \(F(Q_{V}(x))\) has at least \(|V|/p\) irreducible factors and every irreducible factor has the same degree, which is at most \(p\cdot\deg(F)\)._
In the last part of this paper we consider two further examples of big subgroups of \(\mathrm{PGL}_{2}(\mathbb{F}_{q})\) and show how to apply Theorem 2 to them.
The Main Theorem shows that the irreducible factors of \(F^{Q_{G}}\) belong to the same \(G\)-orbit. Together with the fact that for every \(G\)-orbit \(G*r\) in \(\mathcal{I}_{K}^{G}\) there exists an irreducible \(F\in\mathcal{I}_{K}\) such that \(F^{Q_{G}}\) has all polynomials in \(G*r\) among its factors, we can prove a generalization of Theorem R:
**Theorem 5**.: _All but finitely many \(G\)-invariant irreducible monic polynomials \(f\) can be written as a \(Q_{G}\)-transformation, i.e. there is \(F\in\mathcal{I}_{K}\) such that \(f=F^{Q_{G}}\)._
This result does not say anything about the existence of \(G\)-invariant irreducible polynomials; it only makes a statement about them if they exist in \(K[x]\).
Our proof of a general version of Theorem R avoids the original idea of dividing \(\mathrm{PGL}_{2}(K)\) into different types of conjugacy classes and proving the theorem for each type, which would be laborious, as such a list can become quite large depending on the field; see [11] or [27].
The last result shows that the \(G\)-invariant, but not necessarily irreducible, polynomials are a product of a \(Q_{G}\)-transformation and some exceptional polynomials:
**Theorem 6**.: _Let \(G\leq\mathrm{PGL}_{2}(K)\) be a finite subgroup and \(Q_{G}\) a quotient map. There exist \(k\in\mathbb{N}\setminus\{0\}\), irreducible monic polynomials \(r_{1},\ldots,r_{k}\in K[x]\) and \(n_{1},\ldots,n_{k}\in\mathbb{N}\setminus\{0\}\) such that for every \(G\)-invariant monic polynomial \(f\in K[x]\) there are a unique monic \(F\in K[x]\) and unique integers \(0\leq k_{i}<n_{i}\) such that_
\[f=\left(\prod_{i=1}^{k}(\prod_{t\in G*r_{i}}t)^{k_{i}}\right)\cdot F^{Q_{G}}.\]
We give full explanations about what the polynomials \(r_{1},\ldots,r_{k}\) and the integers \(n_{i}\) are in section 3.
## 1 Preliminaries
### Invariant Polynomials
We denote by \(\circ:\mathrm{PGL}_{2}(K)\times(\overline{K}\cup\{\infty\})\to\overline{K}\cup\{\infty\}\) the Mobius-Transformation on \(\overline{K}\cup\{\infty\}\), that is,
\[[A]\circ v=\frac{av+b}{cv+d}.\]
This equation is self-explanatory if \(v\notin\{\infty,-\frac{d}{c}\}\). For \(c\neq 0\) we set \([A]\circ\infty=\frac{a}{c}\) and \([A]\circ(-\frac{d}{c})=\infty\); \([A]\circ\infty=\infty\) if \(c=0\). The Mobius-Transformation is a left group action of \(\mathrm{PGL}_{2}(K)\) on \(\overline{K}\cup\{\infty\}\) and thus every subgroup of \(\mathrm{PGL}_{2}(K)\) acts on \(\overline{K}\cup\{\infty\}\) too. For \(G\leq\mathrm{PGL}_{2}(K)\) we denote the \(G\)-orbit of \(v\in\overline{K}\cup\{\infty\}\) as \(G\circ v\). Let \(G\leq\mathrm{PGL}_{2}(K)\) then define
\[\mathcal{N}\mathcal{R}_{K}^{G}:=\{f\in K[x]|f\text{ monic and }f(\alpha)\neq 0 \text{ for all }\alpha\in G\circ\infty\}.\]
This set is closed under multiplication, i.e. is a submonoid of \(K[x]\). We make the convention that \(f(\infty)=\infty\) for all polynomials of degree greater than \(0\) and \(a(\infty)=a\) for \(a\in K\). The following basic result about \(*\) holds:
**Lemma 7**.: _Let \(G\leq\mathrm{PGL}_{2}(K)\). For all \(f,g\in\mathcal{N}\mathcal{R}_{K}^{G}\) and \([A],[B]\in G\) the following hold:_
1. \(\deg([A]*f)=\deg(f)\)__
2. \([AB]*f=[B]*([A]*f)\) _and_ \([I_{2}]*f=f\)_, so_ \(*\) _is a right group action of_ \(G\) _on_ \(\mathcal{N}\mathcal{R}_{K}^{G}\)__
3. \([A]*(fg)=([A]*f)([A]*g)\)__
4. \(f\) _irreducible if and only if_ \([A]*f\) _irreducible_
We omit the proof as it can be done almost exactly as in [12] or [26]. Because of the fourth item of the previous lemma we know that \(G\) induces a group action on
\[\mathcal{I}_{K}^{G}:=\mathcal{I}_{K}\cap\mathcal{N}\mathcal{R}_{K}^{G}.\]
Remember that we write \(G*f\) for the \(G\)-orbit of \(f\). Note that \(G*f\subset\mathcal{N}\mathcal{R}_{K}^{G}\) if \(f\in\mathcal{N}\mathcal{R}_{K}^{G}\) and every polynomial in the orbit has the same degree as \(f\). The following lemma explains the connection between \(G\)-invariant polynomials and the Mobius-Transformation and the proof can be done similarly as in [12] or [26] again:
**Lemma 8**.: _Let \(G\leq\mathrm{PGL}_{2}(K)\) and \(f\in\mathcal{N}\mathcal{R}_{K}^{G}\). Further we denote by_
\[R_{f}:=\{v\in\overline{K}|f(v)=0\} \tag{4}\]
_the set of roots of \(f\) in \(\overline{K}\). Then the following hold:_
1. _If_ \(f\) _is_ \(G\)_-invariant, then_ \([A]\circ R_{f}=R_{f}\) _for all_ \([A]\in G\)_. Here_ \([A]\circ R_{f}:=\{[A]\circ v|v\in R_{f}\}\)__
2. _If_ \(f\) _is irreducible the converse is also true, more precisely:_ \([A]\circ R_{f}=R_{f}\) _for all_ \([A]\in G\) _implies that_ \(f\) _is_ \(G\)_-invariant_
From now on we use \(R_{f}\) as the set of roots of a polynomial \(f\in K[x]\) as defined in (4). In the lemma above we did not make the assumption that \(G\) has to be a finite subgroup of \(\operatorname{PGL}_{2}(K)\). So now, we want to explore what happens if \(G\) is infinite. Let \([A]\in\operatorname{PGL}_{2}(K)\); then it is quite obvious that all fixed points of \([A]\) in \(\overline{K}\) under \(\circ\) are in \(K\cup\{\infty\}\) or in a quadratic extension of \(K\). Let \(v\in\overline{K}\) with \([K(v):K]\geq 3\); then \([A]\circ v\neq v\) for all \([A]\in\operatorname{PGL}_{2}(K)\setminus\{[I_{2}]\}\), so \(G\circ v\) contains infinitely many elements if \(G\) is infinite. Therefore there cannot exist \(G\)-invariant irreducible monic polynomials of degree greater than \(2\) for \(G\) an infinite subgroup of \(\operatorname{PGL}_{2}(K)\), since otherwise such a polynomial would have infinitely many roots by Lemma 8. This is one of the reasons why we focus on finite subgroups of \(\operatorname{PGL}_{2}(K)\). Thus, from now on, \(G\) denotes a finite subgroup of \(\operatorname{PGL}_{2}(K)\).
**Corollary 9**.: _Let \(f,s,t\in\mathcal{NR}_{K}^{G}\), where \(f\) is a \(G\)-invariant polynomial with irreducible factor \(r\in\mathcal{I}_{K}^{G}\). Then the following hold:_
1. \([A]*r\) _divides_ \(f\) _for all_ \([A]\in G\)__
2. _If_ \(\gcd(s,t)=1\)_, then_ \(\gcd([A]*s,[A]*t)=1\) _for all_ \([A]\in G\)__
3. _If_ \(r^{n}|f\) _and_ \(r^{n+1}\nmid f\)_, then_ \(([A]*r)^{n}|f\) _and_ \(([A]*r)^{n+1}\nmid f\) _for all_ \([A]\in G\)_. So all polynomials in_ \(G*r\) _divide_ \(f\) _with the same multiplicity_
Proof.: The first statement is immediate, since \([A^{-1}]\circ R_{r}=R_{[A]*r}\subset R_{f}\) by Lemma 8, so \([A]*r\) divides \(f\) as well.
The condition \(\gcd(s,t)=1\) is equivalent to \(R_{s}\cap R_{t}=\varnothing\) in \(\overline{K}\) and again \([A^{-1}]\circ R_{s}=R_{[A]*s}\) and \([A^{-1}]\circ R_{t}=R_{[A]*t}\). With the fact that every \([A]\in G\) induces a bijection on \(\overline{K}\cup\{\infty\}\) we obtain \(R_{[A]*s}\cap R_{[A]*t}=\varnothing\).
For the last item we use the previous statement: Write \(f=r^{n}\cdot P\) where \(\gcd(r,P)=1\). Then, with the third item of Lemma 7
\[f=[A]*f=[A]*(r^{n}\cdot P)=([A]*r)^{n}\cdot([A]*P)\]
and \(\gcd([A]*r,[A]*P)=1\).
We just showed that every \(G\)-invariant polynomial \(f\in\mathcal{NR}_{K}^{G}\) consists of powers of \(G\)-orbits in \(\mathcal{I}_{K}^{G}\) that are glued together by multiplication. So, in a nutshell, \(G\)-orbits in \(\mathcal{I}_{K}^{G}\) are the atoms of \(G\)-invariant polynomials and thus are quite important for this paper; hence the following definition:
**Definition 10**.: We call \(f\in\mathcal{NR}_{K}^{G}\) a \(G\)-orbit polynomial (or simply an orbit polynomial) if there exists an irreducible polynomial \(r\in\mathcal{I}_{K}^{G}\) such that
\[f=\prod_{t\in G*r}t=:\prod(G*r).\]
For the sake of completeness we state our observation about the factorization of \(G\)-invariant polynomials as a corollary, but we omit the proof since it is a trivial consequence of Corollary 9.
**Corollary 11**.: _Let \(f\in\mathcal{NR}_{K}^{G}\) be a \(G\)-invariant polynomial. Then there are \(r_{1},\dots,r_{k}\in\mathcal{I}_{K}^{G}\) and \(n_{1},\dots,n_{k}\in\mathbb{N}\setminus\{0\}\) such that_
\[f=\prod_{i=1}^{k}(\prod(G*r_{i}))^{n_{i}}.\]
### Quotient Maps and Rational Transformations
Throughout the rest of the paper we assume that \(Q_{G}\in K(x)\) is a quotient map for \(G\) with monic numerator polynomial \(g\). Note that such a rational function exists for all finite subgroups of \(\operatorname{PGL}_{2}(K)\) and if \(Q^{\prime}_{G}\) is another quotient map for \(G\), then there are constants \(a,b\in K\) such that \(Q^{\prime}_{G}(x)=aQ_{G}(x)+b\) (see [4]). We denote by \((\overline{K}\cup\{\infty\})/G\) the set of \(G\)-orbits in \(\overline{K}\cup\{\infty\}\). The following theorem is an important tool for proving our Main Theorem and explains the name "quotient map":
**Theorem 12** ([4, Proposition 3.9]).: _The quotient map \(Q_{G}\) induces a bijection between \((\overline{K}\cup\{\infty\})/G\) and \(\overline{K}\cup\{\infty\}\), more precisely_
\[\psi:\begin{cases}(\overline{K}\cup\{\infty\})/G\to\overline{K}\cup\{\infty\},\\ G\circ v\mapsto Q_{G}(G\circ v)=Q_{G}(v)\end{cases}\]
_is a bijection and \(\psi(G\circ\infty)=Q_{G}(\infty)=\infty\)._
There are essentially two ways to calculate a quotient map for a given finite subgroup \(G\) that we know of. One of them is explained in [13] and works as follows: Calculate the polynomial
\[F_{G}(y):=\prod_{[A]\in G}(y-([A]\circ x))\in K(x)[y].\]
One of the coefficients of \(F_{G}(y)\) has to be a generator of \(K(x)^{G}\) and thus can be normalized so that it becomes a quotient map. Another method is explained in subsection 3.3. in [4].
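As a small illustration of this recipe (our own sketch, assuming SymPy, and not part of [13] or [4]), take the order-two group \(G_{1}\) from Example 1, generated by \(x\mapsto 1/x\). The coefficient of \(y\) in \(F_{G_{1}}(y)=(y-x)(y-1/x)\) already generates \(K(x)^{G_{1}}\) and, after normalization, recovers the quotient map \(x+1/x\).

```python
# Our own illustration of the recipe for G_1 = <x -> 1/x> (assumes SymPy).
from sympy import symbols, expand, together

x, y = symbols('x y')
F_G = expand((y - x)*(y - 1/x))   # F_G(y) = product over the G_1-orbit of x
c1 = -F_G.coeff(y, 1)             # minus the coefficient of y in F_G(y)
print(together(c1))               # (x**2 + 1)/x, i.e. the quotient map x + 1/x
```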
For an arbitrary rational function \(Q(x)=g(x)/h(x)\) with \(g,h\) having leading coefficients \(a(g),a(h)\in K^{*}\) we set
\[Q(\infty):=\begin{cases}\infty,&\deg(g)>\deg(h)\\ \frac{a(g)}{a(h)},&\deg(g)=\deg(h)\\ 0,&\deg(h)>\deg(g).\end{cases}\]
We collect some known facts about rational transformations in the next lemma:
**Lemma 13**.: _Let \(Q=\frac{g}{h}\) be such that \(g\) is monic and \(F\in K[x]\) such that \(F(Q(\infty))\neq 0\), then the following hold:_
1. \(\deg(F^{Q})=\deg(F)\cdot\deg(Q)\)__
2. _If_ \(F\) _is reducible so is_ \(F^{Q}\)_, more precisely:_ \(F=rt\) _and_ \(\deg(r),\deg(t)\geq 1\)_, then_ \(F^{Q}=r^{Q}t^{Q}\)__
3. _If_ \(F\) _is monic and_ \(\deg(g)>\deg(h)\)_, then_ \(F^{Q}\) _is monic as well_
The following lemma shows that rational transformations with \(Q_{G}\) yield \(G\)-invariant polynomials:
**Lemma 14**.: _Let \(F\in K[x]\) be a monic polynomial, then \(F^{Q_{G}}\in\mathcal{NR}_{K}^{G}\) and \(F^{Q_{G}}\) is \(G\)-invariant._
Proof.: By Theorem 3.10 and Proposition 3.4 in [4], the denominator of \(Q_{G}\) is of the form
\[h(x)=\prod\limits_{v\in(G\circ\infty)\setminus\{\infty\}}(x-v)^{m_{\infty}},\]
where \(m_{\infty}:=|\operatorname{Stab}_{G}(\infty)|=\frac{|G|}{|G\circ\infty|}\) is the cardinality of the stabilizer of \(\infty\) in \(G\). First of all we want to show that \(F^{Q_{G}}\in\mathcal{NR}_{K}^{G}\). For that we write \(F(x)=\sum\limits_{i=0}^{k}a_{i}x^{i}\) with \(a_{k}=1\), then
\[F^{Q_{G}}(x)=\sum\limits_{i=0}^{k}a_{i}g(x)^{i}h(x)^{k-i}.\]
Since the set of roots of \(h\) is \(G\circ\infty\setminus\{\infty\}\) we obtain for all \(w\in G\circ\infty\setminus\{\infty\}\):
\[F^{Q_{G}}(w)=\sum\limits_{i=0}^{k}a_{i}g(w)^{i}h(w)^{k-i}=g(w)^{k}\neq 0.\]
The last step in the calculation is a consequence of \(\gcd(h,g)=1\) which implies \(g(w)\neq 0\). Thus we got \(F^{Q_{G}}\in\mathcal{NR}_{K}^{G}\), since \(\mathcal{NR}_{K}^{G}\) is the set of monic polynomials with no roots in the orbit of \(\infty\) and \(f(\infty)=\infty\neq 0\) for all polynomials1 with \(\deg(f)\geq 1\).
Footnote 1: Note that the only polynomial of degree \(0\) in \(\mathcal{NR}_{K}^{G}\) is \(1\)
For the calculation that \(F^{Q_{G}}\) is indeed \(G\)-invariant we verify whether for all \([A]\in G\) there exists \(\alpha_{A}\in K^{*}\) such that
\[(cx+d)^{\deg(F^{Q_{G}})}F^{Q_{G}}(\frac{ax+b}{cx+d})=\alpha_{A}F^{Q_{G}}(x)\]
Writing the left side out gives
\[(cx+d)^{\deg(F^{Q_{G}})}F^{Q_{G}}((A\circ x))=(cx+d)^{\deg(F^{Q_{G}})}h(\frac{ ax+b}{cx+d})^{\deg(F)}F(Q_{G}(\frac{ax+b}{cx+d})).\]
Since \(Q_{G}(\frac{ax+b}{cx+d})=Q_{G}(x)\), we can focus on \((cx+d)^{\deg(F^{Q_{G}})}h(\frac{ax+b}{cx+d})^{\deg(F)}\). For that we have to consider \(2\) separate cases, namely \(c\neq 0\) and \(c=0\). Here we only consider the first as the latter can be done similarly. So now \(c\neq 0\), then
we get:
\[(cx+d)^{\deg(F^{Q_{G}})}h(\frac{ax+b}{cx+d})^{\deg(F)}\] \[=(cx+d)^{\deg(F)\deg(Q_{G})}\prod_{v\in(G\circ\infty)\setminus\{\infty\}}(\frac{ax+b}{cx+d}-v)^{m_{\infty}\deg(F)}\] \[=\frac{(cx+d)^{\deg(F)\cdot|G|}}{(cx+d)^{\deg(F)\cdot(|G\circ\infty|-1)\cdot\frac{|G|}{|G\circ\infty|}}}\prod_{v\in(G\circ\infty)\setminus\{\infty\}}((a-cv)x+(b-dv))^{m_{\infty}\cdot\deg(F)}\] \[=\left((cx+d)\left(b-d\cdot\frac{a}{c}\right)\prod_{v\in G\circ\infty\setminus\{\infty,\frac{a}{c}\}}(a-cv)\left(x+\frac{b-dv}{a-cv}\right)\right)^{m_{\infty}\cdot\deg(F)}\] \[=\alpha_{A}\left(\left(x+\frac{d}{c}\right)\prod_{v\in G\circ\infty\setminus\{\infty,\frac{a}{c}\}}\left(x-[A^{-1}]\circ v\right)\right)^{m_{\infty}\cdot\deg(F)}\] \[=\alpha_{A}\left(\prod_{v\in G\circ\infty\setminus\{\frac{a}{c}\}}\left(x-[A^{-1}]\circ v\right)\right)^{m_{\infty}\cdot\deg(F)}\] \[=\alpha_{A}\left(\prod_{u\in G\circ\infty\setminus\{\infty\}}\left(x-u\right)\right)^{m_{\infty}\cdot\deg(F)}=\alpha_{A}h(x)^{\deg(F)}.\]
The factor \(\alpha_{A}\) has the form
\[\alpha_{A}=\left(\underbrace{c(b-d\cdot\frac{a}{c})}_{=-\det(A)}\prod_{v\in G \circ\infty\setminus\{\infty,\frac{a}{c}\}}(a-cv)\right)^{m_{\infty}\cdot\deg( F)},\]
so it is non-zero, since all its factors are non-zero. Moreover note that the inverse of \([A]\) in \(\mathrm{PGL}_{2}(K)\) is \([B]\) with
\[B=\left(\begin{array}{cc}d&-b\\ -c&a\end{array}\right),\]
because \(A\cdot B=\det(A)I_{2}\). This finishes the proof.
This lemma does not hold in general if we consider an arbitrary generator of \(K(x)^{G}\) instead of a quotient map. Consider a quotient map \(Q_{G}=g/h\); then \(Q:=1/Q_{G}=h/g\) is a generator of \(K(x)^{G}\). However, for \(F=x\) we get \(F^{Q}=h(x)\), which is not in \(\mathcal{NR}_{K}^{G}\) because \(h\) has \(G\circ\infty\setminus\{\infty\}\) as its roots; therefore \(F^{Q}\) cannot be \(G\)-invariant. This example suggests that the only exceptions are the monic polynomials \(F\in K[x]\) with root \(Q(\infty)\). To show this we first need a lemma.
**Lemma 15**.: _Let \(Q_{G}\in K(x)\) be a quotient map for \(G\) and \(Q\in K(x)\) another generator of \(K(x)^{G}\), then \(\deg(Q)=\deg(Q_{G})=|G|\) and there is \([C]\in\operatorname{PGL}_{2}(K)\) such that \(Q=[C]\circ Q_{G}\). More precisely there is_
\[C=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{GL}_{2}(K)\]
_such that_
\[Q(x)=\frac{aQ_{G}(x)+b}{cQ_{G}(x)+d}.\]
Proof.: The proof is a combination of Lemma 3.1 and Proposition 3.3 in [4].
**Corollary 16**.: _Let \(Q_{G}=\frac{g}{h}\) be a quotient map, \(Q\in K(x)\) an arbitrary generator of \(K(x)^{G}\) and \(F\in K[x]\) monic. Write \(Q=[C]\circ Q_{G}\), which is possible by the lemma above. If \(F([C]\circ\infty)\neq 0\), or equivalently \(F(Q(\infty))\neq 0\), then there is \(a\in K^{*}\) such that \(a\cdot F^{Q}\in\mathcal{N}\mathcal{R}_{K}^{G}\) and \(a\cdot F^{Q}\) is \(G\)-invariant (the factor \(a\) is needed to make \(F^{Q}\) monic)._
Proof.: Let \(H=[C]*F\), then \(\deg(H)=\deg(F)\) by the assumption that \(F([C]\circ\infty)\neq 0\) (see Lemma 7). Write
\[C=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{GL}_{2}(K),\]
then \(H(x)=\lambda_{C,H}(cx+d)^{\deg(F)}F(\frac{ax+b}{cx+d})\) and \(Q(x)=\frac{ag(x)+bh(x)}{cg(x)+dh(x)}\). Moreover
\[H^{Q_{G}}(x) =h(x)^{\deg(H)}H(\frac{g(x)}{h(x)})\] \[=\lambda_{C,H}h(x)^{\deg(H)}(c\frac{g(x)}{h(x)}+d)^{\deg(F)}F \left(\frac{a\frac{g(x)}{h(x)}+b}{c\frac{g(x)}{h(x)}+d}\right)\] \[=\lambda_{C,H}(cg(x)+dh(x))^{\deg(F)}F\left(\frac{ag(x)+bh(x)}{ cg(x)+dh(x)}\right)=\lambda_{C,H}F^{Q}(x).\]
With Lemma 14 we have that \(H^{Q_{G}}\) is \(G\)-invariant and an element of \(\mathcal{N}\mathcal{R}_{K}^{G}\), thus both facts are also true for \(\lambda_{C,H}F^{Q}\).
## 2 Proof of the Main Theorem
Here \(\overline{K}\) denotes a fixed algebraic closure of \(K\) and for a polynomial \(P\in K[x]\) we denote its splitting field in \(\overline{K}\) as \(L_{P}\). Further, for an extension \(K\subset L\subset\overline{K}\), we define \(\operatorname{hom}_{K}(L)\) as the set of \(K\)-homomorphisms \(\sigma:L\to\overline{K}\). If \(L\) is normal over \(K\) then \(\operatorname{hom}_{K}(L)=\operatorname{Aut}_{K}(L)\). We are going to prove the first part of the Main Theorem:
**Theorem 17**.: _Let \(F\in\mathcal{I}_{K}\), then there is \(k\in\mathbb{N}\setminus\{0\}\) and \(r\in\mathcal{I}_{K}^{G}\) with \(\deg(F)|\deg(r)\) such that_
\[F^{Q_{G}}=(\prod G*r)^{k}.\]
_In words: The \(Q_{G}\)-transform of an irreducible monic polynomial is a power of an orbit polynomial._
Proof.: We know that \(F^{Q_{G}}\in\mathcal{NR}^{G}_{K}\) is \(G\)-invariant by Lemma 14 and therefore all its irreducible factors are contained in \(\mathcal{I}^{G}_{K}\). First we prove the degree condition for the irreducible factors of \(F^{Q_{G}}\). For that let \(r\in\mathcal{I}^{G}_{K}\) be an arbitrary irreducible factor of \(F^{Q_{G}}\) and \(v\in\overline{K}\) a root of \(r\), then
\[0=F^{Q_{G}}(v)=h(v)^{\deg(F)}F(Q_{G}(v))\]
and \(h(v)\neq 0\) because \(r\in\mathcal{NR}^{G}_{K}\). Thus \(F(Q_{G}(v))=0\) which shows that \(Q_{G}(v)=\alpha\in R_{F}\) and with that \(K(Q_{G}(v))\subseteq K(v)\). We conclude
\[\deg(r)=[K(v):K]=[K(v):K(\alpha)]\cdot[K(\alpha):K]=[K(v):K(\alpha)]\cdot\deg( F).\]
For the rest note that \(G*r\) divides \(F^{Q_{G}}\) by Corollary 9. So our goal now is to show that every irreducible factor of \(F^{Q_{G}}\) belongs to \(G*r\). With Lemma 8 we know that the set of roots of \(F^{Q_{G}}\) can be partitioned into \(G\)-orbits under the Mobius-Transformation, i.e. there exist \(v_{1},\ldots,v_{l}\in R_{F^{Q_{G}}}\) such that
\[R_{F^{Q_{G}}}=\bigcup_{i=1}^{l}(G\circ v_{i})\]
and \((G\circ v_{i})\cap(G\circ v_{j})=\varnothing\) for \(i\neq j\). We set w.l.o.g. \(v=v_{1}\). By Theorem 12 we know that there are \(\alpha_{1},\ldots,\alpha_{l}\in\overline{K}\) such that \(Q_{G}(G\circ v_{i})=\alpha_{i}\). Note that \(R_{F}=\{\alpha_{1},\ldots,\alpha_{l}\}\). Now consider the splitting fields \(L_{F},L_{F^{Q_{G}}}\) of \(F\) and \(F^{Q_{G}}\) over \(K\). The extensions \(L_{F}/K,L_{F^{Q_{G}}}/K\) and \(L_{F^{Q_{G}}}/L_{F}\) are normal and finite. It can be shown that for all \(\alpha_{i},\alpha_{j}\in R_{F}\) there is \(\sigma_{i,j}\in\operatorname{hom}_{K}(L_{F})=\operatorname{Aut}_{K}(L_{F})\) such that \(\sigma_{i,j}(\alpha_{i})=\alpha_{j}\) because \(F\) is irreducible (for reference see [24, Theorem 2.8.3] for example). Now let \(\beta\in R_{F}\) be arbitrary, then there is \(\sigma_{\beta}\in\operatorname{Aut}_{K}(L_{F})\) such that \(\sigma_{\beta}(\alpha_{1})=\beta\). The automorphism \(\sigma_{\beta}\) can be extended to an automorphism in \(\operatorname{Aut}_{K}(L_{F^{Q_{G}}})\); we denote it by \(\overline{\sigma}_{\beta}\) (for reference see [24, Theorem 2.8.4]). Finally, we put everything together: Let \(w\in R_{F^{Q_{G}}}\) and \(Q_{G}(w)=\gamma\in R_{F}\), then

\[Q_{G}(w)=\gamma=\overline{\sigma}_{\gamma}(\alpha_{1})=\overline{\sigma}_{\gamma}(Q_{G}(v))=Q_{G}(\overline{\sigma}_{\gamma}(v)),\]
so \(w\) and \(\overline{\sigma}_{\gamma}(v)\) are contained in the same \(G\)-orbit. We just showed that every \(G\)-orbit in \(R_{F^{Q_{G}}}\) contains at least one root of \(r\), since \(\sigma(v)\) is always a root of \(r\) for all \(\sigma\in\operatorname{Aut}_{K}(L_{F^{Q_{G}}})\). To finish the proof let \(t\in\mathcal{I}_{K}^{G}\) be an arbitrary irreducible factor of \(F^{Q_{G}}\) and \(w\) a root of \(t\); then there are \([A]\in G\) and \(v\in R_{r}\) such that \([A]^{-1}\circ v=w\), thus \(t=[A]*r\).
_Remark 18_.: This theorem still holds for arbitrary generators \(Q=\frac{g}{h}\) of \(K(x)^{G}\) if \(F\in\mathcal{I}_{K}\) satisfies \(F(Q(\infty))\neq 0\), because of Corollary 16 and its proof. Notice that \(Q(\infty)\in K\cup\{\infty\}\), thus for \(\deg(F)\geq 2\) this condition always holds. But \(F^{Q}\) is not guaranteed to be monic, so we have to normalize it on occasion.
What is left to show is that \(k=1\) for all but finitely many \(F\in\mathcal{I}_{K}\). The next corollary is very helpful:
**Corollary 19**.: _Let \(F\in\mathcal{I}_{K}\), then every \(G\)-orbit in \(R_{F^{Q_{G}}}\) is of the same size. So for \(v\in R_{F^{Q_{G}}}\) we obtain:_
\[|R_{F^{Q_{G}}}|=|R_{F}|\cdot|G\circ v|\]
Proof.: Let \(v\in R_{F^{Q_{G}}}\) be such that \(Q_{G}(G\circ v)=\alpha\in R_{F}\). Additionally, for \(\beta\in R_{F}\) let \(\overline{\sigma}_{\beta}:L_{F^{Q_{G}}}\to L_{F^{Q_{G}}}\) be an automorphism of \(L_{F^{Q_{G}}}\) such that \(\overline{\sigma}_{\beta}(\alpha)=\beta\) as in the proof of Theorem 17. Moreover let \(w_{\beta}\) be a root of \(F^{Q_{G}}\) such that \(Q_{G}(w_{\beta})=\beta\). We have
\[\overline{\sigma}_{\beta}(G\circ v)\subseteq G\circ w_{\beta}\]
and since \(\overline{\sigma}_{\beta}^{-1}\in\operatorname{Aut}_{K}(L_{F^{Q_{G}}})\) with \(\overline{\sigma}_{\beta}^{-1}(\beta)=\alpha\) also \(G\circ w_{\beta}\subseteq\overline{\sigma}_{\beta}(G\circ v)\). Hence \(G\circ w_{\beta}=\overline{\sigma}_{\beta}(G\circ v)\) and \(|G\circ v|=|\overline{\sigma}_{\beta}(G\circ v)|\) since \(\overline{\sigma}_{\beta}\) is bijective on \(L_{F^{Q_{G}}}\). Thus we obtain
\[|R_{F^{Q_{G}}}|=|\bigcup_{\beta\in R_{F}}Q_{G}^{-1}(\beta)|=\sum_{\beta\in R_{ F}}|Q_{G}^{-1}(\beta)|=|R_{F}|\cdot|Q_{G}^{-1}(\alpha)|=|R_{F}|\cdot|G \circ v|.\]
It follows that if \(K\) is perfect, then \(F^{Q_{G}}\) is separable if and only if \(G\circ v\) is regular, because then
\[|R_{F^{Q_{G}}}|=|R_{F}|\cdot|G\circ v|=\deg(F)\cdot|G|=\deg(F^{Q_{G}}).\]
Later we will see that there are only finitely many non-regular \(G\)-orbits in \(\overline{K}\cup\{\infty\}\) and consequently there are only finitely many \(F\in\mathcal{I}_{K}\) such that \(F^{Q_{G}}\) is a proper power of an orbit polynomial. But before we do that, we want to show that, over every field, if \(G\circ v\) is regular for a root \(v\) of \(F^{Q_{G}}\), then \(F^{Q_{G}}\) is a \(G\)-orbit polynomial and not a proper power thereof.
### Proof of the Second Part of the Main Theorem, Theorem R and Theorem 6
Let \(L/K\) be a finite field extension. The separable degree of \(L\) over \(K\) is defined as
\[[L:K]_{s}:=|\operatorname{hom}_{K}(L)|.\]
Recall that it behaves in the same way as the degree of field extensions, that is, for \(M/L/K\) we have
\[[M:K]_{s}=[M:L]_{s}\cdot[L:K]_{s}.\]
If \(K\) is perfect, then \([L:K]_{s}=[L:K]\) for every finite field extension \(L\) of \(K\). Now let \(\operatorname{char}(K)=p>0\) and \(r\in\mathcal{I}_{K}\) with root \(v\in R_{r}\). Then there is a natural number \(d\) such that
\[[K(v):K]=p^{d}\cdot[K(v):K]_{s},\]
this \(d\) is called the _radical exponent_ of \(r\) or \(v\) over \(K\). It can be shown that the radical exponent of \(r\) is the smallest non-negative integer \(d\) such that there is an irreducible and separable polynomial \(s\in K[x]\) with \(r(x)=s(x^{p^{d}})\). For a nice reference on this topic see [24]. For the sake of convenience we use the notation
\[\operatorname{rad}(r):=p^{d}=\frac{[K(v):K]}{[K(v):K]_{s}}\]
for \(r\in\mathcal{I}_{K}\) with radical exponent \(d\) and \(v\in R_{r}\).
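For a concrete illustration (not needed in what follows): take \(K=\mathbb{F}_{p}(t)\) and \(r(x)=x^{p}-t\), which is irreducible over \(K\). Its unique root \(v=t^{1/p}\in\overline{K}\) satisfies \([K(v):K]=p\) and \([K(v):K]_{s}=1\), and indeed \(r(x)=s(x^{p})\) for the separable polynomial \(s(x)=x-t\); hence \(\operatorname{rad}(r)=p\) and the radical exponent of \(r\) is \(1\).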
The essential part of the proof is to show that \(\operatorname{rad}(F)=\operatorname{rad}(r)\) for all irreducible factors \(r\in\mathcal{I}_{K}^{G}\) of \(F^{Q_{G}}\). So let \(F\in\mathcal{I}_{K}\), \(r\in\mathcal{I}_{K}^{G}\) an irreducible factor of \(F^{Q_{G}}\) and \(v\in R_{r}\) a root of \(r\) with \(Q_{G}(v)=\alpha\in R_{F}\), then
\[\operatorname{rad}(r) =\frac{[K(v):K]}{[K(v):K]_{s}}=\frac{[K(v):K(\alpha)]}{[K(v):K( \alpha)]_{s}}\cdot\frac{[K(\alpha):K]}{[K(\alpha):K]_{s}}\] \[=\frac{[K(v):K(\alpha)]}{[K(v):K(\alpha)]_{s}}\cdot\operatorname{ rad}(F),\]
so \(\operatorname{rad}(F)\leq\operatorname{rad}(r)\). For \(\operatorname{rad}(F)\geq\operatorname{rad}(r)\) we need to work a bit.
**Lemma 20**.: _Let \(F\in\mathcal{I}_{K}\) be such that \(|G\circ v|=|G|\) for \(v\in R_{F^{Q_{G}}}\) and \(\operatorname{char}(K)=p>0\). Further, assume \(F(x)=H(x^{q})\) for \(q\) a power of \(p\) and \(H\in\mathcal{I}_{K}\) separable, so that \(\operatorname{rad}(F)=q\). Then there is a separable polynomial \(S\in K[x]\) such that_
\[F^{Q_{G}}(x)=S(x^{q}).\]
Proof.: This proof has two parts. At first we show that we can write \(F^{Q_{G}}\) as \(S(x^{q})\) and afterwards show that the polynomial \(S\) is separable. The first part is a calculation exercise. For that let \(Q_{G}=\frac{g}{h}\) and \(H=\sum_{i=0}^{n}a_{i}x^{i}\). Additionally, for an arbitrary polynomial \(P:=\sum_{i=0}^{m}c_{i}x^{i}\) we define
\[P^{(q)}:=\sum_{i=0}^{m}c_{i}^{q}x^{i}.\]
Observe that \(P(x)^{q}=P^{(q)}(x^{q})\) for \(q\) a power of \(\operatorname{char}(K)=p\). With that we obtain the following:
\[F^{Q_{G}}(x) =h(x)^{\deg(F)}F(\frac{g(x)}{h(x)})=h(x)^{q\deg(H)}H(\frac{g(x)^{ q}}{h(x)^{q}})\] \[=\sum_{i=0}^{n}a_{i}g(x)^{iq}h(x)^{(n-i)q}=\sum_{i=0}^{n}a_{i}g^{ (q)}(x^{q})^{i}h^{(q)}(x^{q})^{(n-i)}\]
Since \(K[x^{q}]\subseteq K[x]\) is a subring there exists a polynomial \(S\) such that \(F^{Q_{G}}(x)=S(x^{q})\), which is exactly what we wanted. The polynomial \(S\) is of degree \(\deg(S)=\deg(F^{Q_{G}})/q=|G|\cdot\deg(H)\). For every \(v\in R_{F^{Q_{G}}}\) we have \(v^{q}\in R_{S}\). The map \(y\mapsto y^{q}\) is bijective on \(\overline{K}\), thus \(\rho:R_{F^{Q_{G}}}\to R_{S}\) with \(v\mapsto v^{q}\) is injective and therefore \(|R_{F^{Q_{G}}}|\leq|R_{S}|\). Conversely, for \(\alpha\in R_{S}\) the \(q\)-th root \(\alpha^{1/q}\) is a root of \(F^{Q_{G}}\), because \(F^{Q_{G}}(\alpha^{1/q})=S((\alpha^{1/q})^{q})=S(\alpha)=0\). This shows that \(\rho\) is actually a bijection and \(|R_{S}|=|R_{F^{Q_{G}}}|\). We finish this proof by applying Corollary 19 and using \(\deg(H)=|R_{F}|\) as well as our assumption that \(|G|=|G\circ v|\):
\[|R_{S}|=|R_{F^{Q_{G}}}|=|R_{F}|\cdot|G\circ v|=\deg(H)\cdot|G|=\deg(S)\]
So \(S\) is separable because it has \(\deg(S)\) many roots in \(\overline{K}\).
If \(S\in K[x]\) is the separable polynomial such that \(F^{Q_{G}}=S(x^{q})\) as in the lemma above, then \(S\) factorizes into separable irreducible factors \(S=s_{1}\cdot\ldots\cdot s_{l}\). Hence \(S(x^{q})=s_{1}(x^{q})\cdot\ldots\cdot s_{l}(x^{q})\), so it should be beneficial to study the factorization of polynomials of the form \(s(x^{q})\) for \(s\in\mathcal{I}_{K}\) irreducible and separable. The next lemma shows that such polynomials consist of only one
irreducible factor. We give a proof of this lemma as it is a crucial tool for the following theorem. In the proof we employ a similar method as in the proof of Theorem 17. We want to point out that there is no finite subgroup \(G\leq\operatorname{PGL}_{2}(K)\) with \(x^{q}\in K(x)\) as a quotient map for \(q\) a power of \(\operatorname{char}(K)\), so this is not a particular case of Theorem 17.
**Lemma 21**.: _Let \(s\in K[x]\) be an irreducible, separable and monic polynomial. Furthermore let \(\operatorname{char}(K)=p>0\) and \(q=p^{d}\) for \(d>0\). Then there is an irreducible, separable and monic polynomial \(f\in K[x]\) with \(\deg(f)=\deg(s)\) and \(a,b\in\mathbb{N}\) with \(a+b=d\) such that_
\[s(x^{q})=(f(x^{p^{a}}))^{p^{b}}\]
_and \(f(x^{p^{a}})\) is irreducible._
Proof.: At first we show that \(s(x^{q})\) only has one irreducible factor. Since \(s\) is both irreducible and separable, \(L_{s}/K\) is a Galois extension. If we set \(P(x):=s(x^{q})\) and consider the splitting field \(L_{P}/K\) of \(P\), then, with similar arguments as in the proof of Theorem 17, we can extend every \(\sigma\in\operatorname{Gal}(L_{s}/K)\) to a \(K\)-homomorphism \(\overline{\sigma}\in\hom_{K}(L_{P})\). Since splitting fields are normal, every such \(K\)-homomorphism is actually an automorphism on \(L_{P}\). Let \(F\in\mathcal{I}_{K}\) be an irreducible factor of \(P\) and \(v\in R_{F}\) one of its roots. Then \(v^{q}=:\alpha\) is a root of \(s\) and similarly \(w^{q}=:\beta\in R_{s}\) for \(w\in R_{P}\). Let \(\overline{\sigma}\in\hom(L_{P})\) be the extension of the homomorphism \(\sigma\in\operatorname{Gal}(L_{s}/K)\) with \(\sigma(\alpha)=\beta\), then
\[w^{q}=\beta=\overline{\sigma}(\alpha)=\overline{\sigma}(v^{q})=(\overline{ \sigma}(v))^{q}.\]
As \(y\mapsto y^{q}\) is injective on \(\overline{K}\) we obtain that \(w=\overline{\sigma}(v)\). Therefore \(w\) has to be a root of \(F\in K[x]\) as well, since \(\overline{\sigma}\) is an automorphism of \(L_{P}\) that fixes \(K\). So we just showed that all roots of \(P\) are also roots of the irreducible factor \(F\), thus \(P=F^{k}\) for \(k\in\mathbb{N}\). With the degree formula for field extensions we get
\[\deg(F)=[K(v):K]=[K(v):K(\alpha)]\cdot[K(\alpha):K]=[K(v):K(\alpha)]\cdot\deg( s),\]
which shows \(\deg(s)|\deg(F)\). Together with the fact that \(k\cdot\deg(F)=\deg(P)=q\cdot\deg(s)\) we get \(\deg(F)=p^{a}\cdot\deg(s)\) for an \(a\in\{0,\dots,d\}\) and \(k=p^{b}\) such that \(a+b=d\). Moreover \(p^{a}=[K(v):K(\alpha)]\), which is the degree of the minimal polynomial \(m_{v}\in K(\alpha)[x]\) of \(v\) in \(K(\alpha)\). Since \(v^{q}=\alpha\), it is also a root of \(x^{q}-\alpha\) and therefore \(m_{v}|x^{q}-\alpha\). A simple calculation shows that \(m_{v}=x^{p^{a}}-\alpha^{\frac{1}{p^{b}}}\), so \(\alpha^{\frac{1}{p^{b}}}\in K(\alpha)\). Conversely \(\alpha\in K(\alpha^{\frac{1}{p^{b}}})\), since \(\alpha\) is the \(p^{b}\)-th power of \(\alpha^{\frac{1}{p^{b}}}\), hence \(K(\alpha)=K(\alpha^{\frac{1}{p^{b}}})\). Let \(f\in\mathcal{I}_{K}\) be the minimal polynomial of \(\alpha^{\frac{1}{p^{b}}}\) in \(K[x]\). We just showed that \(\deg(f)=[K(\alpha):K]=\deg(s)\). All that shows that \(v\) is also root of the monic polynomial \(F^{\prime}:=f(x^{p^{a}})\in K[x]\). Since \(F\) is the minimal polynomial of \(v\) over \(K\) it has to divide \(F^{\prime}\). Moreover, \(F\) and \(F^{\prime}\) have the same degree and are monic, thus have to be equal.
This is enough to prove that \(k=1\) if \(G\circ v\) is regular for \(v\in R_{F^{Q_{G}}}\):
**Theorem 22**.: _Let \(F\in\mathcal{I}_{K}\) be such that \(|G\circ v|=|G|\) for \(v\in R_{F^{Q_{G}}}\). Then \(F^{Q_{G}}=\prod(G*r)\), i.e. it is an orbit polynomial._
Proof.: If \(F\) is separable and \(|G\circ v|=|G|\) for a root of \(F^{Q_{G}}\), then all \(G\)-orbits in \(R_{F^{Q_{G}}}\) are of the same size by Corollary 19 and as a consequence \(F^{Q_{G}}\) is also separable since \(\deg(F^{Q_{G}})=|R_{F^{Q_{G}}}|\); in particular \(F^{Q_{G}}\) has no repeated irreducible factors, so \(k=1\) in Theorem 17 in this case. If \(\operatorname{rad}(F)=q>1\) then by Lemma 20
\[F^{Q_{G}}(x)=S(x^{q})=\prod_{i=1}^{l}s_{i}(x^{q})=\prod_{i=1}^{l}(f_{i}(x^{p^{ a_{i}}}))^{p^{b_{i}}},\]
where \(s_{i}\in\mathcal{I}_{K}\) are the irreducible and separable factors of \(S\) and \(f_{i}\in\mathcal{I}_{K}\) the separable and irreducible polynomials as in the lemma above, so \(a_{i}+b_{i}=d\) for \(q=p^{d}\). Observe that \(\gcd(f_{i}(x^{p^{a_{i}}}),f_{j}(x^{p^{a_{j}}}))=1\) for \(i\neq j\). The reason for that is that since \(S\) is separable, \(s_{i}\) and \(s_{j}\) have to be different irreducible polynomials, i.e. \(\gcd(s_{i},s_{j})=1\) which is equivalent to \(R_{s_{i}}\cap R_{s_{j}}=\varnothing\). Now, the roots of \(s_{i}(x^{q})\) and \(s_{j}(x^{q})\) are the preimages of \(R_{s_{i}}\) and \(R_{s_{j}}\) under the map \(y\mapsto y^{q}\) on \(\overline{K}\). This map is bijective and therefore also injective, so the preimages are also different and thus \(s_{i}(x^{q})\) and \(s_{j}(x^{q})\) have no roots in common. This small observation is very important because now, with the help of Theorem 17, we can deduce that
\[S(x^{q}) =\prod_{i=1}^{l}(f_{i}(x^{p^{a_{i}}}))^{p^{b_{i}}}\] \[=(\prod_{t\in G*r}t)^{k}.\]
The irreducible factors \(f_{i}(x^{p^{a_{i}}})\) and \(t\in G*r\) have to coincide with each other because \(K[x]\) is a factorial ring. So we obtain with the remarks above (and the fact that all \(t_{1},t_{2}\in G*r\) with \(t_{1}\neq t_{2}\) also satisfy \(\gcd(t_{1},t_{2})=1\) since they are irreducible) that for all \(t\in G*r\) there is exactly one \(i\in[l]\) such that \(t(x)=f_{i}(x^{p^{a_{i}}})\). This also implies \(k=p^{b_{i}}\) for all \(i\in[l]\) and thus \(p^{a_{i}}=p^{a_{j}}\) for all \(i,j\in[l]\). To summarize what we could obtain:
\[S(x^{q})=\prod_{i=1}^{l}(f_{i}(x^{p^{a}}))^{k}=\prod_{t\in G*r}t^{k}=F^{Q_{G}} (x).\]
This shows that \(\operatorname{rad}(t)=p^{a}\leq q=\operatorname{rad}(F)\) for all \(t\in G*r\), thus \(\operatorname{rad}(t)=\operatorname{rad}(F)\), since we already explained that \(\operatorname{rad}(r)\geq\operatorname{rad}(F)\). Hence \(p^{a}=q\) and \(b_{i}=0\), which shows \(k=1\).
Before we finish the proof of the Main Theorem we want to state an immediate consequence of this result
**Corollary 23**.: _Let \(F\in\mathcal{I}_{K}\) be such that \(|G\circ v|=|G|\) for a root \(v\) of \(F^{Q_{G}}\), then_
1. _The degree of every irreducible factor_ \(r\) _of_ \(F^{Q_{G}}\) _satisfies_ \[\deg(r)=\frac{|G|}{|G*r|}\cdot\deg(F)\]
2. _If_ \(\deg(F)<\deg(r)\) _for an irreducible factor_ \(r\) _of_ \(F^{Q_{G}}\)_, then_ \(\{[I_{2}]\}\neq\operatorname{Stab}_{G}(r)\leq G\) _is non-trivial_
Proof.: For the first we use Theorem 22 (so \(k=1\)) and calculate:
\[|G|\cdot\deg(F)=\deg(F^{Q_{G}})=|G*r|\cdot\deg(r).\]
The last is obvious because \(\deg(r)=\frac{|G|}{|G*r|}\cdot\deg(F)>\deg(F)\), so \(|\operatorname{Stab}_{G}(r)|=\frac{|G|}{|G*r|}>1\).
_Remark 24_.: Observe that for \(Q\) an arbitrary generator of \(K(x)^{G}\) Theorem 22 and Corollary 23 still hold if \(F\in\mathcal{I}_{K}\) and \(F(Q(\infty))\neq 0\). The reason for this is again the proof of Corollary 16: Write
\[Q(x)=[C]\circ Q_{G}(x)=\frac{aQ_{G}(x)+b}{cQ_{G}(x)+d}\]
as in Corollary 16, where \(Q_{G}\) is a quotient map, then there is \(H\in\mathcal{I}_{K}\) such that \(H^{Q_{G}}=a\cdot F^{Q}\). Since we always assume something about the roots of the resulting polynomial, i.e. about \(F^{Q}\) and thus also about \(H^{Q_{G}}\), both Theorem 22 and Corollary 23 hold because if \(F^{Q}\) only contains roots in regular \(G\)-orbits so does \(H^{Q_{G}}\).
We state an analogue of Theorem 12 for polynomials:
**Corollary 25**.: _The map \(\delta_{Q_{G}}:\mathcal{I}_{K}\to\mathcal{I}_{K}^{G}/G\) with \(F\mapsto G*r\) such that \(F^{Q_{G}}=\prod(G*r)^{k}\) is a bijection._
Proof.: By Theorem 17\(\delta_{Q_{G}}\) defines a mapping between \(\mathcal{I}_{K}\) and \(\mathcal{I}_{K}^{G}/G\). First we show that \(\delta_{Q_{G}}\) is surjective. Let \(r\in\mathcal{I}_{K}^{G}\) and \(v\in R_{r}\) a root of \(r\). Then \(r\) is the minimal polynomial of \(v\) over \(K\). Moreover, let \(\alpha\in\overline{K}\) be such that \(Q_{G}(v)=\alpha\) and denote by \(F\in\mathcal{I}_{K}\) the minimal polynomial of \(\alpha\). Then \(F^{Q_{G}}\) has \(v\) as a root, thus \(r|F^{Q_{G}}\) and \(F^{Q_{G}}=(\prod(G*r))^{k}\) by Theorem 17, so \(\delta_{Q_{G}}(F)=G*r\). Now onto the injectivity: Let \(F,H\in\mathcal{I}_{K}\) be such that \(\delta_{Q_{G}}(F)=\delta_{Q_{G}}(H)=G*r\) for \(r\in\mathcal{I}_{K}^{G}\), so \(F^{Q_{G}}=(\prod(G*r))^{k}\) and \(H^{Q_{G}}=(\prod(G*r))^{l}\) and both \(F^{Q_{G}}\) and \(H^{Q_{G}}\) have the same roots. With the help of what we observed in the proof of Theorem 17 we obtain:
\[\bigcup_{\alpha\in R_{F}}Q_{G}^{-1}(\alpha)=R_{F^{Q_{G}}}=R_{H^{Q_{G}}}= \bigcup_{\beta\in R_{H}}Q_{G}^{-1}(\beta).\]
Therefore \(R_{F}=R_{H}\) since \(Q_{G}\) induces a bijection between \(\overline{K}\cup\{\infty\}\) and \((\overline{K}\cup\{\infty\})/G\) by Theorem 12. As \(F\) and \(H\) are irreducible, monic and share the same roots they have to be equal.
As an immediate consequence of this corollary we obtain the main part of the general version of Theorem R:
**Theorem 26**.: _If \(f\in\mathcal{I}_{K}^{G}\) is a \(G\)-invariant monic irreducible polynomial with root \(v\in R_{f}\) that is contained in a regular \(G\)-orbit, then there is \(F\in\mathcal{I}_{K}\) such that \(f=F^{Q_{G}}\)._
Proof.: If \(f\in\mathcal{I}_{K}^{G}\) is \(G\)-invariant, then \(G*f=\{f\}\). By Corollary 25 we get that there is \(F\in\mathcal{I}_{K}\) such that \(\delta_{Q_{G}}(F)=\{f\}\), which translates to \(F^{Q_{G}}=f^{k}\). Further \(k=1\) because of Theorem 22 and the assumption that \(v\in R_{f}\) is contained in a regular \(G\)-orbit.
To complete the proofs of the Main Theorem and the general Theorem R we need to show that the number of irreducible polynomials for which \(F^{Q_{G}}=(\prod G*r)^{k}\) with \(k>1\) is finite. By Theorem 22 this is equivalent to showing that the number of irreducible polynomials \(F\in\mathcal{I}_{K}\) for which \(F^{Q_{G}}\) has roots in non-regular \(G\)-orbits is finite. We give these polynomials the following name:
**Definition 27**.: We call \(F\in\mathcal{I}_{K}\)\(Q_{G}\)**-non-conformal** if \(F^{Q_{G}}=(\prod(G*r))^{k}\) for \(r\in\mathcal{I}_{K}^{G}\), \(k\in\mathbb{N}\) and \(k>1\). For the set of \(Q_{G}\)-non-conformal polynomials we write \(\mathcal{NC}^{Q_{G}}\).
Further we set
\[P_{G}:=\{u\in\overline{K}\cup\{\infty\}|\ |G\circ u|<|G|\}\]
as the set of elements in \(\overline{K}\cup\{\infty\}\) contained in non-regular \(G\)-orbits. We have the following:
**Lemma 28** ([4, Lemma 2.1]).: _Let \(G\leq\mathrm{PGL}_{2}(K)\) be finite and \(v\in\overline{K}\cup\{\infty\}\). Then \(G\circ v\) is non-regular if and only if there is \([A]\in G\setminus\{[I_{2}]\}\) such that \([A]\circ v=v\), thus_
\[P_{G}=\{u\in\overline{K}\cup\{\infty\}|\ \exists[A]\in G\setminus\{[I_{2}]\}: \ [A]\circ u=u\}\]
_and this set is finite; more precisely \(|P_{G}|\leq 2(|G|-1)\). Furthermore \([K(u):K]\leq 2\) for all \(u\in P_{G}\setminus\{\infty\}\)._
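For instance, for the order-two group \(G_{1}\) from Example 1, whose non-identity element acts as \(v\mapsto 1/v\), the fixed points in \(\overline{K}\cup\{\infty\}\) are exactly the solutions of \(v^{2}=1\). Hence \(P_{G_{1}}=\{1,-1\}\) (and \(P_{G_{1}}=\{1\}\) in characteristic \(2\)), so the bound \(|P_{G_{1}}|\leq 2(|G_{1}|-1)=2\) is attained.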
We denote by \(\mathcal{P}_{G}\) the set of minimal polynomials of elements in \(P_{G}\), that is,
\[\mathcal{P}_{G}:=\{r\in\mathcal{I}_{K}^{G}|\exists\alpha\in P_{G}:\ r(\alpha)=0\}. \tag{5}\]
Notice that \(\mathcal{P}_{G}\subseteq\mathcal{I}_{K}^{G}\) by definition, thus \(\mathcal{P}_{G}\) does not contain polynomials with roots in \((G\circ\infty)\setminus\{\infty\}\), even if this orbit is non-regular. We obtain
**Lemma 29**.: _We have2_
Footnote 2: The set \(\mathcal{NC}^{Q_{G}}\) can be empty. In that case we define the left side of the subset equation to be empty as well.
\[\bigcup_{F\in\mathcal{NC}^{Q_{G}}}\delta_{Q_{G}}(F)\subseteq\mathcal{P}_{G}.\]
_In particular_
\[|\mathcal{NC}^{Q_{G}}|\leq|\mathcal{P}_{G}|\leq|P_{G}|\leq 2(|G|-1),\]
_so there are only finitely many \(Q_{G}\)-non-conformal polynomials._
Proof.: The map \(\mathcal{NC}^{Q_{G}}\to\mathcal{P}_{G}/G\) with \(F\mapsto\delta_{Q_{G}}(F)\) defines an injective mapping by Corollary 25 and Theorem 22, thus the subset equation
\[\bigcup_{F\in\mathcal{NC}^{Q_{G}}}\delta_{Q_{G}}(F)\subseteq\mathcal{P}_{G}\]
follows. Since \(\mathcal{P}_{G}\) only contains minimal-polynomials of elements in \(P_{G}\) and the degree of \(r\in\mathcal{P}_{G}\) is either \(1\) or \(2\) by Lemma 28 we get that \(|\mathcal{P}_{G}|\leq|P_{G}|\), so \(\mathcal{P}_{G}\) is finite because \(|P_{G}|\leq 2(|G|-1)\) by Lemma 28. Hence \(\mathcal{NC}^{Q_{G}}\) is finite as well and
\[|\mathcal{P}_{G}|\geq\sum_{F\in\mathcal{NC}^{Q_{G}}}|\delta_{Q_{G}}(F)|\geq \sum_{F\in\mathcal{NC}^{Q_{G}}}1=|\mathcal{NC}^{Q_{G}}|.\]
We close this section by proving Theorem 6:
**Theorem 30**.: _Let \(F_{1},\ldots F_{l}\in\mathcal{NC}^{Q_{G}}\) be all \(Q_{G}\)-non-conformal polynomials. Further let \(r_{1},\ldots,r_{l}\in\mathcal{I}_{K}^{G}\) be such that \(\delta_{Q_{G}}(F_{i})=G*r_{i}\) and \(n_{i}\in\mathbb{N}\) such that \(F_{i}^{Q_{G}}=(\prod G*r_{i})^{n_{i}}\). Then for every \(G\)-invariant polynomial \(f\in\mathcal{N}\mathcal{R}_{K}^{G}\) there is a unique monic polynomial \(F\in K[x]\) and unique natural numbers \(0\leq k_{i}<n_{i}\) such that_
\[f=\left(\prod_{i=1}^{l}(\prod G*r_{i})^{k_{i}}\right)\cdot F^{Q_{G}}.\]
Proof.: By Corollary 11 we have
\[f=\prod_{i=1}^{e}(\prod(G*g_{i}))^{m_{i}}\]
with \(g_{i}\in\mathcal{I}_{K}^{G}\) and \(m_{1},\ldots,m_{e}\in\mathbb{N}\setminus\{0\}\). We refine this factorization by grouping the orbit polynomials according to whether \(g_{i}\in\mathcal{P}_{G}\) or not, which gives
\[f=\prod_{i=1}^{l}(\prod(G*r_{i}))^{l_{i}}\cdot\prod_{j=1}^{c}(\prod(G*h_{j}))^ {d_{j}};\]
here we allow \(l_{i}=0\). Since \(h_{j}\in\mathcal{I}_{K}^{G}\setminus\mathcal{P}_{G}\) for all \(j\in[c]\) there exists a unique monic polynomial \(H_{1}\in K[x]\) such that
\[H_{1}^{Q_{G}}=\prod_{j=1}^{c}(\prod(G*h_{j}))^{d_{j}}\]
by Theorem 22 together with Lemma 13 item 2. For the remaining factor we divide \(l_{i}\) by \(n_{i}\) and write \(k_{i}\) for the remainder, so \(l_{i}=a_{i}\cdot n_{i}+k_{i}\) and \(0\leq k_{i}<n_{i}\). We have
\[(F_{i}^{a_{i}})^{Q_{G}}=(\prod(G*r_{i}))^{a_{i}\cdot n_{i}}.\]
Hence we define
\[H_{2}:=\prod_{i=1}^{l}F_{i}^{a_{i}}\]
and get
\[f=\left(\prod_{i=1}^{l}(\prod G*r_{i})^{k_{i}}\right)\cdot(H_{2}\cdot H_{1})^{ Q_{G}}.\]
Set \(F=H_{2}\cdot H_{1}\), which is unique since both \(H_{1}\) and \(H_{2}\) are unique.
## 3 Some Notes on the Galois Theory of Invariant Polynomials
In this section we want to explain some statements about the Galois theory of \(G\)-invariant polynomials and their implications for finite fields. In particular, we give an alternative proof of the fact that all irreducible monic polynomials \(f\in\mathbb{F}_{q}[x]\) of degree \(\deg(f)\geq 3\) have cyclic stabilizers in \(\operatorname{PGL}_{2}(\mathbb{F}_{q})\) ([22, Theorem
1.3]). This means that if \(f\in\mathcal{I}_{\mathbb{F}_{q}}=:\mathcal{I}_{q}\) is of degree \(\deg(f)\geq 3\) and \(G\leq\mathrm{PGL}_{2}(\mathbb{F}_{q})\) is non-cyclic, then \(|G*f|>1\). We will exploit this in the proof of Theorem 2.
Consider a \(G\)-invariant, separable and monic polynomial \(f\in\mathcal{I}_{K}^{G}\) whose roots belong to regular \(G\)-orbits, and its splitting field \(L_{f}\) in a fixed algebraic closure of \(K\). As seen before we can partition the set of roots of \(f\) into \(G\)-orbits
\[R_{f}=\bigcup_{i=1}^{k}(G\circ v_{i})\]
where \(v_{1},\ldots v_{k}\in R_{f}\) is a set of representatives of \(R_{f}/G\). Thus \(\deg(f)=|G|\cdot k\), since all \(G\)-orbits in \(R_{f}\) are regular. Let \(\sigma\in\mathrm{Gal}(f):=\mathrm{Gal}(L_{f}/K)\) and
\[A=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\]
such that \([A]\in G\), then
\[\sigma([A]\circ v)=\sigma(\frac{av+b}{cv+d})=\frac{a\sigma(v)+b}{c\sigma(v)+d }=[A]\circ\sigma(v)\]
for all \(v\in R_{f}\). This shows that the actions of \(G\) and \(\mathrm{Gal}(f)\) on \(R_{f}\) commute and that \(G\circ v_{1},\ldots,G\circ v_{k}\) is a non-trivial block system for \(\mathrm{Gal}(f)\):
**Definition 31** (See [9]).: Let \(G\) be a finite group acting transitively on a non-empty finite set \(X\). We say that a subset \(Y\subseteq X\) is a block for \(G\) if for every \(g\in G\) either \(g\cdot Y=Y\) or \(g\cdot Y\cap Y=\varnothing\). Moreover, \(Y\) is a non-trivial block if \(1<|Y|<|X|\). If \(Y\) is a block then \(\{g\cdot Y|g\in G\}\) is a partition of \(X\) and is called a block system of \(X\) for \(G\).
We define the point-wise stabilizer subgroup of a block \(G\circ v\) as
\[\mathrm{p-Stab}_{\mathrm{Gal}(f)}(G\circ v):=\{\sigma\in\mathrm{Gal}(f)| \sigma(w)=w\text{ for all }w\in G\circ v\}.\]
Similarly, the set-wise stabilizer is
\[\mathrm{s-Stab}_{\mathrm{Gal}(f)}(G\circ v):=\{\sigma\in\mathrm{Gal}(f)| \sigma(w)\in G\circ v\text{ for all }w\in G\circ v\}.\]
Notice that \(\mathrm{p-Stab}_{\mathrm{Gal}(f)}(G\circ v)\unlhd\mathrm{s-Stab}_{\mathrm{ Gal}(f)}(G\circ v)\). Our first goal is to show that \(G\) is isomorphic to the quotient of these stabilizers. For that, we need a nice lemma about commuting group actions stated in [13] and [14]:
**Lemma 32** ([13] & [14]).: _Let \(X\) be a finite non-empty set and \(G,H\) groups acting transitively and faithfully3 on \(X\). Moreover, assume that \(G\) acts regularly on \(X\), that is, \(\mathrm{Stab}_{G}(x)=\{1\}\) for all \(x\in X\), and that the actions of \(G\) and \(H\) commute, i.e._
Footnote 3: \(G\) acts faithfully on \(X\) if \(g\cdot x=x\) for all \(x\in X\) implies \(g=1\)
\[g(h(x))=h(g(x))\]
_for all \(h\in H\), \(g\in G\) and \(x\in X\). Then \(H\) acts regularly on \(X\) and is isomorphic to \(G\)._
Additionally we prove the following
**Lemma 33**.: _Let \(f\in\mathcal{I}_{K}^{G}\) be \(G\)-invariant and separable, and assume that \(R_{f}\) only contains regular \(G\)-orbits. Moreover let \(v\in R_{f}\) be a root of \(f\). Then we have:_
1. \(G\) _acts transitively and regularly on_ \(G\circ v\)__
2. \(U:=\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ v)/\operatorname{p-Stab}_{\operatorname{Gal}(f)}(G\circ v)\) _acts faithfully and transitively on_ \(G\circ v\)__
3. \(\operatorname{p-Stab}_{\operatorname{Gal}(f)}(G\circ v)=\operatorname{ Stab}_{\operatorname{Gal}(f)}(v)\)_, thus_ \(U\) _acts regularly on_ \(G\circ v\)__
Proof.: For the first item note that the action of \(G\) restricted to any of its orbits in \(R_{f}\) is always transitive. Additionally, \(G\) acts regularly on \(G\circ v\) since all orbits are regular. That the induced action of \(U\) on \(G\circ v\) is faithful and transitive follows from standard facts about group actions, so onto the last item: The inclusion \(\operatorname{p-Stab}_{\operatorname{Gal}(f)}(G\circ v)\subseteq\operatorname {Stab}_{\operatorname{Gal}(f)}(v)\) is obvious. For the other let \(\sigma\in\operatorname{Stab}_{\operatorname{Gal}(f)}(v)\), so \(\sigma(v)=v\). Moreover, by the first item, \(G\) acts transitively on \(G\circ v\), so for all \(w\in G\circ v\) there is \([A]\in G\) such that \([A]\circ v=w\). As a consequence
\[\sigma(w)=\sigma([A]\circ v)=[A]\circ\sigma(v)=[A]\circ v=w,\]
so both sets are equal. Moreover, all stabilizers of elements in \(G\circ v\) are equal to \(\operatorname{p-Stab}_{\operatorname{Gal}(f)}(G\circ v)\). So for all \(w\in G\circ v\) we get
\[U=\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ v)/\operatorname{Stab} _{\operatorname{Gal}(f)}(w),\]
thus \(U\) acts regularly on \(G\circ v\).
We apply Lemma 32 to our setup with \(X=G\circ v\): we take our group \(G\) for the group \(G\) in Lemma 32 and \(H=U\) as in the previous lemma; then
\[G\cong U=\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ v)/\operatorname {Stab}_{\operatorname{Gal}(f)}(v). \tag{6}\]
With that we can obtain
**Corollary 34**.: _Let \(f\in\mathcal{I}_{K}^{G}\) be \(G\)-invariant, separable and all \(G\)-orbits in \(R_{f}\) are regular. Moreover let \(v\in R_{f}\) be a root of \(f\)._
1. _If_ \(\operatorname{Gal}(f)\) _is abelian, then_ \(G\cong U\leq\operatorname{Gal}(f)\)__
2. _If_ \(\deg(f)=|G|\)_, then_ \(G\cong\operatorname{Gal}(f)\)__
Proof.: We want to show that \(\operatorname{Gal}(f)\) is abelian implies \(\operatorname{Stab}_{\operatorname{Gal}(f)}(v)=\{\operatorname{id}\}\). To see this let \(\sigma\in\operatorname{Stab}_{\operatorname{Gal}(f)}(v)\) and \(w\in R_{f}\). Additionally set \(\tau\in\operatorname{Gal}(f)\) such that \(\tau(v)=w\) (exists since \(f\) is irreducible and thus \(\operatorname{Gal}(f)\) acts transitively on \(R_{f}\)). Then we obtain
\[\sigma(w)=\sigma(\tau(v))=\tau(\sigma(v))=\tau(v)=w,\]
so \(\sigma(w)=w\) for all \(w\in R_{f}\) and thus \(\sigma=\operatorname{id}\) because \(\operatorname{Gal}(f)\) acts faithfully on \(R_{f}\). Consequently \(G\cong\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ v)\leq\operatorname{Gal}(f)\), which finishes the first part.
Now let \(f\) be of degree \(|G|\), so also \(|R_{f}|=|G|\) and \(R_{f}=G\circ v\) for all \(v\in R_{f}\). Therefore \(\operatorname{Gal}(f)=\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ v)\) by definition and \(\operatorname{p-Stab}_{\operatorname{Gal}(f)}(G\circ v)=\{\operatorname{id}\}\) because \(\operatorname{Gal}(f)\) acts faithfully on \(R_{f}\). This shows
\[\operatorname{Gal}(f)=\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ v )\cong G.\]
Further, we get:
**Corollary 35**.: _Let \(f\in\mathcal{I}_{q}\). If \(f\) is \(G\)-invariant for a subgroup \(G\leq\operatorname{PGL}_{2}(\mathbb{F}_{q})\) and \(R_{f}\) only contains regular \(G\)-orbits then \(G\) has to be cyclic and \(|G|\mid\deg(f)\)._
Proof.: It is well-known that \(\operatorname{Gal}(f)\cong C_{\deg(f)}\), where \(C_{n}\) is the cyclic group of order \(n\), so \(\operatorname{Gal}(f)\) is also abelian. If \(f\) is \(G\)-invariant for \(G\leq\operatorname{PGL}_{2}(\mathbb{F}_{q})\), then \(G\) has to be isomorphic to a subgroup of \(C_{\deg(f)}\) by Corollary 34 (1) and therefore has to be cyclic as well. By Lagrange's theorem \(|G|\) divides \(|C_{\deg(f)}|=\deg(f)\).
_Remark 36_.: This result does not hold for quadratic irreducible polynomials over finite fields. Define \(\mathcal{I}_{q}^{n}\) as the set of monic irreducible polynomials over \(\mathbb{F}_{q}\) of degree \(n\). It can be shown that for \(g\in\mathcal{I}_{q}^{2}\) we have
\[\operatorname{PGL}_{2}(\mathbb{F}_{q})\ast g=\mathcal{I}_{q}^{2}\]
and thus
\[|\operatorname{Stab}_{\operatorname{PGL}_{2}(\mathbb{F}_{q})}(g)|=\frac{| \operatorname{PGL}_{2}(\mathbb{F}_{q})|}{|\mathcal{I}_{q}^{2}|}=2(q+1).\]
Since the biggest order of an element in \(\operatorname{PGL}_{2}(\mathbb{F}_{q})\) is \(q+1\) the stabilizer can not be cyclic. In fact, it is dihedral. The reason why Corollary 35 fails is that the set of roots of a quadratic irreducible polynomial \(g\in\mathcal{I}_{q}^{2}\) does not contain regular \(\operatorname{Stab}_{\operatorname{PGL}_{2}(\mathbb{F}_{q})}(g)\)-orbits.
We have enough to prove Theorem 2
Proof.: _(Theorem 2)._ Since \(F^{Q_{G}}\) is separable it has \(\deg(F^{Q_{G}})\) many roots, and together with Corollary 19 we have that all \(G\)-orbits in \(R_{F^{Q_{G}}}\) are regular. Note that \(K=\mathbb{F}_{q}\) is perfect, so \(|R_{r}|=\deg(r)\) for all irreducible polynomials in \(\mathbb{F}_{q}[x]\). With Theorems 17 and 22 we know that there exists an irreducible polynomial \(r\in\mathcal{I}_{K}^{G}\) such that \(F^{Q_{G}}=\prod G\ast r\). Observe that \(r\) is \(\operatorname{Stab}_{G}(r)\)-invariant for the subgroup \(\operatorname{Stab}_{G}(r)\leq G\), and the \(\operatorname{Stab}_{G}(r)\)-orbits in \(R_{r}\) are regular because the \(G\)-orbits in \(R_{F^{Q_{G}}}\) are regular. Consequently \(\operatorname{Stab}_{G}(r)\) has to be cyclic by Corollary 35 and with Corollary 23 we obtain
\[\deg(r)=|\operatorname{Stab}_{G}(r)|\cdot\deg(F)\leq\mu_{G}\cdot\deg(F)\]
and
\[|G\ast r|=\frac{|G|}{|\operatorname{Stab}_{G}(r)|}\geq\frac{|G|}{\mu_{G}}.\]
**Corollary 37**.: _Let \(G\leq\operatorname{PGL}_{2}(\mathbb{F}_{q})\) be non-cyclic and \(Q_{G}\in\mathbb{F}_{q}(x)\) a quotient map for \(G\). Then \(F^{Q_{G}}\) is reducible for all \(F\in\mathbb{F}_{q}[x]\)._
Proof.: Note that "\(F^{Q_{G}}\) is irreducible" implies "\(F\) is irreducible", so we can focus on \(F\) being irreducible. If \(F\in\mathcal{I}_{K}\setminus\mathcal{NC}^{Q_{G}}\), then \(F^{Q_{G}}\) has at least \(|G|/\mu_{G}\) irreducible factors by Theorem 2 and \(|G|/\mu_{G}>1\) if \(G\) is non-cyclic. If \(F\in\mathcal{NC}^{Q_{G}}\), then
\[F^{Q_{G}}=\prod(G\ast r)^{k}\]
for \(r\in\mathcal{I}_{K}^{G}\) and \(k>1\).
## 4 Examples of Invariant Polynomials
In this section we show how our results apply to specific subgroups of \(\mathrm{PGL}_{2}(K)\) where \(\mathrm{char}(K)>0\).
### Unipotent Subgroups
Consider a field \(K\) with \(\mathrm{char}(K)=p>0\) and \(q=p^{l}\) for \(l>0\). Moreover assume \(\mathbb{F}_{q}\subseteq K\) and let \(V\leq_{q}K\) be an \(\mathbb{F}_{q}\)-subspace of \(K\) of dimension \(n\in\mathbb{N}\setminus\{0\}\). For the subspace \(V\) we define
\[\widetilde{V}:=\left\{\left[\left(\begin{array}{cc}1&v\\ 0&1\end{array}\right)\right]:v\in V\right\}\leq\mathrm{PGL}_{2}(K).\]
Observe that \(\widetilde{V}\cong\mathbb{F}_{q}^{n}\) as groups, so \(\widetilde{V}\) is abelian and every non-trivial element \([A]\in\widetilde{V}\) has order \(p\). Additionally, \(\widetilde{V}\subseteq\mathrm{Stab}_{\mathrm{PGL}_{2}(K)}(\infty)\), so \(\mathcal{NR}_{K}^{\widetilde{V}}\) is just the set of monic polynomials over \(K\). A quotient map is the subspace polynomial associated with \(V\) (see [4, §10])
\[Q_{\widetilde{V}}(x)=\prod_{v\in V}(x-v). \tag{7}\]
The set \(P_{\widetilde{V}}\) only contains \(\infty\), so \(\mathcal{P}_{\widetilde{V}}=\varnothing=\mathcal{NC}^{Q_{\widetilde{V}}}\). This makes the classification of \(\widetilde{V}\)-invariant polynomials especially nice:
**Corollary 38**.: _For every monic \(\widetilde{V}\)-invariant polynomial \(f\in\mathcal{NR}_{K}^{\widetilde{V}}\) there exists a unique monic polynomial \(F\in K[x]\) such that_
\[f(x)=F\left(\prod_{v\in V}(x-v)\right).\]
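A tiny concrete instance of this corollary (our own check, assuming SymPy): take \(K=\mathbb{F}_{2}\) and \(V=\mathbb{F}_{2}\), so that \(Q_{V}(x)=x(x-1)=x^{2}+x\). The polynomial \(f=x^{4}+x^{2}+1\) is invariant under \(x\mapsto x+1\), and the corresponding \(F\) is \(F(y)=y^{2}+1\).

```python
# Our own check of Corollary 38 for K = F_2, V = F_2, Q_V(x) = x^2 + x (assumes SymPy).
from sympy import symbols, Poly

x = symbols('x')
f = Poly(x**4 + x**2 + 1, x, modulus=2)
print(Poly((x + 1)**4 + (x + 1)**2 + 1, x, modulus=2) == f)   # True: f(x+1) = f(x)
print(Poly((x**2 + x)**2 + 1, x, modulus=2) == f)             # True: f = F(Q_V) for F(y) = y^2 + 1
```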
_Remark 39_.: This result is already known, see [23, Theorem 2.5.]. There, \(\widetilde{V}\)-invariant polynomials are called \(V\)-translation invariant polynomials since
\[[A]*f(x)=f(x+v)\]
for
\[A=\left(\begin{array}{cc}1&v\\ 0&1\end{array}\right).\]
Even though [23, Theorem 2.5] is only stated for finite fields, the assumption that \(K\) is finite is not used in the proof at all, so the result also holds if \(K\) is infinite. We have given an alternative proof of this result.
Next we look at the factorization of \(F(Q_{\widetilde{V}}(x))\).
**Lemma 40**.: _Let \(F\in\mathcal{I}_{K}\), then we obtain:_
1. _All irreducible factors of_ \(F(Q_{\widetilde{V}}(x))\) _have the same stabilizer in_ \(\widetilde{V}\)_, i.e. there is_ \(W\leq V\) _such that all irreducible factors of_ \(F(Q_{\widetilde{V}}(x))\) _are_ \(\widetilde{W}\)_-invariant._
2. _Let_ \(F(Q_{\widetilde{V}}(x))=\prod(\widetilde{V}*r)\) _for an_ \(r\in\mathcal{I}_{K}\) _and_ \(\operatorname{Stab}_{\widetilde{V}}(r)=\widetilde{W}\) _for_ \(W\leq V\)_. Moreover let_ \(v_{1},\ldots,v_{k}\in V\) _be a complete set of representatives for_ \(V/W\)_, then_ \[F(Q_{\widetilde{V}}(x))=\prod_{i=1}^{k}r(x+v_{i}).\] (8)
Proof.: By Theorems 17 and 22 we know that \(F(Q_{\widetilde{V}}(x))=\prod(\widetilde{V}*r)\) for an \(r\in\mathcal{I}_{K}^{\widetilde{V}}=\mathcal{I}_{K}\). All elements in the same orbit have conjugate stabilizers. Since \(\widetilde{V}\) is abelian every subgroup is normal, thus \(\operatorname{Stab}_{\widetilde{V}}(t)=\operatorname{Stab}_{\widetilde{V}}(r)\) for all \(t\in\widetilde{V}*r\), so the first part is proved.
For the second item notice that for \(v,u\in V\) we have
\[r(x+v)=r(x+u)\Leftrightarrow r(x)=r(x+(u-v))\Leftrightarrow u-v\in W,\]
hence all \(r(x+v_{i})\) are different irreducible polynomials for \(v_{1},\ldots,v_{k}\) a complete set of representatives of \(V/W\). Since \(F(Q_{\widetilde{V}}(x))\) has exactly \(|V|/|W|\) irreducible factors by Corollary 23 equation (8) follows.
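The following is a quick machine check of (8) (our own sketch, assuming SymPy): take \(K=\mathbb{F}_{3}\), \(V=\mathbb{F}_{3}\), so \(Q_{\widetilde{V}}(x)=x^{3}-x\), and \(F=x^{2}+1\), which is irreducible over \(\mathbb{F}_{3}\).

```python
# Our own check of equation (8) for K = F_3, V = F_3, F = x^2 + 1 (assumes SymPy).
from sympy import symbols, expand, factor_list

x = symbols('x')
p = 3
FQ = expand((x**p - x)**2 + 1)       # F(Q_V(x)) = (x^3 - x)^2 + 1
print(factor_list(FQ, modulus=p))
# expected: the three translates r(x), r(x+1), r(x+2) of a single quadratic r
```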
Note that Corollary 4 is an immediate consequence of Theorem 2.
### Borel-Subgroup
Here we consider the _Borel_-subgroup of \(\operatorname{PGL}_{2}(q)\) in fields \(K\) with \(\mathbb{F}_{q}\subseteq K\). This group is defined as
\[B(q)=\left\{\left[\left(\begin{array}{cc}a&b\\ 0&1\end{array}\right)\right]:a\in\mathbb{F}_{q}^{*},b\in\mathbb{F}_{q}\right\}.\]
The transformation with \([A]\in B(q)\) looks like
\[[A]*f(x)=a^{-\deg(f)}\cdot f(ax+b)\]
for
\[A=\left(\begin{array}{cc}a&b\\ 0&1\end{array}\right). \tag{9}\]
The group can be seen as
\[B(q)\cong\mathbb{F}_{q}\rtimes\mathbb{F}_{q}^{*}\]
where the multiplication is defined as
\[(b,a)\cdot(d,c)=(b+ad,ac)\]
and for \(A\in\operatorname{GL}_{2}(K)\) as in (9) we have
\[\operatorname{ord}([A])=\begin{cases}\operatorname{ord}_{\mathbb{F}_{q}^{*}}( a),&\text{ if }a\neq 1\\ p,&\text{ if }a=1\text{ and }b\neq 0\end{cases}\]
Moreover \(B(q)=\operatorname{Stab}_{\operatorname{PGL}_{2}(q)}(\infty)\), so \(\mathcal{NR}_{K}^{B(q)}\) is the set of monic polynomials in \(K[x]\). We calculate a quotient map for \(B(q)\) using
**Lemma 41** ([4, Theorem 3.10]).: _Let \(G\leq\operatorname{PGL}_{2}(K)\) be a finite subgroup and for \(v\in\overline{K}\) let_
\[g_{v}(x):=\prod_{u\in G\circ v}(x-u)^{m_{v}}\text{ and }h_{\infty}(x)=\prod_{u\in G \circ\infty\setminus\{\infty\}}(x-u)^{m_{\infty}},\]
_where \(m_{u}:=|\operatorname{Stab}_{G}(u)|\) for \(u\in\overline{K}\cup\{\infty\}\). Then there is \(w\in\overline{K}\) such that_
\[Q_{G}(x):=\frac{g_{v}(x)}{h_{\infty}(x)}+w\in K(x)\]
_is a quotient map for \(G\). Conversely, if there is \(w\in\overline{K}\) with \(\frac{g_{v}(x)}{h_{\infty}(x)}+w\in K(x)\), then \(Q(x):=\frac{g_{v}(x)}{h_{\infty}(x)}+w\) is a quotient map for \(G\)._
Let \(v\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\), then \(B(q)\circ v=\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) because \(\{1,v\}\) is a \(\mathbb{F}_{q}\) basis of \(\mathbb{F}_{q^{2}}\), thus
\[\{a\cdot v+b|a\in\mathbb{F}_{q}^{*},b\in\mathbb{F}_{q}\}=\mathbb{F}_{q^{2}} \setminus\mathbb{F}_{q}.\]
Hence, a quotient map is given by
\[Q_{B(q)}(x) =\prod_{w\in B(q)\circ v}(x-w)^{|\operatorname{Stab}_{B(q)}(v)|}= \prod_{w\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}}(x-w)^{|\operatorname{ Stab}_{B(q)}(v)|}\] \[=\prod_{g\in\mathcal{I}_{q}^{2}}g(x).\]
Recall that \(\mathcal{I}_{q}^{2}\) is the set of monic irreducible polynomials of degree \(2\) over \(\mathbb{F}_{q}\). That \(|\operatorname{Stab}_{B(q)}(v)|=1\) holds is a consequence of \(av+b=v\) only having solutions \(v\in\mathbb{F}_{q}\cup\{\infty\}\) if \((a,b)\neq(1,0)\).
If \(q\neq 2\) then \(P_{B(q)}\) is equal to \(\mathbb{F}_{q}\cup\{\infty\}\) because \(A\) as in (9) fixes \(-b/(a-1)\). For \(q=2\) we have \(B(2)\cong\mathbb{F}_{2}\) and \(P_{B(2)}=\{\infty\}\); so this case belongs to the previous example. The group \(B(q)\) acts transitively on \(P_{B(q)}\setminus\{\infty\}=\mathbb{F}_{q}\) because, for fixed \(c\in\mathbb{F}_{q}\) and arbitrary \(b\in\mathbb{F}_{q}\), we can take the matrix
\[B=\left(\begin{array}{cc}1&b-c\\ 0&1\end{array}\right)\]
and we see \([B]\circ c=c+(b-c)=b\). Consequently, \(B(q)\) also acts transitively on \(\mathcal{P}_{B(q)}\), which consists of the polynomials of the form \(x-c\) for \(c\in\mathbb{F}_{q}\). Hence there is a monic irreducible polynomial \(F\) of degree \(1\) (so \(F=x-\alpha\) for \(\alpha\in K\)) and an exponent \(k>1\) such that
\[\left(\prod_{g\in\mathcal{I}_{q}^{2}}g(x)\right)-\alpha=F^{Q_{B(q)}}(x)=\left( \prod_{v\in\mathbb{F}_{q}}(x-v)\right)^{k}=(x^{q}-x)^{k}.\]
We can deduce the exponent \(k\) by comparing the degrees of the polynomials on both sides of the equality. The left side has degree \(2\cdot N_{q}(2)=2\cdot(\frac{1}{2}(q^{2}-q))=q^{2}-q=q(q-1)\), while the right side has degree \(qk\), so \(k=q-1\). To obtain \(\alpha\) we calculate
\[(x^{q}-x)^{q-1}=\frac{x^{q^{2}}-x^{q}}{x^{q}-x}=\frac{x^{q^{2}}-x}{x^{q}-x}- \frac{x^{q}-x}{x^{q}-x}=\prod_{g\in\mathcal{I}_{q}^{2}}g(x)-1\]
so \(\alpha=1\) and thus
\[\left(\prod_{g\in\mathcal{I}_{q}^{2}}g(x)\right)-1=(x^{q}-x)^{q-1}.\]
Hence \((x-1)\in\mathcal{NC}^{Q_{B(q)}}\) and \(\delta_{Q_{B(q)}}(x-1)=\mathcal{P}_{B(q)}\), so \(\mathcal{NC}^{Q_{B(q)}}=\{x-1\}\) by Lemma 29. With this we can characterize all \(B(q)\)-invariant polynomials as follows:
**Corollary 42**.: _For every monic \(B(q)\)-invariant polynomial \(f\in\mathcal{N}\mathcal{R}_{K}^{B(q)}\) there exist a unique monic polynomial \(F\in K[x]\) and a unique \(m\in\mathbb{N}\) with \(0\leq m<q-1\) such that_
\[f(x)=(x^{q}-x)^{m}\cdot F\left(\prod_{g\in\mathcal{I}_{q}^{2}}g(x)\right).\]
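The identity \(\left(\prod_{g\in\mathcal{I}_{q}^{2}}g(x)\right)-1=(x^{q}-x)^{q-1}\) derived above is easy to sanity-check computationally for a small prime \(q\); the sketch below (our own, with \(q=3\)) enumerates the monic irreducible quadratics over \(\mathbb{F}_{q}\) by testing for roots and compares both sides coefficient by coefficient:

```python
# A small sanity check (our own, q = 3) of the identity
#   prod_{g in I_q^2} g(x) = (x^q - x)^(q-1) + 1   over F_q.
q = 3

def mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

def has_root(g):
    # a degree-2 polynomial is irreducible over F_q iff it has no root in F_q
    return any(sum(c * pow(x, i, q) for i, c in enumerate(g)) % q == 0 for x in range(q))

# left-hand side: product of all monic irreducible quadratics x^2 + b x + c
lhs = [1]
for b in range(q):
    for c in range(q):
        g = [c, b, 1]
        if not has_root(g):
            lhs = mul(lhs, g)

# right-hand side: (x^q - x)^(q-1) + 1
rhs = [1]
base = [0, (-1) % q] + [0] * (q - 2) + [1]      # coefficient list of x^q - x
for _ in range(q - 1):
    rhs = mul(rhs, base)
rhs[0] = (rhs[0] + 1) % q

assert lhs == rhs
print("identity verified for q =", q)
```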
_Remark 43_.: The polynomial
\[Q(x)=(x^{q}-x)^{q-1}\]
is another quotient map for \(B(q)\). For \(Q\) the sets \(P_{B(q)}\) and \(\mathcal{P}_{B(q)}\) remain the same (notice that these sets are always the same regardless of which quotient map we choose); only \(\mathcal{NC}^{Q}=\{x\}\) is different. Therefore we can reformulate the previous Corollary in the following way:
For every monic \(B(q)\)-invariant polynomial \(f\in\mathcal{N}\mathcal{R}_{K}^{B(q)}\) there exist a unique monic polynomial \(F\in K[x]\) and a unique \(m\in\mathbb{N}\) with \(0\leq m<q-1\) such that
\[f(x)=(x^{q}-x)^{m}\cdot F\left((x^{q}-x)^{q-1}\right)\]
Changing the quotient maps for \(B(q)\) in the representation of \(B(q)\)-invariant polynomials is like changing the basis of a vector space. The polynomials \(F\) are, in this analogy, like the coefficients of the vectors written as the linear combination of the basis elements.
The factorization over finite \(K\) can be explained with Theorem 2 again:
**Corollary 44**.: _Let \(K=\mathbb{F}_{q^{s}}\) and \(F\in\mathcal{I}_{q^{s}}\) for an \(s\in\mathbb{N}\setminus\{0\}\) and \(q=p^{n}\). If \(F\neq x-1\) then \(F^{Q_{B(q)}}\) has at least_
1. \(q\) _irreducible factors if_ \(q\) _is not prime, i.e._ \(n>1\)_, or_
2. \(q-1\) _irreducible factors if_ \(q\) _is prime, i.e._ \(n=1\)__
_Every such factor has a cyclic stabilizer in \(B(q)\) and thus has degree at most \((q-1)\cdot\deg(F)\) if \(q\) is not prime and \(q\cdot\deg(F)\) if \(q\) is prime._
### Projective General Linear Groups
Let \(\mathrm{char}(K)=p>0\) and assume that \(\mathbb{F}_{q}\subseteq K\) for \(q\) a power of \(p\), then \(G:=\mathrm{PGL}_{2}(\mathbb{F}_{q})\leq\mathrm{PGL}_{2}(K)\). First of all we need to calculate a quotient map for \(G\). As shown in [4, Example 3.12]
\[Q_{G}(x)=\frac{\prod_{r\in\mathcal{I}_{q}^{3}}r(x)}{(\prod_{h\in\mathcal{I}_{q }^{1}}h(x))^{(q-1)q}}=\frac{\prod_{r\in\mathcal{I}_{q}^{3}}r(x)}{(x^{q}-x)^{(q- 1)q}} \tag{10}\]
is a quotient map for \(G\) over every field that contains \(\mathbb{F}_{q}\), so also over \(K\). The sets \(\mathcal{I}_{q}^{1}\) and \(\mathcal{I}_{q}^{3}\) are the sets of monic irreducible polynomials in \(\mathbb{F}_{q}[x]\) of degree \(1\) and \(3\) respectively. We want to determine \(P_{G},\mathcal{P}_{G}\) and \(\mathcal{NC}^{Q_{G}}\). By Lemma 28 we know that \(P_{G}\subseteq\mathbb{F}_{q^{2}}\cup\{\infty\}\) since the equation
\[[A]\circ v=v\]
for \([A]\in\mathrm{PGL}_{2}(q)\) is, in essence, a polynomial equation over \(\mathbb{F}_{q}\), hence all solutions are algebraic over \(\mathbb{F}_{q}\) (except \(\infty\)) and thus \([\mathbb{F}_{q}(v):\mathbb{F}_{q}]\leq 2\). Indeed, \(P_{G}=\mathbb{F}_{q^{2}}\cup\{\infty\}\) since for \(a\in\mathbb{F}_{q}\) and \(v\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) we have
\[G\circ a =\mathbb{F}_{q}\cup\{\infty\}\] \[G\circ v =\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}.\]
Therefore \(\mathcal{NR}_{K}^{G}\) consists of monic polynomials in \(K[x]\) with no roots in \(\mathbb{F}_{q}\). The set \(\mathcal{P}_{G}\) contains minimal polynomials of elements in \(P_{G}\setminus(G\circ\infty)=\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\), thus we have two cases:
1. If \(\mathbb{F}_{q^{2}}\not\subseteq K\), then \(\mathcal{P}_{G}=\mathcal{I}_{q}^{2}\) and every \(g\in\mathcal{I}_{q}^{2}\) is also irreducible over \(K[x]\)
2. If \(\mathbb{F}_{q^{2}}\subseteq K\), then \[\mathcal{P}_{G}=\{x-v|v\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\}.\]
Since \(B(q)\subseteq G\) and \(B(q)\) acts transitively on \(\mathcal{P}_{G}\), so does \(G\), and thus \(G*g=\mathcal{P}_{G}\) for all \(g\in\mathcal{P}_{G}\). Since \(\delta_{Q_{G}}(\mathcal{NC}^{Q_{G}})\subseteq\mathcal{P}_{G}\) we are looking for an irreducible polynomial \(F\in\mathcal{I}_{K}\) such that
\[F^{Q_{G}}=(\prod G*h)^{k}=(\prod\mathcal{P}_{G})^{k}=(\prod_{g\in\mathcal{I}_ {q}^{2}}g)^{k}\]
for an \(h\in\mathcal{P}_{G}\) and \(k>1\). Looking back at Example 3.12 in [4] gives \(F=x+1\) and \(k=q+1\), thus \(\mathcal{NC}^{Q_{G}}=\{x+1\}\). With Theorem 30 we obtain
**Corollary 45**.: _For every monic \(\mathrm{PGL}_{2}(q)\)-invariant polynomial \(f\in\mathcal{N}\mathcal{R}_{K}^{G}\) there exist a unique monic polynomial \(F\in K[x]\) and a unique \(m\in\mathbb{N}\) with \(0\leq m<q+1\) such that_
\[f(x)=\left(\prod_{g\in\mathcal{I}_{q}^{2}}g(x)\right)^{m}\cdot\left((x^{q}-x) ^{(q-1)q\cdot\deg(F)}F\left(\frac{\prod_{r\in\mathcal{I}_{q}^{3}}r(x)}{(x^{q} -x)^{(q-1)q}}\right)\right).\]
For the factorization over finite fields we briefly recall the three types of conjugacy classes of cyclic subgroups of \(\mathrm{PGL}_{2}(\mathbb{F}_{q})\) (for reference see [4, Proposition 11.1] or [16, §8]).
Every \([A]\in\mathrm{PGL}_{2}(q)\) is contained in one of the following three types of conjugacy classes:
1. \([A]\) fixes a unique element in \(\mathbb{F}_{q}\cup\{\infty\}\), i.e. \([A]\circ v=v\) for a unique \(v\in\mathbb{F}_{q}\cup\{\infty\}\)
2. \([A]\) fixes two different elements in \(\mathbb{F}_{q}\cup\{\infty\}\) under Möbius transformation
3. \([A]\) fixes \(\lambda,\lambda^{q}\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) under Möbius transformation
We then say that \([A]\) is of type \(1,2\) or \(3\) respectively. If \([A]\) is of type \(1\) then \(\operatorname{ord}([A])=p\) and \(p\) is the prime dividing \(q\). Every element \([B]\) of type \(2\) has an order dividing \(q-1\) and if \([C]\) is of type \(3\) then \(\operatorname{ord}([C])|q+1\). So \(\mu_{\operatorname{PGL}_{2}(\mathbb{F}_{q})}=q+1\) and we obtain
**Corollary 46**.: _Let \(K=\mathbb{F}_{q^{s}}\) for \(s>0\) and \(F\in\mathcal{I}_{q^{s}}\) and \(G=\operatorname{PGL}_{2}(\mathbb{F}_{q})\). If \(F\neq x+1\) then \(F^{Q_{G}}\) has at least \(q^{2}-q\) irreducible factors and every such factor has degree at most \((q+1)\cdot\deg(F)\)._
Proof.: Follows immediately from Theorem 2 together with \(\mu_{G}=q+1\) and \(|G|=q^{3}-q\).
## Acknowledgements
I want to thank Alev Topuzoglu and Henning Stichtenoth for their helpful remarks and the advice they have given me. I am especially grateful to Henning Stichtenoth for helping me with some technicalities of Section 2.1 and for making me aware of the paper [27].
I am also very grateful for all of the invaluable help my supervisor Gohar Kyureghyan has given me. Without her, this paper would probably not exist.
|
2305.05630 | Accurate Real-Time Estimation of 2-Dimensional Direction of Arrival
using a 3-Microphone Array | This paper presents a method for real-time estimation of 2-dimensional
direction of arrival (2D-DOA) of one or more sound sources using a nonlinear
array of three microphones. 2D-DOA is estimated employing frame-level time
difference of arrival (TDOA) measurements. Unlike conventional methods, which
infer location parameters from TDOAs using a theoretical model, we propose a
more practical approach based on supervised learning. The proposed model
employs nearest neighbor search (NNS) applied to a spherical Fibonacci lattice
consisting of TDOA to 2D-DOA mappings learned directly in the field. Filtering
and clustering post-processors are also introduced for improved source
detection and localization robustness. | Anton Kovalyov, Kashyap Patel, Issa Panahi | 2023-05-09T17:20:22Z | http://arxiv.org/abs/2305.05630v1 | # Accurate Real-Time Estimation of 2-Dimensional Direction of Arrival using a 3-Microphone Array
###### Abstract
This paper presents a method for real-time estimation of 2-dimensional direction of arrival (2D-DOA) of one or more sound sources using a nonlinear 3-microphone array. 2D-DOA is estimated employing frame-level time difference of arrival (TDOA) measurements. Unlike conventional methods, which infer location parameters from TDOAs using a theoretical model, we propose a more practical approach based on supervised learning. The proposed model employs nearest neighbor search (NNS) applied to a spherical Fibonacci lattice consisting of TDOA to 2D-DOA mappings learned in the field. Filtering and clustering post-processors are also introduced for improving source detection and localization robustness.
DOA, TDOA, array, nearest neighbor, clustering
## I Introduction
Time difference of arrival (TDOA)-based sound source localization (SSL) is a well-established approach in the literature. When a source is in the far field of the array, or the number of microphones is less than four, 3-dimensional (3D) TDOA-based SSL is not possible, and direction of arrival (DOA) is estimated instead. Practical applications include beamforming [1], blind source separation [2], and providing a visual DOA indicator for people with spatial hearing loss (SHL) [3, 4, 5]. In this work, we are especially interested in the latter.
Anshuman et al. [3] proposed a smartphone application for providing a visual azimuthal DOA indicator of a speech source for people with SHL. The use of a smartphone for this purpose is especially convenient due to its widespread availability. However, DOA is estimated using only two microphones, resulting in what is known as the _front-back_ ambiguity, which is a common issue in linear arrays. Nowadays many smartphones have an array of at least three microphones. When the array is nonlinear, not only do we avoid front-back ambiguity, but we also allow estimating both azimuth and elevation angles of a source, known as 2D-DOA estimation. Tokgoz et al. [4, 5] proposed adaptations of the work in [3] for \(L\)-shaped arrays of three microphones. However, these methods are constrained to the detection and localization of an individual speech source. Additionally, no practical scheme is proposed to compensate for discrepancies in measured and theoretical TDOAs, which are common in practical systems for reasons such as erroneous array calibration and varying phase response of microphones.
Motivated by the above observations, we propose a practical method for real-time 2D-DOA estimation of multiple sound sources using a nonlinear 3-microphone array. Instead of a theoretical model, TDOA to 2D-DOA mappings are learned in the field and inference is performed applying nearest neighbor search (NNS) to a spherical Fibonacci lattice containing the learned mappings. Furthermore, filtering and clustering post-processors, designed for reliable detection and localization of one or more sources, are also introduced. As shown in Fig. 1, the proposed method was implemented on a smartphone.
## II Problem Formulation
Let us consider an array of three microphones in a noisy and reverberant environment with \(C\) acoustic sources. Let \(\mathbf{m}_{i}\) and \(\mathbf{s}_{c}\) denote the \(i\)-th microphone and \(c\)-th source 3D positions, for \(i\in\{1,2,3\}\) and \(c\in\{1,2,\dots,C\}\). The signal captured by the \(i\)-th microphone is modeled by
\[\mathbf{y}_{i}=\mathbf{v}_{i}+\sum_{c=1}^{C}\mathbf{x}_{ic}\,, \tag{1}\]
Fig. 1: The proposed method deployed on a smartphone application to provide a visual 2D-DOA indicator (orthographic projection of the source position on a hemisphere above the device) for people with SHL.
where \(\mathbf{v}_{i}\) is incoherent noise and \(\mathbf{x}_{ic}\) is the reverberant signal of the \(c\)-th source. Assuming sufficient angular spacing, the problem is formulated as real-time detection of the \(C\) sources and estimation of their azimuth and elevation angles with respect to the microphone array.
## III Methodology
Let us segment all \(\mathbf{y}_{i}\) into \(K\) overlapping frames of length \(L\). The proposed method operates at frame-level in a causal manner. The processing pipeline consists of four stages: (1) TDOA estimation; (2) mapping measured TDOAs to 2D-DOA; (3) filtering unreliable estimates; and (4) clustering.
### _TDOA Estimation_
Let \(\mathbf{y}_{ik}\) be the \(k\)-th segment of \(\mathbf{y}_{i}\), for \(k=1,2,\ldots,K\). Let \(V\) be the subset of frame indices for which exactly one dominant source is present at a time. It is assumed that \(V\) is not empty and it includes frame indices corresponding to every source in the mixture, which, as long as the segment length is not made too long, are reasonable assumptions, especially for sparse signals such as speech. Let \(\mathbf{s}(v)\), denote the 3D position of the dominant source at frame index \(v\in V\). The TDOA in meters of the direct path signal originating at \(\mathbf{s}(v)\) when received between \(\mathbf{m}_{i}\) and \(\mathbf{m}_{j}\), for \(j\in\{1,2,3\}\) such that \(j\neq i\), is given by
\[r_{ij}(v)=||\mathbf{s}(v)-\mathbf{m}_{i}||-||\mathbf{s}(v)-\mathbf{m}_{j}||\;. \tag{2}\]
Assuming the signal propagation speed is known, an estimate of \(r_{ij}(v)\) can be found by the peak of some weighted cross-correlation function between \(\mathbf{y}_{iv}\) and \(\mathbf{y}_{jv}\)[6]. Here we use modified Cross-Power Spectrum Phase [7] (mCPSP) paired with quadratic interpolation [8] (QI) for improved estimation resolution. TDOAs are estimated for every \(k\)-th frame. Unreliable estimates are rejected at the filtering stage.
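As an illustration of this step, the sketch below implements a plain phase-transform (CPSP/GCC-PHAT-style) cross-correlation with parabolic peak interpolation; the modified CPSP weighting of [7] and the exact QI scheme of [8] may differ in detail, and the frame length, sampling rate, and sign convention used here are only illustrative:

```python
import numpy as np

def cpsp_tdoa(yi, yj, fs, max_lag, c=343.0):
    """Frame-level TDOA estimate in meters between two microphone frames.

    Plain phase-transform (CPSP/GCC-PHAT-style) weighting with parabolic peak
    interpolation; positive lag means yj lags yi.  A simplified stand-in for
    the mCPSP + QI combination used in the paper.
    """
    n = 2 * len(yi)                                  # zero-pad to limit circular wrap-around
    Yi, Yj = np.fft.rfft(yi, n), np.fft.rfft(yj, n)
    cross = np.conj(Yi) * Yj
    cross /= np.abs(cross) + 1e-12                   # phase transform weighting
    r = np.fft.irfft(cross, n)
    r = np.concatenate((r[-max_lag:], r[:max_lag + 1]))   # lags -max_lag .. +max_lag
    k = int(np.argmax(r))
    delta = 0.0
    if 0 < k < len(r) - 1:                           # quadratic interpolation around the peak
        denom = r[k - 1] - 2 * r[k] + r[k + 1]
        if denom != 0:
            delta = 0.5 * (r[k - 1] - r[k + 1]) / denom
    return ((k - max_lag) + delta) / fs * c

# toy usage: a 3-sample delayed copy of a white-noise frame at 48 kHz
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
y = np.concatenate((np.zeros(3), x[:-3]))            # y lags x by 3 samples
print(f"estimated TDOA = {cpsp_tdoa(x, y, 48000, 16) * 100:.2f} cm "
      f"(ideal = {3 / 48000 * 343 * 100:.2f} cm)")
```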
### _Accurate mapping of TDOAs to 2D-DOA_
Following the formulation of the previous section, we consider frame-level localization of a single source. Hence, for simplicity, we drop the source and frame indices notation unless an ambiguity arises. Let \(\mathbf{s}\), \(r_{ij}\), and \(\hat{r}_{ij}\) denote the 3D source position, the true TDOA in meters and its corresponding estimate, respectively. To have better insight into the problem geometry let us first consider a simple closed form solution (CF) that maps TDOAs to 2D-DOA. To reduce degrees of freedom, we fix \(\mathbf{m}_{1}\) at the origin, i.e., \([0,0,0]^{T}\), \(\mathbf{m}_{2}\) at \([b,0,0]^{T}\), and \(\mathbf{m}_{3}\) at \([c_{x},c_{y},0]^{T}\). Let \(r\) be the distance from source to origin. When \(r\) is large, i.e., the source is at far field, its actual value has negligible impact on TDOA. Thus, we reduce another degree of freedom by fixing \(r\) to some large value. Fig. 2 illustrates the problem geometry. The parameters of interest are \(\theta\) and \(\phi\), which are the azimuth and elevation angles of the source, respectively. For simplicity, let us reparametrize the problem into finding the source 3D position \(\mathbf{s}\) on a sphere of far-field radius \(r\) centered at the origin, where \(\theta\) and \(\phi\) can be inferred from \(\mathbf{s}\) using the relationship in Fig. 2. Let \(s_{x}\), \(s_{y}\), and \(s_{z}\) represent the corresponding \(xyz\) coordinates of \(\mathbf{s}\). A CF mapping linearly independent TDOAs \(r_{12}\) and \(r_{13}\) to \(\mathbf{s}\) is given by
\[\begin{split} s_{x}&=\frac{b^{2}+2r_{12}r-r_{12}^{2 }}{2b}\\ s_{y}&=\frac{c_{x}^{2}+c_{y}^{2}-r_{13}^{2}+2r_{13}r-2 c_{x}s_{x}}{2c_{y}}\\ s_{z}&=\pm\sqrt{r^{2}-s_{x}^{2}-s_{y}^{2}}\;.\end{split} \tag{3}\]
We notice that \(b\) and \(c_{y}\) cannot be 0, meaning that a nonlinear array is needed to estimate 2D-DOA. We further note that \(s_{z}\) can either be negative or positive. Throughout this work we let \(s_{z}\) be positive. The problem can then be visualized as localizing a source on a hemisphere above the array, which is equivalent to letting \(\theta\in[-\pi,\pi]\) and \(\phi\in[0,\pi/2]\).
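For reference, the CF in (3) can be transcribed directly into code; the sketch below is our own, with a placeholder array geometry and far-field radius, and it recovers the azimuth and elevation of a synthetic far-field source from its exact TDOAs:

```python
import numpy as np

def cf_2d_doa(r12, r13, b, cx, cy, r=10.0):
    """Closed-form mapping (3) from two TDOAs (in meters) to azimuth/elevation
    in degrees, taking the positive branch for sz (source above the array)."""
    sx = (b**2 + 2.0 * r12 * r - r12**2) / (2.0 * b)
    sy = (cx**2 + cy**2 - r13**2 + 2.0 * r13 * r - 2.0 * cx * sx) / (2.0 * cy)
    sz = np.sqrt(max(r**2 - sx**2 - sy**2, 0.0))     # clip so noise cannot make the radicand negative
    return np.degrees(np.arctan2(sy, sx)), np.degrees(np.arcsin(sz / r))

# placeholder geometry: m1 at the origin, m2 on the x-axis, m3 in the xy-plane
m1, m2, m3 = np.zeros(3), np.array([0.10, 0.0, 0.0]), np.array([0.02, 0.14, 0.0])
theta, phi = np.radians(40.0), np.radians(25.0)
src = 10.0 * np.array([np.cos(theta) * np.cos(phi),
                       np.sin(theta) * np.cos(phi),
                       np.sin(phi)])                  # far-field source at (40 deg, 25 deg)
r12 = np.linalg.norm(src - m1) - np.linalg.norm(src - m2)
r13 = np.linalg.norm(src - m1) - np.linalg.norm(src - m3)
print(cf_2d_doa(r12, r13, b=0.10, cx=0.02, cy=0.14))  # approximately (40.0, 25.0)
```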
If measured TDOAs do not match theoretical values for reasons other than reverberation and noise, e.g., erroneous array calibration and varying phase response of microphones, the CF in (3) may not be accurate. For improved performance, TDOA to 2D-DOA mappings can be learned directly in the field. Consequently, we propose the following supervised learning approach. Let us discretize the search space into
\[S=\{\boldsymbol{\gamma}^{(1)},\boldsymbol{\gamma}^{(2)},\ldots,\boldsymbol{ \gamma}^{(N)}\}\;, \tag{4}\]
where \(\boldsymbol{\gamma}^{(n)}=[\theta^{(n)},\phi^{(n)}]^{T}\), for \(n=1,2,...,N\), groups the \(n\)-th 2D-DOA tuple in the search space. Similarly, let
\[Q=\{\mathbf{q}^{(1)},\mathbf{q}^{(2)},\ldots,\mathbf{q}^{(N)}\}\;, \tag{5}\]
where \(\mathbf{q}^{(n)}=[r_{12}^{(n)},r_{13}^{(n)},r_{23}^{(n)}]^{T}\) groups corresponding TDOA mappings. All combinations are considered for robustness. The TDOA mappings in \(Q\) are collected in a supervised manner offline. During inference, NNS is applied as follows
\[\mathbf{\hat{\gamma}}=\operatorname*{arg\,min}_{\boldsymbol{\gamma}^{(n)}}|| \mathbf{q}^{(n)}-\mathbf{\hat{q}}||^{2}\;, \tag{6}\]
where \(\mathbf{\hat{q}}=[\hat{r}_{12},\hat{r}_{13},\hat{r}_{23}]^{T}\). Mappings are stored in a \(k\)-d tree structure [9, 10], allowing NNS in expected logarithmic time.
For further efficiency, the candidate solutions in \(S\) should be distributed as evenly as possible throughout a hemisphere.
Fig. 2: Problem geometry of 2D-DOA estimation with three microphones.
A simple approach is to discretize \(\theta\) and \(\phi\) by some angular spacing \(\delta=180^{\circ}/u\), where \(u\) is a positive integer, resulting in a point distribution known as the _latitude-longitude_ lattice [11]. This lattice can be visualized as a set of points placed at the intersections of a grid of meridians and parallels (Fig. 3a). The total number of points is \(N=u^{2}+1\), which comes from the number of meridians (\(2u\)) times the number of parallels (\(u/2\)) plus one pole. In this lattice, however, points concentrate around the pole, resulting in noticeable anisotropy. Instead, we apply the spherical Fibonacci point set algorithm [12] for a more uniform point distribution (see Fig. 3b). This lattice is generated by
\[\begin{split}\theta^{(n)}&=\frac{2\pi(n-1)}{\Phi} \\ \phi^{(n)}&=\frac{\pi}{2}-\cos^{-1}\left(1-\frac{2n-1 }{2N}\right)\,,\end{split} \tag{7}\]
where \(\Phi=(1+\sqrt{5})/2\) is the golden ratio.
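The inference path of this section can be sketched end to end: generate the hemispherical Fibonacci lattice of (7), attach a TDOA triple to every lattice point, and answer queries via a \(k\)-d tree as in (6). Since the field-collected mappings \(Q\) are not available here, the sketch (our own, with a placeholder array geometry) fills the table with ideal far-field TDOAs from (2), which is only a stand-in for the learned mappings:

```python
import numpy as np
from scipy.spatial import cKDTree

N, r_far = 10_000, 10.0
PHI = (1 + np.sqrt(5)) / 2

# Eq. (7): spherical Fibonacci lattice restricted to the hemisphere above the array
n = np.arange(1, N + 1)
theta = 2 * np.pi * (n - 1) / PHI
phi = np.pi / 2 - np.arccos(1 - (2 * n - 1) / (2 * N))

# Cartesian lattice points on a hemisphere of far-field radius r_far
S = r_far * np.stack([np.cos(theta) * np.cos(phi),
                      np.sin(theta) * np.cos(phi),
                      np.sin(phi)], axis=1)

# placeholder array geometry; in the paper the table Q is learned in the field instead
mics = np.array([[0.0, 0.0, 0.0], [0.10, 0.0, 0.0], [0.02, 0.14, 0.0]])
d = np.linalg.norm(S[:, None, :] - mics[None, :, :], axis=2)      # (N, 3) source-to-mic distances
Q = np.stack([d[:, 0] - d[:, 1], d[:, 0] - d[:, 2], d[:, 1] - d[:, 2]], axis=1)

tree = cKDTree(Q)                                    # Eq. (6) realized as a k-d tree query

def estimate_2d_doa(q_hat):
    err, idx = tree.query(q_hat)                     # expected O(log N) per frame
    az = (np.degrees(theta[idx]) + 180.0) % 360.0 - 180.0
    return az, np.degrees(phi[idx]), err

# query with the TDOA triple of one lattice point, perturbed by ~1 mm of noise
q_noisy = Q[1234] + 1e-3 * np.random.default_rng(0).standard_normal(3)
print(estimate_2d_doa(q_noisy))
```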
### _Filtering_
We propose a simple three-step filter that rejects unreliable sets of TDOA measurements, i.e., \(\hat{\mathbf{q}}\) in (6). The purpose is to filter out all measurements computed at frame index \(k\not\in V\). Each sequential step consists of verifying that a measurement satisfies a certain reliability condition. The first two conditions are applied on individual TDOAs only. If any of the estimates does not satisfy a given condition, the entire set is rejected. The first condition verifies that there is acoustic activity at the estimated lag by ensuring that the cross-correlation value of the peak is above a positive threshold \(T_{R}\), as given by
\[R(\ell_{\text{max}})>T_{R}\, \tag{8}\]
where \(R(\ell)\) is the weighted cross-correlation function given by mCPSP at lag index \(\ell\), and \(\ell_{\text{max}}\) is the lag of the peak. In the second step, we quantify how dominant the cross-correlation peak is compared to other solution candidates by computing a confidence level \(\beta\in[0,1]\), given by
\[\begin{split}\beta&=1-\frac{\eta}{R(\ell_{\text{ max}})}\\ \eta&=\frac{1}{|\mathcal{L}|}\sum_{\ell\in\mathcal{ L}}\max\left\{0,R(\ell)\right\}\,\end{split} \tag{9}\]
where \(\mathcal{L}\) is the set of all plausible lags not including \(\ell_{\text{max}}\). We then verify that
\[\beta>T_{\beta}\, \tag{10}\]
where \(T_{\beta}\) is some threshold. Finally, the third step ensures coherence among TDOAs in \(\hat{\mathbf{q}}\) by verifying that the error of the NNS estimator in (6) is below a threshold \(T_{q}\), as given by
\[||\mathbf{q}(\hat{\boldsymbol{\gamma}})-\hat{\mathbf{q}}||^{2}<T_{q}\, \tag{11}\]
where \(\mathbf{q}(\hat{\boldsymbol{\gamma}})\) is the TDOA mapping of the NNS estimate \(\hat{\boldsymbol{\gamma}}\), i.e., the closest match to \(\hat{\mathbf{q}}\) found by NNS.
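Conditions (8)-(11) translate almost line by line into a small acceptance test; the condensed sketch below (our own) checks a single correlation curve, whereas the paper applies (8)-(10) to each microphone pair separately, and the default thresholds mirror the values quoted later in Section IV:

```python
import numpy as np

def passes_filter(R, peak_idx, q_hat, q_nearest, T_R=1e-2, T_beta=0.5, T_q=5e-5):
    """True if a frame-level measurement survives the three-step filter.

    R         : weighted cross-correlation values over all plausible lags
    peak_idx  : index of the selected peak in R
    q_hat     : measured TDOA triple [r12, r13, r23] (meters)
    q_nearest : TDOA triple of the lattice point returned by NNS
    """
    R = np.asarray(R, dtype=float)
    peak = R[peak_idx]
    if peak <= T_R:                                        # Eq. (8): acoustic activity at the peak
        return False
    others = np.delete(R, peak_idx)                        # all plausible lags except the peak
    beta = 1.0 - np.mean(np.maximum(0.0, others)) / peak   # Eqs. (9)-(10): peak dominance
    if beta <= T_beta:
        return False
    err = np.sum((np.asarray(q_nearest) - np.asarray(q_hat)) ** 2)
    return err < T_q                                       # Eq. (11): coherence via the NNS residual
```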
### _Clustering_
Clustering is used to assign frame-level 2D-DOA estimates to the correct source and combine accumulated measurements in such a way as to improve source detection and localization reliability. For this purpose, we propose Recency and Frequency aware Exponential Filter Clustering (RFEFC). RFEFC is closely based on the exponential filtering concept used in Real-Time Exponential Filter Clustering (RTEFC) [13]. In RTEFC, exponential filtering is employed to update the cluster within minimum distance from a given location measurement. Apart from allowing real-time frame-level processing and low memory and computational overhead, RTEFC also exhibits good tracking capabilities. However, RTEFC does not take into account recency and frequency of data, which are important characteristics in the context of this work.
In RFEFC, a fixed number of clusters \(N_{c}\) is maintained in real-time. Each \(i\)-th cluster, for \(i\in\{1,2,\ldots,N_{c}\}\), consists of a cluster centroid \(\hat{\mathbf{s}}_{i}\) and a cluster confidence level \(\rho_{i}\). Here, \(\hat{\mathbf{s}}_{i}\) represents an estimate of the \(i\)-th 3D source position on a unit radius hemisphere. The use of Cartesian representation is necessary for averaging. \(\rho_{i}\), on the other hand, is a value between \(0\) and \(1\) quantifying the confidence of estimate \(\hat{\mathbf{s}}_{i}\), with \(1\) meaning high confidence. Initially, \(\hat{\mathbf{s}}_{i}=\varnothing\) and \(\rho_{i}=0\) for all \(i\), meaning the clusters are inactive, i.e., no source is detected. Let \(\hat{\mathbf{s}}\) be a frame-level estimate of some source location sampled every \(\Delta t\) seconds. If the estimate was rejected by the filtering process, we let \(\hat{\mathbf{s}}=\varnothing\). RFEFC searches for the cluster with maximum confidence level whose centroid is within some minimum distance \(d_{\text{min}}\) from \(\hat{\mathbf{s}}\). If such a cluster is found, its centroid \(\hat{\mathbf{s}}_{\text{close}}\) is updated using exponential filtering and its confidence level \(\rho_{\text{close}}\) is increased as follows
\[\begin{split}\hat{\mathbf{s}}_{\text{close}}&=\alpha \hat{\mathbf{s}}_{\text{close}}+(1-\alpha)\hat{\mathbf{s}}\\ \rho_{\text{close}}&=\min\left\{1,\rho_{\text{close }}+N_{s}^{-1}\right\}\,,\end{split} \tag{12}\]
where \(N_{s}\) is some positive integer representing the number of consecutive frames needed for \(\rho_{\text{close}}\) to reach \(1\), thus keeping track of the _frequency_ with which a cluster is updated. If a cluster whose centroid is within \(d_{\text{min}}\) distance from \(\hat{\mathbf{s}}\) is not found, RFEFC selects the cluster with the lowest confidence level and updates it as
\[\begin{split}\hat{\mathbf{s}}_{\text{old}}&=\hat{ \mathbf{s}}\\ \rho_{\text{old}}&=N_{s}^{-1}\.\end{split} \tag{13}\]
This update can be interpreted as creating a new cluster from stale data. Finally, to give a sense of _recency_, for every \(i\)-th
Fig. 3: Two different methods for placing \(N=1297\) points on a hemisphere. Latitude-longitude lattice (a) and Fibonacci lattice (b). Orthographic projections centered at the pole.
cluster that was not updated, RFEFC decreases the confidence level, followed by forgetting clusters whose confidence level reaches \(0\). This sequence of operations is given by
\[\begin{split}\rho_{i}&=\max\left\{0,\rho_{i}-\frac{ \Delta t}{T_{\text{win}}}\right\}\\ \hat{\mathbf{s}}_{i}&=\begin{cases}\hat{\mathbf{s}}_{ i}&\text{if }\rho_{i}>0\\ \varnothing&\text{if }\rho_{i}=0\,\end{cases}\end{split} \tag{14}\]
where \(T_{\text{win}}\) is some positive real number controlling the time in seconds it takes RFEFC to forget an inactive cluster.
The introduction of \(\rho_{i}\) in RFEFC allows a simple mechanism to decide if a source is present at an estimated location. Here, a source \(i\) is detected once \(\rho_{i}\) reaches \(1\) and remains above some threshold \(T_{a}\). The memory and computational complexities of RFEFC are linear in \(N_{c}\), thus making RFEFC similarly efficient to RTEFC. Also, due to exponential filtering, RFEFC can in principle track a moving source that produces continuous sound. Finally, unlike RTEFC, which updates the cluster closest to the location estimate, RFEFC updates the cluster with highest confidence level within a specified distance. The purpose of this selection rule is to minimize the likelihood of multiple closely spaced clusters being active for prolonged periods of time due to jitter estimates. In such scenarios, RFEFC would only update the cluster with highest confidence, thus forcing other nearby clusters to be forgotten.
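For concreteness, a minimal single-step version of the RFEFC update rules (12)-(14) can be written as follows; this is our own condensation with the parameter defaults of Section IV, and it omits engineering details such as the detection threshold \(T_{a}\):

```python
import numpy as np

class RFEFC:
    """Minimal single-step sketch of the RFEFC update rules (Eqs. 12-14)."""

    def __init__(self, n_clusters=10, d_min=0.25, alpha=0.75, N_s=5,
                 T_win=5.0, dt=1024 / (2 * 48000)):
        self.centroids = [None] * n_clusters        # cluster centroids on the unit hemisphere
        self.rho = np.zeros(n_clusters)             # cluster confidence levels
        self.d_min, self.alpha, self.N_s = d_min, alpha, N_s
        self.T_win, self.dt = T_win, dt

    def step(self, s_hat):
        """s_hat: frame-level position estimate (3-vector), or None if the frame was rejected."""
        updated = None
        if s_hat is not None:
            s_hat = np.asarray(s_hat, dtype=float)
            # clusters whose centroid lies within d_min of the new estimate
            near = [i for i, c in enumerate(self.centroids)
                    if c is not None and np.linalg.norm(c - s_hat) < self.d_min]
            if near:                                 # pick the most confident of them, Eq. (12)
                i = max(near, key=lambda k: self.rho[k])
                self.centroids[i] = self.alpha * self.centroids[i] + (1 - self.alpha) * s_hat
                self.rho[i] = min(1.0, self.rho[i] + 1.0 / self.N_s)
            else:                                    # overwrite the least confident cluster, Eq. (13)
                i = int(np.argmin(self.rho))
                self.centroids[i] = s_hat.copy()
                self.rho[i] = 1.0 / self.N_s
            updated = i
        for i in range(len(self.centroids)):         # recency decay and forgetting, Eq. (14)
            if i == updated:
                continue
            self.rho[i] = max(0.0, self.rho[i] - self.dt / self.T_win)
            if self.rho[i] == 0.0:
                self.centroids[i] = None
        return [(i, c, self.rho[i]) for i, c in enumerate(self.centroids) if c is not None]
```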
## IV Experiments
Three experiments were conducted in the field to evaluate the performance of the proposed method using the nonlinear 3-microphone array of a Pixel 3 smartphone, shown in Fig. 1. Experiments were conducted in a moderately sized office room with typical office noise.
The data collection, needed to populate \(Q\) in (5), was conducted as follows. The smartphone was placed on a turntable and a speaker was placed on a fixed surface at a distance of 0.6 meters from the turntable and at ten varying elevations ranging uniformly from \(0^{\circ}\) to \(90^{\circ}\). During data collection, the speaker played white noise while the turntable turned at a fixed rate till it made a full revolution, thus covering the entire azimuth range. White noise was the source signal due to its good autocorrelation properties. TDOAs were measured and mapped to known points on a latitude-longitude lattice. Interpolation was applied to generate the Fibonacci lattice in (7).
The parameters were defined as follows. The sampling rate \(f_{s}\) was 48 kHz and a frame length \(L=1024\) with 50% overlap was used. The number of lattice points \(N\) in Section III-B was set to \(10^{4}\). The thresholds \(T_{R}\), \(T_{\beta}\), and \(T_{q}\) in Section III-C were set to 1e-2, 0.5, and 5e-5, respectively. The parameters \(N_{c}\), \(\Delta t\), \(d_{\text{min}}\), \(N_{s}\), \(\alpha\), \(T_{\text{win}}\), and \(T_{a}\) in Section III-D were set to 10, \(L/(2f_{s})\) s, 0.25, 5, 0.75, 5 s, and 0.5, respectively.
In the first experiment, we use collected TDOAs to assess the practical effectiveness of supervised learning vs. simpler CF in mapping TDOAs to 2D-DOA, with the latter being the type of mapping scheme used in [3, 4, 5]. For this purpose, we benchmark 2D-DOA estimation performance of NNS in (6) against its CF counterpart in (3). Localization root mean square error (RMSE\({}_{\text{loc}}\)) is used as the performance metric. RMSE\({}_{\text{loc}}\) measures the localization error on a unit hemisphere over \(M\) trials, as given by
\[\text{RMSE}_{\text{loc}}=\sqrt{\frac{1}{M}\sum_{i=1}^{M}||\mathbf{s}^{(i)}- \hat{\mathbf{s}}^{(i)}||^{2}}\, \tag{15}\]
where \(\mathbf{s}^{(i)}\) is the source position on a unit hemisphere generated during the \(i\)-th trial, and \(\hat{\mathbf{s}}^{(i)}\) is the corresponding estimate. To generate \(\mathbf{s}^{(i)}\) for evaluation purposes, azimuth and elevation angles were drawn independently and uniformly at random from their respective ranges and mapped to Cartesian coordinates. Corresponding TDOAs were interpolated using the collected dataset in the field. Finally, to simulate measurement noise, interpolated TDOAs were corrupted using additive white Gaussian noise (AWGN) with varying standard deviation \(\sigma\in[0.01,10]\) cm. Since CF requires accurate knowledge of microphone array geometry, we consider two variants. In the first variant, simply referred to as CF, the array geometry is measured by hand. In the second variant (CF + Calibration), we use the dataset collected for NNS to calibrate the array applying Levenberg-Marquardt as the optimization algorithm. Fig. 4 reports the results. We note that CF without calibration exhibits somewhat poor performance, which is attributed to its sensitivity to non-precise microphone position estimates. Although introducing calibration improves results considerably, this scheme is still not sufficient to attain the high performance exhibited by NNS, especially at low and moderate noise. As a result, NNS can be used to improve the localization resolution attained by previous work in [3, 4, 5].
In the second experiment, we test the capability of the proposed method to detect and localize two overlapping sound sources. One source was a speaker playing music, which was placed at two-meter distance, \(55^{\circ}\) azimuth, and \(0^{\circ}\) elevation. The other source was a human speaker talking, which stood at half a meter distance, \(145^{\circ}\) azimuth and \(40^{\circ}\) elevation. Results are shown in Fig. 5. The frame-level localization estimates are mostly accurate except for a "ghost" source at \(100^{\circ}\), which we attribute to smoothing in the cross-correlation function caused
Fig. 4: Experiment 1. 2D-DOA estimation performance of NNS vs. CF for varying noise in TDOA measurements.
by multiple dominant sources in a frame. However, filtering removes most outliers. Finally, RFEFC detects the two sources and, as a result of having \(N_{s}>1\), adds an additional layer of filtering by dismissing remaining outliers. We note that, unlike [3, 4, 5], no voice activity detector is used, allowing detection and localization of sources other than speech.
As in RTEFC, the use of exponential filtering in RFEFC suggests that the proposed method may in principle be capable of tracking moving sources producing continuous sound. To verify this claim, in the third and last experiment, we test the ability of the proposed method to track two overlapping sources. The same two-source localization scenario is considered as in the second experiment with the difference that the smartphone is placed on a rotating turntable making a full revolution in 20 seconds. Hence, we expect source positions to form two concentric circles on a hemisphere, with each circle being defined according to the elevation of a respective source. Results in Fig. 6 suggest that the proposed method indeed exhibits excellent tracking capabilities.
## V Conclusion
A method for accurate 2D-DOA estimation using a nonlinear 3-microphone array was proposed. The problem is modeled as localization of one or more acoustic sources on a unit-radius hemisphere above the array. For best practical results, the derived CF is replaced with NNS applied to a spherical Fibonacci lattice containing TDOA to 2D-DOA mappings learned in the field. Filtering and clustering post-processors were also introduced to reject unreliable measurements and allow more robust detection and localization of multiple sound sources. When evaluated in the field, the proposed method displayed remarkable 2D-DOA estimation accuracy and tracking capabilities of two overlapping sources.
|
2310.10935 | Intent Detection and Slot Filling for Home Assistants: Dataset and
Analysis for Bangla and Sylheti | As voice assistants cement their place in our technologically advanced
society, there remains a need to cater to the diverse linguistic landscape,
including colloquial forms of low-resource languages. Our study introduces the
first-ever comprehensive dataset for intent detection and slot filling in
formal Bangla, colloquial Bangla, and Sylheti languages, totaling 984 samples
across 10 unique intents. Our analysis reveals the robustness of large language
models for tackling downstream tasks with inadequate data. The GPT-3.5 model
achieves an impressive F1 score of 0.94 in intent detection and 0.51 in slot
filling for colloquial Bangla. | Fardin Ahsan Sakib, A H M Rezaul Karim, Saadat Hasan Khan, Md Mushfiqur Rahman | 2023-10-17T02:12:12Z | http://arxiv.org/abs/2310.10935v1 | # Intent Detection and Slot Filling for Home Assistants: Dataset and Analysis for Bangla and Sylheti
###### Abstract
As voice assistants cement their place in our technologically advanced society, there remains a need to cater to the diverse linguistic landscape, including colloquial forms of low-resource languages. Our study introduces the first-ever comprehensive dataset for intent detection and slot filling in formal Bangla, colloquial Bangla, and Sylheti languages, totaling 984 samples across 10 unique intents. Our analysis reveals the robustness of large language models for tackling downstream tasks with inadequate data. The GPT-3.5 model achieves an impressive F1 score of 0.94 in intent detection and 0.51 in slot filling for colloquial Bangla. 1
Footnote 1: The dataset and the analysis code can be found in the following directory: [https://github.com/mushfiqur11/bangla-sylheti-snips.git](https://github.com/mushfiqur11/bangla-sylheti-snips.git)
## 1 Introduction
Smart devices have become commonplace, establishing home assistants as indispensable fixtures in contemporary households. These voice-activated virtual companions adeptly manage an array of tasks, ranging from setting reminders to controlling room temperatures. The efficacy of home assistants in performing these tasks is closely intertwined with their underlying Natural Language Understanding (NLU) models, which enable seamless interactions in high-resource languages Chen et al. (2019); Stoica et al. (2021); Antoun et al. (2020); Upadhyay et al. (2018). However, this advantage in NLU capabilities is not extended to low-resource languages Stoica et al. (2019); Schuster et al. (2018), presenting a notable discrepancy. This discrepancy holds considerable significance, especially considering the global demand for home assistants and the extensive usage of low-resource languages, which have a substantial speaker base.
Bangla and Sylheti Ethnologue (2023), with 285 million native speakers combined, have rich cultural and colloquial nuances. Specialized datasets are needed to capture these intricacies as users prefer to interact with home assistants in their native languages, highlighting the research need Bali et al. (2019).
The language understanding of home assistants is dependent on two key NLU tasks: intent detection and slot filling Weld et al. (2022); Louvan and Magnini (2020). Intent detection determines user actions, like playing music or checking the weather, while slot filling extracts specific details, such as song titles or locations. These tasks enable seamless human-device interactions, especially for home assistants.
Research on intent detection and slot filling primarily focuses on high-resource languages Liu and Lane (2016); Qin et al. (2021); Niu et al. (2019); Zhang et al. (2018). While there have been limited studies dedicated to the Bangla language Bhattacharjee et al. (2021); Alam et al. (2021); Hossain et al. (2020), none of them have addressed the tasks of intent detection and slot filling in Bangla. Furthermore, these studies have not taken into account colloquial variants or closely related languages like Sylheti. This gap in research leaves a significant portion of the speaker base underserved.
This paper bridges this research gap with several notable contributions. Firstly, we introduce a comprehensive dataset encompassing 328 entries for intent detection and slot filling for each of the three languages - totaling 984 samples. These languages include formal Bangla, colloquial Bangla, and colloquial Sylheti. We further show a comparative study between generative LLMs and state-of-the-art language models for intent detection and slot filling.
## 2 Dataset
At the core of our exploration stands a meticulously curated dataset that is inspired by the SNIPS dataset Coucke et al. (2018), which caters to a broad audience.
### Dataset Size and Distribution
Originating from the 328 English samples present in the SNIPS dataset, our dataset underwent a manual correction phase to ensure that the English samples were of optimal quality. Then, we created three linguistically diverse variants, maintaining the same distribution across intent classes and slots as the original samples. These are:
1. **Formal Bangla:** This represents the standard version of the Bangla language, majorly used in contexts like official documents, news broadcasts, and literature. Formal Bangla tends to adhere strictly to grammatical rules.
2. **Colloquial Bangla:** An informal variant predominantly used in Bangladesh, colloquial Bangla resonates with everyday conversations of its people. While there are numerous dialects in different regions of Bangladesh, this form remains more or less consistent across the country. Colloquial Bangla is more flexible regarding syntax and incorporates a significant number of loanwords from English, Arabic, Persian, and other languages.
3. **Colloquial Sylheti:** A language with unique intricacies, Sylheti stands apart from Bangla and is spoken in the Sylhet region of Bangladesh and among diaspora communities. It's rich in expressions, proverbs, and idiomatic language that reflect the history and culture of the Sylhet region.
The curated dataset spans 10 distinctive intents. Each specific intent has a distinct set of slot categories. Figure 1 shows the number of samples for each intent and Figure 2 shows the fraction of slots that frequently occur for each intent, with respect to infrequently occurring slots.
### Data Generation Process
The generation of our dataset was methodical and rigorous to ensure authenticity and accuracy.
**Annotator Engagement**
Four doctoral students were on board as annotators for our project. The initial phase involving the rectification of English data from the SNIPS dataset was a collaborative effort, with each annotator working on a distinct, non-overlapping segment. Subsequent phases involved two individuals fluent in Bangla for the Bangla datasets and two native Sylheti speakers for the colloquial Sylheti dataset.
**Base Creation**
The base dataset was created using the Bangla-T5 model [1], a state-of-the-art English-to-Bangla translation tool, following the work of De bruyn et al.. The refined English samples served as the foundation to produce the initial Bangla translations for each sample. An auto-generated dataset comes with a myriad of issues. Therefore, these samples were manually re-translated and annotated with the auto-translations as the base.
**Inter-Annotator Agreement**
An essential step in ensuring the reliability of our dataset was to gauge the consistency between annotators. For each language variant, 28 randomly chosen samples were annotated independently by both designated annotators, followed by calculating their inter-annotator agreement (Table 1). This exercise helped us discern the degree of concordance and areas of divergence.
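For reference, agreement figures of the kind reported in Table 1 can be computed with standard tooling; the snippet below (our own, with toy stand-in annotations) uses scikit-learn's Cohen's kappa for the categorical labels and NLTK's smoothed sentence-level BLEU for the parallel translations. Whether these exact libraries were used for the reported numbers is not stated in the paper.

```python
from sklearn.metrics import cohen_kappa_score
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# toy stand-ins: per-token slot tags assigned by the two annotators to the same sample
tags_a = ["O", "B-artist", "I-artist", "O", "B-playlist"]
tags_b = ["O", "B-artist", "O", "O", "B-playlist"]
kappa = cohen_kappa_score(tags_a, tags_b)

# toy stand-ins: one annotator's translation scored against the other's
ref = "আমার প্লেলিস্টে গানটা যোগ করো".split()
hyp = "আমার প্লেলিস্টে এই গানটা যোগ করো".split()
bleu = sentence_bleu([ref], hyp, smoothing_function=SmoothingFunction().method1)

print(f"Cohen's kappa = {kappa:.2f}, sentence BLEU = {bleu:.2f}")
```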
**Consensus Building**
Post the initial agreement calculation, a meeting was convened where the annotators discussed and reconciled their differences. This step was instrumental in ironing out inconsistencies and ensuring a unified approach going forward.
**Blind Overlap**
As the annotators progressed with data creation, a random 10% of the samples were earmarked for blind overlap. These served as a secondary check on inter-annotator agreement after dataset creation.
**Independent Adjudication**
After the final compilation of the dataset, each entry underwent a rigorous review by an independent adjudicator who had not previously worked on that particular language variant. This added an additional layer of scrutiny and quality assurance.
### Ensuring Quality
Our data generation process, featuring multiple checks, blind overlaps, third-party reviews, and inter-annotator agreement stages, highlights our
\begin{table}
\begin{tabular}{c c c} \hline \hline \multicolumn{3}{c}{Inter-annotator agreement} \\ \hline & Cohen’s & Average \\ & Kappa & BLEU \\ \hline First 28 samples & 0.42 & 0.43 \\ Blind overlap (10\%) & 0.55 & 0.51 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Annotator agreement increased from before the annotators’ meeting to after it. This ensures the homogeneity of annotations in the dataset.
commitment to quality. It minimizes biases and discrepancies that could result from a single annotator's viewpoint. The inclusion of an independent adjudicator in the final review further bolsters the dataset's integrity and reliability. Using a well-established dataset as the baseline ensures proper distribution of the data across different labels (Figure 1 and Figure 2).
## 3 Methodology and Experimental Setup
Our experiments were divided into four phases. In our initial experiment, we employed JointBERT Chen et al. (2019), the state-of-the-art model in this domain, for both intent detection and slot-filling tasks. In our next experiment, JointBERT was retained for intent detection, while we explored the capabilities of GPT-3.5 (Generative Pre-trained Transformer) Brown et al. (2020) model for slot filling. The third experiment fully utilized GPT-3.5 for both tasks. For our concluding experiment, we provided GPT-3.5 with the original intents and then analyzed its performance on the slot-filling task. The final experiment gives the raw result of slot-filling for the GPT model.
**JointBERT** leverages the BERT Devlin et al. (2019) model to provide a unified approach encompassing both intent classification and slot filling by utilizing the representations from the pre-trained BERT model. We employed the default BERT tokenizer and maintained consistent parameters for all three languages. The utilization of these default settings and tokenization methods ensures an equitable and consistent evaluation across the languages.
**GPT-3.5** (Generative Pre-trained Transformer) Brown et al. (2020) operates on the Transformer architecture and is adept at generating text resembling human language by predicting subsequent words or tokens in a sequence. GPT-3.5's deep contextual understanding is a result of extensive pre-training on a diverse corpus of textual data, encompassing various languages and linguistic intricacies, enabling it to excel across a spectrum of NLP tasks Goyal et al. (2022); Liu et al. (2021); Sakib et al. (2023); Kumar et al. (2020). We used GPT-3.5 in a few-shot setting, passing 5 training samples along with the prompt. Rigorous prompt engineering was performed before settling on the two prompts for the two tasks. Figure 3 and Figure 4 show the final versions of the prompts used in our experiments.
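A sketch of how the few-shot intent-detection prompt of Figure 3 can be assembled and sent to the model is given below. It assumes the current openai-python client interface; the base-prompt wording, label-list handling, and answer parsing are illustrative rather than the authors' exact code (the paper only specifies the gpt-3.5-turbo engine, 5 in-context examples, and a 50-token limit).

```python
from openai import OpenAI   # assumes the openai-python package and an OPENAI_API_KEY environment variable

BASE_PROMPT = ("You are an intent classifier for a home assistant. "
               "Given a user query in colloquial Bangla, reply with exactly one intent label "
               "from this list: {labels}.")

def detect_intent(query, few_shot_pairs, labels, model="gpt-3.5-turbo"):
    """few_shot_pairs: list of (sentence, intent) examples drawn from the training split."""
    messages = [{"role": "system", "content": BASE_PROMPT.format(labels=", ".join(labels))}]
    for sentence, intent in few_shot_pairs:          # 5 in-context examples, as in the paper
        messages.append({"role": "user", "content": sentence})
        messages.append({"role": "assistant", "content": intent})
    messages.append({"role": "user", "content": query})

    client = OpenAI()
    response = client.chat.completions.create(model=model, messages=messages,
                                               max_tokens=50, temperature=0)
    return response.choices[0].message.content.strip()
```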
### Experimental Setup
We divided each of the three datasets into training, development, and test sets using a standard 80-10-10 split. The JointBERT model was trained and evaluated on an A100 GPU, using a batch size of 8. We closely followed the setup provided by the original authors for this phase. For GPT, we used the OPENAI API with the "GPT-3.5-turbo" engine and set the token limit to 50.
## 4 Results
Tables 2 and 3 present the performance of the models we evaluated on our intent detection and slot-filling tasks. A clear pattern emerges: GPT-3.5 consistently outperforms JointBERT in both tasks.
While intent detection is generally more straightforward, JointBERT performs reasonably well in this aspect, although it doesn't quite match the exceptional performance achieved by GPT-3.5. However, when it comes to the more intricate task of slot-filling, JointBERT's performance falls significantly short, leaving ample room for improvement. In contrast, GPT-3.5 demonstrates its proficiency
Figure 1: The number of samples for each intent varies, but they are fairly distributed, with 18 to 68 samples per intent.
Figure 2: Slot categories appearing in at least 30% of the instances are marked as ”frequent,” while others are ”infrequent.” Despite varying slot categories per intent, frequent ones are evenly distributed.
in handling the complexities of this task.
A significant reason behind GPT-3.5's superior performance is its broader exposure to diverse languages during training, including Bangla. JointBERT, conversely, hasn't been specifically trained on any Bangla dataset. This linguistic familiarity gives GPT-3.5 a clear advantage, enabling it to process and interpret Bangla's nuances far more effectively than JointBERT. The results underline the significance of using LLMs for low-resource languages, especially in scenarios where obtaining high volumes of training data for a particular downstream task is challenging.
## 5 Conclusion
In the era of smart devices, a home assistant's voice interfaces must resonate with the authentic linguistic intricacies of its users. Our research presents the first-ever dataset for intent detection and slot filling in Bangla and Sylheti, emphasizing their colloquial forms. This focus on colloquial forms bridges the often-overlooked gap between formal language models and the nuances of everyday speech. By championing colloquial forms, we ensure a voice interface that's more natural and attuned to genuine communication habits. Through rigorous data collection and validation, we have produced a high-quality benchmark dataset, providing a solid foundation for subsequent analyses and model evaluations. The comparative study between large
\begin{table}
\begin{tabular}{c|c c c} \hline \hline \multicolumn{4}{c}{Intent Detection (_Accuracy and F1 Score_)} \\ \hline \multirow{2}{*}{Models} & Formal & Colloquial & Colloquial \\ & Bangla & Bangla & Sylheti \\ \hline JointBERT & 0.57 \(\mid\) 0.56 & 0.63 \(\mid\) 0.61 & 0.45 \(\mid\) 0.46 \\ GPT-3.5 & 0.94 \(\mid\) 0.94 & 0.94 \(\mid\) 0.94 & 0.87 \(\mid\) 0.89 \\ \hline \hline \end{tabular}
\end{table}
Table 2: While the performance of JointBERT is noteworthy for Bangla and its variants, the GPT-3.5 model excels across all metrics for all three datasets
Figure 4: The input structure for the slot-filling task is quite similar to the intent detection task. The major difference is the prompt. For slot-filling, the set of possible slots is based on the intent type of the query. The intent type is obtained from a separate model and then from the train set, all possible slots for the given intent are fetched
Figure 3: The figure illustrates how the input is formatted for the intent-detection task. A base-prompt is passed on to the GPT model. A few samples (5) from the training set are also passed as the context. From these sentence-output pairs, the LLM understands how the task needs to be solved. Finally, the current query is passed
language models (LLM) like GPT-3.5 and non-LLMs underscores the remarkable capability of LLMs to excel even with minimal datasets, marking a considerable stride for underrepresented languages.
## 6 Limitations
While our research has made significant strides in understanding intent detection and slot filling for Bangla and Sylheti, like any study, it has its limitations. Our dataset, although carefully curated for the Bangla and Sylheti variants, is on the smaller side compared to established benchmarks. A precise and robust data generation process was prioritized, naturally limiting our data volume. We confined our evaluations to the JointBERT model and GPT-3.5. The pronounced difference in their performance deterred us from testing a broader range of models. Moreover, the dearth of optimized Bangla models for specific tasks posed challenges. An attempt with a Bangla BERT tokenizer didn't yield satisfactory outcomes, affecting the JointBERT's efficacy. As promising as our results are, they are tied to our specific dataset and context. Extending our findings to diverse settings or other languages requires further exploration, marking just the beginning of this exciting journey.
|
2307.10322 | R2D2 -- An equivalent-circuit model that quantitatively describes domain
wall conductivity in ferroelectric LiNbO$_3$ | Ferroelectric domain wall (DW) conductivity (DWC) can be attributed to two
separate mechanisms: (a) the injection/ejection of charge carriers across the
Schottky barrier formed at the (metal-) electrode-DW junction and (b) the
transport of those charge carriers along the DW. Current-voltage (IU)
characteristics, recorded at variable temperatures from LiNbO$_3$ (LNO) DWs,
are clearly able to differentiate between these two contributions. Practically,
they allow us here to directly quantify the physical parameters relevant for
the two mechanisms (a) and (b) mentioned above. These are, e.g., the resistance
of the DW, the saturation current, the ideality factor, and the Schottky
barrier height of the electrode/DW junction. Furthermore, the activation
energies needed to initiate the thermally-activated electronic transport along
the DWs, can be extracted. In addition, we show that electronic transport along
LiNbO$_3$ DWs can be elegantly viewed and interpreted in an adapted
semiconductor picture based on a double-diode/double-resistor equivalent
circuit model, the R2D2 model. Finally, our R2D2 model was checked for its
universality by fitting the DWC data not only to z-cut LNO bulk DWs, but
equally to z-cut thin-film LNO DWs, and DWC from x-cut DWs as reported in
literature. | Manuel Zahn, Elke Beyreuther, Iuliia Kiseleva, Ahmed Samir Lotfy, Conor J. McCluskey, Jesi R. Maguire, Ahmet Suna, Michael Rüsing, J. Marty Gregg, Lukas M. Eng | 2023-07-19T07:16:27Z | http://arxiv.org/abs/2307.10322v2 | R2D2 - An equivalent-circuit model that quantitatively describes domain wall conductivity in ferroelectric LiNbO\({}_{3}\)
###### Abstract
Ferroelectric domain wall (DW) conductivity (DWC) can be attributed to two separate mechanisms: (a) the injection/ejection of charge carriers across the Schottky barrier formed at the (metal-) electrode-DW junction and (b) the transport of those charge carriers along the DW. Current-voltage (IU) characteristics, recorded at variable temperatures from LiNbO\({}_{3}\) (LNO) DWs, are clearly able to differentiate between these two contributions. Practically, they allow us here to directly quantify the physical parameters relevant for the two mechanisms (a) and (b) mentioned above. These are, e.g., the resistance of the DW, the saturation current, the ideality factor, and the Schottky barrier height of the electrode/DW junction. Furthermore, the activation energies needed to initiate the thermally-activated electronic transport along the DWs, can be extracted. In addition, we show that electronic transport along LiNbO\({}_{3}\) DWs can be elegantly viewed and interpreted in an adapted semiconductor picture based on a double-diode/double-resistor equivalent circuit model, the R2D2 model. Finally, our R2D2 model was checked for its universality by fitting the DWC data not only to \(z\)-cut LNO bulk DWs, but equally to z-cut thin-film LNO DWs, and DWC from x-cut DWs as reported in literature.
## I Introduction
Since the early prediction of enhanced electrical conductivity along charged ferroelectric domain walls in the 1970s [1], the intriguing phenomenon of domain wall conductivity (DWC) has been reported in a number of ferroelectric materials during the last decade, which opens a unique perspective for designing integrated functional nanoelectronic elements [2; 3]. The enormous scientific interest is reflected in several review articles treating fundamental [4; 5; 6] and technological [7; 8; 9] challenges in understanding and exploiting this type of quasi-2-dimensionally confined electronic transport, which competes with other highly topical low-dimensional electronic systems like graphene, oxide interfaces, or heterointerfaces of classical semiconductors. Notably, DW based approaches bear the unique possibility to write and erase the conducting paths at will within one and the same crystal or thin film.
Among others, conducting DWs in the ferroelectric model system lithium niobate (LiNbO\({}_{3}\), LNO) have attracted concerted interests, since (a) their conductivity can exceed the corresponding bulk values by many orders of magnitude, (b) they are stable across a broad temperature range, and (c) they have already been well described in various previous works [10; 11; 12; 13; 14]. Here, _fundamental aspects_ such as the inherent relationship between the DW's geometrical inclination and the resulting electrical conduction, the role of the contact material, the typically non-ohmic nature of the respective current-voltage (IU) characteristics, or signatures for the thermally-activated nature of DW electrical transport have been reported for selected samples, however, for a rather narrow temperature range so far [10; 12; 15; 16; 17]. In parallel, there is a plethora of very recent _application-related_ results already demonstrating single _electronic DW-based functionality_ in either LNO single crystals or thin films, ranging from simple rectifying junctions [18; 19; 20; 12] towards more complex logic gates [21; 20], memristors [22; 23], or transistors [24; 25; 26], to name a few.
Nevertheless, there are at least two crucial preconditions to meet for proper operation of any reliable LiNbO\({}_{3}\) based DW device. First, the related device-specific parameters within an appropriate equivalent circuit have to be quantified, including the evaluation of both their general reproducibility and their temperature-dependence. Second, an in-depth understanding and modeling of the underlying electronic transport mechanism, which is not addressed comprehensively so far, has to be achieved. This is exactly the starting point of this work. In order to meet the first aspect, we present room-temperature current-voltage characteristics of a set of four identically prepared DWs in single crystalline 5 mol% MgO-doped LiNbO\({}_{3}\), postulate an equivalent circuit model consisting of a parallel connection of two resistor/diode pairs (the R2D2 model), extract the corresponding resistance, saturation current, and ideality factor values, and evaluate their reproducibility. For dealing with the second aspect, we analyze the temperature-dependent IU characteristics \(I(U,T)\) of two exemplary DWs in detail, which allows us to extract the activation energies for the intrinsic DW transport, and the Schottky barrier heights of the electrode-DW junction diodes.
Finally, we test the general applicability of our R2D2 model by analyzing the domain wall conductivity (DWC) data not only to z-cut bulk LNO single crystals, but equally to DWC observed in z-cut thin-film LNO (TFLN) and literature data on DWC in x-cut LNO.
## II Experimental
### Preparation of LiNbO\({}_{3}\) domain walls with enhanced electrical conductivity
For the present comparative study, four samples were cut from a commercial monodomain, 5 mol% MgO-doped, congruent, 200 µm-thick, z-cut LiNbO\({}_{3}\) wafer, purchased from _Yamaju Ceramics Co., Ltd._ and polished to optical quality. These crystal pieces measure \(5\times 6\) mm\({}^{2}\) along their crystallographic x- and y-axis, respectively. In the following, these four samples are labeled DW-01 to DW-04. Following the protocols described in detail earlier [13; 27] and in the SI-sec. A, hexagonally-shaped domains were grown by laser-assisted poling, imaged by polarization-sensitive optical microscopy, and - after vapor-deposition of macroscopic Cr electrodes onto both crystal surfaces covering the DWs completely (cf. fig. 2(a)) - electrically tested by acquiring \(\pm 10\) V standard current-voltage characteristics, which revealed a very low, nearly bulk-like conductivity with currents in the 0.1 pA range.
Subsequently, the DW conductivity was _enhanced_ by ramping up a high voltage, provided by the voltage source of a _Keithley 6517B_ electrometer, while simultaneously monitoring the current flow [SI-fig. S1(d)] according to Godau _et al._[13]. As a result, the resistance of the DWs decreased significantly by up to seven orders of magnitude, as shown exemplarily again for sample DW-03 in fig. 1(a) by the respective IU characteristics; corresponding data sets for all four samples were recorded as well and will be discussed later in sec. III.1 in detail. A stabilization process of the conductivity towards its final magnitude is observed on the time scale of several hours and shown in fig. 1(b). Thereafter, the conductivity was rechecked and proven to be time independent for at least one month. The exact measurement parameters (voltage sweep velocity, voltage increments, metallic measurement box) were the same as for acquiring the _as-grown_ IU curves described in the SI-sec. A.2.
Additionally, the 3-dimensional internal DW structure of one of the samples (_DW-02_) was imaged by Cherenkov second-harmonic generation (CSHG) microscopy [28; 29], which clearly verified the known relationship between DW inclination and enhanced conduction. For experimental details and images we refer to SI-sec. B and SI-fig. S2, the latter showing a decisively altered, i.e., shrunken and inclined domain wall shape.
### Quantitative analysis of room temperature IU-characteristics: the R2D2 model
Since the typical _post-enhancement_ IU characteristics of a LiNbO\({}_{3}\) domain wall with its two Cr electrodes exhibits the shape as shown in fig. 1(a), including obviously (i) non-ohmic, diode-like regions for low voltages, but (ii) linear behavior for higher measurement voltages, with (iii) an additional clear asymmetry with respect to the voltage polarity, we heuristically postulate a _parallel connection of two diode-resistor pairs_, sketched in fig. 2(b), as the related equivalent circuit, where one pathway describes the "forward" and one the "backward" behavior along the DW. The four circuit elements are characterized by their resistances \(R\), saturation currents \(I_{s}\), and ideality factors \(n\), each in forward and backward direction (symbolized by the indices \(f\) and \(b\)), respectively. To calculate the electric current through the circuit according to Kirchhoff's current law [30], the currents at the nodes with the intermediate potentials \(U_{f}\) and \(U_{b}\) are considered. Due to charge conservation the currents flowing through the respective resistors and diodes must be equal at these nodes. Formally expressed, there exists
Figure 1: Current-voltage characteristics and current stabilization subsequent to the _conductivity enhancement procedure_ according to Godau _et al._[13], shown for sample _DW-03_. (a) First (black) and last (blue) IU cycle obtained directly after the conductivity enhancement procedure and 9 h later. (b) Evolution of the absolute value of the maximum current at +10 V (red) and \(-10\) V (green) as a function of the number of measurement cycles. The IU cycles were acquired between \(-10\) V and +10 V, setting the measuring voltage in steps of \(\Delta U=0.5\) V with time intervals of 2 s.
at least one voltage value \(U_{f}\in[U_{z-},U_{z+}]\) (see footnote 1), for which the following relation holds:
Footnote 1: In the case of a resistor and a diode that both have monotonic IU characteristics, exactly one such voltage value exists.
\[I_{resistor}=\frac{U_{z+}-U_{f}}{R_{f}}=I_{diode}(I_{s,f};n_{f};U_{f}-U_{z-}). \tag{1}\]
Thereby \(I_{diode}\) is represented by the well-established Shockley equation [31]:
\[I_{diode}(I_{s};n;U)=I_{s}\left[\exp\left(\frac{qU}{nk_{B}T}\right)-1\right], \tag{2}\]
with \(q\) being the elementary charge, \(k_{B}\) the Boltzmann constant, and \(T\) the absolute temperature. This choice of \(U_{f}\) ensures that no charges accumulate at the intermediate node, and that the current flow is time-independent. For \(U_{b}\) and the circuit elements in the _backward_ direction, analogous considerations apply.
Thus the 2-resistors/2-diodes (R2D2) model exhibits six free parameters: \(R_{f}\); \(R_{b}\); \(I_{s,f}\); \(I_{s,b}\); \(n_{f}\); \(n_{b}\), that can be fitted by numerical treatment of eq. (1). Since the characteristic IU curves were defined by more than 40 experimentally obtained measurement points (cf. SI-sec. A.2 and sec. II.1), the convergence of the optimization process is ensured. Before applying the fitting routine, the parameters were manually adjusted to the right order of magnitude to ensure convergence. To account for the possibly broad intervals of the fitting parameters, the logarithm to the base 10 of the parameters' values was optimized instead of the parameters themselves. The optimization was performed using a trust region reflective algorithm [32] with a least-squares cost function, as implemented in the _python3_ library _scipy_[33].
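As an illustration, the fitting procedure described above can be sketched in a few lines with the scipy tools just mentioned. The sketch below is not the original analysis code: the function names, the fixed temperature of 300 K, and the starting values `p0` are placeholder assumptions, and eq. (1) is solved for the intermediate node potential with a simple bracketing root finder.

```python
# Minimal sketch (not the original analysis code) of the R2D2 curve fitting:
# eq. (1) is solved for the intermediate node potential of each branch, and the
# six parameters are optimized in log10 space with a trust-region-reflective
# least-squares fit, as described in the text.
import numpy as np
from scipy.optimize import brentq, least_squares

K_B_OVER_Q = 8.617333262e-5   # k_B/q in V/K (thermal voltage prefactor)


def branch_current(U, R, I_s, n, T=300.0):
    """Current through one series resistor/diode branch at applied voltage U."""
    if U == 0.0:
        return 0.0
    V_t = n * K_B_OVER_Q * T

    def balance(V):               # eq. (1): resistor current minus diode current
        return (U - V) / R - I_s * (np.exp(V / V_t) - 1.0)

    lo, hi = sorted((0.0, U))
    V_node = brentq(balance, lo, hi)          # intermediate node potential
    return (U - V_node) / R


def r2d2_current(U, R_f, I_sf, n_f, R_b, I_sb, n_b, T=300.0):
    """Total current of the two parallel branches of the R2D2 circuit."""
    forward = branch_current(U, R_f, I_sf, n_f, T)
    backward = -branch_current(-U, R_b, I_sb, n_b, T)   # reversed diode
    return forward + backward


def fit_r2d2(U_data, I_data, p0):
    """Least-squares fit of (R_f, I_sf, n_f, R_b, I_sb, n_b) in log10 space."""
    def residuals(log10_p):
        p = 10.0 ** log10_p
        return np.array([r2d2_current(u, *p) for u in U_data]) - I_data

    result = least_squares(residuals, np.log10(p0), method="trf")
    return 10.0 ** result.x
```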
### Investigation of the electrical-transport mechanism by temperature-dependent IU-curves
In order to (i) figure out the precise electrical transport mechanism through the DW, (ii) derive the corresponding characteristic parameters such as the activation energy or the barrier height, and (iii) check for the temperature stability of the circuit parameters in general, temperature-dependent IU measurements from 320 K down to around 80 K were performed with two of the four samples, i.e. samples _DW-01_ and _DW-04_. A liquid nitrogen bath cryostat (_Optistat DN_ by _Oxford Instruments_) was used, controlling the temperature by two independent Pt-100 platinum resistance sensors, one positioned at the heat exchanger and the other directly next to the sample. The operation of the cryostat, comprising gas flow regulation, heating control, and temperature reading at the heat exchanger, was accomplished via an _ITC 503_ temperature controller by _Oxford Instruments_, while the Pt-100 sensor near the sample was read out by a _Keithley 196_ digital multimeter. Full IU characteristics in the \(\pm\)10 V range were acquired for 40 different logarithmically distributed temperatures with a _Keithley 6517B_ electrometer in steps of \(\Delta U=0.5\) V with \(dU/dt=0.5\) V/s in two-point geometry with wires shielded up to the probe head. The temperature set-points were changed stepwise and three IU cycles were recorded, always after reaching thermal equilibrium, i.e., after a waiting period of around 30 minutes when having set a new setpoint temperature. To eliminate transient effects, all subsequent calculations were performed with the third IU cycle only. While changing the temperature, the two electrodes were short-circuited via the electrometer to achieve an equalization of pyroelectrically generated charges.
Figure 2: (a) Sketch of the configuration in the LiNbO\({}_{3}\) crystal with an inclined and thus conducting domain wall structure between two Cr electrodes at the crystal’s z+ and z- surfaces. (b) Proposed R2D2 equivalent circuit consisting of a parallel connection of two diode-resistor combinations, which describes the IU curves of LiNbO\({}_{3}\) domain walls contacted by Cr electrodes on both (z+/z-) crystal sides. The circuit elements can be quantitatively characterized by a non-linear nodal analysis at the intermediate potential nodes \(U_{f}\) and \(U_{b}\) based on Kirchhoff’s law [30] in the way that the resistances \(R_{f}\) and \(R_{b}\) of the resistors as well as saturation currents (\(I_{s,f}\), \(I_{s,b}\)) and ideality factors (\(n_{s,f}\), \(n_{s,b}\)) of the two diodes are extracted from curve fitting procedures based on eqs. (1) and (2).
Furthermore, spurious temperature fluctuations during the IU measurements were proven to be less than \(0.01\,\mathrm{K}\). Thus, pyroelectric effects are neglected in all following evaluations.
In sum, a "3D" data field \(I(U,T)\) with current values \(I\) measured at \(40\times 40\) voltage-temperature combinations \((U,T)\) was collected (fig. 3). The IU characteristics at fixed temperature were evaluated analogously to the processing of the room-temperature curves described in sec. II.2. As a result, the temperature dependences of DW resistances \([R_{f}(T);R_{b}(T)]\), diode saturation currents \([I_{s,f}(T);I_{s,b}(T)]\), and ideality factors \([n_{f}(T);n_{b}(T)]\) could be established. First, the \(R(T)\) characteristics were brought to the form of Arrhenius plots \([\ln(R)(1/T)]\), which allowed us to extract the activation energy \(E_{a}\). In order to decide which precise \(R(T)\) functional form the data should be fitted to for extracting \(E_{a}\), a preliminary, tentative evaluation step was carried out. Thereby, a number of electrical-transport models, such as thermally-activated hopping and different variable-range hopping models, were tested for the exemplary case of sample _DW-01_, with the result that _simple thermally-activated transport_ with the following temperature dependence of the conductivity \(\sigma\) (being equivalent to \(R\)):
\[\sigma(T)=\tilde{\sigma}_{0}\exp\left(-\frac{E_{a}}{k_{B}T}\right), \tag{3}\]
where \(\tilde{\sigma}_{0}\) symbolizes a constant prefactor related to the sample geometry, appears to be the most probable process here, which fully agrees with assumptions used by other authors before [12; 23]. This allows for a linear fitting of the Arrhenius plots with \(-E_{a}/k_{B}\) being the slope. The full analysis including a listing and a short explanation of all considered models is found in SI-sec. C.
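As a minimal sketch of this evaluation step (with illustrative function and variable names): since the resistance follows the inverse of eq. (3), i.e. \(R\propto\exp(+E_{a}/k_{B}T)\), a linear fit of \(\ln R\) versus \(1/T\) yields the activation energy from the slope.

```python
# Sketch of the Arrhenius evaluation: E_a follows from the slope of ln R vs 1/T.
import numpy as np

K_B = 8.617333262e-5                      # Boltzmann constant in eV/K

def activation_energy(T, R):
    """E_a in eV from ln R = const + (E_a/k_B)(1/T)."""
    slope, _ = np.polyfit(1.0 / np.asarray(T, float), np.log(np.asarray(R, float)), 1)
    return slope * K_B
```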
Second, the curves for the saturation currents \(I_{s}(T)\) were fitted using the theoretical relationship derived from the thermionic emission model (see e.g. Rhoderick and Williams [31]):
\[I_{s}=A^{\star}T^{2}\exp\left(\frac{-q\Phi_{eff}}{k_{B}T}\right), \tag{4}\]
with \(\Phi_{eff}\) being the effective potential barrier height, and \(A^{\star}\) a material-specific parameter, known as the Richardson constant. Thus, this evaluation supplied us with estimates for the effective Schottky barrier heights of the DW-metal contacts.
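In the same spirit, eq. (4) can be evaluated by a linear fit of \(\ln(I_{s}/T^{2})\) versus \(1/T\), which gives the effective barrier height from the slope and the Richardson constant from the intercept; the sketch below is again only illustrative, with \(\Phi_{eff}\) returned in eV and \(A^{\star}\) in the units of the supplied \(I_{s}\) data.

```python
# Sketch of the thermionic-emission evaluation of the diode saturation currents.
import numpy as np

K_B = 8.617333262e-5                      # Boltzmann constant in eV/K

def schottky_parameters(T, I_s):
    """Return (Phi_eff in eV, A_star) from I_s(T) via eq. (4)."""
    T, I_s = np.asarray(T, float), np.asarray(I_s, float)
    slope, intercept = np.polyfit(1.0 / T, np.log(I_s / T**2), 1)
    return -slope * K_B, np.exp(intercept)
```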
## III Results and discussion
### Quantifying resistor and diode parameters from room-temperature IU characteristics
As one key result, fig. 4 comparatively displays the IU characteristics of the four identically-prepared conductive domain walls in bulk LNO for this room-temperature study. All curves show clear non-ohmic behavior for low measuring voltages and a rather linear progression for larger applied voltages, with a clear asymmetry towards the polarity of the measuring voltage. As indicated in sec. II.2, we fit all four curves according to the R2D2 double-diode/double-resistor
Figure 3: Temperature-dependent current-voltage data, depicted as a _heat map_ with \(40\times 40\) measured current values as a function of both measuring voltage and temperature (data from _DW-04_).
Figure 4: Room-temperature current-voltage characteristics of a set of four domain walls in single-crystal LiNbO\({}_{3}\), contacted macroscopically with vapor-deposited Cr electrodes, all reproducibly revealing a non-ohmic and asymmetric DWC. The inset depicts the corresponding semilogarithmic plot (see also SI-fig. S3 for a close-up view). For both the geometric interpretation and the assigned R2D2 equivalent circuit, which combine ohmic and diode-like character in a single concept, see fig. 2(b) again.
model (fig. 2(b), sec. II.2) and obtain the numerical values for the six free parameters as summarized in table 1 (cf. SI-table S2 for uncertainties), which we discuss more closely in the following.
* **Resistances:** Typical values between 2.6 and 7.2 M\(\Omega\), which lie all within the same order of magnitude, are observed for \(R_{f}\) and \(R_{b}\). This is an expected and desired result due to the equivalent sample preparation procedure, which should lead to a similar domain wall geometry, particularly a comparable domain wall inclination with respect to the z axis \(\sin\alpha=\vec{n}\cdot\vec{e}_{z}\), and thus to similar conduction properties.
* **Saturation currents:** Here, a nominally large variation over five orders of magnitude between 10 pA and 1 \(\mu\)A turns up at first glance. However, when excluding the \(I_{s,f}\) value for _DW-03_, which is probably caused by a peculiarity in the real structure of the electrode-DW junction as clearly seen from the logarithmic current plot in the inset of fig. 4 and in SI-fig. S3, the saturation currents cover only two orders of magnitude.
* **Ideality factors:** The values for \(n_{f}\), \(n_{b}\) are much larger than two, indicating significant differences between conductance along domain walls and conventional semiconductors (with the latter having typical \(n\) values between 0.5 and 2). However, this phenomenon of anomalously high \(n\) values also occurs for highly-doped semiconductors (for silicon above \(N_{d}\approx 10^{19}\,\mathrm{m}^{-3}\) at 300 K) and is described by the field emission case of the thermionic-emission theory [31]. Transferred to the domain wall, it indicates a high hopping site density inside the domain wall that is in agreement with former theoretical calculations on LiNbO\({}_{3}\)[34]. Notably, the ideality factors in forward direction \(n_{f}\) are much larger as compared to the backward direction \(n_{b}\). A natural reason of their inequality is the geometrical asymmetry between z+ and z- side due to the domain growth process starting on the z+ towards the z- side, and, on an even more fundamental level, the general intrinsic asymmetry of the two different LiNbO\({}_{3}\) facets being reflected in their different surface terminations and the subsequent dramatically different ionization energies, as shown experimentally in the past [35] and supported by theoretical calculations as well [36].
The rather high coefficient of determination \(R^{2}\) achieved for all samples indicates that the R2D2 equivalent-circuit model describes the conduction parameters adequately well.
In addition to the two considered current paths, two further conductance pathways may contribute to the overall conductance, one having two diodes in opposite direction and one consisting of a single resistor only. While the first path can only weakly conduct due to the reverse-biased diode, the second path would exhibit a purely ohmic IU characteristic, as was observed by Werner _et al._[12] and Godau _et al._[13]. Both turned out to be of minor influence in our experiments but can not be excluded in general.
One may speculate whether there is a fundamental reason that a _parallel_ connection of two current paths appears to be the most suited here. Such a reasoning is also motivated by previous results obtained by Godau _et al._[13] and Wolba _et al._[37] on the nonuniform local distribution of the conductance. Thus, the current is bound to distinct channels (preferably along the domain wall corners) that are separated from each other.
Apart from the above more electrotechnical viewpoint focusing on the circuit-element quantification, we now proceed with the physical interpretation of the suggested circuit elements in the R2D2 model, shown in fig. 2(b), distributed to the two separate current paths. Heuristically, the diodes represent the Schottky barriers between the metal electrodes and the DW, while the resistors reflect the intrinsic DWC. In the following section, we obtain two more characteristic parameters of the two transport contributions by temperature dependent IU measurements: (i) the activation energy \(E_{a}\) for the transport along the DW, and (ii) the barrier height \(\Phi_{eff}\) for the metal-DW junction.
### Analysis of the underlying carrier transport processes through temperature-dependent DW current measurements
To achieve an in-depth understanding of both the transport across the Schottky barrier at the two electrode-DW interfaces and along the DW itself, IU characteristics for samples _DW-01_ and _DW-04_ at different temperatures between 80 and 320 K were acquired, as exemplarily shown for selected temperatures in fig. 5(a). Obviously, the current level decreases with decreasing temperature, as is typical for a _semiconducting_ material, while the general shape of the IU characteristics does not significantly change with temperature, showing the same features as discussed in the previous section. Due
| sample | \(R_{f}\) [M\(\Omega\)] | \(I_{s,f}\) [pA] | \(n_{f}\) | \(R_{b}\) [M\(\Omega\)] | \(I_{s,b}\) [pA] | \(n_{b}\) | \(R^{2}\) |
|---|---|---|---|---|---|---|---|
| _DW-01_ | 3.68 | \(1.23\times 10^{5}\) | 35.7 | 3.44 | 430 | 5.33 | 0.990 |
| _DW-02_ | 7.16 | \(9.83\times 10^{4}\) | 23.6 | 5.51 | 100 | 5.25 | 0.980 |
| _DW-03_ | 2.84 | 12.8 | 33.6 | 3.07 | 211 | 6.41 | 0.951 |
| _DW-04_ | 2.59 | \(3.41\times 10^{5}\) | 240 | 4.48 | 4913 | 19.8 | 0.957 |

Table 1: Equivalent circuit parameters obtained by modeling the IU characteristics shown in fig. 4 according to the R2D2 model (fig. 2) via a least-square fit based on Kirchhoff’s current law. Note that \(R^{2}\) in the last column denotes the coefficient of determination here. See also SI-table S2 that additionally tabulates the uncertainties for all fit parameters.
to the latter fact, we extracted the \(R\), \(I_{s}\), \(n\) values analogously to the room temperature parameters, but can plot them now as a function of temperature, as displayed in figs. 5(b) and 5(c), as well as in the SI-fig. S4, respectively.
First, the obtained resistances \(R\) [fig. 5(b)] follow an Arrhenius-like temperature characteristic, as expected. The Arrhenius law is observed across the full temperature range, proving the stability of the intrinsic conduction process, which might be easy to account for in a potential DW nanoelectronic device. The numerically extracted activation energies \(E_{a}\) (shown in table 2) match well between forward and backward direction for each sample, but differ significantly between the two samples. The spreading might be attributed to local chemical variations that determine, e.g., the defect site density.
Second, the saturation currents \(I_{s}\) of the diode component [fig. 5(c)] can be satisfactorily fitted by eq. (4), reflecting the validity of the thermionic emission model for the electrode-DW Schottky contact. The Schottky barrier potential difference \(\Phi_{eff}\) is estimated to lie between 0.1 and 0.5 eV (also listed in table 2). A likely source of this rather large range is the variation of the metal-domain wall interface introduced during the metal-electrode deposition and conductivity enhancement procedure. An interpretation of the extracted Richardson constants \(A^{\star}\), which vary over three orders of magnitude, is not easily possible, since they depend on several barely known quantities such as the effective electron mass \(m^{\star}\) and the barrier cross section width. In all four cases considered, the uncertainty of \(A^{\star}\) seems to be heavily overestimated due to the exponential transformation, while \(\log_{10}(A^{\star})\) is still a well-defined quantity with a relative uncertainty of less than 10 %.
Third, the evaluation of the ideality factors \(n\) shows a more ambiguous picture (see SI-fig. S4). Based on the thermionic-emission theory, only a very weak temperature dependence is expected for the ideality factors due to changes of the effective electron mass [31; 38]. Furthermore, the ideality factor strongly depends on the effective hopping site density, which is independent of temperature. However, apart from the case of sample _DW-01_ in backward direction, which shows indeed a rather constant value over the covered temperature range, the results for the remaining cases exhibit rather strong fluctuations between 15 and larger than 80, caused by the fragile position of \(n\) within the argument of the exponential function. On the other hand, even the quite scattered data supports the trend towards \(n\) values being considerably larger as compared to standard semiconductor diodes.
As an interesting and illustrative side note, the acquired 3D-data set \(I(U,T)\) - plotted as a heat map in fig. 3 - allows extracting the activation energy \(E_{a}\) directly as the slope from the \(\ln(I)(1/T)\) current-vs.-temperature curves. These values are shown in fig. 6, plotted as a function of measuring voltage \(U\), together with the partial fits to the theoretical \(E_{a}(U)\) dependence. Based on eq. (3), the latter is obtained by calculating the partial
Figure 5: (a) Temperature dependent current-voltage curves in logarithmic representation, exemplarily shown for _DW-04_. Equivalent circuit parameters (b) \(R\) (dots: experimental data, lines: Arheninus-law fits) and (c) \(I_{s}\) [dots: experimental data, lines: fits according to the thermionic emission model, cf. eq. (4)] as a function of temperature, which confirm the semiconductor-like intrinsic conductivity in LNO domain walls between 110 and 320 K, providing estimates for the activation energies \(E_{a}\) and the effective Schottky barrier heights \(\Phi_{eff}\) via the respective fit parameters, see table 2. Note the reciprocal scaling of the temperature axis in panel (b).
| sample | \(E_{a}\) [eV] | \(A^{\star}\) [nA K\({}^{-2}\)] | \(\Phi_{eff}\) [eV] |
|---|---|---|---|
| DW-01 (f) | \(0.2291\pm 0.0010\) | \(775\pm 1954\) | \(0.50\pm 0.05\) |
| DW-01 (b) | \(0.2290\pm 0.0016\) | \(19\pm 32\) | \(0.305\pm 0.029\) |
| DW-04 (f) | \(0.1008\pm 0.0019\) | \(0.16\pm 0.28\) | \(0.106\pm 0.020\) |
| DW-04 (b) | \(0.107\pm 0.005\) | \(2.1\pm 2.89\) | \(0.203\pm 0.022\) |

Table 2: Activation energy \(E_{a}\), Richardson constant \(A^{\star}\), and Schottky barrier height \(\Phi_{eff}\) tabulated for the two inspected bulk DWs in LNO, as derived from the curve fits of \(R_{f,b}(T)\) and \(I_{s;f,b}(T)\) in figs. 5(b) and 5(c).
derivative of \(\ln I\) with respect to \(1/T\), as worked out in detail in SI-sec. G, using eqs. (2) and (4), finally resulting in:
\[E_{a}:=-k_{B}\frac{\partial\ln I}{\partial 1/T}=E_{0}-A\cdot\frac{U/U_{c}}{1-\exp(-U/U_{c})}, \tag{5}\]
with the fit parameters \(E_{0}\), \(A\), and \(U_{c}\). However, the qualitative agreement of the experimental \(E_{a}(U)\) curve with the (partial) fits according to eq. (5) is convincing for both samples. The curves show a characteristic strong increase of \(E_{a}\) at low voltages, which clearly supports our central assumption that at low fields the barrier at the electrode-DW junction dominates the transport behavior of the electrode-DW system. The significant difference of the constant activation energy at large electric fields between 0.1 and 0.26 eV for the two tested DWs is astonishing at first glance, but is in full agreement with the activation energies derived from the Arrhenius plots of the resistances before, which show nearly the same rather different values for the two inspected samples. We refrain from discussing the fit parameters \(E_{0}\), \(A\), and \(U_{c}\) in detail, since they refer to a kind of an effective temperature and thus their strict physical meaning is not straightforward to determine.
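For illustration, the \(E_{a}(U)\) curves of fig. 6 can be reproduced from the \(I(U,T)\) grid by fitting \(\ln|I|\) versus \(1/T\) at each voltage and applying the definition in eq. (5); the assumed array layout (voltages along the first axis of the current array) is an arbitrary choice for this sketch.

```python
# Sketch: extract E_a(U) from the measured I(U, T) grid via eq. (5).
import numpy as np

K_B = 8.617333262e-5                      # Boltzmann constant in eV/K

def activation_energy_vs_voltage(U, T, I):
    """I has shape (len(U), len(T)); returns E_a(U) in eV."""
    inv_T = 1.0 / np.asarray(T, float)
    E_a = np.empty(len(U))
    for i, row in enumerate(np.asarray(I, float)):
        slope, _ = np.polyfit(inv_T, np.log(np.abs(row)), 1)
        E_a[i] = -K_B * slope             # E_a = -k_B d(ln I)/d(1/T)
    return E_a
```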
### Applying the R2D2 model to domain wall conductance in thin-film LiNbO\({}_{3}\)
To test whether or not our R2D2 model is of general use for interpreting domain wall conductance (DWC), we analyze below the DWC from two distinctly different samples:
* DW arrays that have been written into a z-cut, 500 nm thin TFLN sample using a larger bias voltage applied to the tip of a scanning force microscope (for details see SI-sec. H and ref. [39]). We processed the current-voltage characteristics of the DW array shown in fig. 7 in the same way as the IU curves in fig. 4. The curve fitting (for a summary of all results refer to SI-table S3) succeeded with an excellent \(R^{2}\) value of 0.999 95, resulting in resistances that are a factor of ten larger than observed for the single-crystal DWCs, i.e., \(R_{f}=18\,\mathrm{M}\Omega\) and \(R_{b}=94\,\mathrm{M}\Omega\), and diode saturation currents that are smaller on trend, namely \(I_{s,f}=16\,\mathrm{pA}\) and \(I_{s,b}=507\,\mathrm{pA}\). The extracted values of the forward and backward diode ideality factors, which are \(n_{f}=23\) and \(n_{b}=76\), are surprising in comparison to the single-crystal results, where \(n_{f}\) appears to be systematically larger than \(n_{b}\). Thus, the data might indicate a different mechanism responsible for the directional asymmetry in the thin-film sample, which is supported by the findings by Suna _et al._[20] showing that near-surface domain wall bending results in a significant contribution to the diode-like response. Nevertheless, an in-depth clarification needs a more systematic approach, which is beyond the scope of the present study.
* We applied the R2D2 model to literature DWC data that were recorded at DWs in an x-cut TFLN sample by Qian _et al._[19]. The analysis of this data
Figure 6: Activation energies, as derived from the currents’ Arrhenius plots, as a function of the measuring voltage: dots reflect the experimental data, solid lines the fit curves according to eq. (5).
Figure 7: Current-voltage characteristics of thin-film LiNbO\({}_{3}\) induced by an AFM tip, contacted with Ag electrodes, revealing a non-ohmic and asymmetric current, depicted as linear and semilogarithmic (inset) plot.
results in resistances in the G\(\Omega\) range, \(I_{s}\) values in the pA range, and similarly high ideality factors as above, all of them with rather satisfying \(R^{2}\) values as well (see SI-sec. F for the numerical values).
## IV Summary and Outlook
In this study, ferroelectric conductive domain walls (CDWs) were engineered into 200 µm-thick 5 mol% MgO-doped LiNbO\({}_{3}\) single crystals and contacted by macroscopic vapor-deposited chromium electrodes at both crystal sides. Current-voltage (IU) characteristics in the \(\pm 10\,\mathrm{V}\) range were recorded comparatively at a set of four such CDWs, which exhibited reproducibly asymmetric non-ohmic characteristics. Thus, an equivalent-circuit model, the R2D2 model, consisting of a parallel connection of two resistor/diode pairs was postulated empirically, which allowed us to fit the IU curves using Kirchhoff's current law together with Shockley's diode equation, resulting in a systematic quantification of typical resistance ranges and diode parameters (saturation current, ideality factor) for this specific DW-electrode configuration in forward and backward direction, which indeed showed different values due to the strong intrinsic asymmetry between the z+ and z- crystal faces. The model was also successfully applied to exemplary IU characteristics of differently created DWs in thin-film lithium niobate and might be generally usable within a standardized analysis routine of domain-wall related IU characteristics in the future.
From additional temperature-dependent IU recordings at two selected CDWs, we empirically assigned (i) the diodic (nonlinear) part around zero measuring voltage to the influence of the CDW-electrode junction showing thermionic emission in the vicinity of a Schottky barrier, and (ii) the ohmic (linear) part at higher bias voltages to the intrinsic conduction within the domain wall. The latter was further identified to show thermally-activated, semiconductor-like behavior, with activation energies between 100 and 250 meV, as derived from the slopes of the linear Arrhenius plots of the resistances. Finally, the Schottky barrier heights of the DW-electrode junctions were derived from the temperature dependence of the diode saturation currents.
Our results raise a number of questions to be addressed in the future. First, the microscopic nature of the electric-current paths along the CDWs was not completely clarified due to the usage of macroscopic electrodes. Here, a complementary scanning-probe-microscopy-based investigation, especially employing conductive atomic force microscopy to directly contact different regions of the domain wall with the tip and capture local IU characteristics, is needed, which could potentially lead to a more generalized equivalent circuit model. Second, from a statistical point of view, only limited conclusions on the reproducibility of the circuit parameters can be drawn from a set of four samples. At this point, an investigation of a broader set of CDWs, including IU and CSHG microscopy data of all specimens, would allow us to substantiate functional structure-property relationships such as \(R_{f,b}=R(\alpha)\), which quantitatively correlate, for example, the equivalent-circuit parameters with the domain wall inclination angle \(\alpha\). After having disentangled two different conduction contributions, a third future challenge is the control and optimization of the electrode-DW junction by varying the contact metal on the one hand and by a higher degree of automation during the preparation process on the other hand.
## Acknowledgements
We acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG) through the CRC 1415 (ID: 417590517), the FOR 5044 (ID: 426703838; [https://www.for5044.de](https://www.for5044.de)), as well as through the Wurzburg-Dresden Cluster of Excellence on "Complexity and Topology in Quantum Matter" - ct.qmat (EXC 2147, ID: 39085490). M.Z. acknowledges funding from the Deutsche Forschungsgemeinschaft via the Transregional Collaborative Research Center TRR 360, the German Academic Exchange Service via a Research Grant for Doctoral Students (ID: 91849816), the Studienstiftung des Deutschen Volkes via a Doctoral Grant and the State of Bavaria via a Marianne-Plehn scholarship.
|
2308.11861 | Supervised learning for robust quantum control in composite-pulse
systems | In this work, we develop a supervised learning model for implementing robust
quantum control in composite-pulse systems, where the training parameters can
be either phases, detunings, or Rabi frequencies. This model exhibits great
resistance to all kinds of systematic errors, including single, multiple, and
time-varying errors. We propose a modified gradient descent algorithm for
adapting the training of phase parameters, and show that different sampling
methods result in different robust performances. In particular, there is a
trade-off between high fidelity and robustness for a given number of training
parameters, and both can be simultaneously enhanced by increasing the number of
training parameters (pulses). For its applications, we demonstrate that the
current model can be used for achieving high-fidelity arbitrary superposition
states and universal quantum gates in a robust manner. This work provides a
highly efficient learning model for fault-tolerant quantum computation by
training various physical parameters. | Zhi-Cheng Shi, Jun-Tong Ding, Ye-Hong Chen, Jie Song, Yan Xia, X. X. Yi, Franco Nori | 2023-08-23T01:37:13Z | http://arxiv.org/abs/2308.11861v2 | # Supervised Learning for Robust Quantum Control in Composite-Pulse Systems
###### Abstract
In this work, we develop a supervised learning model for implementing robust quantum control in composite-pulse systems, where the training parameters can be either phases, detunings, or Rabi frequencies. This model exhibits great resistance to all kinds of systematic errors, including single, multiple, and time-varying errors. We propose a modified gradient descent algorithm for adapting the training of phase parameters, and show that different sampling methods result in different robust performances. In particular, there is a trade-off between high fidelity and robustness for a given number of training parameters, and both of them can be simultaneously enhanced by increasing the number of training parameters (pulses). For its applications, we demonstrate that the current model can be used for achieving high-fidelity arbitrary superposition states and universal quantum gates in a robust manner. This work provides a highly efficient learning model for fault-tolerant quantum computation by training various physical parameters.
## I Introduction
Quantum control with high precision is an essential prerequisite for the implementation of reliable quantum computation [1]. Here, high precision has two meanings: one for perfect control, with knowledge of given physical parameters, and the other for being as free as possible from the effects of various systematic errors. The former is readily available, for instance, by adopting the familiar resonant pulse technique [2]. The latter is truly difficult and requires a great deal of attention. One of the main reasons is that we cannot really capture the complicated nature of systematic errors, such as the inevitable parameter perturbations and decoherence noises. Thus far, numerous approaches have been put forward to address them, including: adiabatic passage [3; 4; 5; 6; 7], dynamical decoupling [8; 9; 10], quantum feedback [11; 12; 13; 14; 15], single-shot shaped pulses [16; 17], derivative removal by adiabatic gate [18; 19; 20], and sampling-based algorithms [21; 22; 23; 24; 25; 26].
The emergence of machine learning [27] offers an alternative and powerful way to tackle the issue of robustness against various errors. Essentially, machine learning is a process of using data to learn the rules applicable to it, and then utilizing the learned rules to make predictions on new data. Typical algorithms involved in machine learning are decision trees, neural networks, support vector machines, random forests, and collaborative filtering, to name a few. Broadly speaking, machine learning can be divided into three main categories: unsupervised learning, supervised learning, and reinforcement learning. Over the last few years, machine learning [28; 29; 30; 31; 32], especially supervised learning, has achieved great success in many fields of physics, such as the study of nonequilibrium quantum dynamics problems [33; 34; 35; 36; 37; 38; 39; 40; 41; 42], the classification of quantum states [43; 44; 45; 46; 47; 48], and the design of control fields for high-fidelity quantum gate operations [49; 50; 51; 52; 53; 54]. To date, there are a variety of methods used to update control variables in machine learning, including the Krotov [55], gradient ascent pulse engineering (GRAPE) [56], evolutionary [57; 58], and variational quantum algorithms [59]. Among them, the high-efficiency GRAPE algorithm [56] and its variants [60; 61; 62; 63; 64; 65; 66] have been very successful in designing the parameters of control fields to sharply suppress various errors, such as amplitude uncertainty and frequency drift.
In this work, we provide a systematic methodology for robust quantum control through constructing a supervised learning model in composite-pulse systems. This model is quite universal and very robust to all kinds of systematic errors, including single error, multiple types of errors, time-varying errors, and so forth. Specifically, we establish the cost function of the supervised learning model, and then propose a modified version of the GRAPE algorithm to train different physical parameters (e.g., phases or detunings).
In this model, one of the most critical steps is to ascertain the _sampling_ method, as reflected in the sampling range and distribution, and we demonstrate that it is necessary to select a suitable sampling range to avoid overfitting and underfitting phenomena. Furthermore, the generalization ability of this model is significantly enhanced by increasing the number of pulses. We finally extend this model to train any physical parameters to implement robust quantum control. Our method
paves an efficient way toward the establishment of high-precision and reliable quantum gates for fault-tolerant quantum computation.
The rest of this paper is organized as follows. In Sec. II, we first present the physical system for constructing a supervised learning model, and show how to design the cost function for this model. Next, we propose a modified gradient descent algorithm to train the phase parameters, and then investigate in detail the sampling methods and the generalization ability of this model. In Sec. III, we provide some applications of this model, including the implementation of arbitrary superposition states and universal quantum gates, which are robust against all kinds of systematic errors. In Sec. IV, we finally illustrate that this model has the ability of training any physical parameters to achieve robust quantum control. Conclusions are given in Sec. V.
## II Supervised learning model
### Physical model
Consider a qubit coherently driven by a control field, and the general form of the Hamiltonian in the interaction picture reads (\(\hbar=1\))
\[H(\theta)=\Delta\sigma_{z}+\Omega e^{-i\theta}\sigma_{+}+\Omega e^{i\theta} \sigma_{-}, \tag{1}\]
where \(\Delta\) represents the detuning between the transition frequency of the qubit and the carrier frequency of the control field, \(\Omega\) is the Rabi frequency, \(\theta\) denotes the phase, and \(\sigma_{\pm}=\frac{1}{2}(\sigma_{x}\pm i\sigma_{y})\), with \(\sigma_{\nu}\) (\(\nu=x,y,z\)) being Pauli operators.
To achieve universal single-qubit gates, the most intuitive and fastest method is to adopt a resonant pulse, i.e., \(\Delta=0\). Then, the evolution operator of this system becomes
\[U(\theta)=e^{-iH(\theta)t}=\left[\begin{array}{cc}\cos A&-ie^{i\theta}\sin A \\ -ie^{-i\theta}\sin A&\cos A\end{array}\right], \tag{2}\]
with pulse area \(A=\Omega t\). Through simply choosing different pulse areas and phases, we can effortlessly obtain a multitude of single-qubit gates. However, the major disadvantage of the resonant pulse is its high susceptibility to systematic errors. To significantly enhance the robustness against systematic errors, one can turn to the composite pulses [67; 68; 69], a train of pulses with identical amplitudes and relative phases to be addressed. A general expression for the total evolution operator of composite pulses can be formulated as follows:
\[U_{\rm tot}=U(\theta_{N})U(\theta_{N-1})\cdots U(\theta_{2})U(\theta_{1}), \tag{3}\]
where \(U(\theta_{n})=\exp[-iH(\theta_{n})T]\) is the evolution operator of the \(n\)th pulse with pulse duration \(T\) and Hamiltonian \(H(\theta_{n})\), \(n=1,\ldots,N\).
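For concreteness, eqs. (2) and (3) translate directly into a short numerical routine; the sketch below (using numpy, and taking the same pulse area \(A\) for every pulse) is only meant to make the ordering of the operator product explicit.

```python
# Sketch of the composite-pulse propagator of eqs. (2)-(3).
import numpy as np

def pulse_propagator(theta, A):
    """Single resonant-pulse evolution operator U(theta) of eq. (2)."""
    return np.array([[np.cos(A), -1j * np.exp(1j * theta) * np.sin(A)],
                     [-1j * np.exp(-1j * theta) * np.sin(A), np.cos(A)]])

def composite_propagator(phases, A):
    """U_tot = U(theta_N) ... U(theta_1) of eq. (3); theta_1 acts first."""
    U_tot = np.eye(2, dtype=complex)
    for theta in phases:
        U_tot = pulse_propagator(theta, A) @ U_tot
    return U_tot
```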
There are numerous methods to determine the values of the phase parameters \(\theta_{n}\) (\(n=1,\ldots,N\)). Among them, the most commonly used is to employ the Taylor expansion in the vicinity of the error \(\epsilon=0\)[70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82], which has been experimentally demonstrated in various physical platforms [83; 84; 85; 86]. This method has its own limitations. Although there are some analytical solutions for phases in some simple cases of a single error [87; 88; 89; 90; 91; 92; 93], more often, the expressions for the coefficients in the Taylor series become particularly cumbersome for multiple types of systematic errors. As a result, even if the long and complicated expressions are given, only numerical solutions for the phases are accessible [94; 95; 96; 97; 98; 99; 100; 101]. Worse still, the numerical solutions may sometimes not exist, and thus one requires the aid of various cost functions [102; 103].
Here, we determine the phase parameters \(\theta_{n}\) (\(n=1,\ldots,N\)) by training on samples with the supervised learning model, which can effectively settle the aforementioned issues. Not only that, this model can also solve problems that traditional composite pulses are incapable of handling, e.g., time-varying errors [104; 105; 106; 107; 108; 109; 110; 111]. For brevity, from now on, we adopt the vector \(\mathbf{\theta}\) to represent all phase parameters, i.e., \(\mathbf{\theta}=(\theta_{1},\ldots,\theta_{N})\).
### Cost function
We begin by constructing the cost function for robust quantum control (operations) in the supervised learning model. Given a set of training data \(\{\mathbf{x}_{k},y_{k}\}\), the goal of supervised learning is to learn a mapping from \(\mathbf{x}\) to \(y\), where \(y_{k}\) is called the label or target of the sample \(\mathbf{x}_{k}\) (usually a vector) in the data set. Here, the phases \(\mathbf{\theta}\) are the training parameters to be learned to make quantum operations as immune to errors as possible.
In the absence of errors, one can acquire perfect quantum operations by composite pulses. Nevertheless, the fidelity of quantum operations declines to varying degrees when the system exhibits different types of errors. Under these circumstances, the fidelity of quantum operations can be regarded as a function of errors. Therefore, the samples come from various errors of this system, i.e.,
\[\mathbf{x}_{k}=\mathbf{\epsilon}_{k}=(\epsilon_{k}^{1},\ldots,\epsilon_{k}^{l},\ldots, \epsilon_{k}^{L}), \tag{4}\]
where \(L\) represents the maximum dimension of errors, and the components \(\epsilon_{k}^{l}\) characterize different types (times) of errors, \(l=1,\ldots,L\). In other words, the errors can be unknown constant or time-varying noises in this model. The label we adopt here is the expected fidelity \(\hat{F}(\mathbf{\epsilon}_{k})\) of quantum operations,
\[y_{k}=\hat{F}(\mathbf{\epsilon}_{k}). \tag{5}\]
In ideal situations, we naturally expect the system to be completely resistant to all errors, i.e., \(\hat{F}(\mathbf{\epsilon}_{k})=1\) for all samples \(\mathbf{\epsilon}_{k}\), reflecting the fact that quantum
operations are fully accurate in the presence of a variety of errors.
For each sample \(\mathbf{\epsilon}_{k}\), we define the following loss function
\[\mathcal{L}_{k}=\hat{F}(\mathbf{\epsilon}_{k})-F(\mathbf{\epsilon}_{k}), \tag{6}\]
where \(F(\mathbf{\epsilon}_{k})\) is the actual fidelity of quantum operations in the presence of the error \(\mathbf{\epsilon}_{k}\). Note that the expression for \(F(\mathbf{\epsilon}_{k})\) can take different forms for different quantum operations. For quantum state preparations, \(F(\mathbf{\epsilon}_{k})\) can be given by
\[F(\mathbf{\epsilon}_{k})=|\langle\Psi_{\mathrm{T}}|\Psi\rangle|^{2}, \tag{7}\]
where \(|\Psi_{\mathrm{T}}\rangle\) is the target state, and \(|\Psi\rangle\) is the final state of the system after performing composite pulses. With regard to quantum gate operations, \(F(\mathbf{\epsilon_{k}})\) can be written as
\[F(\mathbf{\epsilon}_{k})=\frac{1}{M}\mathrm{tr}(U_{\mathrm{T}}^{ \dagger}U), \tag{8}\]
where \(U_{\mathrm{T}}\) is the target quantum gate, \(M\) is the dimension of \(U_{\mathrm{T}}\), and \(U\) denotes the total evolution operator for composite pulses. Then, the cost function of the supervised learning model is defined by
\[J=\frac{1}{K}\sum_{k=1}^{K}\mathcal{L}_{k}=\frac{1}{K}\sum_{k=1}^ {K}\left[\hat{F}(\mathbf{\epsilon}_{k})-F(\mathbf{\epsilon}_{k})\right], \tag{9}\]
which represents the average of all loss functions, and \(K\) is the sample size.
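To make eqs. (7) and (9) concrete, the sketch below evaluates the cost function for a state-preparation task, assuming a pulse-area error \(A\to A(1+\epsilon)\) as the single systematic error and a nominal area of \(\pi/2\) per pulse; the error model, the initial state \(|0\rangle\), and the target state are merely illustrative choices.

```python
# Sketch of the cost function J of eq. (9) for state preparation under an
# (assumed) pulse-area error model.
import numpy as np

def propagator(theta, A):
    return np.array([[np.cos(A), -1j * np.exp(1j * theta) * np.sin(A)],
                     [-1j * np.exp(-1j * theta) * np.sin(A), np.cos(A)]])

def state_fidelity(phases, eps, psi_target, A0=np.pi / 2):
    """F(eps) of eq. (7), starting from |0> with pulse areas A0*(1 + eps)."""
    psi = np.array([1.0, 0.0], dtype=complex)
    for theta in phases:
        psi = propagator(theta, A0 * (1.0 + eps)) @ psi
    return abs(np.vdot(psi_target, psi)) ** 2

def cost(phases, error_samples, psi_target):
    """J of eq. (9) with the expected fidelity set to 1 for every sample."""
    return np.mean([1.0 - state_fidelity(phases, eps, psi_target)
                    for eps in error_samples])
```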
### Training and testing
Given the cost function (9), we are going to train the phase parameters \(\mathbf{\theta}\) to yield its minimum value, and then test the robust performance for composite pulses by the phase parameters learned. Namely, we divide the learning procedure into two steps: _training_ and _testing_. It is important to stress that the errors \(\mathbf{\epsilon}\) are continuous variables in this model. Hence, we sample \(K\) samples for errors in the training stage, while we consider all errors instead of resampling them in the testing stage.
In the _training stage_, we first select \(K\) samples of errors through some common probability distributions, such as the Gaussian distribution, the uniform distribution, the Beta distribution, and the exponential distribution. With \(K\) samples, we calculate the value of the cost function \(J\) via a random set of initial phase parameters, and then update them using the modified gradient descent algorithm, which is introduced in the next section. Consequently, the cost function \(J\) gradually diminishes with the passage of iterations.
For example, in the one-dimensional data space, we update the training parameters \(\mathbf{\theta}\) to make the red dotted curve as close to the orange solid line as possible, as shown in Fig. 1(a). The training parameters in high-dimensional data spaces are updated in the same way as in one-dimension, where the label \(\hat{F}(\mathbf{\epsilon_{k}})=1\) becomes a hyperplane rather than a line; see Fig. 1(b). It is worth mentioning that the update of the training parameters does not end until seeking out the minimum value or reaching a certain threshold of the cost function \(J\).
In the _testing stage_, we apply the phase parameters \(\mathbf{\theta}\) learned in the training stage to all systematic errors in the interval \([\epsilon_{-},\epsilon_{+}]\), and then evaluate the robust performance of this composite pulse sequence in terms of the average fidelity. If the average fidelity reaches the expected value, we accept these learned phase parameters and terminate the learning procedure. Otherwise, the learning procedure is a failure and we must retrain the phase parameters to obtain another solution, i.e., returning to the training stage with a new set of samples
Figure 1: Schematic diagram of the data space. (a) 2-dimensional space composed of the error \(\epsilon\) and the fidelity \(F(\epsilon)\), where the symbols \(\bullet\) and \(\blacklozenge\) represent the expected and actual fidelities for samples, respectively. According to the labelled training set \(\{\epsilon_{k},\hat{F}(\epsilon_{k})\}\), the phase parameters \(\mathbf{\theta}\) are trained to make the red dotted curve fit by the data points as close as possible to the orange solid line, where \(k=1,\dots,K\), and \(K\) is the sample size. (b) \((L+1)\)-dimensional space, where the \(L\)-dimensional hyperplane \(\hat{F}(\epsilon^{1},\dots,\epsilon^{l},\dots,\epsilon^{L})=1\) represents an ideal situation in which the fidelity of quantum operations is completely unaffected by all types (times) of errors. We have to train the phase parameters \(\mathbf{\theta}\) so that the red hypersurface fit by the symbols \(\blacklozenge\) approaches as much as possible the orange hyperplane \(\hat{F}(\epsilon^{1},\dots,\epsilon^{l},\dots,\epsilon^{L})=1\).
or initial phase parameters.
### Modified gradient descent algorithm
In the original GRAPE algorithm [56], the control variables are _amplitudes_ of control fields, but they become _phases_ here. As a result, the original GRAPE algorithm cannot be directly applied in the current model, and a modified version of the GRAPE algorithm is urgently required to accommodate this situation. We next explain this modified GRAPE algorithm.
First, we suitably recast the Hamiltonian of the \(n\)-th pulse (\(n=1,\ldots,N\))
\[H(\theta_{n}) = e^{-i\theta_{n}}\sigma_{+}+e^{i\theta_{n}}\sigma_{-} \tag{10}\] \[= \cos\theta_{n}\sigma_{x}+\sin\theta_{n}\sigma_{y}\] \[= u_{n,x}H_{x}+u_{n,y}H_{y},\]
where \(H_{x(y)}=\sigma_{x(y)}\), and \(\Omega\) has been set to unity. In Eq. (10), two virtual variables \(u_{n,x}\) and \(u_{n,y}\) are introduced to substitute for the phase parameter \(\theta_{n}\), and they satisfy the relation
\[\theta_{n}=\arctan\frac{u_{n,y}}{u_{n,x}}. \tag{11}\]
After this substitution, we can use the gradient descent algorithm to update the virtual variables \(u_{n,x}\) and \(u_{n,y}\), i.e.,
\[u^{\prime}_{n,x} = u_{n,x}-\alpha_{x}\frac{\delta J}{\delta u_{n,x}}, \tag{12}\] \[u^{\prime}_{n,y} = u_{n,y}-\alpha_{y}\frac{\delta J}{\delta u_{n,y}},\]
where both \(\alpha_{x}\) and \(\alpha_{y}\) are some prescribed learning rates. Then, the new phase parameters in the next iteration read
\[\theta^{\prime}_{n}=\arctan\frac{u^{\prime}_{n,y}}{u^{\prime}_{n,x}}. \tag{13}\]
It is worth stressing that \(u_{n,x}\) and \(u_{n,y}\) are only introduced as a mathematical tool, and they do not have to possess physical meaning. Figure 2 illustrates the geometric interpretation of the gradient descent principle.
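A minimal sketch of a single iteration of this update is given below. The gradients in eq. (12) are approximated here by finite differences for simplicity (the analytic gradients of the original GRAPE scheme could be used instead), the cost function \(J\) is supplied by the user, and `arctan2` serves as the numerically safe version of eqs. (11) and (13).

```python
# Sketch of one m-GRAPE iteration on the virtual variables of eqs. (10)-(13).
import numpy as np

def m_grape_step(phases, cost, alpha=0.5, h=1e-6):
    """Return the updated phases after one gradient-descent step."""
    u_x, u_y = np.cos(phases), np.sin(phases)            # map theta -> (u_x, u_y)

    def J(u_x_trial, u_y_trial):
        return cost(np.arctan2(u_y_trial, u_x_trial))    # eq. (11)

    new_x, new_y = u_x.copy(), u_y.copy()
    for n in range(len(phases)):
        dx = np.zeros_like(u_x)
        dy = np.zeros_like(u_y)
        dx[n] = h
        dy[n] = h
        grad_x = (J(u_x + dx, u_y) - J(u_x - dx, u_y)) / (2 * h)
        grad_y = (J(u_x, u_y + dy) - J(u_x, u_y - dy)) / (2 * h)
        new_x[n] -= alpha * grad_x                        # eq. (12)
        new_y[n] -= alpha * grad_y
    return np.arctan2(new_y, new_x)                       # eq. (13)
```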
It is important to note that the modified GRAPE (m-GRAPE) algorithm is rather crude, not because we cannot find a solution for the training parameters \(\mathbf{\theta}\), but because there are very many locally optimal solutions. For example, in Fig. 3, we plot different excitation profiles using 100 groups of the learned parameters \(\mathbf{\theta}\), obtained through different initializations and samples in the supervised learning model. To be specific, the effective training rates are about 22% and 24% for pulse numbers \(N=5\) and 7, respectively, where a training is counted as effective only when the average fidelity \(\bar{F}(-0.1,0.1)\) is larger than 0.9999 [the definition of the average fidelity can be found in Eq. (16)]. Obviously, most of the trainings failed, because the corresponding solutions do not make the system robust against the pulse area error; see also Fig. 3. Moreover, it is difficult to distinguish which solutions are globally optimal. Therefore, we have to properly amend the m-GRAPE algorithm.
To search for globally optimal solutions of the training parameters (of which there may be more than one), two common methods, "escaping" or "restarting", are usually adopted; namely, searching in other directions starting from the current solution, or ignoring the current solution and
Figure 2: Geometric interpretation of the modified GRAPE algorithm. First, the phase \(\theta_{n}\) (\(n=1,\ldots,N\)) is mapped onto the unit circle to yield a unit vector (the green line). By orthogonal decomposition of this vector, we obtain two variables \(u_{n,x}\) and \(u_{n,y}\). Then, we use the gradient descent algorithm to calculate the increment in both directions, and thus acquire the increment vector (the gray line). According to the parallelogram law of vectors, we obtain a new vector (the orange line) whose direction is actually the gradient descent direction. As a result, we have the new phase \(\theta^{\prime}_{n}\) in the next iteration. The iteration process continues until reaching the threshold of the gradient.
searching again in a new parameter space.
During the learning process, we need to evaluate the performance of the training each time, which can be quantified by the average fidelity. Let us start with the case of a one-dimensional error. Given the learned phase parameters \(\mathbf{\theta}\), the definition of the average fidelity \(\bar{F}\) is expressed by
\[\bar{F}=\int\!\rho(\epsilon)\;F_{\mathbf{\theta}}(\epsilon)\;d\epsilon, \tag{14}\]
where \(F_{\mathbf{\theta}}(\epsilon)\) represents the fidelity of quantum operations in the presence of the error \(\epsilon\), and \(\rho(\epsilon)\) is the probability density distribution of the error. In reality, it is difficult to exactly predetermine the specific distribution \(\rho(\epsilon)\). The usual way to tackle this is to use an _a priori_ density distribution such as the uniform distribution or the Gaussian distribution. Here, \(\rho(\epsilon)\) is simply taken to be a uniform distribution, i.e., all errors in the interval \([\epsilon_{-},\epsilon_{+}]\) are considered equally weighted in the testing stage. Therefore, the form of \(\rho(\epsilon)\) is
\[\rho(\epsilon)=\left\{\begin{array}{ll}\frac{1}{\epsilon_{+}- \epsilon_{-}},&\epsilon\in[\epsilon_{-},\epsilon_{+}],\\ 0,&\text{others}.\end{array}\right. \tag{15}\]
As a result, Eq. (14) becomes
\[\bar{F}(\epsilon_{-},\epsilon_{+})=\frac{1}{\epsilon_{+}-\epsilon_{-}}\int_{ \epsilon_{-}}^{\epsilon_{+}}F_{\mathbf{\theta}}(\epsilon)\;d\epsilon. \tag{16}\]
And its discrete version reads
\[\bar{F}(\epsilon_{-},\epsilon_{+})=\frac{1}{\epsilon_{+}-\epsilon_{-}}\sum_{ k}F_{\mathbf{\theta}}(\epsilon_{k})\;\Delta\epsilon_{k}. \tag{17}\]
Note that the value of \(\bar{F}(\epsilon_{-},\epsilon_{+})\) does not have to be set manually, but is self-adaptive during the learning process. Alternatively, we can use the generalization error to characterize the robust performance of this supervised learning model, which is defined by
\[G(\epsilon_{-},\epsilon_{+})=\frac{1}{\epsilon_{+}-\epsilon_{-}} \int_{\epsilon_{-}}^{\epsilon_{+}}\!\!\left[1\!-\!F_{\mathbf{\theta}}(\epsilon) \right]\,d\epsilon=1\!-\!\bar{F}(\epsilon_{-},\epsilon_{+}). \tag{18}\]
Moreover, we can define the robust width \(W(\xi)\) for this model as well, i.e.,
\[W(\xi)=\epsilon_{\text{max}}-\epsilon_{\text{min}}, \tag{19}\]
where all the points in the interval \([\epsilon_{\text{min}},\epsilon_{\text{max}}]\) satisfy
\[\max\big{\{}1-F_{\mathbf{\theta}}(\epsilon)\big{\}}\leq\xi,\quad\epsilon\in[ \epsilon_{\text{min}},\epsilon_{\text{max}}], \tag{20}\]
with a given error threshold \(\xi\). Remarkably, these definitions are easily generalized to the case of multi-dimensional errors.
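These figures of merit are straightforward to evaluate numerically once the fidelity profile \(F_{\mathbf{\theta}}(\epsilon)\) has been computed on a grid of errors; the sketch below assumes a uniform grid, in line with eq. (17).

```python
# Sketch of the testing-stage figures of merit, eqs. (17)-(20).
import numpy as np

def average_fidelity(eps, F):
    """Discrete average fidelity of eq. (17) on a uniform error grid."""
    return np.trapz(F, eps) / (eps[-1] - eps[0])

def generalization_error(eps, F):
    """G of eq. (18)."""
    return 1.0 - average_fidelity(eps, F)

def robust_width(eps, F, xi):
    """Largest contiguous interval with 1 - F <= xi, eqs. (19)-(20)."""
    ok = (1.0 - np.asarray(F)) <= xi
    width, start = 0.0, None
    for i, flag in enumerate(ok):
        if flag and start is None:
            start = i
        if start is not None and (not flag or i == len(ok) - 1):
            end = i if flag else i - 1
            width = max(width, eps[end] - eps[start])
            start = None
    return width
```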
The _escaping_ method proceeds as follows. We first employ the m-GRAPE algorithm for learning \(M\) groups of phase parameters \(\mathbf{\theta}_{m}\) as a starting point, where \(m=1,\ldots,M\). To escape locally optimal solutions, we artificially add a stochastic gradient to each phase so as to generate new input variables. By making use of the m-GRAPE algorithm again, we obtain the new learned parameters \(\mathbf{\theta}_{m}^{\prime}\) and the corresponding average fidelity \(\bar{F}_{m}^{\prime}(\epsilon_{-},\epsilon_{+})\), where the subscript \(m\) represents the \(m\)-th group of training. If \(\bar{F}_{m}^{\prime}(\epsilon_{-},\epsilon_{+})\leq\bar{F}_{m}(\epsilon_{-},\epsilon_{+})\), it means that the new learned parameters \(\mathbf{\theta}_{m}^{\prime}\) are not as good as the old ones. As a result, they should be abandoned, and we still keep the original learned parameters \(\mathbf{\theta}_{m}\). However, if \(\bar{F}_{m}^{\prime}(\epsilon_{-},\epsilon_{+})>\bar{F}_{m}(\epsilon_{-},\epsilon_{+})\), this indicates that the solution has jumped out of locally optimal regions, and thus we accept the new ones. After updating each training group \(\mathcal{N}\) times, we next calculate the total average fidelity for all training groups,
\[\bar{F}_{\text{tot}}=\frac{1}{M}\sum_{m=1}^{M}\bar{F}_{m}(\epsilon_{-}, \epsilon_{+}). \tag{21}\]
Figure 3: Fidelity \(F(\epsilon_{A})\) vs the pulse area error \(\epsilon_{A}\) for pulse numbers \(N=5\) (top panel) and \(N=7\) (bottom panel). Each excitation profile is plotted according to the learned phase parameters \(\mathbf{\theta}\), which are obtained by training different samples in the supervised learning model. The pulse area of each pulse is set to \(\pi/2\). The samples \(\epsilon_{k}^{A}\) are sampled uniformly and randomly from the interval \([-0.3,0.3]\) with sample size \(K=1000\).
Then, the learned parameters whose average fidelities are lower than \(\bar{F}_{\rm tot}\) are discarded. This iteration process continues until the threshold of the total average fidelity is reached. At the end, only the optimal solutions for robust quantum control remain.
**Escape-based GRAPE algorithm**
1. Use the m-GRAPE algorithm to learn \(M\) groups of parameters \(\mathbf{\theta}_{m}\) and calculate the average fidelities \(\bar{F}_{m}(\epsilon_{-},\epsilon_{+})\).
2. Add stochastic gradient \(\delta\theta_{n}^{m}\) to the learned parameters: \(\theta_{n}^{m}\!\!\leftarrow\!\theta_{n}^{m}\!\!+\!\delta\theta_{n}^{m}\), regarded as new input variables.
3. Reuse the m-GRAPE algorithm to learn new parameters \(\mathbf{\theta}_{m}^{\prime}\) and the average fidelity \(\bar{F}_{m}^{\prime}(\epsilon_{-},\epsilon_{+})\).
4. If \(\bar{F}_{m}^{\prime}(\epsilon_{-},\epsilon_{+})>\bar{F}_{m}(\epsilon_{-},\epsilon_{+})\), then \(\mathbf{\theta}_{m}\leftarrow\mathbf{\theta}_{m}^{\prime}\) and \(\bar{F}_{m}(\epsilon_{-},\epsilon_{+})\leftarrow\bar{F}_{m}^{\prime}(\epsilon_{-},\epsilon_{+})\).
5. Repeat Steps 2 through 4 \(\mathcal{N}\) times.
6. Calculate the total average fidelity \(\bar{F}_{\rm tot}\).
7. Discard learned parameters \(\mathbf{\theta}_{m}\) when \(\bar{F}_{m}(\epsilon_{-},\epsilon_{+})<\bar{F}_{\rm tot}\).
8. Go to Step 2 until \(\left(\max\{\bar{F}_{m}(\epsilon_{-},\epsilon_{+})\}-\bar{F}_{\rm tot}\right)<\varepsilon\).
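The listing above can be condensed into the following Python sketch of the escaping loop. The helpers `m_grape` (one run of the modified GRAPE optimizer described earlier) and `avg_fid` (the average fidelity of Eq. (16)) are hypothetical placeholders, and a simple Gaussian perturbation stands in for the stochastic gradient \(\delta\theta_{n}^{m}\).

```python
import numpy as np

def escape_grape(theta_groups, m_grape, avg_fid, n_escape=20, sigma=0.1, tol=1e-6):
    """Escaping method: perturb, re-train, and keep only improving groups."""
    fids = [avg_fid(th) for th in theta_groups]
    while True:
        for _ in range(n_escape):                              # Steps 2-5
            for m, theta in enumerate(theta_groups):
                # Gaussian noise stands in for the stochastic gradient delta-theta
                trial = m_grape(theta + sigma * np.random.randn(*theta.shape))
                f_trial = avg_fid(trial)
                if f_trial > fids[m]:                          # accept only improvements
                    theta_groups[m], fids[m] = trial, f_trial
        f_tot = np.mean(fids)                                  # Step 6
        keep = [m for m, f in enumerate(fids) if f >= f_tot]   # Step 7
        theta_groups = [theta_groups[m] for m in keep]
        fids = [fids[m] for m in keep]
        if max(fids) - f_tot < tol:                            # Step 8
            return theta_groups, fids
```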
For the _restarting_ method, we increase the number of training groups to a sufficiently large size, and each group is trained individually. After the training is completed, we measure the performance of each group using the average fidelity defined by Eq. (16). Then, the phase parameters whose average fidelity reaches the desired value are retained, and the rest are discarded.
Note that the restarting method is very practical when the dimension of the training parameters is small. However, if the dimension is sufficiently large, it may be difficult to choose enough groups of training parameters. In this situation, it is more suitable to adopt the escaping method to obtain optimal solutions for the training parameters.
### Sampling methods
In the supervised learning model, one of the most important tasks is to acquire enough samples, and the samples should be representative. If the sample size is too small, underfitting easily occurs and the parameter law of the model may not be learned effectively. Furthermore, a highly concentrated distribution of samples may lack representativeness, increasing the risk of overfitting and ultimately degrading the prediction accuracy on new samples. Neither situation can unlock the full potential of this model. Therefore, how to acquire enough samples efficiently becomes particularly critical in supervised learning. Specifically, training with different samples produces different training parameters, and such discrepancies can significantly impact the quality of robust quantum control. During the sampling process, two factors have to be considered, namely the sampling range and the sampling distribution. In this section, we focus on this issue.
Figure 4(a) shows the relationship between the generalization error \(G(\epsilon_{-}^{\prime},\epsilon_{+}^{\prime})\) and the total number of samples. It is readily found that the generalization error is relatively large and oscillates when the sample size \(K\) is not large enough. As the sample size increases, the generalization error gradually decreases and stabilizes at a relatively low level. This implies that having sufficient samples is imperative for successfully training the parameters. Furthermore, a comparison of points with different shapes shows that increasing the number of training groups also facilitates parameter training, making it easier to find the optimal solutions; see the points marked with an asterisk.
Figure 4: (a) Generalization error \(G(\epsilon_{-}^{\prime},\epsilon_{+}^{\prime})\) vs the sampling size \(K\). The objective is to robustly attain population inversion for a pulse area error, where we uniformly sample the samples from the interval \([0,0.3]\), set the pulse area of each pulse to \(\pi/2\), choose the pulse number \(N=7\), adopt a learning rate of \(0.001\), and randomly select the initial values of the training parameters \(\mathbf{\theta}\) from the range of \([-\pi,\pi]\). (b) Generalization error \(G(\epsilon_{-}^{\prime},\epsilon_{+}^{\prime})\) vs the sampling boundary \(\epsilon_{+}^{\prime}\), where the samples are uniformly sampled from the interval \([0,\epsilon_{+}^{\prime}]\) with a sample size of \(K=1000\).
In Fig. 4(b), we plot trends in the generalization error for different sampling boundaries \(\epsilon_{+}^{\prime}\), where the samples are uniformly sampled from the interval \([0,\epsilon_{+}^{\prime}]\); the symbol \(\prime\) is added to \(\epsilon_{+}\) to distinguish the sampling boundary from the error boundary. We can see that the generalization error \(G(-\epsilon_{+}^{\prime},\epsilon_{+}^{\prime})\) tends to increase on the whole when the sampling range is enlarged. This result means that a small sampling range is beneficial for obtaining a relatively high fidelity of quantum operations. Nevertheless, smaller sampling ranges are not always superior, because an excessively small sampling range may be counterproductive to improving the robust width, as explained in Sec. II.6. Thus, an appropriate sampling range favors attaining both high fidelity and a broad robust width.
We next investigate the influence of different sampling distributions on the robust performance of quantum control. Figure 5 demonstrates robust performances for various learned phase parameters \(\mathbf{\theta}\), which are trained by some common sampling distributions, including the uniform distribution \(U(\epsilon_{-},\epsilon_{+})\) with sampling interval \([\epsilon_{-},\epsilon_{+}]\), the Gaussian distribution \(G(\mu,\nu)\) with expectation \(\mu\) and variance \(\nu\), the exponential distribution \(E(\lambda)\) with rate parameter \(\lambda\), and the Beta distribution \(B(\alpha,\beta)\) with parameters \(\alpha\) and \(\beta\). Obviously, different sampling distributions exhibit different robust performances of quantum control.
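As a minimal illustration (using NumPy's standard random generators, with parameter values chosen only for the example; in particular, rescaling the Beta samples to \([0,0.3]\) is an assumption), the four families of sampling distributions compared here can be drawn as follows.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 1000
samples = {
    "uniform U(-0.1, 0.3)":    rng.uniform(-0.1, 0.3, K),
    # G(mu, nu) has expectation mu and variance nu, hence scale = sqrt(nu)
    "Gaussian G(0.1, 0.02)":   rng.normal(loc=0.1, scale=np.sqrt(0.02), size=K),
    "exponential E(10)":       rng.exponential(scale=1 / 10, size=K),
    "Beta B(2, 5) on [0,0.3]": 0.3 * rng.beta(2, 5, size=K),
}
```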
First, the training effect is relatively poor for sampling that is symmetric with respect to zero error, e.g., the distributions \(U(-0.3,0.3)\) and \(G(0,0.2)\), because at least half of the average fidelities \(\bar{F}(-0.2,0.2)\) are lower than \(0.9\), as shown by the orange bar in Figs. 5(a) and 5(c). To illustrate this more specifically, at the top of Fig. 5 we also plot the fidelity as a function of the pulse area error for different learned phase parameters \(\mathbf{\theta}\). It can be easily observed that a large number of excitation profiles do not present a plateau in the vicinity of the pulse area error \(\epsilon=0\), implying that these trainings failed. The reason behind these results is that the solutions for the training parameters \(\mathbf{\theta}\) converge more easily to local optima and can hardly escape from these regions.
When we employ asymmetric sampling, as shown in the insets of Figs. 5(b), 5(d), 5(e) and 5(f), almost all excitation profiles exhibit a high-fidelity window, and thus most of the training is valid. A closer look at those excitation profiles reveals that different sampling distributions lead to slightly different robust features. Among them, the exponential distribution performs the worst in terms of the number of high-fidelity solutions [\(\bar{F}(-0.2,0.2)>0.9999\)]; see the blue bar in Fig. 5. This is mainly attributed to the fact that the training process takes into account a certain number of samples with a large error magnitude.
The uniform distribution performs best in the high-fidelity window [\(\bar{F}(-0.2,0.2)>0.9999\)], because only samples with low error magnitude are used in this case. For the remaining two distributions, the effects of training are similar. Therefore, selecting a proper sampling distribution can facilitate the training of the parameters. In the following, we train the phase parameters \(\mathbf{\theta}\) with an asymmetric uniform distribution, unless otherwise specified.
Figure 5: Robust performances for different sampling methods. We obtain \(500\) groups of phase parameters after training \(1000\) samples by using the uniform distribution \(U(\epsilon_{-},\epsilon_{+})\) with sampling interval \([\epsilon_{-},\epsilon_{+}]\), the Gaussian distribution \(G(\mu,\nu)\) with expectation \(\mu\) and variance \(\nu\), the exponential distribution \(E(\lambda)\) with rate parameter \(\lambda\), and the Beta distribution \(B(\alpha,\beta)\) with parameters \(\alpha\) and \(\beta\), respectively. The bar chart at the bottom counts the average fidelity \(\tilde{F}(-0.2,0.2)\) for different intervals, and the panels at the top depict the corresponding fidelity versus the pulse area error by using \(20\) groups of phase parameters that have been learned. The pulse number is \(N=7\), and other parameters are the same as in Fig. 4.
### Generalization ability
The generalization ability is a key indicator to measure the quality of supervised learning models. It refers to the ability to generalize from the training set to the testing set. Generally speaking, the supervised learning model possesses strong generalization ability when executing well in predicting samples that have never been seen before. In the current model, bounded by the sampling boundary \(\epsilon^{\prime}_{+}\), we divide the never-before-seen samples into two parts: internal samples and external samples. As a result, the generalization ability is primarily manifested in two aspects:
(_i_) Generalization of errors within the sampling range, which can be quantified by the generalization error \(G(\epsilon^{\prime}_{-},\epsilon^{\prime}_{+})\) in Eq. (18). Specifically, a greater generalization ability within the sampling range is reflected by a smaller generalization error \(G(\epsilon^{\prime}_{-},\epsilon^{\prime}_{+})\). In other words, a small \(G(\epsilon^{\prime}_{-},\epsilon^{\prime}_{+})\) represents that the learned phase parameters \(\mathbf{\theta}\) have strong robustness against the errors within the sampling range.
(_ii_) Generalization of the error range. For a given error threshold \(\xi\), it is necessary to know over what error range (rather than sampling range) the learned phase parameters \(\mathbf{\theta}\) have good robustness, and this can be measured by the robust width \(W(\xi)\) in Eq. (19). In fact, this quantity mainly characterizes whether or not the learned phase parameters \(\mathbf{\theta}\) still work well when the errors go beyond the sampling range.
Before elaborating on the generalization ability, we stress that there are some differences between the traditional supervised learning model [112] and the current one. In traditional supervised learning [112], when the complexity of the model exceeds a certain level, the algorithm easily overfits, which degrades the accuracy of sample prediction; that is, more training parameters are not always better. On the contrary, the more complex the current model is, the stronger its generalization ability, because a more complex model implies that _more phase parameters can be used for training_. In other words, model complexity (i.e., _increasing the pulse number_) is conducive to promoting the generalization ability. The reason is that the rule that needs to be learned is already known, i.e., the hyperplane \(\hat{F}(\epsilon^{1},\dots,\epsilon^{l},\dots,\epsilon^{L})=1\) as shown in Fig. 1(b). We next study this issue in detail.
Figure 6 displays the generalization error \(G(-\epsilon_{+},\epsilon_{+})\) for different pulse numbers and sampling ranges. As shown by the dark blue bar in Fig. 6(a), i.e., \(G(-0.2,0.2)\ll 10^{-5}\), the generalization of errors within the sampling range is excellent for a small sampling range. By contrast, the generalization of the error range does not behave well, as the generalization error is not substantially reduced for larger error ranges; see the light blue bars in Fig. 6(a). In particular, there is not much improvement in the robust performance when increasing the number of pulses. These results reveal that when the sampling range is too small, the phase parameters may not be effectively trained, especially for a large number of pulses. When increasing the sampling range, the generalization of the error range is gradually enhanced, but at the cost of a loss in the average fidelity; see also the dark blue bars in Figs. 6(a)-6(d). Hence, a suitable sampling range enables better training of the phase parameters, thereby enhancing the robustness of this model. As an example, for pulse number \(N=9\) and sampling range \([0,0.6]\), as shown in Fig. 6(c), the generalization error \(G(-0.6,0.6)\) is on the order of \(10^{-5}\), less than the fault tolerance threshold \(10^{-4}\)[1]. This implies that the fidelity of population inversion is much larger than \(0.9999\) even in the presence of a \(\pm 60\%\) pulse area error in this system.
In Fig. 7(a), we show the relationship between the robust width \(W(\xi)\) and the sampling boundary \(\epsilon^{\prime}_{+}\), where the error threshold \(\xi\) is equal to \(10^{-4}\). In cases where the sampling range is not particularly large, e.g., \(\epsilon^{\prime}_{+}\leq 0.2\), the robust width does not vary greatly with the number of pulses. This is mainly due to the fact that the sampling range is too narrow to effectively represent all errors. As a result, although complex models may possess strong generalization abilities, they are prone to overfitting, a phenomenon of overlearning on the training samples. To overcome the overfitting phenomenon and enhance the robustness against errors over a broader range, we need to either expand the sampling range or change the sampling method (e.g., employing the Gaussian distribution to consider a small number of errors with large magnitude) to have more representative samples.
Figure 6: Generalization error \(G(-\epsilon_{+},\epsilon_{+})\) vs the pulse number \(N\) for different sample ranges. The samples \(\epsilon_{k}\) are sampled by the uniform distributions (a) \(U(0,0.2)\), (b) \(U(0,0.4)\), (c) \(U(0,0.6)\), and (d) \(U(0,0.8)\), where other parameters are the same as in Fig. 4.
Broadly speaking, for a given pulse number, the robust width is positively correlated with the sampling range; see also Fig. 7(a). Once the sampling range exceeds a certain threshold, however, the robust width drops sharply. The main reason is that, in such circumstances, this model may suffer from underfitting, a phenomenon in which the model inadequately fits the training samples. Hence, it is necessary to select a suitable sampling range to maximize the generalization ability of this model prior to the onset of underfitting. On the other hand, the threshold of the sampling range varies depending on the number of pulses; see the distinct positions of the maximum robust width in Fig. 7(a). Remarkably, increasing the pulse number shifts this threshold to a larger sampling range, verifying that a more sophisticated model tends to possess a stronger generalization ability.
Figure 7(b) shows the generalization error as a function of the error boundary, where different color curves represent different sampling ranges, and we set the pulse number \(N=9\). It can be seen that the generalization errors change slowly and their values are relatively small when the error falls within the sampling range. This indicates that the generalization ability of this model is very strong within the sampling range. Nevertheless, once the error goes beyond the sampling range, there is a sharp growth in the generalization error, signifying an unsatisfactory generalization performance in this range.
Therefore, for a particular number of training parameters (or equivalently, pulses), generalization of errors within the sampling range and generalization of the error range are always in conflict in the current model. In other words, there is a tradeoff between high fidelity and a wide robust range of quantum control for a given pulse number. High fidelity usually leads to a reduction in robust width, and vice versa. To promote both generalization abilities, a feasible way is to increase the complexity of the supervised learning model (equivalently, the pulse number) so that more phase parameters can be involved in the training. By doing this, both high fidelity and a wide robust range can be attained simultaneously in the current model. This is also confirmed in Fig. 6, which shows that the generalization error generally decreases when increasing the pulse number.
## III Applications: Robust Quantum Control
### Single type of error
We begin by illustrating a scenario in which the system exhibits only one type of error; that is, the samples are one-dimensional. We have already explained in detail in Sec. II how to successfully train the phase parameters \(\mathbf{\theta}\) in the presence of the pulse area error \(\epsilon_{A}\). We next demonstrate that this supervised learning model is equally applicable to suppressing other types of errors, such as the detuning error.
Figure 8 shows the performance of the fidelity in the presence of the detuning error \(\epsilon_{\Delta}\) by the learned phase parameters, where the samples \(\epsilon_{k}^{\Delta}\) are sampled from the uniform distribution \(U(0,0.3)\). An inspection of Fig. 8 reveals that it is also feasible to make the system robust with respect to the detuning error by training on the phase parameters.
#### iii.1.1 Arbitrary superposition state
So far, we have illustrated the supervised learning model with the example of population inversion. Actually, this model is also suitable for implementing arbitrary population transfer in a robust manner. To this end, we simply set the target state to the superposition state we desire to prepare, whose general form is given by
\[|\psi_{T}\rangle=\cos\phi|0\rangle+e^{i\varphi}\sin\phi|1\rangle, \tag{22}\]
Figure 7: (a) Robust width \(W(\xi)\) vs the sampling boundary \(\epsilon_{+}^{\prime}\) for different pulse numbers. (b) Generalization error \(G(-\epsilon_{+},\epsilon_{+})\) vs the error boundary \(\epsilon_{+}\) for \(N=9\). The samples \(\epsilon_{k}\) are sampled by the uniform distribution \(U(0,\epsilon_{+}^{\prime})\), the error threshold is set to \(\xi=10^{-4}\), and other parameters are the same as in Fig. 4.
where \(\phi\) and \(\varphi\) can be arbitrary. After establishing the target state, we then employ Eq. (7) to train the phase parameters to accomplish this goal.
Figure 9 shows the time-evolution trajectories of the system state under the learned phase parameters \(\mathbf{\theta}\). We can see that when there is a pulse area error in this system, the trajectories (the dashed and dot-dashed curves) gradually deviate from the exact one (the solid curve) during the evolution process. Despite this, the three trajectories eventually converge. Therefore, the learned phase parameters \(\mathbf{\theta}\) ensure that the target state \(|\psi_{T}\rangle\) is still obtained with ultrahigh fidelity even in the presence of systematic errors.
#### iii.1.2 Quantum gate
This supervised learning model can be used not only to prepare arbitrary superposition states, but also to implement universal quantum gates. The general expression of such a gate can be written as
\[\mathcal{U}(\phi,\varphi,\Phi)=\left[\begin{array}{cc}e^{i\Phi}\cos\phi&-e^ {-i(\varphi+\Phi)}\sin\phi\\ e^{i(\varphi+\Phi)}\sin\phi&e^{-i\Phi}\cos\phi\end{array}\right], \tag{23}\]
where \(\phi,\varphi\), and \(\Phi\) are arbitrary real numbers based on the desired quantum gate. In the current model, there are two methods to robustly obtain the universal quantum gate given by Eq. (23). Next, we present each of these two methods individually.
(_i_) _State-based method._ Given that the initial state of the system is \(|0\rangle\), we first train the phase parameters \(\mathbf{\theta}\) to robustly obtain a superposition state \(|\psi_{T}\rangle=\cos\phi|0\rangle+e^{i\varphi}\sin\phi|1\rangle\), where the values of \(\phi\) and \(\varphi\) are determined by the desired quantum gate. Then, we shift the learned phase parameters by an appropriate angle:
\[\mathbf{\theta}\leftarrow\mathbf{\theta}+\Phi, \tag{24}\]
where the shift angle \(\Phi\) is dependent on the desired quantum gate as well. We emphasize that this phase shift manipulation does not change the probability amplitude of the matrix elements in Eq. (23); rather, it modifies only the relative phase between different matrix elements. Therefore, the final learned phase parameters for implementing the universal quantum gate in a robust manner become \(\mathbf{\theta}+\Phi\). As an example, the learned phase parameters \(\mathbf{\theta}\) adopted in Fig. 9(a) can also be used to implement a Hadamard gate with a specific phase, and the phase of the Hadamard gate is properly modulated by the phase shift \(\Phi\).
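A minimal sketch of this state-based construction follows: the target-gate matrix implements Eq. (23), and the learned phases are shifted by \(\Phi\) as in Eq. (24). The array `theta_learned` is a hypothetical placeholder for phases obtained from the training described above.

```python
import numpy as np

def target_gate(phi, varphi, Phi):
    """Universal single-qubit gate of Eq. (23)."""
    return np.array([
        [np.exp(1j * Phi) * np.cos(phi), -np.exp(-1j * (varphi + Phi)) * np.sin(phi)],
        [np.exp(1j * (varphi + Phi)) * np.sin(phi), np.exp(-1j * Phi) * np.cos(phi)],
    ])

# Hadamard-like target: phi = pi/4, varphi = 0, overall phase fixed by Phi.
phi, varphi, Phi = np.pi / 4, 0.0, np.pi / 2
theta_learned = np.zeros(7)                 # placeholder for trained phases
theta_gate = theta_learned + Phi            # Eq. (24): shift all learned phases by Phi
U_target = target_gate(phi, varphi, Phi)
```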
(_ii_) _Evolution operator-based method._ We directly solve the Schrödinger equation for the evolution operator \(U(t)\) of this system,
\[\dot{U}(t)=-iH(t)U(t), \tag{25}\]
where \(H(t)\) is the system Hamiltonian, and \(U(0)=\mathbf{1}\) is an identity operator at the initial time. The target quantum gate \(\mathcal{U}(\phi,\varphi,\Phi)\) now can be reached via an evolution operator after \(N\) pulses, i.e.,
\[U(NT)=\mathcal{U}(\phi,\varphi,\Phi). \tag{26}\]
In this situation, we adopt Eq. (8) to calculate the corresponding gradients.
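The evolution-operator-based method can be sketched as follows. Since the Hamiltonian and the cost function of Eqs. (8)–(9) are defined in earlier sections not reproduced here, the resonant two-level pulse Hamiltonian \(H_n=\tfrac{\Omega}{2}\bigl(e^{-i\theta_n}|0\rangle\langle 1|+\mathrm{h.c.}\bigr)\) and the trace gate fidelity used below are assumptions made purely for illustration; the sketch only shows how \(U(NT)\) of Eq. (26) is accumulated pulse by pulse.

```python
import numpy as np
from scipy.linalg import expm

def propagator(thetas, eps_A, omega=1.0, T=np.pi / 4):
    """U(NT) for a composite-pulse sequence with a common pulse-area error eps_A.

    Assumes the resonant two-level Hamiltonian
    H_n = (Omega/2) * (exp(-i theta_n)|0><1| + h.c.), pulse area Omega*T per pulse.
    """
    U = np.eye(2, dtype=complex)
    for th in thetas:
        H = 0.5 * omega * (1 + eps_A) * np.array(
            [[0, np.exp(-1j * th)], [np.exp(1j * th), 0]])
        U = expm(-1j * H * T) @ U
    return U

def gate_infidelity(thetas, U_target, eps_samples):
    """Average gate infidelity over sampled errors (illustrative cost function)."""
    fids = [abs(np.trace(U_target.conj().T @ propagator(thetas, e))) ** 2 / 4
            for e in eps_samples]
    return 1.0 - np.mean(fids)
```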
Figure 8: Fidelity \(F(\epsilon_{\Delta})\) vs the detuning error \(\epsilon_{\Delta}\). Each excitation profile is plotted according to the learned phase parameters \(\mathbf{\theta}\), where the detuning errors \(\epsilon_{k}^{\Delta}\) are sampled from the uniform distribution \(U(0,0.3)\), and other parameters are the same as in Fig. 4.
Figure 9: Visualization of preparing different superposition states on the Bloch sphere. Left: \(\phi=\pi/4\) and \(\varphi=0\). Right: \(\phi=\pi/3\) and \(\varphi=0\). The phase parameters \(\mathbf{\theta}\) are attained by training in the supervised learning model, where the sampling distribution takes the form \(U(-0.1,0.3)\). The blue (red) arrow denotes the Bloch vector pointing to the initial (final) state. The solid, dashed, and dot-dashed curves represent the trajectories of the system state in the presence of the pulse area error \(\epsilon_{A}=0\), \(+0.1\), and \(-0.1\), respectively, and the dynamical evolution movie can be found in the supplementary material. The pulse number is chosen as \(N=7\), with the pulse area of each pulse being \(\pi/4\), and other parameters are the same as in Fig. 4.
Suppose now that we intend to execute a Hadamard gate:
\[\mathrm{H}=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}1&1\\ 1&-1\end{array}\right]. \tag{27}\]
To this end, we train the phase parameters \(\mathbf{\theta}\) by minimizing the cost function given by Eq. (9). Figure 10 shows the variation of the tomography results after different numbers of iterations. We can see that the evolution operator of the system gradually approaches the Hadamard gate as the number of iterations increases, and eventually reaches the goal with ultrahigh fidelity.
Different from the case of population inversion, the robust performance of quantum gates is strongly correlated with the sampling distribution here. In other words, by training on different ranges of samples, we learn different phase parameters, and these phase parameters exhibit different robust performance with respect to systematic errors.
In Fig. 11, we plot the fidelity of the Hadamard gate versus the pulse area error using various learned phase parameters attained from distinct sampling ranges. It is shown that the learned phase parameters manifest varying degrees of robustness to the pulse area error \(\epsilon_{A}\) under different sampling distributions.
### Multiple types of errors
When the system simultaneously exhibits different types of errors (e.g., the pulse area error and the detuning error), it is also convenient to employ this supervised learning model to implement robust quantum control. In this situation, the sample space becomes multidimensional, where each dimension represents a specific type of error. As a result, the training set can be represented as \(\{\epsilon_{k}^{A},\epsilon_{k}^{\Delta},\dots,\hat{F}(\epsilon_{k}^{A},\epsilon_{k}^{\Delta},\dots)\}\), where \(\hat{F}(\epsilon_{k}^{A},\epsilon_{k}^{\Delta},\dots)=1\) is a hyperplane and \(k=1,\dots,K\). In Fig. 12, we plot the fidelity as a function of the pulse area error and the detuning error for different numbers of trained phase parameters, and the results demonstrate that the training methodology maintains its efficacy even when confronted with multiple types of errors.
Figure 11: Robust performances for the pulse area error by using different ranges of sampling distributions. Parameters except for the sampling distribution are the same as in Fig. 10.
Figure 10: Real and imaginary parts of the evolution operator \(U(NT)\) after different iterations. We set the pulse area of each pulse to \(\pi/4\), the pulse number to \(N=9\), and sample the pulse area error \(\epsilon_{k}^{A}\) from the uniform distribution \(U(-0.1,0.3)\), while other parameters are the same as in Fig. 4.
### Time-varying errors
In this section, we train the phase parameters when the errors are time-dependent. As is well known, it is impossible to write down the Taylor expansion exactly for time-varying errors by using traditional composite pulse technologies [113], because the Hamiltonian generally does not commute with itself at different times. One common method of tackling time-varying errors is to take advantage of appropriate unitary transformations to formally divide the evolution operator of the system into error-free and error components, and then design related parameters to minimize the influence of the error term [114; 115; 116; 117; 118; 119; 120]. While this method is effective, it is quite cumbersome.
Here we show that the supervised learning model remains feasible for the case of time-varying errors. In this case, the training set needs to be modified as \(\{\epsilon_{k}^{1},\ldots,\epsilon_{k}^{L},\hat{F}(\epsilon_{k}^{1},\ldots,\epsilon_{k}^{L})\}\), where \(\hat{F}(\epsilon_{k}^{1},\ldots,\epsilon_{k}^{L})=1\) represents a hyperplane, and \(\epsilon_{k}^{l}\) denotes the error magnitude in the time interval \([t_{l},t_{l+1}]\), \(l=1,\ldots,L\), \(k=1,\ldots,K\). Namely, the time-dependent nature of the errors is reflected in the fact that errors in different time intervals are regarded as distinct dimensions of the samples.
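As a sketch of the bookkeeping only (the Gaussian parameters follow the caption of Fig. 13, and nothing here depends on the physical model), each training sample is now an \(L\)-dimensional error vector with one entry per time interval, labelled by the hyperplane value \(1\).

```python
import numpy as np

rng = np.random.default_rng(1)
K, L = 1000, 5                      # number of samples and of time intervals
# One error value per time interval [t_l, t_{l+1}], drawn from G(0.1, 0.02):
eps_time_varying = rng.normal(loc=0.1, scale=np.sqrt(0.02), size=(K, L))
labels = np.ones(K)                 # the hyperplane \hat{F}(eps^1, ..., eps^L) = 1
training_set = list(zip(eps_time_varying, labels))
```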
Unlike the case of time-independent errors, we cannot account for all time-varying errors in the testing stage. To validate the learned phase parameters \(\mathbf{\theta}\) obtained from the training stage, we need to sample the time-varying errors for different time intervals to obtain enough testing samples. The fidelity of each testing sample is plotted in Fig. 13(a) by using the learned phase parameters \(\mathbf{\theta}\), where the Gaussian distribution is \(G(0.1,0.02)\). It is clearly shown that the learned phase parameters \(\mathbf{\theta}\) perform very well on the vast majority of testing samples, where a total average fidelity of \(99.59\%\) is achieved. Furthermore, we demonstrate in Fig. 13(b) how to successfully obtain robust population inversion by training the phase parameters \(\mathbf{\theta}\) in the presence of time-varying errors. For comparison, Fig. 13(b) also shows the fidelity as a function of the time-independent errors. We can observe that the robustness with respect to time-independent errors is much better than that with respect to time-varying errors.
Figure 12: Contour plots of the fidelity \(F(\epsilon_{A},\epsilon_{\Delta})\) vs the pulse area error \(\epsilon_{A}\) and the detuning error \(\epsilon_{\Delta}\) for different numbers of phase parameters. (a) \(N=7\). (b) \(N=9\). Both the pulse area error \(\epsilon_{k}^{A}\) and the detuning error \(\epsilon_{k}^{\Delta}\) are sampled from the uniform distribution \(U(0,0.22)\), and other parameters are the same as in Fig. 4.
Figure 13: (a) Testing performance for the learned phase parameters \(\mathbf{\theta}\), where the five-dimensional samples are sampled from the Gaussian distribution \(G(0.1,0.02)\), and the average fidelity is \(99.59\%\) for \(1000\) testing samples. (b) \(F(\epsilon_{A})\) vs the pulse area error \(\epsilon_{A}\), where the pulse area error suffers from time-dependent Gaussian noises with the expectation \(0.1\epsilon_{A}\) and the variance \(0.1\epsilon_{A}\). All pulse area errors \(\epsilon_{k}^{l}\) are sampled from the Gaussian distribution \(G(0.1,0.02)\), \(N=5\) pulses, the time interval \(t_{l+1}-t_{l}=T\), and other parameters are the same as in Fig. 4.
## IV Further extensions
Up to now, we have regarded the phases as the training parameters in the supervised learning model. Actually, the phases are not the only possible choice, and other physical parameters (e.g., detunings or Rabi frequencies) can serve as training parameters instead. In the following, we investigate the feasibility of utilizing the detuning \(\Delta_{n}\) (\(n=1,\ldots,N\)) of each pulse as a training parameter to implement robust quantum control. Undoubtedly, the training of Rabi frequencies can be accomplished through a comparable procedure. To simplify the notation, all detuning parameters \(\Delta_{n}\) are collectively represented by a vector \(\mathbf{\Delta}\), i.e., \(\mathbf{\Delta}=(\Delta_{1},\ldots,\Delta_{N})\).
Figure 14(a) shows the relationship between the cost function and the number of iterations for different numbers of detuning parameters in the training process. During the iterations, the detuning \(\Delta_{n}\) is reinitialized if \(\Delta_{n}/\Omega_{n}>10\). The reason is as follows. From a physical point of view, the pulse becomes highly detuned and primarily influences the phase rather than the probability amplitude of quantum states when \(\Delta_{n}\) is excessively large. As a result, a pulse with large \(\Delta_{n}\) makes little contribution to the population transfer, and thus needs to be reinitialized.
Figure 14(a) shows that the value of the cost function for \(N=9\) is larger than in the other two situations. This is because the uniform distributions are chosen from different ranges for different pulse numbers. As explained in Sec. II.5, a large sampling range leads to a large value of the cost function. Thus, what we care about is not the value of the cost function, but its convergence. When the cost function converges to a small steady value, we can claim that a set of solutions for the training parameters has been obtained. Clearly, the training in Fig. 14(a) is successful, since the cost function gradually diminishes with each subsequent iteration and finally tends to an extremely small value.
We can also see from the inset of Fig. 14(a) that the learned detuning parameters \(\mathbf{\Delta}\) perform particularly well in terms of robustness against the pulse area error. Furthermore, similar to the case of the phase parameters, the system becomes more resilient to the pulse area error when increasing the number of detuning parameters, as shown by different high-fidelity windows in the inset of Fig. 14(a).
So far, the supervised learning model has only been demonstrated to work well in a two-dimensional Hilbert space. Actually, this model can also be applied in higher-dimensional Hilbert spaces. Next, we take three dimensions as an example to briefly show robust population inversion between the states \(|1\rangle\) and \(|3\rangle\); the treatment in higher dimensions is similar. Assume that a three-level system is resonantly driven by two external fields, and that the Hamiltonian under the rotating-wave approximation can be written as
\[H(\theta_{a},\theta_{b})=\Omega_{a}e^{-i\theta_{a}}|1\rangle\langle 2|+\Omega_{ b}e^{-i\theta_{b}}|2\rangle\langle 3|, \tag{28}\]
where \(\Omega_{a(b)}\) denotes the Rabi frequency between the system and the first (second) external field with the corresponding phase \(\theta_{a(b)}\). For simplicity, we again regard the phases \(\theta_{a}\) and \(\theta_{b}\) as training parameters, i.e., \(\mathbf{\theta}=(\theta_{1}^{a},\theta_{1}^{b},\ldots,\theta_{N}^{a},\theta_{N}^{b})\). As shown in Fig. 14(b), there is a high-fidelity window for population inversion in the presence of the pulse area error, verifying that robust quantum control is accessible in a higher-dimensional Hilbert space by using this supervised learning model.
Figure 14: (a) Cost function vs the number of iterations for different numbers of training parameters. We train 500 groups of samples with \(K=1000\), and display the group with the highest degree of robustness (the remaining groups are not shown in the figure). Inset: Robust performances for the pulse area error by distinct learned detuning parameters \(\mathbf{\Delta}\). (b) Fidelity vs the pulse area error by using the learned phase parameters \(\mathbf{\theta}\) in a three-level system, where we set \(\Omega_{a}=\Omega_{b}=\Omega\) for simplicity. The pulse area errors \(\epsilon_{k}^{A}\) (\(k=1,\ldots,K\)) are sampled from different uniform distributions depending on the pulse number. Specifically, the uniform distributions used are \(U(0,0.2)\) for \(N=5\), \(U(0,0.3)\) for \(N=7\), and \(U(0,0.4)\) for \(N=9\). The initial values of \(\Delta_{n}\) and \(\theta_{n}^{a(b)}\) are randomly selected from the range of \([-3,3]\) and \([-\pi,\pi]\), respectively, the learning rate is \(0.1\), and the duration of the \(n\)th pulse is chosen as (a) \(T_{n}=\pi/(2\sqrt{\Omega_{n}^{2}+\Delta_{n}^{2}})\) and (b) \(T_{n}=\pi/\sqrt{2}\Omega\).
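A sketch of the three-level building block of Eq. (28) is given below. Including the Hermitian-conjugate coupling terms, reading the pulse duration as \(T_n=\pi/(\sqrt{2}\,\Omega)\) from the caption of Fig. 14(b), and treating the pulse-area error as a common rescaling of both Rabi frequencies are assumptions made only for this illustration.

```python
import numpy as np
from scipy.linalg import expm

def H3(theta_a, theta_b, omega=1.0):
    """Three-level Hamiltonian of Eq. (28) with Hermitian-conjugate terms included."""
    H = np.zeros((3, 3), dtype=complex)
    H[0, 1] = omega * np.exp(-1j * theta_a)   # |1><2| coupling
    H[1, 2] = omega * np.exp(-1j * theta_b)   # |2><3| coupling
    return H + H.conj().T

def transfer_probability(thetas_ab, eps_A, omega=1.0):
    """Population transferred from |1> to |3> after the composite sequence."""
    T = np.pi / (np.sqrt(2) * omega)          # assumed single-pulse duration
    psi = np.array([1, 0, 0], dtype=complex)
    for th_a, th_b in thetas_ab:
        psi = expm(-1j * H3(th_a, th_b, omega * (1 + eps_A)) * T) @ psi
    return abs(psi[2]) ** 2
```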
## V Conclusion
In conclusion, we have proposed a supervised learning model for implementing robust quantum control in composite-pulse systems. We first construct the cost function for this model and demonstrate how the model works. By introducing virtual variables, we put forward a modified gradient descent algorithm to train the phase parameters. To avoid falling into locally optimal solutions, we further amend the modified algorithm by using escaping or restarting methods. We find that the robust performance of the system depends heavily on the sampling method, as reflected in the sampling range and distribution. To be specific, the generalization error tends to grow as the sampling range increases, and different sampling distributions result in different robust performances.
Afterwards, we characterize the generalization ability of this model. The results demonstrate that this model is susceptible to overfitting when the sampling range is too small, whereas it may exhibit underfitting once the sampling range exceeds a certain threshold. In particular, the generalization ability strengthens correspondingly when the model complexity is increased (i.e., when the pulse number is increased). Therefore, for a given number of pulses, a suitable sampling range is conducive to maximizing the generalization ability of the model.
The current supervised learning model exhibits powerful robustness to errors, and thus has a wide range of applications, e.g., the robust realization of arbitrary superposition states as well as universal quantum gates. Specifically, the learned phase parameters in this model perform very well in terms of robustness against various systematic errors, such as a single error, multiple errors, and time-varying errors. It is particularly important that all of these situations can be tackled by training the same model; the only difference lies in the dimension of the samples. To implement universal quantum gates in a robust manner, we proposed two different methods, the state-based and evolution operator-based methods, to train the phase parameters, whose robustness depends on the sampling range. We finally demonstrate that this model can be applied in higher-dimensional Hilbert spaces and is suitable for training any physical parameters to attain robust quantum control. It is believed that this supervised learning model provides a universal and highly efficient platform for realizing reliable quantum information processing in different quantum systems.
###### Acknowledgements.
We would like to thank the valuable suggestions of Dr. Ye-Xiong Zeng, Dr. Yanming Che, and Dr. Clemens Gneiting. This work is supported by the Natural Science Foundation of Fujian Province under Grant No. 2021J01575, the Natural Science Funds for Distinguished Young Scholar of Fujian Province under Grant No. 2020J06011, and the Project from Fuzhou University under Grant No. JG202001-2. F.N. is supported in part by: Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP), and the Moonshot R&D Grant Number JPMJMS2061], the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. FA2386-20-1-4069), the Office of Naval Research (ONR Global), and the Foundational Questions Institute Fund (FQXi) via Grant No. FQXi-IAF19-06.
|
2305.11645 | The fast reduced QMC matrix-vector product | We study the approximation of integrals $\int_D f(\boldsymbol{x}^\top A)
\mathrm{d} \mu(\boldsymbol{x})$, where $A$ is a matrix, by quasi-Monte Carlo
(QMC) rules $N^{-1} \sum_{k=0}^{N-1} f(\boldsymbol{x}_k^\top A)$. We are
interested in cases where the main cost arises from calculating the products
$\boldsymbol{x}_k^\top A$. We design QMC rules for which the computation of
$\boldsymbol{x}_k^\top A$, $k = 0, 1, \ldots, N-1$, can be done fast, and for
which the error of the QMC rule is similar to the standard QMC error. We do not
require that $A$ has any particular structure.
For instance, this approach can be used when approximating the expected value
of a function with a multivariate normal random variable with a given
covariance matrix, or when approximating the expected value of the solution of
a PDE with random coefficients.
The speed-up of the computation time is sometimes better and sometimes worse
than the fast QMC matrix-vector product from [Dick, Kuo, Le Gia, and Schwab,
Fast QMC Matrix-Vector Multiplication, SIAM J. Sci. Comput. 37 (2015)]. As in
that paper, our approach applies to (polynomial) lattice point sets, but also
to digital nets (we are currently not aware of any approach which allows one to
apply the fast method from the aforementioned paper of Dick, Kuo, Le Gia, and
Schwab to digital nets).
Our method does not use FFT, instead we use repeated values in the quadrature
points to derive a reduction in the computation time. This arises from the
reduced CBC construction of lattice rules and polynomial lattice rules. The
reduced CBC construction has been shown to reduce the computation time for the
CBC construction. Here we show that it can also be used to reduce the
computation time of the QMC rule. | Josef Dick, Adrian Ebert, Lukas Herrmann, Peter Kritzer, Marcello Longo | 2023-05-19T12:51:00Z | http://arxiv.org/abs/2305.11645v1 | # The fast reduced QMC matrix-vector product
###### Abstract
We study the approximation of integrals of the form \(\int_{D}f(\boldsymbol{x}^{\top}A)\,\mathrm{d}\mu(\boldsymbol{x})\), where \(A\) is a matrix, by quasi-Monte Carlo (QMC) rules \(N^{-1}\sum_{k=0}^{N-1}f(\boldsymbol{x}_{k}^{\top}A)\). We are interested in cases where the main computational cost in the approximation arises from calculating the products \(\boldsymbol{x}_{k}^{\top}A\). We design QMC rules for which the computation of \(\boldsymbol{x}_{k}^{\top}A\), \(k=0,1,\ldots,N-1\), can be done in a fast way, and for which the approximation error of the QMC rule is similar to the standard QMC error. We do not require that the matrix \(A\) has any particular structure.
Problems of this form arise in some important applications in statistics and uncertainty quantification. For instance, this approach can be used when approximating the expected value of some function with a multivariate normal random variable with some given covariance matrix, or when approximating the expected value of the solution of a PDE with random coefficients.
The speed-up of the computation time of our approach is sometimes better and sometimes worse than the fast QMC matrix-vector product from [Josef Dick, Frances Y. Kuo, Quoc T. Le Gia, and Christoph Schwab, Fast QMC Matrix-Vector Multiplication, SIAM J. Sci. Comput. 37 (2015), no. 3, A1436-A1450]. As in that paper, our approach applies to lattice point sets and polynomial lattice point sets, but also applies to digital nets (we are currently not aware of any approach which allows one to apply the fast QMC matrix-vector product from the aforementioned paper of Dick, Kuo, Le Gia, and Schwab to digital nets).
The method in this paper does not make use of the fast Fourier transform, instead we use repeated values in the quadrature points to derive a significant reduction in the computation time. Such a situation naturally arises from the reduced CBC construction of lattice rules and polynomial lattice rules. The reduced CBC construction has been shown to reduce the computation time for the CBC construction. Here we show that it can additionally be used to also reduce the computation time of the underlying QMC rule. One advantage of the present approach is that it can be combined with random (digital) shifts, whereas this does not apply to the fast QMC matrix-vector product from the earlier paper of Dick, Kuo, Le Gia, and Schwab.
**Keywords:** Matrix-vector multiplication, quasi-Monte Carlo, high-dimensional integration, lattice rules, polynomial lattice rules, digital nets, PDEs with random coefficients. **2020 MSC:** 65C05, 65D30, 41A55, 11K38.
## 1 Introduction and problem setting
We are interested in approximating integrals of the form
\[\int_{D}f(\boldsymbol{x}^{\top}A)\,\mathrm{d}\mu(\boldsymbol{x}), \tag{1}\]
for a domain \(D\subseteq\mathbb{R}^{s}\), an \(s\times\tau\)-matrix \(A\in\mathbb{R}^{s\times\tau}\), and a function \(f:D\to\mathbb{R}\), by quasi-Monte Carlo (QMC) integration rules of the form
\[Q_{N}(f)=\frac{1}{N}\sum_{k=0}^{N-1}f(\mathbf{x}_{k}^{\top}A), \tag{2}\]
where we use deterministic cubature points \(\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{N-1}\in D\). We write \(\mathbf{x}_{k}=(x_{1,k},x_{2,k},\ldots,x_{s,k})^{\top}\) for \(0\leq k\leq N-1\). In most instances, \(D=[0,1]^{s}\) and the measure \(\mu\) is the Lebesgue measure (or \(D=\mathbb{R}^{s}\) and \(\mu\) is the measure corresponding to the normal distribution).
Furthermore, define the \(N\times s\)-matrix
\[X=\begin{pmatrix}\mathbf{x}_{0}^{\top}\\ \mathbf{x}_{1}^{\top}\\ \vdots\\ \mathbf{x}_{N-1}^{\top}\end{pmatrix}\in\mathbb{R}^{N\times s}, \tag{3}\]
whose \(N\) rows consist of the different cubature nodes. We are interested in situations where the main computational cost of computing (2) arises from the vector-matrix multiplication \(\mathbf{x}_{k}^{\top}A\) for all \(N\) points, i.e., we need to compute \(XA\), which requires \(\mathcal{O}(Ns\,\tau)\) operations. Let \(A=(\mathbf{A}_{1},\mathbf{A}_{2},\ldots,\mathbf{A}_{\tau})\), where \(\mathbf{A}_{i}\in\mathbb{R}^{s}\) is the \(i\)-th column vector of \(A\). The main idea is to construct QMC rules for which the matrix \(X\) given in (3) has some structure such that the QMC matrix-vector product \(X\mathbf{A}_{i}\) can be computed very efficiently and the integration error of the underlying QMC rule has similar properties as for other QMC rules. Note that our approach works for any matrix \(A\), as we do not use any structure of the matrix \(A\).
To motivate the problem addressed in this paper, note that such computational problems arise naturally in certain settings. For instance, consider approximating the expected value
\[\mathbb{E}(f)=\int_{\mathbb{R}^{s}}f(\mathbf{y}^{\top})\frac{\exp\left(-\frac{1}{ 2}\mathbf{y}^{\top}\Sigma^{-1}\mathbf{y}\right)}{\sqrt{(2\pi)^{s}\det(\Sigma)}}\, \mathrm{d}\mathbf{y},\]
where \(\Sigma\) is symmetric and positive definite. Using the substitution \(\mathbf{y}=A^{\top}\mathbf{x}\), where \(A\) factorizes \(\Sigma\), i.e. \(\Sigma=A^{\top}A\), we arrive at the integral
\[\mathbb{E}(f)=\int_{\mathbb{R}^{s}}f(\mathbf{x}^{\top}A)\underbrace{\frac{\exp \left(-\frac{1}{2}\mathbf{x}^{\top}\mathbf{x}\right)}{(2\pi)^{s/2}}\,\mathrm{d}\mathbf{x} }_{=:\,\mathrm{d}\mu(\mathbf{x})}.\]
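A minimal sketch of this substitution follows; any factorization with \(\Sigma=A^{\top}A\) works, and a Cholesky factor is used here, with plain Monte Carlo points standing in for the QMC points discussed below.

```python
import numpy as np

def expected_value(f, Sigma, N=10**5, rng=np.random.default_rng(0)):
    """Approximate E[f(y)] for y ~ N(0, Sigma) via y = x^T A with Sigma = A^T A."""
    A = np.linalg.cholesky(Sigma).T        # Sigma = A^T A
    X = rng.standard_normal((N, Sigma.shape[0]))
    return np.mean([f(x @ A) for x in X])

# Example: E[exp(y_1 + y_2)] for a two-dimensional normal with given covariance.
Sigma = np.array([[1.0, 0.5], [0.5, 2.0]])
approx = expected_value(lambda v: np.exp(v.sum()), Sigma)
```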
Such problems arise for instance in statistics when computing expected values with respect to a normal distribution, and in mathematical finance, e.g., for pricing financial products whose payoff depends on a basket of assets.
Another setting where such problems arise naturally comes from PDEs with random coefficients in the context of uncertainty quantification (see for instance [12] for more details). Without stating all the details here, the main computational cost in this context comes from computing
\[D_{k}=\sum_{j=1}^{s}x_{j,k}C_{j},\quad\text{for }k=0,1,\ldots,N-1, \tag{4}\]
where \(C_{j}\in\mathbb{R}^{M\times M}\) are matrices (whose size depends on \(s\) and \(N\)). Let \(C_{j}=(c_{j,u,v})_{1\leq u,v\leq M}\) and define the column vectors \(\boldsymbol{c}_{u,v}=(c_{1,u,v},c_{2,u,v},\ldots,c_{s,u,v})^{\top}\in\mathbb{R}^ {s}\), for \(1\leq u,v\leq M\). Then we can compute the matrices given by (4) by computing
\[X\boldsymbol{c}_{u,v},\quad\text{for }1\leq u,v\leq M. \tag{5}\]
In this approach we do not compute the matrices in (4) for each \(k\) separately, hence this approach requires us to store the results of (5) first.
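In code, the \(M^{2}\) products of (5) amount to a single matrix-matrix multiplication after flattening the matrices \(C_{j}\); the sketch below only illustrates this bookkeeping, with random data in place of the actual matrices arising from the PDE discretization.

```python
import numpy as np

N, s, M = 64, 100, 30
X = np.random.rand(N, s)                          # cubature nodes, one per row, cf. (3)
C = np.random.rand(s, M, M)                       # the matrices C_1, ..., C_s
# D[k] = sum_j x_{j,k} C_j, cf. (4), computed via the products (5) all at once:
D = (X @ C.reshape(s, M * M)).reshape(N, M, M)
```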
It was shown in [4] that when using particular types of QMC rules, such as (polynomial) lattice rules or Korobov rules, the cost to evaluate \(Q_{N}(f)\), as given in (2), can be reduced to only \(\mathcal{O}(\tau\,N\log N)\) operations provided that \(\log N\ll s\). This drastic reduction in computational cost is achieved by a fast matrix-matrix multiplication exploiting the fact that for the chosen point sets the matrix \(X\) can be re-ordered to be of circulant structure. The fast multiplication is then realized by the use of the fast Fourier transform (FFT).
Here, we will explore a different method which can also drastically reduce the computation cost of evaluating \(Q_{N}(f)\), as given in (2). The reduction in computational complexity is achieved by using point sets which possess a certain repetitiveness in their components. In particular, the number of different values of the components \(x_{k,j}\) (for \(0\leq k\leq N-1\)) is in general smaller than \(N\) and decreases when \(j\in\{1,\ldots,s\}\) increases. As a particular type of such QMC point sets, we will consider (polynomial) lattice point sets that have been obtained by the so-called reduced CBC construction as in [2], and we will also consider similarly reduced versions of digital nets obtained from digital sequences such as Sobol' or Niederreiter sequences. The corresponding QMC point sets will henceforth be called reduced (polynomial) lattice point sets or reduced digital nets.
The idea of our approach, which will be made more precise in the following sections, works as follows.
Assume that we have \(N\) samples of the form \((x_{1,k},x_{2,k},\ldots,x_{s,k})\), \(0\leq k\leq N-1\). We reduce the number of different values by choosing the number of samples differently for each coordinate, say \(N_{j}\) for the \(j\)-th coordinate, where \(N_{j}\) divides \(N_{j-1}\). E.g., if \(N_{1}=4\), \(N_{2}=2\), and \(N_{3}=1\), then we generate the points
\[(y_{1,0},y_{2,0},y_{3,0}),(y_{1,1},y_{2,1},y_{3,0}),(y_{1,2},y_{2,0},y_{3,0}), (y_{1,3},y_{2,1},y_{3,0}). \tag{6}\]
Here, there are 4 different values for the first coordinate, 2 different values for the second coordinate, and the values for the last coordinate are all the same.
What is the advantage of this construction? The advantage can be seen when we compute \(\boldsymbol{x}^{\top}A\). Let \(\boldsymbol{a}_{1},\boldsymbol{a}_{2},\ldots,\boldsymbol{a}_{s}\) denote the rows of \(A\). If all coordinates are different, we need \(\mathcal{O}(Ns)\) operations. For instance, in the example above we have 4 points in the 3-dimensional space, so we need to compute
\[x_{1,k}\boldsymbol{a}_{1}+x_{2,k}\boldsymbol{a}_{2}+x_{3,k}\boldsymbol{a}_{3}, \quad\text{for}\quad k\in\{0,1,2,3\}.\]
However, if we use the points (6) then we only need to compute
\[y_{1,k}\boldsymbol{a}_{1}+y_{2,\,k\bmod 2}\boldsymbol{a}_{2}+y_{3,0} \boldsymbol{a}_{3},\quad\text{for}\quad k\in\{0,1,2,3\}.\]
The last computation can be done recursively, by first computing \(y_{3,0}\boldsymbol{a}_{3}\), then \(y_{2,0}\boldsymbol{a}_{2}+y_{3,0}\boldsymbol{a}_{3}\) and \(y_{2,1}\boldsymbol{a}_{2}+y_{3,0}\boldsymbol{a}_{3}\), and then finally the remaining vectors. By storing and reusing these
intermediate results, we only compute \(y_{3,0}\mathbf{a}_{3}\), and \(y_{2,0}\mathbf{a}_{2}+y_{3,0}\mathbf{a}_{3}\) and \(y_{2,1}\mathbf{a}_{2}+y_{3,0}\mathbf{a}_{3}\) once (rather than recomputing the same result as in the straightforward computation).
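The following sketch makes the recursion explicit in the general reduced setting, following the indexing convention of the points in (6): coordinate \(j\) of point \(k\) is its \((k \bmod N_j)\)-th distinct value, with \(N_j\) dividing \(N_{j-1}\). Partial sums over the trailing coordinates are cached and reused, so the cost is of order \((N_1+\cdots+N_s)\tau\) instead of \(Ns\tau\). The arrays `y` and `A` below are random placeholders for a concrete reduced QMC point set and matrix.

```python
import numpy as np

def reduced_matvec(y, A, N=None):
    """Rows x_k^T A, k = 0, ..., N-1, for reduced points whose j-th coordinate
    is y[j][k % N_j] with N_j = len(y[j]) and N_j dividing N_{j-1}.

    The tail sum over coordinates j, ..., s depends on k only through k % N_j,
    so each distinct tail vector is computed exactly once and reused.
    """
    N = N if N is not None else len(y[0])
    tau = A.shape[1]
    tail = np.zeros((1, tau))                       # empty tail shared by all k
    for j in reversed(range(len(y))):               # from the last coordinate inwards
        Nj = len(y[j])
        new_tail = np.empty((Nj, tau))
        for r in range(Nj):
            new_tail[r] = y[j][r] * A[j] + tail[r % len(tail)]
        tail = new_tail
    return tail[np.arange(N) % len(tail)]

# Example corresponding to the points in (6): N_1 = 4, N_2 = 2, N_3 = 1.
rng = np.random.default_rng(0)
y = [rng.random(4), rng.random(2), rng.random(1)]
A = rng.random((3, 5))
XA = reduced_matvec(y, A)                           # 4 x 5 matrix of products x_k^T A
```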
By applying this idea in the general case, we obtain a similar cost saving as for the fast QMC matrix-vector product in [4]. However, the present method behaves differently in some situations which can be beneficial. One advantage is that it allows us to use random shifts, which is not possible for the fast QMC matrix-vector product.
Before we proceed, we would like to introduce some notation. We will write \(\mathbb{Z}\) to denote the set of integers, \(\mathbb{Z}_{*}\) to denote the set of integers excluding \(0\), \(\mathbb{N}\) to denote the positive integers, and \(\mathbb{N}_{0}\) to denote the nonnegative integers. Furthermore, we write \([s]\) to denote the index set \(\{1,\ldots,s\}\). To denote sets of components we use fraktur font, e.g., \(\mathfrak{u}\subseteq[s]\). For a vector \(\mathbf{x}=(x_{1},\ldots,x_{s})\in[0,1]^{s}\) and for \(\mathfrak{u}\subseteq[s]\), we write \(\mathbf{x}_{\mathfrak{u}}=(x_{j})_{j\in\mathfrak{u}}\in[0,1]^{|\mathfrak{u}|}\) and \((\mathbf{x}_{\mathfrak{u}},\mathbf{0})\in[0,1]^{s}\) for the vector \((y_{1},\ldots,y_{s})\) with \(y_{j}=x_{j}\) if \(j\in\mathfrak{u}\) and \(y_{j}=0\) if \(j\not\in\mathfrak{u}\). For integer vectors \(\mathbf{h}\in\mathbb{Z}^{s}\), and \(\mathfrak{u}\subseteq[s]\), we analogously write \(\mathbf{h}_{\mathfrak{u}}\) to denote the projection of \(\mathbf{h}\) onto those components with indices in \(\mathfrak{u}\).
The rest of the paper is structured as follows. Below we introduce lattice rules and polynomial lattice rules and the relevant function spaces. In Section 1.3 we state the relevant results on the convergence of the reduced lattice rules. In Section 2 we outline how to use reduced rules for computing matrix products efficiently. In Section 3 we discuss a version of the fast reduced QMC matrix-vector multiplication for digital nets and prove a bound on the weighted discrepancy. In Section 4 we explain how these ideas can also be applied to the plain Monte Carlo algorithm. Numerical experiments in Section 5 conclude the paper.
### Lattice point sets and polynomial lattice point sets
In this section, we would like to give the definitions of the classes of QMC point sets considered in this paper.
We start with (rank-1) lattice point sets. For further information, we refer to, e.g., [3, 5, 13, 15] and the references therein.
For a natural number \(N\in\mathbb{N}\) and a vector \(\mathbf{z}\in\{1,2,\ldots,N-1\}^{s}\), a lattice point set consists of points \(\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{N-1}\) of the form
\[\mathbf{x}_{k}=\left\{\frac{k}{N}\mathbf{z}\right\}\quad\text{ for }\quad k=0,1, \ldots,N-1.\]
Here, for real numbers \(y\geq 0\) we write \(\{y\}=y-\lfloor y\rfloor\) for the fractional part of \(y\). For vectors \(\mathbf{y}\) we apply \(\{\cdot\}\) component-wise.
In this paper, we assume that the number of points \(N\) is a prime power, i.e., \(N=b^{m}\), with prime \(b\) and \(m\in\mathbb{N}\).
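For concreteness, a rank-1 lattice point set can be generated as follows; the generating vector used in the example is an arbitrary illustration, not one obtained from a construction.

```python
import numpy as np

def lattice_points(N, z):
    """Rank-1 lattice point set {k z / N}, k = 0, ..., N-1."""
    k = np.arange(N).reshape(-1, 1)
    return (k * np.asarray(z).reshape(1, -1) / N) % 1.0

# Example: N = 16 points in two dimensions with generating vector (1, 7).
P = lattice_points(16, [1, 7])
```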
The second class of point sets considered here are so-called polynomial lattice point sets, whose definition is similar to that of lattice point sets, but based on arithmetic over finite fields instead of integer arithmetic. To introduce them, let \(b\) again be a prime, and denote by \(\mathbb{F}_{b}\) the finite field with \(b\) elements and by \(\mathbb{F}_{b}[x]\) the set of all polynomials in \(x\) with coefficients in \(\mathbb{F}_{b}\). We will use a special instance of polynomial lattice point sets over \(\mathbb{F}_{b}\). For a prime power \(N=b^{m}\) and \(\mathbf{g}=(g_{1},\ldots,g_{s})\in(\mathbb{F}_{b}[x])^{s}\), a polynomial lattice point
set consists of \(N\) points \(\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{N-1}\) of the form
\[\mathbf{x}_{k}:=\left(\nu\left(\frac{k(x)\ g_{1}(x)}{x^{m}}\right),\ldots,\nu\left( \frac{k(x)\ g_{s}(x)}{x^{m}}\right)\right)\quad\text{ for }\ k\in\mathbb{F}_{b}[x]\ \ \text{with}\ \deg(k)<m,\]
where for \(f\in\mathbb{F}_{b}[x]\), \(f(x)=a_{0}+a_{1}x+\cdots+a_{r}x^{r}\), with \(\deg(f)=r\), the map \(\nu\) is given by
\[\nu\left(\frac{f(x)}{x^{m}}\right):=\frac{a_{\min(r,m-1)}}{b^{m-\min(r,m-1)}} +\cdots+\frac{a_{1}}{b^{m-1}}+\frac{a_{0}}{b^{m}}\in[0,1).\]
Note that \(\nu(f(x)/x^{m})=\nu((f(x)\pmod{x^{m}}))/x^{m})\). We refer to [7, Chapter 10] for further information on polynomial lattice point sets.
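For the special case \(b=2\), the definition of \(\nu\) above implies that \(\nu(f(x)/x^{m})\) equals the integer whose binary digits are the first \(m\) coefficients of \(f\), divided by \(2^{m}\), so a polynomial lattice point set can be generated with carry-less (XOR) arithmetic. The generating polynomials in the example are arbitrary placeholders, not constructed ones.

```python
def clmul(a, b):
    """Carry-less multiplication, i.e. multiplication in F_2[x] with ints as bit vectors."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def polynomial_lattice_points(m, gens):
    """Polynomial lattice point set over F_2 with N = 2^m points.

    The j-th coordinate of the k-th point is nu(k(x) g_j(x) / x^m), which for b = 2
    equals (k(x) g_j(x) mod x^m), read as an integer, divided by 2^m.
    """
    N = 1 << m
    return [[(clmul(k, g) & (N - 1)) / N for g in gens] for k in range(N)]

# Example with m = 4 and two (arbitrary) generating polynomials.
P = polynomial_lattice_points(4, [0b0111, 0b1011])
```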
Lattice point sets are used in QMC rules referred to as lattice rules, and analogously for polynomial lattice point sets.
### Korobov spaces and related Sobolev spaces
As pointed out above, lattice point sets are commonly used as node sets in lattice rules, and they are frequently studied in the context of numerical integration of functions in Korobov spaces and certain Sobolev spaces, which we would like to describe in the present section. Let us consider first a weighted Korobov space with general weights as studied in [8, 14].
In several applications, we may have the situation that different groups of variables have different importance, and this can also be reflected in the function spaces under consideration. Indeed, the importance of the different components or groups of components of the functions in the Korobov space to be defined is specified by a set of positive real numbers \(\mathbf{\gamma}=\{\gamma_{\mathfrak{u}}\}_{u\subseteq[s]}\), where we may assume that \(\gamma_{\emptyset}=1\). In this context, larger values of \(\gamma_{\mathfrak{u}}\) indicate that the group of variables corresponding to the index set \(\mathfrak{u}\) has relatively stronger influence on the computational problem, whereas smaller values of \(\gamma_{\mathfrak{u}}\) mean the opposite.
The smoothness of the functions in the space is described by a parameter \(\alpha>1/2\).
Product weights are a common special case of the weights \(\mathbf{\gamma}\) where \(\gamma_{\mathfrak{u}}=\prod_{j\in\mathfrak{u}}\gamma_{j}\) for \(u\subseteq[s]\) and where \((\gamma_{j})_{j=1,2,\ldots,s}\) is a sequence of positive real numbers.
The weighted Korobov space, denoted by \(\mathcal{H}(K_{s,\alpha,\mathbf{\gamma}})\), is a reproducing kernel Hilbert space with kernel function
\[K_{s,\alpha,\mathbf{\gamma}}(\mathbf{x},\mathbf{y}) = 1+\sum_{\emptyset\neq\mathfrak{u}\subseteq[s]}\gamma_{\mathfrak{u}}\prod_{j\in\mathfrak{u}}\left(\sum_{h\in\mathbb{Z}_{*}}\frac{\exp(2\pi\mathrm{i}h(x_{j}-y_{j}))}{|h|^{2\alpha}}\right)\] \[= 1+\sum_{\emptyset\neq\mathfrak{u}\subseteq[s]}\gamma_{\mathfrak{u}}\sum_{\mathbf{h}_{\mathfrak{u}}\in\mathbb{Z}_{*}^{|\mathfrak{u}|}}\frac{\exp(2\pi\mathrm{i}\mathbf{h}_{\mathfrak{u}}\cdot(\mathbf{x}_{\mathfrak{u}}-\mathbf{y}_{\mathfrak{u}}))}{\prod_{j\in\mathfrak{u}}|h_{j}|^{2\alpha}}.\]
The corresponding inner product is
\[\langle f,g\rangle_{K_{s,\alpha,\mathbf{\gamma}}}=\sum_{\mathfrak{u}\subseteq[s]}\gamma_{\mathfrak{u}}^{-1}\sum_{\mathbf{h}_{\mathfrak{u}}\in\mathbb{Z}_{*}^{|\mathfrak{u}|}}\left(\prod_{j\in\mathfrak{u}}|h_{j}|^{2\alpha}\right)\widehat{f}((\mathbf{h}_{\mathfrak{u}},\mathbf{0}))\,\overline{\widehat{g}((\mathbf{h}_{\mathfrak{u}},\mathbf{0}))},\]
where \(\widehat{f}(\mathbf{h})=\int_{[0,1]^{s}}f(\mathbf{t})\exp(-2\pi\mathrm{i}\mathbf{h}\cdot\mathbf{t})\,\mathrm{d}\mathbf{t}\) is the \(\mathbf{h}\)-th Fourier coefficient of \(f\). For \(\mathfrak{u}=\emptyset\), the empty sum is defined as \(\widehat{f}(\mathbf{0})\,\overline{\widehat{g}(\mathbf{0})}\).
For \(h\in\mathbb{Z}_{*}\), we define \(\rho_{\alpha}(h)=|h|^{-2\alpha}\), and for \(\mathbf{h}=(h_{1},\dots,h_{s})\in\mathbb{Z}_{*}^{s}\) let \(\rho_{\alpha}(\mathbf{h})=\prod_{j=1}^{s}\rho_{\alpha}(h_{j})\).
It is known (see, e.g., [8]) that the squared worst-case error of a lattice rule generated by a vector \(\mathbf{z}\in\mathbb{Z}^{s}\) in the weighted Korobov space \(\mathcal{H}(K_{s,\alpha,\gamma})\) is given by
\[e_{N,s,\gamma}^{2}(\mathbf{z})=\sum_{\emptyset\neq\mathfrak{u}\subseteq[s]}\gamma _{\mathfrak{u}}\sum_{\mathbf{h}_{u}\in\mathcal{D}_{\mathfrak{u}}}\rho_{\alpha}(\bm {h}_{\mathfrak{u}}), \tag{7}\]
where
\[\mathcal{D}_{\mathfrak{u}}=\mathcal{D}_{\mathfrak{u}}(\mathbf{z}):=\left\{\mathbf{h}_{\mathfrak{u}}\in\mathbb{Z}_{*}^{|\mathfrak{u}|}\ :\ \mathbf{h}_{\mathfrak{u}}\cdot\mathbf{z}_{\mathfrak{u}}\equiv 0\ (\mathrm{mod}\,N)\right\}\]
is called the dual lattice of the lattice generated by \(\mathbf{z}\).
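For illustration, in the special case \(\alpha=1\) with product weights \(\gamma_{\mathfrak{u}}=\prod_{j\in\mathfrak{u}}\gamma_{j}\), the dual-lattice sum in (7) can be evaluated exactly using the standard Fourier identity \(\sum_{h\in\mathbb{Z}_{*}}\exp(2\pi\mathrm{i}hx)/|h|^{2}=2\pi^{2}B_{2}(\{x\})\) with \(B_{2}(x)=x^{2}-x+1/6\), which gives \(e_{N,s,\boldsymbol{\gamma}}^{2}(\boldsymbol{z})=-1+\frac{1}{N}\sum_{k=0}^{N-1}\prod_{j=1}^{s}\bigl(1+2\pi^{2}\gamma_{j}B_{2}(\{kz_{j}/N\})\bigr)\). The following Python sketch evaluates this expression; the concrete generating vector and weights in the example are hypothetical and only serve as an illustration.

```python
# Squared worst-case error (7) in the Korobov space for alpha = 1 and product
# weights gamma_j, using the closed form via the Bernoulli polynomial
# B_2(x) = x^2 - x + 1/6. The inputs below are illustrative assumptions only.
import numpy as np

def korobov_wce_sq_alpha1(z, N, gamma):
    z = np.asarray(z)
    gamma = np.asarray(gamma, dtype=float)
    k = np.arange(N).reshape(-1, 1)                  # shape (N, 1)
    x = (k * z.reshape(1, -1) % N) / N               # lattice points {k z_j / N}
    B2 = x**2 - x + 1.0 / 6.0                        # Bernoulli polynomial B_2({x})
    prod = np.prod(1.0 + 2.0 * np.pi**2 * gamma * B2, axis=1)
    return -1.0 + prod.mean()

if __name__ == "__main__":
    N, s = 127, 5
    z = np.array([1, 35, 59, 83, 109])               # hypothetical generating vector
    gamma = 0.7 ** np.arange(1, s + 1)               # product weights gamma_j = 0.7^j
    print(korobov_wce_sq_alpha1(z, N, gamma))
```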
The worst-case error of lattice rules in a Korobov space can be related to the worst-case error in certain Sobolev spaces. Indeed, consider a tensor product Sobolev space \(\mathcal{H}^{\mathrm{sob}}_{s,\gamma}\) of absolutely continuous functions whose mixed partial derivatives of order \(1\) in each variable are square integrable, with norm (see [10])
\[\|f\|_{\mathcal{H}^{\mathrm{sob}}_{s,\gamma}}=\left(\sum_{\mathfrak{u}\subseteq [s]}\gamma_{\mathfrak{u}}^{-1}\int_{[0,1]^{|\mathfrak{u}|}}\left(\int_{[0,1]^{ s-|\mathfrak{u}|}}\frac{\partial^{|\mathfrak{u}|}}{\partial\mathbf{x}_{\mathfrak{u}}}f( \mathbf{x})\,\mathrm{d}\mathbf{x}_{[s]\setminus\mathfrak{u}}\right)^{2}\,\mathrm{d} \mathbf{x}_{\mathfrak{u}}\right)^{1/2},\]
where \(\partial^{|\mathfrak{u}|}f/\partial\mathbf{x}_{\mathfrak{u}}\) denotes the mixed partial derivative with respect to all variables \(j\in\mathfrak{u}\). As pointed out in [5, Section 5], the root mean square worst-case error \(\widehat{e}_{N,s,\gamma}\) for QMC integration in \(\mathcal{H}^{\mathrm{sob}}_{s,\gamma}\) using randomly shifted lattice rules \((1/N)\sum_{k=0}^{N-1}f\left(\left\{\frac{k}{N}\mathbf{z}+\mathbf{\Delta}\right\}\right)\), i.e.,
\[\widehat{e}_{N,s,\gamma}(\mathbf{z})=\left(\int_{[0,1]^{s}}e_{N,s,\gamma}^{2}( \mathbf{z},\mathbf{\Delta})\,\mathrm{d}\mathbf{\Delta}\right)^{1/2},\]
where \(e_{N,s,\gamma}(\mathbf{z},\mathbf{\Delta})\) is the worst-case error of QMC integration in \(\mathcal{H}^{\mathrm{sob}}_{s,\gamma}\) using a shifted integration lattice, is essentially the same as the worst-case error \(e_{N,s,\gamma}^{(1,\mathrm{kor})}\) in the weighted Korobov space \(\mathcal{H}(K_{s,1,\gamma})\) using the unshifted version of the lattice rules. In fact, we have
\[\widehat{e}_{N,s,2\pi^{2}\gamma}(\mathbf{z})=e_{N,s,\gamma}^{(1,\mathrm{kor})}( \mathbf{z}), \tag{8}\]
where \(2\pi^{2}\mathbf{\gamma}\) denotes the weights \(((2\pi^{2})^{|\mathfrak{u}|}\gamma_{\mathfrak{u}})_{\emptyset\neq\mathfrak{u} \subseteq[s]}\). For a connection to the so-called anchored Sobolev space see, e.g., [11, Section 4].
In a slightly different setting, the random shift can be replaced by the tent transformation \(\phi(x)=1-|1-2x|\) in each variable. For a vector \(\mathbf{x}\in[0,1]^{s}\) let \(\phi(\mathbf{x})\) be defined component-wise. Let \(\widetilde{e}_{N,s,\gamma}(\mathbf{z})\) be the worst-case error in the unanchored weighted Sobolev space \(\mathcal{H}^{\mathrm{sob}}_{s,\gamma}\) using the QMC rule \((1/N)\sum_{k=0}^{N-1}f\left(\phi\left(\left\{\frac{k}{N}\mathbf{z}\right\}\right)\right)\). Then it is known due to [6] and [1] that
\[\widetilde{e}_{N,s,\pi^{2}\gamma}(\mathbf{z})\leq e_{N,s,\gamma}^{(1,\mathrm{kor}) }(\mathbf{z}), \tag{9}\]
where \(\pi^{2}\mathbf{\gamma}=(\pi^{2|\mathfrak{u}|}\gamma_{\mathfrak{u}})_{\emptyset\neq \mathfrak{u}\subseteq[s]}\), and that the CBC construction with the quality criterion given by the worst-case error in the Korobov space \(\mathcal{H}(K_{s,1,\gamma})\) can be used to construct
tent-transformed lattice rules which achieve the almost optimal convergence order in the space \(\mathcal{H}^{\text{sob}}_{\mathbf{s},\pi^{2}\gamma}\) under appropriate conditions on the weights \(\boldsymbol{\gamma}\) (see [1, Corollary 1]). Hence we also have a direct connection between integration in the Korobov space using lattice rules and integration in the unanchored Sobolev space using tent-transformed lattice rules.
Thus, results shown for the integration error in the Korobov space can, by a few simple modifications, be carried over to results that hold for anchored and unanchored Sobolev spaces, respectively, by using Equations (8) and (9).
### Reduced (polynomial) lattice point sets
In [2], the authors introduced so-called reduced lattice point sets and reduced polynomial lattice point sets. The original motivation for these concepts was to make search algorithms for excellent QMC rules faster for situations where the dependence of a high-dimensional integration problem on its variable \(j\) decreases fast as the index \(j\) increases. Such a situation might occur in various applications and is modelled by assuming that the weights in the weighted spaces, such as those introduced in Section 1.2, decay at a certain speed.
The "reduction" in the search for good lattice point sets is achieved by shrinking the sizes of the sets that the different components of the generating vector \(\boldsymbol{z}\) are chosen from. In the present paper, we will make use of the same idea, but with a different aim, namely that of increasing the speed of computing the matrix product \(XA\), as outlined above.
Recall that we assume \(N\) to be a prime power, \(N=b^{m}\). A reduced rank-1 lattice point set is obtained by introducing an integer sequence \(\boldsymbol{w}=(w_{j})_{j=1}^{s}\in\mathbb{N}_{0}^{s}\) with \(0=w_{1}\leq w_{2}\leq\cdots\leq w_{s}\). We will refer to the integers \(w_{j}\) as reduction indices. Additionally, for integer \(w\geq 0\), we introduce the set
\[\mathbb{U}_{b^{m-w}}:=\begin{cases}\{z\in\{1,2,\ldots,b^{m-w}-1\}:\gcd(z,b)=1 \}&\text{if $w<m$},\\ \{1\}&\text{if $w\geq m$}.\end{cases}\]
Note that \(\mathbb{U}_{b^{m-w}}\) is the group of units of integers modulo \(b^{m-w}\) for \(w<m\), and in this case the cardinality of the set \(\mathbb{U}_{b^{m-w}}\) equals \((b-1)b^{m-w-1}\). For the given sequence \(\boldsymbol{w}\) we then define \(s^{*}\) as \(s^{*}:=\max\{j\in\mathbb{N}\colon w_{j}<m\}\).
The generating vector \(\boldsymbol{z}\in\mathbb{Z}^{s}\) of a reduced lattice rule as in [2] is then of the form
\[\boldsymbol{z}=(b^{w_{1}}z_{1},b^{w_{2}}z_{2},\ldots,b^{w_{s}}z_{s})=(z_{1},b^ {w_{2}}z_{2},\ldots,b^{w_{s}}z_{s}),\]
where \(z_{j}\in\mathbb{U}_{b^{m-w_{j}}}\) for all \(j=1,\ldots,s\). Note that for \(j>s^{*}\) we have \(w_{j}\geq m\) and \(z_{j}=1\). In this case the corresponding components of \(\boldsymbol{z}\) are multiples of \(N\). The resulting \(b^{m}\) points of the reduced lattice point set are given by
\[\boldsymbol{x}_{k} = \left(\left\{\frac{kz_{1}b^{w_{1}}}{N}\right\},\left\{\frac{kz_{2 }b^{w_{2}}}{N}\right\},\ldots,\left\{\frac{kz_{s}b^{w_{s}}}{N}\right\}\right)\] \[= \left(\frac{kz_{1}\bmod b^{m-\min(w_{1},m)}}{b^{m-\min(w_{1},m)} },\frac{kz_{2}\bmod b^{m-\min(w_{2},m)}}{b^{m-\min(w_{2},m)}},\ldots,\frac{kz _{s}\bmod b^{m-\min(w_{s},m)}}{b^{m-\min(w_{s},m)}}\right)\]
with \(k=0,1,\ldots,N-1\). Therefore it is obvious that the components \(x_{k,j}\), \(k=0,1,\ldots,N-1\) belong to the set \(\{0,1/b^{m-\min(w_{j},m)},\ldots,(b^{m-\min(w_{j},m)}-1)/b^{m-\min(w_{j},m)}\}\) and each of the values is attained exactly \(b^{\min(w_{j},m)}\) times for all \(j=1,\ldots,s\). In particular, all \(x_{k,j}\) equal \(0\) for \(j>s^{*}\).
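The following Python sketch generates such a reduced lattice point set directly from \(b\), \(m\), the reduction indices and the components \(z_{j}\); the concrete input values are hypothetical.

```python
# Generate the N = b^m points of a reduced rank-1 lattice point set.
# Component j only takes the b^{m - min(w_j, m)} values (k z_j mod b^{m - min(w_j, m)}) / b^{m - min(w_j, m)}.
import numpy as np

def reduced_lattice_points(b, m, z, w):
    N = b**m
    k = np.arange(N).reshape(-1, 1)
    mod = b ** (m - np.minimum(np.asarray(w), m))    # b^{m - min(w_j, m)} per column
    return (k * np.asarray(z) % mod) / mod           # N x s matrix of points

if __name__ == "__main__":
    b, m = 2, 4
    w = [0, 1, 2, 4]                                  # reduction indices (hypothetical)
    z = [1, 5, 3, 1]                                  # z_j in U_{b^{m-w_j}} (hypothetical)
    X = reduced_lattice_points(b, m, z, w)
    print(X[:6])
```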
Regarding the performance of reduced lattice rules for numerical integration in the Korobov space \(\mathcal{H}(K_{s,\alpha,\boldsymbol{\gamma}})\), the following result was shown in [2]. For a proof of this result and further background information, we refer to the original paper [2].
**Theorem 1**.: _Let \(\boldsymbol{w}=(w_{j})_{j=1}^{s}\in\mathbb{N}_{0}^{s}\) be a sequence of reduction indices, let \(\alpha>1/2\), and consider the Korobov space \(\mathcal{H}(K_{s,\alpha,\gamma})\). Using a computer search algorithm, one can construct a generating vector \(\boldsymbol{z}=(z_{1}b^{w_{1}},\ldots,z_{s}b^{w_{s}})\in\mathbb{Z}^{s}\) such that, for any \(d\in[s]\) and any \(\lambda\in(1/(2\alpha),1]\), the following estimate on the squared worst-case error of integration in \(\mathcal{H}(K_{d,\alpha,\boldsymbol{\gamma}})\) holds._
\[e_{N,s,\boldsymbol{\gamma}}^{2}((z_{1}b^{w_{1}},\ldots,z_{d}b^{w_{d}}))\leq \left(\sum_{\emptyset\neq\mathfrak{u}\subseteq[d]}\gamma_{\mathfrak{u}}^{ \lambda}\frac{2(2\zeta(2\alpha\lambda))^{|\mathfrak{u}|}}{b^{\max\{0,m-\max_{ j\in\mathfrak{u}}w_{j}\}}}\right)^{\frac{1}{\lambda}}.\]
Let us briefly illustrate the motivation for introducing the numbers \(w_{1},w_{2},\ldots,w_{s}\). Assume we have product weights \(\gamma_{1}\geq\gamma_{2}\geq\cdots\geq\gamma_{s}>0\). We have
\[\sum_{\emptyset\neq\mathfrak{u}\subseteq[d]}\gamma_{\mathfrak{u}}^ {\lambda}\frac{2(2\zeta(2\alpha\lambda))^{|\mathfrak{u}|}}{b^{\max\{0,m-\max_{ j\in\mathfrak{u}}w_{j}\}}}\leq b^{-m}\left(-1+2\prod_{j=1}^{d}\left(1+\gamma_{j}^{\lambda}2\zeta(2\alpha \lambda)b^{\min\{m,w_{j}\}}\right)\right).\]
Further assume that we want to have a bound independent of the dimension. In the non-reduced (classical) case we have \(w_{1}=w_{2}=\cdots=w_{s}=0\) and hence
\[\prod_{j=1}^{d}\left(1+\gamma_{j}^{\lambda}2\zeta(2\alpha\lambda)\right)=\exp \left(\sum_{j=1}^{d}\log\left(1+\gamma_{j}^{\lambda}2\zeta(2\alpha\lambda) \right)\right)\leq\exp\left(2\zeta(2\alpha\lambda)\sum_{j=1}^{d}\gamma_{j}^{ \lambda}\right),\]
where we used \(\log(1+z)\leq z\) for \(z\geq 0\). If \(\sum_{j=1}^{\infty}\gamma_{j}^{\lambda}<\infty\), we get a bound which is independent of the dimension \(d\).
For illustration, say \(\gamma_{j}^{1/(2\alpha)}=j^{-4}\), then the infinite sum is finite and we get a bound independent of the dimension. However, a significantly slower converging sequence would still be enough to give us a bound independent of the dimension. So if we introduce \(w_{1}\leq w_{2}\leq\cdots\), where \(w_{j}=\log_{b}j^{2}\) for instance, then we still have
\[\sum_{j=1}^{\infty}\gamma_{j}^{\lambda}b^{w_{j}}<\infty.\]
In [2] we have shown how the \(w_{j}\) can be used to reduce the construction cost of the CBC construction by reducing the size of the search space from \(b^{m}\) to \(b^{\max\{0,m-w_{j}\}}\) in component \(j\). In this paper we show that the \(w_{j}\) can also be used to reduce the computation cost of computing \(XA\), where the rows of \(X\) are the lattice points of a reduced lattice rule. The speed-up which can be achieved this way depends on the weights \(\{\gamma_{\mathfrak{u}}\}_{\mathfrak{u}\subseteq[s]}\) (and on the reduction indices \(w_{1}\leq w_{2}\leq\cdots\)). This is different from the fast QMC matrix-vector product in [4], which works independently of the weights and does not influence tractability properties.
It is natural to expect that one can use an analogous approach for polynomial lattice rules leading to similar results.
## 2 The fast reduced matrix product computation
### The basic algorithm
We first present some observations which lead us to an efficient algorithm for computing \(XA\).
Let \(X=[\mathbf{x}_{0}^{\top},\mathbf{x}_{1}^{\top},\ldots,\mathbf{x}_{N-1}^{\top}]^{\top}\) be the \(N\times s\)-matrix whose \(k\)-th row is the \(k\)-th point of the reduced lattice point set (written as a row vector). Let \(\mathbf{\xi}_{j}\) denote the \(j\)-th column of \(X\), i.e. \(X=[\mathbf{\xi}_{1},\mathbf{\xi}_{2},\ldots,\mathbf{\xi}_{s}]\). Let \(A=[\mathbf{a}_{1},\mathbf{a}_{2},\ldots,\mathbf{a}_{s}]^{\top}\), where \(\mathbf{a}_{j}\in\mathbb{R}^{1\times\tau}\) is the \(j\)-th row of \(A\). Then we have
\[XA=[\mathbf{\xi}_{1},\mathbf{\xi}_{2},\ldots,\mathbf{\xi}_{s}]\begin{pmatrix}\mathbf{a}_{1}\\ \mathbf{a}_{2}\\ \vdots\\ \mathbf{a}_{s}\end{pmatrix}=\mathbf{\xi}_{1}\mathbf{a}_{1}+\mathbf{\xi}_{2}\mathbf{a}_{2}+\cdots+ \mathbf{\xi}_{s}\mathbf{a}_{s}. \tag{10}\]
In order to illustrate the inherent repetitiveness of a reduced lattice point set, consider a reduction index \(0<w_{j}<m\) and the corresponding component \(z_{j}\) of the generating vector. The \(j\)-th component of the \(N=b^{m}\) points of the reduced lattice point set (i.e., the \(j\)-th column \(\mathbf{\xi}_{j}\) of \(X\)) is then given by
\[\mathbf{\xi}_{j} := \left(\frac{0\cdot z_{j}\bmod b^{m-w_{j}}}{b^{m-w_{j}}},\frac{1 \cdot z_{j}\bmod b^{m-w_{j}}}{b^{m-w_{j}}},\ldots,\frac{(b^{m}-1)\cdot z_{j} \bmod b^{m-w_{j}}}{b^{m-w_{j}}}\right)^{\top}\] \[= \underbrace{\left(X_{j},\ldots,X_{j}\right)^{\top}}_{b^{w_{j}} \text{ times}},\]
where
\[X_{j}=\left(0,\frac{z_{j}\bmod b^{m-w_{j}}}{b^{m-w_{j}}},\ldots,\frac{(b^{m- w_{j}}-1)z_{j}\bmod b^{m-w_{j}}}{b^{m-w_{j}}}\right)^{\top}.\]
We will exploit this repetitive structure within the reduced lattice points to derive a fast matrix-vector multiplication algorithm.
Based on the above observations, it is possible to formulate the following algorithm to compute (10) in an efficient way. Note that for \(j>s^{*}\) the \(j\)-th column of \(X\) consists only of zeros, so there is nothing to compute for the entries of \(X\) corresponding to these columns.
**Input:** Matrix \(A\in\mathbb{R}^{s\times\tau}\), integer \(m\in\mathbb{N}\), prime \(b\), reduction indices \(0=w_{1}\leq w_{2}\leq\cdots\leq w_{s}\), corresponding generating vector of reduced lattice rule, \(\mathbf{z}=(z_{1},b^{w_{2}}z_{2},\ldots,b^{w_{s}}z_{s})\).
Set \(N=b^{m}\) and set \(P_{s^{*}+1}=\mathbf{0}_{1\times\tau}\in\mathbb{R}^{1\times\tau}\).
**for \(j=s^{*}\) to \(1\) do**
\(\bullet\) Compute the \(b^{m-w_{j}}\) reduced lattice points
\[X_{j}=\left(0,\frac{z_{j}\bmod b^{m-w_{j}}}{b^{m-w_{j}}},\ldots,\frac{(b^{m-w _{j}}-1)z_{j}\bmod b^{m-w_{j}}}{b^{m-w_{j}}}\right)^{\top}\in\mathbb{R}^{b^{m- w_{j}}\times 1}.\]
\(\bullet\) Compute \(P_{j}\) as
\[P_{j}=\begin{pmatrix}P_{j+1}\\ P_{j+1}\\ \vdots\\ P_{j+1}\end{pmatrix}+X_{j}\mathbf{a}_{j}\in\mathbb{R}^{b^{m-w_{j}}\times\tau},\]
where \(P_{j+1}\) is stacked as often as needed to obtain \(b^{m-w_{j}}\) rows, and \(\mathbf{a}_{j}\in\mathbb{R}^{1\times\tau}\) denotes the \(j\)-th row of the matrix \(A\).
**end for**
Set \(P=P_{1}\).
**Return:** Matrix product \(P=XA\).
**Algorithm 1** Fast reduced matrix product
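A direct Python transcription of Algorithm 1 might look as follows (a sketch under the conventions above; the example input at the bottom is hypothetical and merely checks the result against the naive product).

```python
# Fast reduced matrix product P = X A (Algorithm 1).
# The j-th column of X consists of b^{w_j} repetitions of the b^{m-w_j} values
# in X_j, which is exploited by tiling the partial result P_{j+1}.
import numpy as np

def fast_reduced_product(A, b, m, w, z):
    s, tau = A.shape
    s_star = max([j for j in range(s) if w[j] < m], default=-1) + 1  # number of "active" columns
    P = np.zeros((1, tau))
    for j in range(s_star - 1, -1, -1):              # j = s*, ..., 1 (0-based)
        n_j = b ** (m - w[j])
        X_j = (np.arange(n_j) * z[j] % n_j) / n_j    # the b^{m-w_j} distinct values
        P = np.tile(P, (n_j // P.shape[0], 1)) + np.outer(X_j, A[j])
    return np.tile(P, (b**m // P.shape[0], 1))       # expand to all N = b^m rows

if __name__ == "__main__":
    b, m = 2, 5
    w = [0, 1, 2, 5]                                  # hypothetical reduction indices
    z = [1, 7, 3, 1]                                  # hypothetical components z_j
    A = np.random.rand(4, 3)
    P = fast_reduced_product(A, b, m, w, z)
    X = (np.arange(b**m).reshape(-1, 1) * (np.array(z) * b**np.minimum(w, m)) % b**m) / b**m
    print(np.allclose(P, X @ A))                      # check against the direct product
```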
The following theorem gives an estimate of the computational cost of Algorithm 1, which shows that by using a reduced point set we can obtain an improved computation time over that in [4], which only depends on the index \(s^{*}\), but not on \(s\) anymore.
**Theorem 2**.: _Let a matrix \(A\in\mathbb{R}^{s\times\tau}\), an integer \(m\in\mathbb{N}\), a prime \(b\), and reduction indices \(0=w_{1}\leq w_{2}\leq\cdots\leq w_{s}\) be given. Furthermore, let \(\mathbf{z}=(z_{1},b^{w_{2}}z_{2},\ldots,b^{w_{s}}z_{s})\) be the generating vector of a reduced lattice rule corresponding to \(N=b^{m}\) and the given reduction indices \((w_{j})_{j=1}^{s}\). Then the matrix product \(P=XA\) can be computed via Algorithm 1 using_
\[\mathcal{O}\left(\tau\,N\sum_{j=1}^{s^{*}}b^{-w_{j}}\right)\]
_operations and requiring \(\mathcal{O}(N\tau)\) storage. Here, \(X\) is the \(N\times s\)-matrix whose rows are the \(N\) reduced lattice points._
Proof.: In the \(j\)-th step the generation of the \(b^{m-w_{j}}\) lattice points requires \(\mathcal{O}(b^{m-w_{j}})\) operations and storage. The most costly operation in each step is the product \(X_{j}\mathbf{a}_{j}\) which requires \(\mathcal{O}(b^{m-w_{j}}\,\tau)\) operations, but this step only needs to be carried out for those \(j\) with \(j\leq s^{*}\). Summing over all \(j=1,\ldots,s^{*}\), the computational complexity amounts to
\[\mathcal{O}\left(\sum_{j=1}^{s^{*}}b^{m-w_{j}}\,\tau\right)=\mathcal{O}\left( \tau\,b^{m}\sum_{j=1}^{s^{*}}b^{-w_{j}}\right)\]
operations. Furthermore, storing the matrix \(P_{j}\) requires \(\mathcal{O}(b^{m-w_{j}}\,\tau)\) space, which attains a maximum of \(\mathcal{O}(b^{m}\,\tau)\) for \(w_{1}=0\). Note that in an efficient implementation the matrices \(P_{j}\) are overwritten in each step and do not all have to be stored.
In the next section we discuss an optimized variant of the fast reduced QMC matrix-vector product.
### An optimized algorithm
Recall that, for a sequence \(\boldsymbol{w}\) of reduction indices, we have \(s^{*}=\max\{j\in\mathbb{N}\colon w_{j}<m\}\). Since for \(j>s^{*}\) the \(j\)-th column of \(X\) consists only of zeros, we can restrict our considerations in this section to the product \(\widetilde{X}\widetilde{A}\), where \(\widetilde{X}\) is an \(N\times s^{*}\)-matrix, and \(\widetilde{A}\) is an \(s^{*}\times\tau\)-matrix.
Assume that \(0=w_{1}\leq w_{2}\leq\cdots\leq w_{s^{*}}<m\) and define, for \(I\in\{0,1,\ldots,m-1\}\), the quantity
\[\tau_{I}:=\#\{j\in\{1,\ldots,s^{*}\}\ |\ w_{j}=I\},\]
which denotes the number of \(w_{j}\) which equal \(I\). Obviously, we then have that \(\sum_{I=0}^{m-1}\tau_{I}=s^{*}\).
Consider then the following alternative fast reduced matrix product algorithm.
**Input:** Matrix \(\widetilde{A}\in\mathbb{R}^{s^{*}\times\tau}\), integer \(m\in\mathbb{N}\), prime \(b\), reduction indices \(0=w_{1}\leq w_{2}\leq\cdots\leq w_{s^{*}}<m\), the corresponding generating vector of a reduced lattice rule, \(\boldsymbol{z}=(z_{1},b^{w_{2}}z_{2},\ldots,b^{w_{s^{*}}}z_{s^{*}})\).
Set \(N=b^{m}\), set \(\widetilde{P}_{m}=\boldsymbol{0}_{1\times\tau}\in\mathbb{R}^{1\times\tau}\), and \(w_{s^{*}+1}=m\).
**for \(I=m-1\) to \(0\) do**
\(\bullet\) Compute the matrix
\[\widetilde{X}_{I}=(W_{1}^{I},\ldots,W_{\tau_{I}}^{I})\in\mathbb{R}^{b^{m-I} \times\tau_{I}},\]
whose columns are the reduced lattice points
\[W_{r}^{I}=\left(0,\frac{z_{j_{r}}\bmod b^{m-I}}{b^{m-I}},\ldots,\frac{(b^{m-I }-1)z_{j_{r}}\bmod b^{m-I}}{b^{m-I}}\right)^{\top}\in\mathbb{R}^{b^{m-I} \times 1},\ 1\leq r\leq\tau_{I},\]
and where the \(j_{r}\), \(1\leq r\leq\tau_{I}\), are those indices for which \(w_{j_{r}}=I\). If \(\tau_{I}=0\), then set \(\widetilde{X}_{I}=\boldsymbol{0}_{b^{m-I}\times\tau_{I}}\).
\(\bullet\) Compute \(\widetilde{P}_{I}\) as
\[\widetilde{P}_{I}=\begin{pmatrix}\widetilde{P}_{I+1}\\ \widetilde{P}_{I+1}\\ \vdots\\ \widetilde{P}_{I+1}\end{pmatrix}+\widetilde{X}_{I}\widetilde{A}_{I}\in\mathbb{R}^{b^{m-I}\times\tau},\]
where \(\widetilde{P}_{I+1}\) is stacked \(b\) times, and \(\widetilde{A}_{I}\in\mathbb{R}^{\tau_{I}\times\tau}\) denotes the rows of the matrix \(\widetilde{A}\) that correspond to the \(j\) with \(w_{j}=I\). If \(\tau_{I}=0\), then set \(\widetilde{A}_{I}=\boldsymbol{0}_{\tau_{I}\times\tau}\in\mathbb{R}^{\tau_{I}\times\tau}\).
**end for**
Set \(\widetilde{P}=\widetilde{P}_{0}\).
**Return:** Matrix product \(\widetilde{P}=\widetilde{X}\widetilde{A}\).
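The grouping by equal reduction index used in Algorithm 2 can be sketched in Python as follows. Note that this is only a structural illustration: the product \(\widetilde{X}_{I}\widetilde{A}_{I}\) is formed naively here, whereas the cost bound in Theorem 3 below relies on computing this product with the fast QMC matrix method of [4]; the example input is hypothetical.

```python
# Structural sketch of Algorithm 2: columns with the same reduction index I
# are processed together, and the partial result is tiled b times per level.
import numpy as np

def fast_reduced_product_grouped(A, b, m, w, z):
    """A: s* x tau, w: nondecreasing with all w[j] < m, z: components z_j."""
    tau = A.shape[1]
    P = np.zeros((1, tau))                            # corresponds to \tilde{P}_m
    for I in range(m - 1, -1, -1):                    # I = m-1, ..., 0
        P = np.tile(P, (b, 1))                        # stack b copies of \tilde{P}_{I+1}
        idx = [j for j in range(len(w)) if w[j] == I] # columns with reduction index I
        if idx:
            nI = b ** (m - I)
            XI = np.column_stack([(np.arange(nI) * z[j] % nI) / nI for j in idx])
            P = P + XI @ A[idx, :]   # plain product; Theorem 3 assumes the fast method of [4]
    return P                                          # equals \tilde{X} \tilde{A}

if __name__ == "__main__":
    b, m = 2, 4
    w, z = [0, 0, 1, 3], [1, 7, 3, 5]                 # hypothetical inputs, all w_j < m
    A = np.random.rand(4, 2)
    X = (np.arange(b**m).reshape(-1, 1) * (np.array(z) * b**np.array(w)) % b**m) / b**m
    print(np.allclose(fast_reduced_product_grouped(A, b, m, w, z), X @ A))
```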
The next theorem provides an estimate on the computation time of Algorithm 2, which again is independent of \(s\).
**Theorem 3**.: _Let a matrix \(A\in\mathbb{R}^{s\times\tau}\), an integer \(m\in\mathbb{N}\), a prime \(b\), and reduction indices \(0=w_{1}\leq w_{2}\leq\cdots\leq w_{s^{*}}<m\) be given. Furthermore, let \(\mathbf{z}=(z_{1},b^{w_{2}}z_{2},\ldots,b^{w_{s}}z_{s})\) be the generating vector of a reduced lattice rule corresponding to \(N=b^{m}\) and the given reduction indices \((w_{j})_{j=1}^{s}\). Then the matrix product \(P=XA\) can be computed via Algorithm 2 using_
\[\mathcal{O}\left(\tau\,Nm\right)\]
_operations and requiring \(\mathcal{O}(N\tau)\) storage. Here, \(X\) is the \(N\times s\)-matrix whose rows are the \(N\) reduced lattice points._
Proof.: As outlined above, it is no relevant restriction to reduce the matrices \(X\) and \(A\) to an \(N\times s^{*}\)-matrix \(\widetilde{X}\) and an \(s^{*}\times\tau\)-matrix \(\widetilde{A}\), respectively, and then apply Algorithm 2.
In the \(I\)-th step of the algorithm, the generation of the \(\tau_{I}b^{m-I}\) lattice points requires \(\mathcal{O}(\tau_{I}b^{m-I})\) operations and storage. The most costly operation in each step is the product \(\widetilde{X}_{I}\widetilde{A}_{I}\) which, via the fast QMC matrix product in [4], requires \(\mathcal{O}((m-I)\,b^{m-I}\,\tau)\) operations. Summing over all \(I=0,\ldots,m-1\), the computational complexity amounts to
\[\mathcal{O}\left(\sum_{I=0}^{m-1}(m-I)\,b^{m-I}\,\tau\right)=\mathcal{O}\left( \tau\,b^{m}\sum_{I=0}^{m-1}\frac{m-I}{b^{I}}\right)=\mathcal{O}\left(\tau\,b^ {m}\frac{b^{2}m}{(b-1)^{2}}\right)=\mathcal{O}\left(\tau\,N\,m\right)\]
operations. Furthermore, storing the matrix \(\widetilde{P}_{I}\) requires \(\mathcal{O}(b^{m-I}\,\tau)\) space, which attains a maximum of \(\mathcal{O}(b^{m}\,\tau)\) for \(I=0\). Note that in an efficient implementation the matrices \(\widetilde{P}_{I}\) are overwritten in each step and do not all have to be stored.
### Transformations, shifting, and computation for transformation functions
In applications from mathematical finance or uncertainty quantification, the integral to be approximated is often not over the unit cube but over \(\mathbb{R}^{s}\) with respect to a normal distribution. In order to be able to use lattice rules in this context, one has to apply a transformation and use randomly shifted lattice rules. In the following we show that the fast reduced QMC matrix-vector product can still be used in this context.
We have noted before that projections \(P_{j}=\pi_{j}(P)\) of a reduced lattice point set \(P\) onto the \(j\)-th component possess a repetitive structure, that is,
\[P_{j}=\underbrace{(X_{j},\ldots,X_{j})^{\top}}_{b^{\min(w_{j},m) \text{ times}}}\]
with
\[X_{j}=\left(0,\frac{z_{j}\bmod b^{m-\min(w_{j},m)}}{b^{m-\min(w_{j},m)}}, \ldots,\frac{(b^{m-\min(w_{j},m)}-1)z_{j}\bmod b^{m-\min(w_{j},m)}}{b^{m-\min( w_{j},m)}}\right)^{\top}.\]
This repetitive structure is preserved when applying a mapping \(\varphi:[0,1]\to\mathbb{R}\) elementwise to the projection \(P_{j}\) since
\[\varphi(P_{j})=\underbrace{\left(\varphi(X_{j}),\ldots,\varphi(X_{j})\right)^ {\top}}_{b^{\min(w_{j},m)\text{ times}}}.\]
This approach also works for the map \(\psi:[0,1]\to[0,1]\) with \(\psi(x)=\{x+\Delta\}\), i.e., for shifting the lattice points modulo one. In particular, this observation holds for componentwise maps of the form \(\varphi:[0,1]^{s}\to\mathbb{R}^{s}\) with \(\varphi(\boldsymbol{x})=(\varphi_{1}(x_{1}),\ldots,\varphi_{s}(x_{s}))\) that are applied simultaneously to all \(N\) elements of the lattice point set. For a map of this form, Algorithm 1 can easily be adapted by replacing \(X_{j}\) by \(\varphi_{j}(X_{j})\). If we wish to apply Algorithm 2 instead, the matrices \(\widetilde{X}_{I}\) can be replaced by the correspondingly transformed matrices; however, the fast reduced QMC matrix-vector product can only be used here if all components \(j\) with \(w_{j}=I\) use the same transformation.
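As a small illustration of the preceding remarks, the following Python sketch adapts Algorithm 1 to a random shift modulo one followed by the inverse normal CDF in each coordinate; the helper name and all input conventions are our own assumptions.

```python
# Variant of Algorithm 1 with a shift modulo one followed by the inverse normal
# CDF applied coordinate-wise, as used for integrands over R^s with respect to
# a normal distribution. Only the b^{m-w_j} distinct values per coordinate are
# transformed; columns j > s* are constant after the transformation.
import numpy as np
from statistics import NormalDist

_phi_inv = np.vectorize(lambda u: NormalDist().inv_cdf(min(max(u, 1e-16), 1 - 1e-16)))

def fast_reduced_product_normal(A, b, m, w, z, shift):
    tau = A.shape[1]
    s_star = max([j for j in range(len(w)) if w[j] < m], default=-1) + 1
    P = np.zeros((1, tau))
    for j in range(s_star - 1, -1, -1):
        n_j = b ** (m - w[j])
        X_j = (np.arange(n_j) * z[j] % n_j) / n_j     # distinct lattice values
        X_j = _phi_inv((X_j + shift[j]) % 1.0)        # shift mod 1, then Phi^{-1}
        P = np.tile(P, (n_j // P.shape[0], 1)) + np.outer(X_j, A[j])
    P = np.tile(P, (b**m // P.shape[0], 1))
    for j in range(s_star, len(w)):                   # columns j > s* are constant
        P = P + _phi_inv(shift[j] % 1.0) * A[j]
    return P

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, shift = rng.random((3, 2)), rng.random(3)      # hypothetical example input
    print(fast_reduced_product_normal(A, b=2, m=4, w=[0, 1, 4], z=[1, 5, 1], shift=shift))
```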
## 3 Reduced digital nets
In this section we present a reduced point construction for so-called digital \((t,m,s)\)-nets. Typical examples are digital nets derived from Sobol', Faure, and Niederreiter sequences. In general, a \((t,m,s)\)-net is defined as follows.
Given an integer \(b\geq 2\), an elementary interval in \([0,1)^{s}\) is an interval of the form \(\prod_{j=1}^{s}[a_{j}b^{-d_{j}},(a_{j}+1)b^{-d_{j}})\) where \(a_{j},d_{j}\) are nonnegative integers with \(0\leq a_{j}<b^{d_{j}}\) for \(1\leq j\leq s\). Let \(t,m\), with \(0\leq t\leq m\), be integers. Then a \((t,m,s)\)-net in base \(b\) is a point set \(P_{m}\) in \([0,1)^{s}\) with \(b^{m}\) points such that any elementary interval in base \(b\) with volume \(b^{t-m}\) contains exactly \(b^{t}\) points of \(P_{m}\).
Note that a low \(t\)-value of a \((t,m,s)\)-net implies better equidistribution properties and usually also better error bounds for integration rules based on such nets. How to find nets with low \(t\)-values is an involved question, see, e.g., [7, 13]. Due to the important role of the \(t\)-value, one sometimes also considers a slightly refined notion of a \((t,m,s)\)-net, which is then referred to as a \(((t_{\mathfrak{u}})_{\mathfrak{u}\subseteq[s]},m,s)\)-net. The latter notion means that for any \(\mathfrak{u}\neq\emptyset\), \(\mathfrak{u}\subseteq[s]\), the projection of the net is a \((t_{\mathfrak{u}},m,|\mathfrak{u}|)\)-net.
The most common method to obtain \((t,m,s)\)-nets are so-called digital constructions, yielding digital \((t,m,s)\)-nets. These work as follows. Let \(b\) be a prime number and recall that \(\mathbb{F}_{b}\) denotes the finite field with \(b\) elements. We identify this set with the integers \(\{0,1,\ldots,b-1\}\). We denote the (unique) \(b\)-adic digits of some \(n\in\mathbb{N}\) by \(\vec{n}\in\mathbb{F}_{b}^{\mathbb{N}}\), ordered from the least significant, that is \(n=\vec{n}\cdot(1,b,b^{2},\ldots)\). Here, the sum is always finite as there are only finitely many non-zero digits in \(\vec{n}\). Thus, with a slight abuse of notation we write \(\vec{n}\in\mathbb{F}_{b}^{m}\) if \(n<b^{m}\). Analogously, we denote the \(b\)-adic digits of \(y\in[0,1)\) by \(\vec{y}\in\mathbb{F}_{b}^{\mathbb{N}}\), i.e. \(y=\vec{y}\cdot(b^{-1},b^{-2},\ldots)\), with the additional constraint that \(\vec{y}\) does not contain infinitely many consecutive entries equal to \(b-1\).
Given _generating matrices_\(C^{(j)}=\left(C^{(j)}_{p,q}\right)_{p,q=1}^{m}\in\mathbb{F}_{b}^{m\times m}\) for \(j=1,\ldots,s\), a _digital net_ is defined as \(P_{m}(\{C^{(j)}\}_{j}):=\{\boldsymbol{y}_{0},\ldots,\boldsymbol{y}_{b^{m}-1}\}\), where \(\boldsymbol{y}_{n}=(y_{1,n},\ldots,y_{s,n})\in[0,1)^{s}\),
\[\vec{y}_{j,n}:=C^{(j)}\vec{n}\quad\text{and }y_{j,n}=\vec{y}_{j,n}\cdot(b^{-1},b^{- 2},\ldots,b^{-m}). \tag{11}\]
From this definition, given reduction indices \(\boldsymbol{w}\), one can construct a reduced digital net by setting the last \(\min(w_{j},m)\) rows of \(C^{(j)}\) to \(\boldsymbol{0}\). To be more precise,
\[\widehat{C}^{(j)}_{p,q}:=\begin{cases}C^{(j)}_{p,q}&\text{if }p\in\{1,\ldots,m-\min(w_{j},m)\},\\ 0&\text{if }p\in\{m-\min(w_{j},m)+1,\ldots,m\},\end{cases} \tag{12}\]
and applying (11) with the latter choice \(\widehat{C}^{(j)}=\left(\widehat{C}^{(j)}_{p,q}\right)_{p,q=1}^{m}\) of the generating matrices. Note that \(\widehat{C}^{(j)}\) is just the zero matrix if \(j>s^{*}\). For the reduced digital net, we then write \(P_{m}(\{\widehat{C}^{(j)}\}_{j}):=\{\mathbf{z}_{0},\ldots,\mathbf{z}_{b^{m}-1}\}\).
The construction (12) allows us to generate a reduced digital net for any given digital net.
**Input:** Prime \(b\), generating matrices \(C^{(j)}\in\mathbb{F}_{b}^{m\times m}\), \(j=1,\ldots,s\), reduction indices \(0=w_{1}\leq w_{2}\leq\cdots\leq w_{s}\).

Set \(\mathbf{z}_{0}=(0,\ldots,0)\).

**for** \(j=1,\ldots,s\) **do**

**for** \(n=0,1,\ldots,b^{m}-1\) **do**

\(\bullet\) Compute \(\vec{z}_{j,n}=\widehat{C}^{(j)}\vec{n}\), with the choice \(\widehat{C}^{(j)}\) from (12).

\(\bullet\) Set \(z_{j,n}=\vec{z}_{j,n}\cdot(b^{-1},b^{-2},\ldots)\).

**end for**

**end for**

**Return:** \(P_{m}(\{\widehat{C}^{(j)}\}_{j})=\{\mathbf{z}_{n}=(z_{1,n},\ldots,z_{s,n})\in[0,1)^{s}\colon n=0,\ldots,b^{m}-1\}\).
**Algorithm 3** Computation of a reduced digital net
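A Python sketch of Algorithm 3; the generating matrices in the example at the bottom are arbitrary placeholders rather than matrices of an actual \((t,m,s)\)-net.

```python
# Generate a reduced digital net: zero out the last min(w_j, m) rows of each
# generating matrix C^(j), as in (12), and map n -> z_{j,n} via (11).
import numpy as np

def digits(n, b, m):
    """b-adic digits of n, least significant first, length m."""
    return np.array([(n // b**i) % b for i in range(m)])

def reduced_digital_net(C_list, b, m, w):
    s, N = len(C_list), b**m
    pts = np.zeros((N, s))
    for j, C in enumerate(C_list):
        C_hat = C.copy()
        C_hat[m - min(w[j], m):, :] = 0              # zero the last min(w_j, m) rows
        for n in range(N):
            d = (C_hat @ digits(n, b, m)) % b        # digit vector of z_{j,n}
            pts[n, j] = np.sum(d / b ** np.arange(1, m + 1))
    return pts

if __name__ == "__main__":
    b, m, s = 2, 3, 2
    rng = np.random.default_rng(0)
    C_list = [rng.integers(0, b, size=(m, m)) for _ in range(s)]  # arbitrary example matrices
    print(reduced_digital_net(C_list, b, m, w=[0, 1]))
```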
The advantage of using a reduced digital net is that in component \(j\) all the values of the \(z_{j,n}\) are in the set \(\{0,1/b^{m-\min(w_{j},m)},2/b^{m-\min(w_{j},m)},\ldots,1-1/b^{m-\min(w_{j},m)}\}\). Since the digital net has \(b^{m}\) points, the values necessarily repeat \(b^{\min(w_{j},m)}\) times. This can be used to achieve a reduction in the computation of \(XA\) in the following way.
**Input:** Matrix \(A\in\mathbb{R}^{s\times\tau}\) with \(j\)-th row vector \(\mathbf{a}_{j}\), \(j=1,2,\ldots,s\), integer \(m\in\mathbb{N}\), prime \(b\), reduction indices \(0=w_{1}\leq w_{2}\leq\cdots\leq w_{s}\). Let \(s^{*}\leq s\) be the largest index such that \(w_{s^{*}}<m\). Let \(\{\mathbf{y}_{n}=(y_{n,1},y_{n,2},\ldots,y_{n,s})^{\top}\in[0,1)^{s}:n=0,1,\ldots,b^{m}-1\}\) be a digital net.

Set \(P_{s^{*}+1}=\mathbf{0}\in\mathbb{R}^{b^{m}\times\tau}\).

**for** \(j=s^{*}\) **to** \(1\) **do**

\(\bullet\) Compute the row vectors
\[\mathbf{c}_{k}=\frac{k}{b^{m-w_{j}}}\mathbf{a}_{j},\quad k=0,1,\ldots,b^{m-w_{j}}-1.\]

\(\bullet\) Compute \(P_{j}\) as
\[P_{j}=P_{j+1}+\begin{pmatrix}\mathbf{c}_{\lfloor y_{0,j}b^{m-w_{j}}\rfloor}\\ \mathbf{c}_{\lfloor y_{1,j}b^{m-w_{j}}\rfloor}\\ \vdots\\ \mathbf{c}_{\lfloor y_{b^{m}-1,j}b^{m-w_{j}}\rfloor}\end{pmatrix}\in\mathbb{R}^{b^{m}\times\tau}.\]

**end for**

Set \(P=P_{1}\).

**Return:** Matrix product \(P=XA\).
**Algorithm 4** Fast reduced matrix product for digital nets
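A Python transcription of Algorithm 4 (sketch); the digital net is assumed to be given as an \(N\times s\) array \(Y\) of points \(\mathbf{y}_{n}\), for instance produced as in Algorithm 3 with all \(w_{j}=0\).

```python
# Fast reduced matrix product for digital nets (Algorithm 4): in coordinate j
# only the b^{m-w_j} distinct row vectors c_k = (k / b^{m-w_j}) a_j are formed,
# and they are gathered according to the truncated digits of y_{n,j}.
import numpy as np

def fast_reduced_product_digital(A, Y, b, m, w):
    N, s = Y.shape
    tau = A.shape[1]
    s_star = max([j for j in range(s) if w[j] < m], default=-1) + 1
    P = np.zeros((N, tau))
    for j in range(s_star - 1, -1, -1):
        n_j = b ** (m - w[j])
        c = np.outer(np.arange(n_j) / n_j, A[j])     # the n_j distinct row vectors c_k
        idx = np.floor(Y[:, j] * n_j).astype(int)    # k = floor(y_{n,j} b^{m-w_j})
        P += c[idx]
    return P
```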
Compared with computing \(XA\) directly, Algorithm 4 reduces the number of multiplications from \(\mathcal{O}(\tau b^{m})\) to \(\mathcal{O}(\tau b^{m-w_{j}})\) in coordinate \(j\), and hence from \(\mathcal{O}(s\tau b^{m})\) to \(\mathcal{O}(\tau b^{m}\sum_{j=1}^{s^{*}}b^{-w_{j}})\) overall. The number of additions is the same in both instances.
The difference here to the approach for lattice point sets is that although component \(j\) has \(b^{w_{j}}\) repeated values, the repeating pattern in each component is different and so when we add up the vectors resulting from the different components, we do not have repetitions in general and so we do not get a reduced number of additions. The analogue to the method in Section 2.1 for lattice point sets applied to digital nets would be to delete columns of \(C^{(j)}\) (rather than rows as we did in this section). The problem with this approach is that if we delete columns, then the \((t,m,s)\)-net property of the digital net is not guaranteed anymore. A special construction of digital \((t,m,s)\)-nets with additional properties would be needed in this case.
For the case of reduced lattice point sets, we can use Theorem 1 to obtain an error bound on the performance of the corresponding QMC rule when using (2) to approximate (1). For the case of reduced digital nets, there is no existing error bound analogous to Theorem 1. We outline the error analysis in the subsequent section.
### Error analysis
Consider the case of digital nets from Algorithm 3. For this we fix \(m\in\mathbb{N}\). The _weighted star discrepancy_ is a measure of the worst-case quadrature error for a node set \(P_{m}\), with \(b^{m}\) nodes, defined as
\[D^{*}_{b^{m},\boldsymbol{\gamma}}(P_{m}):=\sup_{\boldsymbol{x}\in(0,1]^{s}}\max_{\emptyset\neq\mathfrak{u}\subseteq[s]}\gamma_{\mathfrak{u}}\left|\Delta_{P_{m},\mathfrak{u}}(\boldsymbol{x})\right|, \tag{13}\]
where
\[\Delta_{P_{m},\mathfrak{u}}(\boldsymbol{x}):=\frac{\#\{(y_{1},\ldots,y_{s})\in P_{m}\colon y_{j}<x_{j},\,\forall j\in\mathfrak{u}\}}{b^{m}}-\prod_{j\in\mathfrak{u}}x_{j}. \tag{14}\]
We additionally write \(\Delta_{P_{m}}(\boldsymbol{x})=\Delta_{P_{m},[s]}(\boldsymbol{x})\). For all \(k\in\{0,\ldots,b^{m}-1\}\), let \(\vec{k}=(\kappa_{0},\ldots,\kappa_{m-1})\in\mathbb{F}_{b}^{m}\) denote its vector of \(b\)-adic digits, ordered from the least significant to the most significant. Moreover, define
\[\rho(k)=\begin{cases}1,&\text{if }k=0,\\ \frac{1}{b^{r}\sin(\pi\kappa_{r-1}/b)},&\text{if }k=\kappa_{0}+\kappa_{1}b+ \cdots+\kappa_{r-1}b^{r-1},\\ &\text{with }\kappa_{0},\ldots,\kappa_{r-2}\in\{0,1,\ldots,b-1\},\kappa_{r-1} \in\{1,\ldots,b-1\}.\end{cases}\]
In the following proposition, we prove a bound on \(\Delta_{P_{m},u}(\boldsymbol{x})\).
**Proposition 1**.: _Let \(\widehat{P}_{m}:=P_{m}(\{\widehat{C}^{(j)}\}_{j})=\{\boldsymbol{z}_{0},\ldots,\boldsymbol{z}_{b^{m}-1}\}\) be generated by Algorithm 3, and let \(\boldsymbol{x}\in(0,1]^{s}\). Let \(0=w_{1}\leq w_{2}\leq\cdots\leq w_{s}\) and let \(s^{*}\in[s]\) be the largest index such that \(w_{s^{*}}<m\). Then for any \(\mathfrak{u}\subseteq[s]\) with \(\mathfrak{u}\neq\emptyset\) we have_
\[\left|\Delta_{\widehat{P}_{m},\mathfrak{u}}(\boldsymbol{x})\right|\leq\begin{cases}1&\text{if }\mathfrak{u}\not\subseteq[s^{*}],\\[1ex] \displaystyle 1-\prod_{j\in\mathfrak{u}}\left(1-\frac{1}{b^{m-w_{j}}}\right)+\sum_{\begin{subarray}{c}\boldsymbol{k}_{\mathfrak{u}}\in\mathbb{N}_{0}^{|\mathfrak{u}|}\setminus\{\boldsymbol{0}\}\\ k_{j}\in\{0,\ldots,b^{m-w_{j}}-1\}\\ \sum_{j\in\mathfrak{u}}(\widehat{C}^{(j)})^{\top}\vec{k}_{j}\equiv\vec{0}\pmod{b}\end{subarray}}\prod_{j\in\mathfrak{u}}\rho(k_{j})&\text{if }\mathfrak{u}\subseteq[s^{*}],\end{cases}\]
_where \(\widehat{C}^{(j)}\) is defined in (12)._
To prove Proposition 1, we need the next elementary lemma, extending [7, Lemma 3.18], which can be verified by induction on \(s\).
**Lemma 1**.: _Let \(J\) be a finite index set and assume \(u_{j},v_{j}\in[0,1]\), \(|u_{j}-v_{j}|\leq\delta_{j}\in[0,1]\) for all \(j\in J\). Then_
\[\left|\prod_{j\in J}u_{j}-\prod_{j\in J}v_{j}\right|\leq 1-\prod_{j\in J}(1- \delta_{j})\leq\sum_{j\in J}\delta_{j}.\]
Proof of Proposition 1.: The bound for the case when \(\mathfrak{u}\not\subseteq[s^{*}]\) is trivial. Hence we can now focus on the case when \(\mathfrak{u}\subseteq[s^{*}]\). We operate along the lines of the proof of [7, Theorem 3.28]. We define the mapping \(T:[0,1]^{|\mathfrak{u}|}\to[0,1]^{|\mathfrak{u}|}\) given by
\[T(\boldsymbol{x}_{\mathfrak{u}})=T((x_{j})_{j\in\mathfrak{u}})=((T_{m-w_{j}}( x_{j}))_{j\in\mathfrak{u}}),\]
where \(T_{v}(x)=\lceil xb^{v}\rceil b^{-v}\).
Now, let us assume that \(\boldsymbol{x}_{\mathfrak{u}}=(x_{j})_{j\in\mathfrak{u}}\in(0,1]^{|\mathfrak{ u}|}\) has been chosen arbitrarily but fixed. For short, we write \(\overline{\boldsymbol{x}}_{\mathfrak{u}}=(\overline{x}_{j})_{j\in\mathfrak{ u}}:=T(\boldsymbol{x}_{\mathfrak{u}})\).
Recall that, by the definition of the matrices \(\widehat{C}^{(j)}\), \(j\in\{1,\ldots,s\}\), the points \(\boldsymbol{z}_{n}=(z_{1,n},\ldots,z_{s,n})\) are such that the \(z_{j,n}\) have at most \(m-\min(w_{j},m)\) non-zero digits.
Using the triangle inequality we get
\[\left|\Delta_{\widehat{P}_{m},\mathfrak{u}}(\boldsymbol{x})\right|\leq\left| \Delta_{\widehat{P}_{m},\mathfrak{u}}(\boldsymbol{x})-\Delta_{\widehat{P}_{m },\mathfrak{u}}(\overline{\boldsymbol{x}})\right|+\left|\Delta_{\widehat{P}_ {m},\mathfrak{u}}(\overline{\boldsymbol{x}})\right|.\]
For any \(\boldsymbol{z}_{n}\in\widehat{P}_{m}\), we denote the \(b\)-adic digits of the \(j\)-th component \(z_{j,n}\) by \(z_{j,n,i}\), \(i\in\{1,\ldots,m\}\). By construction, we see that \(z_{j,n,i}=0\) for \(i>m-\min\{w_{j},m\}\). Hence
\[\widehat{P}_{m}\subseteq\left\{\left(h_{1}b^{-(m-\min\{w_{1},m\})},\ldots,h_{ s}b^{-(m-\min\{w_{s},m\})}\right):h_{j}\in\mathbb{N}_{0}\right\}.\]
This implies, as \(\overline{\boldsymbol{x}}_{\mathfrak{u}}=T(\boldsymbol{x}_{\mathfrak{u}})\),
\[\#\{(z_{1},\ldots,z_{s})\in\widehat{P}_{m}\colon z_{j}<x_{j},\,\forall j\in \mathfrak{u}\}=\#\{(z_{1},\ldots,z_{s})\in\widehat{P}_{m}\colon z_{j}<\overline {x}_{j},\,\forall j\in\mathfrak{u}\},\]
and thus
\[\left|\Delta_{\widehat{P}_{m},\mathfrak{u}}(\boldsymbol{x})-\Delta_{\widehat {P}_{m},\mathfrak{u}}(\overline{\boldsymbol{x}})\right|=\left|\prod_{j\in \mathfrak{u}}x_{j}-\prod_{j\in\mathfrak{u}}\overline{x}_{j}\right|=\prod_{j \in\mathfrak{u}}\overline{x}_{j}-\prod_{j\in\mathfrak{u}}x_{j}\leq 1-\prod_{j\in \mathfrak{u}}\left(1-\frac{1}{b^{m-w_{j}}}\right),\]
where we used Lemma 1 for the last inequality.
Since \([\boldsymbol{0},\overline{\boldsymbol{x}})\) is a disjoint union of intervals of the form
\[J=\prod_{j=1}^{s}[\frac{h_{j}}{b^{m-\min(w_{j},m)}},\frac{h_{j}+1}{b^{m-\min( w_{j},m)}}),\]
an application of [7, Lemma 3.9] implies that \(\widehat{\chi_{[\boldsymbol{0},\overline{\boldsymbol{x}})}}(\boldsymbol{k})=0\) for all \(\boldsymbol{k}\in\mathbb{N}_{0}^{s}\setminus\{\boldsymbol{0}\}\) such that \(k_{j}\geq b^{m-\min(w_{j},m)}\) for at least one \(j\). Here \(\chi_{[\boldsymbol{0},\overline{\boldsymbol{x}})}\) denotes the indicator function and \(\widehat{\chi_{[\boldsymbol{0},\overline{\boldsymbol{x}})}}(\boldsymbol{k})\) are the corresponding Walsh coefficients (we use similar notation as in [7, Chapter 2]). The
complete analogue of this observation holds if we consider the projection of \(J\), given by \(J_{\mathfrak{u}}=\prod_{j\in\mathfrak{u}}[\frac{h_{j}}{b^{m-\min(w_{j},m)}},\frac{ h_{j}+1}{b^{m-\min(w_{j},m)}})\), the projections \(\overline{\mathbf{x}}_{\mathfrak{u}}\) and \(\mathbf{k}_{\mathfrak{u}}\) of \(\overline{\mathbf{x}}\) and \(\mathbf{k}\), respectively, and the projections \(\mathbf{z}_{n,\mathfrak{u}}\) of the points \(\mathbf{z}_{n}\) in \(\widehat{P}_{m}\). Then, [7, Lemmas 3.29 and 4.75] yield
\[\left|\Delta_{\widehat{P}_{m},\mathfrak{u}}(\overline{\mathbf{x}}) \right| =\left|\frac{1}{b^{m}}\sum_{\begin{subarray}{c}\mathbf{k}_{\mathfrak{ u}}\in\mathbb{N}_{0}^{|\mathfrak{u}|}\setminus\{\mathbf{0}\}\\ k_{j}\in\{0,\ldots,b^{m-w_{j}}-1\}\end{subarray}}\widehat{\chi_{[\mathbf{0}, \overline{\mathbf{x}}_{\mathfrak{u}})}}(\mathbf{k}_{\mathfrak{u}})\sum_{n=0}^{b^{m}-1 }\mathrm{wal}_{\mathbf{k}_{\mathfrak{u}}}(\mathbf{z}_{n,\mathfrak{u}})\right|\] \[\leq\sum_{\begin{subarray}{c}\mathbf{k}_{\mathfrak{u}}\in\mathbb{N}_{0 }^{|\mathfrak{u}|}\setminus\{\mathbf{0}\}\\ k_{j}\in\{0,\ldots,b^{m-w_{j}}-1\}\end{subarray}}\prod_{j\in\mathfrak{u}}\rho (k_{j})\left|\frac{1}{b^{m}}\sum_{n=0}^{b^{m}-1}\mathrm{wal}_{\mathbf{k}}(\mathbf{z}_{n,\mathfrak{u}})\right|\] \[=\sum_{\begin{subarray}{c}\mathbf{k}_{\mathfrak{u}}\in\mathbb{N}_{0 }^{|\mathfrak{u}|}\setminus\{\mathbf{0}\}\\ k_{j}\in\{0,\ldots,b^{m-w_{j}}-1\}\\ \sum_{j\in\mathfrak{u}}(\widehat{C}^{(j)})^{\top}\bar{k}_{j}\equiv\vec{0} \bmod b\end{subarray}}\prod_{j\in\mathfrak{u}}\rho(k_{j}).\]
This completes the proof.
Let \(\mathbf{w}=(w_{j})_{j=1}^{s}\), \(0=w_{1}\leq w_{2}\leq\cdots\leq w_{s}\), and let \(\widehat{P}_{m}:=P_{m}(\{\widehat{C}^{(j)}\}_{j})=\{\mathbf{z}_{0},\ldots,\mathbf{z}_{ b^{m}-1}\}\) be generated by Algorithm 3. Let \(\mathfrak{u}\neq\emptyset\), \(\mathfrak{u}\subseteq[s]\) be given. We then define the reduced dual net,
\[P_{m,\mathfrak{u},\mathbf{w}}^{\perp}( \{\widehat{C}^{(j)}\}_{j})\] \[=\{\mathbf{k}_{\mathfrak{u}}\in\mathbb{N}_{0}^{|\mathfrak{u}|}:\,k_{j }\in\{0,\ldots,b^{m-\min(w_{j},m)}-1\}\,\forall j\in\mathfrak{u},\sum_{j\in \mathfrak{u}}(\widehat{C}^{(j)})^{\top}\bar{k}_{j}\equiv\vec{0}\bmod b\},\]
and we also define the reduced dual net without zero components,
\[P_{m,\mathfrak{u},\mathbf{w}}^{\perp,*}( \{\widehat{C}^{(j)}\}_{j})\] \[=\{\mathbf{k}_{\mathfrak{u}}\in\mathbb{N}^{|\mathfrak{u}|}:\,k_{j} \in\{1,\ldots,b^{m-\min(w_{j},m)}-1\}\,\forall j\in\mathfrak{u},\sum_{j\in \mathfrak{u}}(\widehat{C}^{(j)})^{\top}\bar{k}_{j}\equiv\vec{0}\bmod b\}.\]
Furthermore, we let
\[R_{\mathbf{w}}(\{\widehat{C}^{(j)}\}_{j\in\mathfrak{u}}):=\sum_{\mathbf{k}_{\mathfrak{ u}}\in P_{m,\mathfrak{u},\mathbf{w}}^{\perp}(\{\widehat{C}^{(j)}\}_{j}) \setminus\{\mathbf{0}\}}\prod_{j\in\mathfrak{u}}\rho(k_{j}). \tag{15}\]
Applying Proposition 1 to all projections of \(\widehat{P}_{m}\) onto the sets \(\mathfrak{u}\subseteq[s]\), \(\mathfrak{u}\neq\emptyset\), gives the following bound on the weighted discrepancy.
**Proposition 2**.: _Let \(m\in\mathbb{N}\), let \(\mathbf{w}\) be a given set of reduction indices with \(0=w_{1}\leq w_{2}\leq\cdots\leq w_{s}\), let \(s^{*}\in[s]\) be the largest index such that \(w_{s^{*}}<m\), and let \(\widehat{P}_{m}:=P_{m}(\{\widehat{C}^{(j)}\}_{j})\) be generated by Algorithm 3. Then,_
\[D_{b^{m},\gamma}^{*}(\widehat{P}_{m})\leq\max_{\emptyset\neq\mathfrak{u}\subseteq [s]}\gamma_{\mathfrak{u}}\begin{cases}1&\text{if }\mathfrak{u}\not\subseteq[s^{*}],\\ \left[1-\prod_{j\in\mathfrak{u}}\left(1-\frac{1}{b^{m-w_{j}}}\right)+R_{\mathbf{w} }(\{\widehat{C}^{(j)}\}_{j\in\mathfrak{u}})\right]&\text{if }\mathfrak{u}\subseteq[s^{*}].\end{cases} \tag{16}\]
We will now analyze the expressions occurring in the square brackets in (16) in greater detail. To this end, we restrict ourselves to product weights in the following, i.e., we assume weights \(\gamma_{\mathfrak{u}}=\prod_{j\in\mathfrak{u}}\gamma_{j}\) with \(\gamma_{1}\geq\gamma_{2}\geq\cdots>0\).
Then, using the second inequality of Lemma 1 yields for the first term for the case \(\mathfrak{u}\subseteq[s^{*}]\) in (16),
\[\gamma_{\mathfrak{u}}\left(1-\prod_{j\in\mathfrak{u}}\left(1-\frac{1}{b^{m-w_{ j}}}\right)\right)\leq\frac{1}{b^{m}}\gamma_{\mathfrak{u}}\sum_{j\in \mathfrak{u}}b^{w_{j}}\leq\frac{1}{b^{m}}\prod_{j\in\mathfrak{u}}\gamma_{j}(1+ b^{w_{j}}). \tag{17}\]
For the case \(\mathfrak{u}\not\subseteq[s^{*}]\) in (16), we use that \(w_{j}\geq m\) if \(j\in\mathfrak{u}\setminus[s^{*}]\), and obtain for \(\mathfrak{v}=\mathfrak{u}\cap[s^{*}]\) that
\[\gamma_{\mathfrak{u}}\leq\gamma_{\mathfrak{v}}\gamma_{\mathfrak{u}\setminus \mathfrak{v}}\frac{1}{b^{m}}\prod_{j\in\mathfrak{u}\setminus\mathfrak{v}}(1+ b^{w_{j}})\leq\frac{1}{b^{m}}\prod_{j\in\mathfrak{u}}\gamma_{j}(1+b^{w_{j}}). \tag{18}\]
Regarding the remaining term in (16), we show the following lemma.
**Lemma 2**.: _Let \(\mathbf{w}\) be a given set of reduction indices with \(0=w_{1}\leq w_{2}\leq\cdots\leq w_{s}\), and let \(\widehat{P}_{m}:=P_{m}(\{\widehat{C}^{(j)}\}_{j})\) be generated by Algorithm 3. Assume that the matrices \(\widehat{C}^{(1)},\ldots,\widehat{C}^{(s)}\in\mathbb{F}_{b}^{m\times m}\) are the generating matrices of a digital \(((t_{\mathfrak{u}})_{\mathfrak{u}\subseteq[s]},m,s)\)-net. As above, let \(s^{*}\in[s]\) be the largest number such that \(w_{s^{*}}<m\), and assume that \(\mathfrak{v}\neq\emptyset\), \(\mathfrak{v}\subseteq[s^{*}]\). Then,_
\[R_{\mathbf{w}}(\{\widehat{C}^{(j)}\}_{j\in\mathfrak{v}})\leq \sum_{\emptyset\neq\mathfrak{p}\subseteq\mathfrak{v}}\frac{b^{t_{\mathfrak{p}}}}{b^{m}}\left[\frac{1}{b}\left(\frac{b^{2}+b}{3}\right)^{|\mathfrak{p}|}\max\left(\frac{(m-t_{\mathfrak{p}})^{|\mathfrak{p}|-1}}{(|\mathfrak{p}|-1)!},\frac{1}{b}\right)+\left(\frac{b^{2}-1}{3b}\right)^{|\mathfrak{p}|}\prod_{j\in\mathfrak{p}}(m-w_{j})\right]. \tag{19}\]
Proof.: Recall that for each \(\widehat{C}^{(j)}\), \(j\in\mathfrak{v}\), only the first \(m-w_{j}\) rows of \(\widehat{C}^{(j)}\) are non-zero. Consequently,
\[R_{\mathbf{w}}(\{\widehat{C}^{(j)}\}_{j\in\mathfrak{v}}) =\sum_{\mathbf{k}_{\mathfrak{v}}\in P_{m,\mathfrak{v},\mathbf{w}}^{\perp}(\{\widehat{C}^{(j)}\}_{j})\setminus\{\mathbf{0}\}}\ \prod_{j\in\mathfrak{v}}\rho(k_{j})=\sum_{\emptyset\neq\mathfrak{p}\subseteq\mathfrak{v}}\ \sum_{\mathbf{k}_{\mathfrak{p}}\in P_{m,\mathfrak{p},\mathbf{w}}^{\perp,*}(\{\widehat{C}^{(j)}\}_{j})}\ \prod_{j\in\mathfrak{p}}\rho(k_{j}). \tag{20}\]
We use the estimate \((\sin(x))^{-1}\leq(\sin(x))^{-2}\) for \(0<x<\pi\) to estimate \(\rho(k_{j})\leq\frac{1}{b^{a_{j}}\sin^{2}(\pi\kappa_{j,a_{j}-1}/b)}\) for positive \(k_{j}\), where we write \(k_{j}=\kappa_{j,0}+\kappa_{j,1}b+\cdots+\kappa_{j,a_{j}-1}b^{a_{j}-1}\), with \(\kappa_{j,a_{j}-1}\neq 0\). Now we can adapt the proof of [7, Lemma 16.40], where we replace \(\frac{1}{b^{2a_{j}}}\left(\frac{1}{\sin^{2}(\pi\kappa_{j,a_{j}-1}/b)}-\frac{1}{3}\right)\) with \(\frac{1}{b^{a_{j}}\sin^{2}(\pi\kappa_{j,a_{j}-1}/b)}\), and \([s]\) by \(\mathfrak{p}\) to get the result.
To simplify the notation we prove an upper bound on the inner sum in (20) for the special case \(\mathfrak{p}=[s^{*}]\) and assume that the underlying point set generated by \(\{\widehat{C}_{j}\}_{j\in\mathfrak{p}}=\{\widehat{C}_{j}\}_{j\in[s^{*}]}\) is a digital \((t,m,s^{*})\)-net. Then we obtain
\[\sum_{\boldsymbol{k}_{\mathfrak{p}}\in P_{m,\mathfrak{p},\boldsymbol{w}}^{\perp,*}(\{\widehat{C}^{(j)}\})}\prod_{j\in\mathfrak{p}}\rho(k_{j})\] \[= \sum_{\boldsymbol{k}_{[s^{*}]}\in P_{m,[s^{*}],\boldsymbol{w}}^{\perp,*}(\{\widehat{C}^{(j)}\})}\prod_{j=1}^{s^{*}}\rho(k_{j})\] \[\leq \sum_{a_{1}=1}^{m-w_{1}}\cdots\sum_{a_{s^{*}}=1}^{m-w_{s^{*}}}b^{-a_{1}-\cdots-a_{s^{*}}}\underbrace{\sum_{k_{1}=b^{a_{1}-1}}^{b^{a_{1}}-1}\cdots\sum_{k_{s^{*}}=b^{a_{s^{*}}-1}}^{b^{a_{s^{*}}}-1}}_{(\widehat{C}^{(1)})^{\top}\vec{k}_{1}+\cdots+(\widehat{C}^{(s^{*})})^{\top}\vec{k}_{s^{*}}\equiv\vec{0}\pmod{b}}\prod_{j=1}^{s^{*}}\frac{1}{\sin^{2}(\pi\kappa_{j,a_{j}-1}/b)}\] \[\leq \sum_{a_{1}=1}^{m-w_{1}}\cdots\sum_{a_{s^{*}}=1}^{m-w_{s^{*}}}b^{-a_{1}-\cdots-a_{s^{*}}}\left(\sum_{\kappa=1}^{b-1}\frac{1}{\sin^{2}(\pi\kappa/b)}\right)^{s^{*}}\times\] \[\begin{cases}0&\text{if }a_{1}+\cdots+a_{s^{*}}\leq m-t,\\ 1&\text{if }m-t<a_{1}+\cdots+a_{s^{*}}\leq m-t+s^{*},\\ b^{a_{1}+\cdots+a_{s^{*}}-s^{*}-m+t}&\text{if }a_{1}+\cdots+a_{s^{*}}>m-t+s^{*},\end{cases}\]
where the second inequality follows from estimating the number of solutions of the linear system \((\widehat{C}^{(1)})^{\top}\vec{k}_{1}+\cdots+(\widehat{C}^{(s^{*})})^{\top} \vec{k}_{s^{*}}\equiv\vec{0}\pmod{b}\), which was done in the proof of [7, Lemma 16.40]. From [7, Corollary A.23] we have \(\sum_{\kappa=1}^{b-1}\frac{1}{\sin^{2}(\pi\kappa/b)}=\frac{b^{2}-1}{3}\).
Set
\[\Sigma_{1}:= \left(\frac{b^{2}-1}{3}\right)^{s^{*}}\underbrace{\sum_{a_{1}=1}^{m-w_{1}}\cdots\sum_{a_{s^{*}}=1}^{m-w_{s^{*}}}b^{-a_{1}-\cdots-a_{s^{*}}}}_{m-t<a_{1}+\cdots+a_{s^{*}}\leq m-t+s^{*}},\] \[\Sigma_{2}:= \left(\frac{b^{2}-1}{3}\right)^{s^{*}}\underbrace{\sum_{a_{1}=1}^{m-w_{1}}\cdots\sum_{a_{s^{*}}=1}^{m-w_{s^{*}}}b^{-s^{*}-m+t}}_{a_{1}+\cdots+a_{s^{*}}>m-t+s^{*}},\]
then the inner sum in (20) is bounded by \(\Sigma_{1}+\Sigma_{2}\).
If \(m-t+1-s^{*}\geq 0\), then we have
\[\Sigma_{1}= \left(\frac{b^{2}-1}{3b}\right)^{s^{*}}\underbrace{\sum_{b_{1}=0}^{m-w_{1}-1}\cdots\sum_{b_{s^{*}}=0}^{m-w_{s^{*}}-1}b^{-b_{1}-\cdots-b_{s^{*}}}}_{m-t-s^{*}<b_{1}+\cdots+b_{s^{*}}\leq m-t}\] \[\leq \left(\frac{b^{2}-1}{3b}\right)^{s^{*}}\sum_{\ell=m-t-s^{*}+1}^{m-t}b^{-\ell}\binom{\ell+s^{*}-1}{s^{*}-1}\] \[\leq \left(\frac{b^{2}-1}{3b}\right)^{s^{*}}\sum_{\ell=m-t-s^{*}+1}^{\infty}b^{-\ell}\binom{\ell+s^{*}-1}{s^{*}-1}\]
\[\leq \left(\frac{b^{2}-1}{3b}\right)^{s^{*}}\frac{1}{b^{m-t-s^{*}+1}} \binom{m-t}{s^{*}-1}\left(\frac{b}{b-1}\right)^{s^{*}}\] \[= \left(\frac{b^{2}+b}{3}\right)^{s^{*}}\frac{b^{t}}{b^{m+1}}\,\frac{ (m-t)^{s^{*}-1}}{(s^{*}-1)!},\]
where we used [7, Lemma 13.24] to estimate the infinite sum.
If \(m-t+1-s^{*}<0\), then we have
\[\Sigma_{1}\leq \left(\frac{b^{2}-1}{3b}\right)^{s^{*}}\sum_{\ell=0}^{\infty}\binom{\ell+s^{*}-1}{s^{*}-1}b^{-\ell}\leq\left(\frac{b^{2}-1}{3b}\right)^{s^{*}}\left(\frac{b}{b-1}\right)^{s^{*}}\leq\left(\frac{b^{2}+b}{3}\right)^{s^{*}}\frac{b^{t}}{b^{m}}\frac{1}{b^{2}}.\]
For \(\Sigma_{2}\) we use the estimate
\[\Sigma_{2}\leq\left(\frac{b^{2}-1}{3b}\right)^{s^{*}}\frac{b^{t}}{b^{m}}\sum_ {a_{1}=1}^{m-w_{1}}\cdots\sum_{a_{s^{*}}=1}^{m-w_{s^{*}}}1\leq\left(\frac{b^{2 }-1}{3b}\right)^{s^{*}}\frac{b^{t}}{b^{m}}\prod_{j\in[s^{*}]}(m-w_{j}).\]
The argument for the case \(\mathfrak{p}=[s^{*}]\) can be repeated analogously for all \(\emptyset\neq\mathfrak{p}\subseteq\mathfrak{v}\) in (20), by adapting notation, and in particular by replacing \(t\) by \(t_{\mathfrak{p}}\). This yields the result claimed in the lemma, by plugging these estimates into (20).
Inserting the estimates in (17), (18), and (19) into (16) yields the following theorem.
**Theorem 4**.: _Let \(\mathbf{w}\) be a given set of reduction indices with \(0=w_{1}\leq w_{2}\leq\cdots\leq w_{s}\), and let \(\widehat{P}_{m}:=P_{m}(\{\widehat{C}^{(j)}\}_{j})\) be generated by Algorithm 3. Furthermore, assume product weights \(\gamma_{\mathfrak{u}}=\prod_{j\in\mathfrak{u}}\gamma_{j}\) with \(\gamma_{1}\geq\gamma_{2}\geq\cdots>0\). Then,_
\[D^{*}_{b^{m},\boldsymbol{\gamma}}(\widehat{P}_{m}) \leq \max_{\emptyset\neq\mathfrak{u}\subseteq[s]}\left[\frac{1}{b^{m}}\prod_{j\in\mathfrak{u}}\gamma_{j}(1+b^{w_{j}})\right] \tag{21}\] \[+\max_{\emptyset\neq\mathfrak{v}\subseteq[s^{*}]}\left[\gamma_{\mathfrak{v}}\sum_{\emptyset\neq\mathfrak{p}\subseteq\mathfrak{v}}\frac{b^{t_{\mathfrak{p}}}}{b^{m}}\left[\frac{1}{b}\left(\frac{b^{2}+b}{3}\right)^{|\mathfrak{p}|}\max\left(\frac{(m-t_{\mathfrak{p}})^{|\mathfrak{p}|-1}}{(|\mathfrak{p}|-1)!},\frac{1}{b}\right)\right.\right.\] \[\left.\left.+\left(\frac{b^{2}-1}{3b}\right)^{|\mathfrak{p}|}\prod_{j\in\mathfrak{p}}(m-w_{j})\right]\right].\]
We impose that the term
\[\max_{\emptyset\neq\mathfrak{u}\subseteq[s]}\left[\frac{1}{b^{m}}\prod_{j\in \mathfrak{u}}\gamma_{j}(1+b^{w_{j}})\right]\]
in (21) be bounded by \(\kappa/b^{m}\) for some constant \(\kappa>0\) independent of \(s\). Let \(j_{0}\in\mathbb{N}\) be minimal such that \(\gamma_{j}\leq 1\) for all \(j>j_{0}\). Since \(\prod_{j=1}^{s}\gamma_{j}(1+b^{w_{j}})\leq\gamma_{1}^{j_{0}}\prod_{j=1}^{s}(1+\gamma_{j}b^{w_{j}})\), it suffices to ensure that \(\gamma_{1}^{j_{0}}\prod_{j=1}^{s}(1+\gamma_{j}b^{w_{j}})\leq\kappa\). Hence it is sufficient to choose \(\kappa>\gamma_{1}^{j_{0}}\) and, for all \(j\in[s]\),
\[w_{j}:=\min\left(\left\lfloor\log_{b}\left(\frac{\left(\frac{\kappa}{\gamma_{1}^{j_{0}}}\right)^{1/s}-1}{\gamma_{j}}\right)\right\rfloor,m\right). \tag{22}\]
**Corollary 1**.: _Let \(\boldsymbol{\gamma}\) be product weights of the form \(\gamma_{\mathfrak{u}}=\prod_{j\in\mathfrak{u}}\gamma_{j}\) with \(\gamma_{1}\geq\gamma_{2}\geq\cdots>0\) such that \(\sum_{j=1}^{\infty}\gamma_{j}<\infty\). Let \(C^{(1)},\ldots,C^{(s)}\in\mathbb{F}_{b}^{m\times m}\) be the generating matrices of a digital \(((t_{\mathfrak{u}})_{\mathfrak{u}\subseteq[s]},m,s)\)-net. Let the reduction indices \(\boldsymbol{w}\) be chosen according to (22) and let \(s^{*}\in[s]\) be the largest number such that \(w_{s^{*}}<m\). Then there is a constant \(C>0\) independent of \(s\) and \(m\), such that_
\[D_{b^{m},\boldsymbol{\gamma}}^{*}(P_{m}(\{\widehat{C}^{(j)}\}_{j}))\leq \frac{C}{b^{m}}\] \[+\max_{\emptyset\neq\mathfrak{v}\subseteq[s^{*}]}\left[\gamma_{\mathfrak{v}}\sum_{\emptyset\neq\mathfrak{p}\subseteq\mathfrak{v}}\frac{b^{t_{\mathfrak{p}}}}{b^{m}}\left[\frac{1}{b}\left(\frac{b^{2}+b}{3}\right)^{|\mathfrak{p}|}\max\left(\frac{(m-t_{\mathfrak{p}})^{|\mathfrak{p}|-1}}{(|\mathfrak{p}|-1)!},\frac{1}{b}\right)\right.\right.\] \[\left.\left.+\left(\frac{b^{2}-1}{3b}\right)^{|\mathfrak{p}|}\prod_{j\in\mathfrak{p}}(m-w_{j})\right]\right].\]
_Remark 1_.: Note that the choice of the quantities \(w_{j}\) in (22) depends on \(s\). For sufficiently fast decaying weights \(\gamma_{j}\), it is possible to choose the \(w_{j}\) such that they no longer depend on \(s\). Indeed, suppose, e.g., that \(\gamma_{j}=j^{-2}\). Then we could choose the \(w_{j}\) such that, for some \(\tau\in(1,2)\),
\[w_{j}\leq\min\left(\left\lfloor\log_{b}\left(j^{2-\tau}\right)\right\rfloor,m\right).\]
This then yields
\[\prod_{j=1}^{s}(1+\gamma_{j}b^{w_{j}})\leq\exp\left(\sum_{j=1}^{s}\log(1+ \gamma_{j}b^{w_{j}})\right)\leq\exp\left(\sum_{j=1}^{s}\gamma_{j}b^{w_{j}} \right)\leq\exp(\zeta(\tau)),\]
where \(\zeta(\cdot)\) denotes the Riemann zeta function. Hence the term \(\prod_{j=1}^{s}\gamma_{j}(1+b^{w_{j}})\) considered above is bounded independently of the dimension.
_Remark 2_.: The term involving the maximum in the error bound of Corollary 1 crucially depends on the weights \(\boldsymbol{\gamma}\) and their interplay with the \(t\)-values of the projections of \(\widehat{P}_{m}\). In particular, small \(t\)-values in combination with sufficiently fast decaying weights should yield tighter error bounds. However, the analysis of \(t\)-values of \((t,m,s)\)-nets is in general non-trivial (see, e.g., [7]).
## 4 Reduced Monte Carlo
The idea of reduction is not limited to QMC algorithms, but can also be applied to Monte Carlo algorithms, as shall be discussed in this section.
Let \(N=b^{m}\) for some \(b,m\in\mathbb{N}\) with \(b\geq 2\). Further let \(0=w_{1}\leq w_{2}\leq w_{3}\leq\cdots\leq w_{s}\leq w_{s+1}=m\) be some integers. Let \(N_{j}=Nb^{-w_{j}}=b^{m-w_{j}}\) and \(N_{s+1}=1\). In particular, \(N_{1}=N\). Further we define \(M_{j}=N_{j}/N_{j+1}=b^{w_{j+1}-w_{j}}\) for \(j=1,2,\ldots,s-1\) and \(M_{s}=N_{s}=b^{m-w_{s}}\). Then \(N_{j}=M_{j}M_{j+1}\cdots M_{s-1}M_{s}\) and any integer \(0\leq n<N_{j}\) can be represented by \(n=m_{s}N_{s+1}+m_{s-1}N_{s}+\cdots+m_{j}N_{j+1}\) with \(0\leq m_{j}<M_{j}\) for \(1\leq j\leq s\).
For each coordinate \(1\leq j\leq s\) we generate \(N_{j}\) i.i.d. samples \(y_{j,0},\ldots,y_{j,N_{j}-1}\). Different coordinates are also assumed to be independent.
Now for \(0\leq m_{j}<M_{j}\) for \(1\leq j\leq s\) let
\[x_{j,m_{s}N_{s+1}+m_{s-1}N_{s}+\cdots+m_{1}N_{2}}=y_{j,m_{s}N_{s+1}+\cdots+m_{j}N _{j+1}}. \tag{23}\]
This means that in coordinate \(j\) we only have \(N_{j}\) different i.i.d. samples.
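Since \(N_{j+1}\) divides \(N_{i+1}\) for all \(i<j\), the index \(m_{s}N_{s+1}+\cdots+m_{j}N_{j+1}\) on the right-hand side of (23) equals \(n\bmod N_{j}\), so the construction can be implemented by reusing, in coordinate \(j\), the sample \(y_{j,\,n\bmod N_{j}}\). A short Python sketch (our own formulation):

```python
# Reduced Monte Carlo samples as in (23): coordinate j reuses only
# N_j = b^{m - w_j} i.i.d. draws; the sample used for point n is y_{j, n mod N_j}.
import numpy as np

def reduced_mc_points(b, m, w, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    N = b**m
    cols = []
    for wj in w:
        Nj = b ** (m - min(wj, m))
        y_j = rng.random(Nj)                         # N_j i.i.d. uniform samples
        cols.append(y_j[np.arange(N) % Nj])          # x_{j,n} = y_{j, n mod N_j}
    return np.column_stack(cols)
```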
### Computational cost reduction
For each \(0\leq n<N\) we need to compute
\[x_{1,n}\boldsymbol{a}_{1}+x_{2,n}\boldsymbol{a}_{2}+\cdots+x_{s,n}\boldsymbol{a }_{s},\]
where \(\boldsymbol{a}_{j}\) is the \(j\)-th row of \(A\). Using (23) we can write this as
\[\sum_{j=1}^{s}y_{j,m_{s}N_{s+1}+\cdots+m_{j}N_{j+1}}\boldsymbol{a}_{j},\]
which we need to compute for each \(0\leq m_{j}<M_{j}\). We can do this recursively in the following way:
* First compute: \(z_{s,m_{s}}=y_{s,m_{s}}\boldsymbol{a}_{s}\) for \(0\leq m_{s}<M_{s}\) and store the results.
* For \(j=s-1,s-2,\ldots,1\) compute: \[z_{j,m_{s}N_{s+1}+\cdots+m_{j}N_{j+1}}=y_{j,m_{s}N_{s+1}+\cdots+m_{j}N_{j+1}} \boldsymbol{a}_{j}+z_{j+1,m_{s}N_{s+1}+\cdots+m_{j+1}N_{j+2}}\] for \(m_{j}=0,1,\ldots,M_{j}-1\), and store the resulting vectors.
Computing the values \(z_{s,m_{s}}\) costs \(\mathcal{O}(\tau N_{s})\) operations. Computing \(z_{j,m_{s}N_{s+1}+\cdots+m_{j}N_{j+1}}\) costs \(\mathcal{O}\left(\tau N_{j}\right)\) operations.
Computing all the values therefore costs
\[\mathcal{O}\left(\tau\left(N_{s}+N_{s-1}+\cdots+N_{1}\right)\right)=\mathcal{ O}\left(\tau b^{m}\left(b^{-w_{1}}+b^{-w_{2}}+\cdots+b^{-w_{s}}\right)\right)\]
operations.
If \(\sum_{j=1}^{\infty}b^{-w_{j}}<\infty\), then the computational cost is independent of the dimension.
### Error analysis
Since the samples are i.i.d., it follows that the estimator \(Q(f):=\frac{1}{N}\sum_{n=0}^{N-1}f(\boldsymbol{x}_{n})\), with \(\boldsymbol{x}_{n}=(x_{1,n},\ldots,x_{s,n})\) as in (23), is unbiased, that is,
\[\mathbb{E}(Q(f))=\mathbb{E}(f).\]
For a given vector \(\boldsymbol{x}=\left(x_{1},\ldots,x_{s}\right)^{\top}\) and \(\mathfrak{u}\subseteq[s]\) let \(\boldsymbol{x}_{\mathfrak{u}}=\left(x_{j}\right)_{j\in\mathfrak{u}}\) and \(\boldsymbol{x}_{-\mathfrak{u}}=\left(x_{j}\right)_{j\notin\mathfrak{u}}\). We now consider the variance of the estimator. Let
\[\mu_{\mathfrak{u}}:=\mathbb{E}_{\boldsymbol{x}_{\mathfrak{u}}}\left[\mathbb{E}_{\boldsymbol{x}_{-\mathfrak{u}}}f\ \mathbb{E}_{\boldsymbol{y}_{-\mathfrak{u}}}f\right]=\int\left(\int f(\boldsymbol{x}_{\mathfrak{u}},\boldsymbol{x}_{-\mathfrak{u}})\,\mathrm{d}\boldsymbol{x}_{-\mathfrak{u}}\right)\left(\int f(\boldsymbol{x}_{\mathfrak{u}},\boldsymbol{y}_{-\mathfrak{u}})\,\mathrm{d}\boldsymbol{y}_{-\mathfrak{u}}\right)\mathrm{d}\boldsymbol{x}_{\mathfrak{u}}.\]
For instance, \(\mu_{\emptyset}=\left(\mathbb{E}(f)\right)^{2}\) and \(\mu_{\{1,\ldots,s\}}=\int f^{2}\).
In classical Monte Carlo integration, one studies the variance \(\mathrm{Var}(Q(f))=(\mu_{\{1,\ldots,s\}}-\mu_{\emptyset})/N\). We now show how the reduced MC construction influences the variance.
**Theorem 5**.: _The variance of the reduced Monte Carlo estimator is given by_
\[\operatorname{Var}(Q(f))=\sum_{k=1}^{s}\mu_{\{k,k+1,\ldots,s\}}\prod_{j=k}^{s}M_ {j}^{-1}\left(1-\frac{1}{M_{k-1}}\right)-\frac{\mu_{\emptyset}}{M_{s}},\]
_where we set \(1-\frac{1}{M_{0}}=1\)._
Proof.: The variance of \(Q(f)\) can be written as \(\mathbb{E}(Q^{2}(f))-\left(\mathbb{E}(Q(f))\right)^{2}\). The last term \(\left(\mathbb{E}(Q(f))\right)^{2}\) equals \(\left(\mathbb{E}(f)\right)^{2}=\mu_{\emptyset}\).
We have
\[Q(f)=\frac{1}{M_{1}}\sum_{m_{1}=0}^{M_{1}-1}\cdots\frac{1}{M_{s}}\sum_{m_{s}=0 }^{M_{s}-1}f(y_{1,n_{1}},\ldots,y_{s,n_{s}}),\]
where \(n_{j}=m_{s}N_{s+1}+\cdots+m_{j}N_{j+1}\). Hence
\[Q^{2}(f)= \frac{1}{M_{1}^{2}}\sum_{m_{1},m_{1}^{\prime}=0}^{M_{1}-1}\cdots \frac{1}{M_{s}^{2}}\sum_{m_{s},m_{s}^{\prime}=0}^{M_{s}-1}f(y_{1,n_{1}},\ldots, y_{s,n_{s}})f(y_{1,n_{1}^{\prime}},\ldots,y_{s,n_{s}^{\prime}})\] \[= \sum_{u\subseteq\{1,\ldots,s\}}\frac{1}{N^{2}}\sum f(y_{1,n_{1} },\ldots,y_{s,n_{s}})f(y_{1,n_{1}^{\prime}},\ldots,y_{s,n_{s}^{\prime}}),\]
where the second sum is over all \(0\leq m_{j},m_{j}^{\prime}<M_{j}\) such that \(m_{j}=m_{j}^{\prime}\) for \(j\in u\) and \(m_{j}\neq m_{j}^{\prime}\) for \(j\notin u\).
Let \(1\leq k\leq s\). If \(m_{j}=m_{j}^{\prime}\) for \(k\leq j\leq s\), then \(n_{j}=n_{j}^{\prime}\) for \(k\leq j\leq s\) and if \(m_{k-1}\neq m_{k-1}^{\prime}\), then \(n_{i}\neq n_{i}^{\prime}\) for \(1\leq i<k\). In this case
\[\mathbb{E}\left(f(y_{1,n_{1}},\ldots,y_{s,n_{s}})f(y_{1,n_{1}^{\prime}},\ldots,y_{s,n_{s}^{\prime}})\right)=\mu_{\{k,k+1,\ldots,s\}}.\]
The number of such instances is given by \(\prod_{j=k}^{s}M_{j}(M_{k-1}^{2}-M_{k-1})\prod_{j=1}^{k-2}M_{j}^{2}\). Since \(N=M_{1}M_{2}\cdots M_{s}\), we obtain \(\prod_{j=k}^{s}M_{j}(M_{k-1}^{2}-M_{k-1})\prod_{j=1}^{k-2}M_{j}^{2}N^{-2}=\prod _{j=k}^{s}M_{j}^{-1}(1-M_{k-1}^{-1})\).
If \(m_{s}\neq m_{s}^{\prime}\) we obtain that
\[\mathbb{E}\left(f(y_{1,n_{1}},\ldots,y_{s,n_{s}})f(y_{1,n_{1}^{\prime}},\ldots,y_{s,n_{s}^{\prime}})\right)=\mu_{\emptyset}.\]
This case occurs \((M_{s}^{2}-M_{s})\prod_{j=1}^{s-1}M_{j}^{2}\) times, and therefore \((M_{s}^{2}-M_{s})\prod_{j=1}^{s-1}M_{j}^{2}N^{-2}=1-M_{s}^{-1}\).
Using the linearity of expectation we obtain the formula.
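As a quick numerical sanity check of Theorem 5, consider the hypothetical test function \(f(\boldsymbol{x})=\prod_{j=1}^{s}(1+c(x_{j}-1/2))\), for which \(\mu_{\{k,\ldots,s\}}=(1+c^{2}/12)^{s-k+1}\) and \(\mu_{\emptyset}=1\); the variance predicted by the theorem can then be compared with an empirical estimate over many independent replications (the parameters below are arbitrary):

```python
# Numerical sanity check of Theorem 5 for the hypothetical test function
# f(x) = prod_j (1 + c (x_j - 1/2)), with mu_{ {k,...,s} } = (1 + c^2/12)^(s-k+1).
import numpy as np

def reduced_mc_estimate(b, m, w, c, rng):
    """One realization of Q(f) using the reduced MC samples of (23)."""
    N = b**m
    prod = np.ones(N)
    for wj in w:
        Nj = b ** (m - min(wj, m))
        y_j = rng.random(Nj)                          # N_j i.i.d. uniforms for this coordinate
        prod *= 1.0 + c * (y_j[np.arange(N) % Nj] - 0.5)
    return prod.mean()

if __name__ == "__main__":
    b, m, c = 2, 5, 1.0
    w = [0, 1, 2, 3]                                  # hypothetical reduction indices, s = 4
    s = len(w)
    M = [b ** (w[j + 1] - w[j]) for j in range(s - 1)] + [b ** (m - w[-1])]
    mu = lambda k: (1.0 + c**2 / 12.0) ** (s - k + 1)
    var_theory = sum(
        mu(k)
        * np.prod([1.0 / M[j - 1] for j in range(k, s + 1)])
        * (1.0 - 1.0 / M[k - 2] if k >= 2 else 1.0)
        for k in range(1, s + 1)
    ) - 1.0 / M[-1]
    rng = np.random.default_rng(0)
    estimates = [reduced_mc_estimate(b, m, w, c, rng) for _ in range(20000)]
    print("theory:", var_theory, "empirical:", np.var(estimates))
```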
## 5 Numerical experiments
In this section we give exemplary numerical results regarding the use of reduced rank-1 lattice point sets for matrix products, as outlined in Section 2.
### Reduced matrix-vector products
In each case, we compute the generating vectors \(\mathbf{z}=(z_{1}b^{w_{1}},\ldots,z_{s}b^{w_{s}})\) depending on the reduction indices \(w_{j}\) via a reduced CBC construction with product weights \(\gamma_{j}=0.7^{j}\), as developed in [2]. For a fair comparison, we do not include in the timings the construction of \(\mathbf{z}\) and we average the computing times over 10 runs. Computations are run using MATLAB 2019a on an Octa-Core (Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz) laptop.
As a first example, we illustrate the benefit of Algorithm 1 compared to the standard matrix-vector product to compute \(P=XA\) for \(A\in\mathbb{R}^{s\times\tau}\). In Figure 1 we compare different combinations of \(s,m\), for the choice of reduction indices \(w_{j}=\min(\left\lfloor\log_{2}(j)\right\rfloor,m)\) and fixed \(b=2\). We repeat the same experiment on Algorithm 2 with the same settings.
In Figures 1-3, the blue graphs show the results for the reduced matrix-matrix product according to Algorithm 1, the red graphs show the results for the optimized reduced matrix-matrix product according to Algorithm 2, and the light brown graphs show the results for a straightforward implementation of the matrix-matrix product without any adjustments.
We conclude that the computational saving due to Algorithms 1 and 2 is more pronounced for larger \(m,s\). Note that the right plot is in semi-logarithmic scale.
Next we study in Figure 2 the behavior as the size \(\tau\) increases. Also here, we see a clear advantage of Algorithms 1 and 2 over a straightforward implementation of the matrix-matrix product.
When the reduction is less aggressive, that is, \(w_{j}\) increases more slowly, the benefit is still considerable for large \(s\) especially for Algorithm 2, see Figure 3.
We now test the reduced matrix-vector product for Monte Carlo integration with respect to the normal distribution. As an example, we consider the pricing of a basket option [9, Section 3.2.3]. We define the payoff \(H(S)=\max(\frac{1}{s}\sum_{j=1}^{s}S_{j}(T)-K,0)\), where \(S_{j}(T)\) is the price of the \(j\)-th asset at maturity \(T\). Under the Black-Scholes model with zero interest rate we have \(S_{j}(T)=S_{j}(0)\exp(-\Sigma_{jj}T/2+W_{j}\sqrt{T})\), where \(W_{j}\) is the \(j\)-th component of \(W=LZ\), \(Z\sim\mathcal{N}(0,\mathrm{Id}_{s\times s})\), and \(LL^{T}=\Sigma\) is the covariance matrix of the random vector \(S=(S_{1},\ldots,S_{s})\).
We set \(S_{j}(0)=100\) for all \(j\), \(s=10,T=1\), strike price \(K=110\), and as the covariance
matrix we pick \(\sigma=0.4,\rho=0.2\) and
\[\Sigma=\begin{pmatrix}\sigma&\rho&&&\\ \rho&\sigma&\rho&&\\ &\ddots&\ddots&\ddots&\\ &&\rho&\sigma&\rho\\ &&&\rho&\sigma\end{pmatrix}.\]
We approximate the option price \(\mathbb{E}(H(S(T)))\approx\frac{1}{b^{m}}\sum_{k=1}^{b^{m}}H\left(S(0)\exp(-\sigma T/2+L\boldsymbol{x}_{k}\sqrt{T})\right)\), where the \(\boldsymbol{x}_{k}\) are random samples of \(Z\) and the exponential is applied componentwise. The main work is to compute \(XL^{\top}\) (recall that \(X\) was defined in (3)), and thus the reduced matrix-vector multiplication can be beneficial in this example. Results for different choices of reduction indices \(w_{j}\) are displayed in Figure 4, where we plot the mean error over \(R=5\) repetitions for different values of the reduction indices, using \(Rb^{m}\) Monte Carlo samples with \(m=25\) for the reference value. Note that the performance of QMC methods in this example is not particularly strong compared to standard Monte Carlo, as we consider a setting without coordinate weights, which is usually unfavorable for QMC methods.
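For reference, the following Python sketch reproduces the plain Monte Carlo estimator for this basket option with the parameters stated above. The sample size is hypothetical; the reduced rank-1 lattice version of Section 2 would replace the i.i.d. normal samples by the reduced point set and evaluate \(XL^{\top}\) via Algorithm 1.

```python
import numpy as np

s, T, K, S0 = 10, 1.0, 110.0, 100.0
sigma, rho = 0.4, 0.2
# Tridiagonal covariance matrix as defined above.
Sigma = (np.diag(np.full(s, sigma))
         + np.diag(np.full(s - 1, rho), 1)
         + np.diag(np.full(s - 1, rho), -1))
L = np.linalg.cholesky(Sigma)

def mc_basket_price(n_samples, seed=0):
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_samples, s))         # i.i.d. standard normal samples
    W = Z @ L.T                                     # rows of W are samples of L Z
    ST = S0 * np.exp(-np.diag(Sigma) / 2 * T + W * np.sqrt(T))
    payoff = np.maximum(ST.mean(axis=1) - K, 0.0)   # basket payoff H(S)
    return payoff.mean()

print(mc_basket_price(2**16))
```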
## Acknowledgements
Josef Dick is supported by the Australian Research Council Discovery Project DP220101811. Adrian Ebert and Peter Kritzer acknowledge the support of the Austrian Science Fund (FWF) Project F5506, which is part of the Special Research Program "Quasi-Monte Carlo Methods: Theory and Applications". Furthermore, Peter Kritzer has partially been supported by the Austrian Science Fund (FWF) Project P34808. For the purpose of open access, the authors have applied a CC BY public copyright licence to any author accepted manuscript version arising from this submission.
|
2301.12863 | Minimalistic Predictions to Schedule Jobs with Online Precedence
Constraints | We consider non-clairvoyant scheduling with online precedence constraints,
where an algorithm is oblivious to any job dependencies and learns about a job
only if all of its predecessors have been completed. Given strong impossibility
results in classical competitive analysis, we investigate the problem in a
learning-augmented setting, where an algorithm has access to predictions
without any quality guarantee. We discuss different prediction models: novel
problem-specific models as well as general ones, which have been proposed in
previous works. We present lower bounds and algorithmic upper bounds for
different precedence topologies, and thereby give a structured overview on
which and how additional (possibly erroneous) information helps for designing
better algorithms. Along the way, we also improve bounds on traditional
competitive ratios for existing algorithms. | Alexandra Lassota, Alexander Lindermayr, Nicole Megow, Jens Schlöter | 2023-01-30T13:17:15Z | http://arxiv.org/abs/2301.12863v1 | # Minimalistic Predictions to Schedule Jobs with
###### Abstract
We consider non-clairvoyant scheduling with _online_ precedence constraints, where an algorithm is oblivious to any job dependencies and learns about a job only if all of its predecessors have been completed. Given strong impossibility results in classical competitive analysis, we investigate the problem in a learning-augmented setting, where an algorithm has access to predictions without any quality guarantee. We discuss different prediction models: novel problem-specific models as well as general ones, which have been proposed in previous works. We present lower bounds and algorithmic upper bounds for different precedence topologies, and thereby give a structured overview on which and how additional (possibly erroneous) information helps for designing better algorithms. Along the way, we also improve bounds on traditional competitive ratios for existing algorithms.
## 1 Introduction
Cloud computing is a popular approach to outsource heavy computations to specialized providers [1]. Concepts like Function-as-a-Service (FaaS) offer users the on-demand execution of complex computations in a specific domain [17, 14]. Such tasks are often decomposed into smaller jobs, which then depend on each other by passing intermediate results. The structure of such tasks heavily relies on the user's input and internal dependencies within the user's system. It might require diverse jobs to solve different problems with distinct inputs.
From the provider's perspective, the goal is thus to schedule jobs with different priorities and interdependencies which become known only when certain jobs are completed and their results can be evaluated. From a more abstract perspective, we face _online precedence constraint scheduling_: new jobs arrive only if certain jobs have been completed, but the set of jobs and their dependencies are unknown to the scheduler. As tasks might have different priorities, it is a natural objective to minimize the total (average) weighted completion time of the jobs. We focus on _non-clairvoyant_ schedulers that do not know a job's processing requirement in advance [13], and we allow _preemptive_ schedules, i.e., jobs can be interrupted and resumed later. We present and analyze (non-)clairvoyant algorithms and prove impossibility results for this problem.
Competitive analysis is a widely used technique to assess the performance of online algorithms [1]. The _competitive ratio_ of an algorithm is the maximum ratio over all instances between its objective value and the objective value of an _offline_ optimal solution. In our setting, an offline optimal solution is the best
schedule that can be computed with complete information and unbounded running time on the instance. We say that an algorithm is \(\rho\)-competitive if its competitive ratio is at most \(\rho\).
It is not hard to see that for our problem, we cannot hope for good worst-case guarantees: consider an instance of \(n-1\) initially visible jobs with zero weight such that exactly one of these jobs triggers, upon its completion, the arrival of a job with positive weight. Since the initial jobs are indistinguishable, in the worst case, any algorithm completes the positive-weight job last. An offline optimal solution can distinguish the initially visible jobs and immediately processes the one which triggers the positive-weight job. This already shows that no deterministic algorithm can achieve a competitive ratio better than \(\Omega(n)\) for \(n\) jobs. Notice that this strong impossibility result holds even for (seemingly) simple precedence graphs that consist of a collection of chains. In practice, such a topology is highly relevant as, e.g., a sequential computer program executes a path (chain) of instructions whose execution depends on the evaluation of control flow structures (cf. [1]).
To overcome such daunting lower bounds, we consider closer-to-real-world approaches to go beyond worst-case analysis. In particular, we study augmenting _algorithms with predictions_[13, 14]. The intuition is that in many applications, we can learn certain aspects of the uncertainty by considering historical data such as dependencies between jobs for certain computations and inputs. While these predictions might not reflect the current instance, they can contain enough information to design algorithms that break pessimistic worst-case lower bounds. Besides specifying the type of information, this requires a measure for a prediction's quality. This allows parameterized performance guarantees of algorithms w.r.t. the amount of information a prediction contains. Important performance indicators are _consistency_, that is the competitive ratio for best-possible predictions, and _robustness_, that is an upper bound on the competitive ratio for any prediction.
Despite the immense research interest in learning-augmented algorithms [1], the particular choice of prediction models often remains undiscussed. In this work, we discuss various models and analyze their strengths and weaknesses. In particular, we present the first learning-augmented algorithms for scheduling with (online) precedence constraints. The question driving our research is:
_Which particular minimal information is required to achieve reasonable performance guarantees for scheduling with online precedence constraints?_
Our starting point is the analysis of the two most common models, _full input_ predictions, cf. [17, 18, 19, 20, 21, 22], and _action predictions_, cf. [1, 1, 22, 23, 24, 25, 26, 27, 28, 29, 30]. Our main focus is on a hierarchy of refined prediction models based on their entropy. That is, one can compute a prediction for a weaker model using a prediction from a stronger one, but not vice versa. We predict quantities related to the weight of unknown jobs, in contrast to previous work, which assumes predictions on the jobs' processing times or machine speeds (except [11]).
For each prediction model, we analyze its power and limits by providing efficient algorithms and lower bounds on the best-possible performance guarantees w.r.t. these models and the topological properties of the precedence constraints.
### Problem Definition and Prediction Models
An instance of our problem is composed of a set \(J\) of \(n\) jobs and a precedence graph \(G=(J,E)\), which is an acyclic directed graph (DAG). Every job \(j\in J\) has a processing requirement \(p_{j}\geq 0\) and a weight \(w_{j}\geq 0\). An edge \((j^{\prime},j)\in E\) indicates that \(j\) can only be started if \(j^{\prime}\) has been completed. If there is a directed path from \(j^{\prime}\) to \(j\) in \(G\), then we say that \(j\) is a _successor_ of \(j^{\prime}\) and that \(j^{\prime}\) is a _predecessor_ of \(j\). If that path consists
of a single edge, we call \(j\) and \(j^{\prime}\) a _direct_ successor and predecessor, respectively. For a fixed precedence graph \(G\), we denote by \(\omega\) the _width_ of \(G\), which is the length of the longest anti-chain in \(G\).
An algorithm can process a job \(j\) at a time \(t\geq r_{j}\) with a rate \(R_{j}^{t}\geq 0\), which describes the amount of processing the job receives at time \(t\). The completion time \(C_{j}\) of a job \(j\) is the first time \(t\) which satisfies \(\sum_{t^{\prime}=0}^{t}R_{j}^{t^{\prime}}\geq p_{j}\). On a single machine a total rate of \(1\) can be processed at any time \(t\), thus we require \(\sum_{j\in J}R_{j}^{t}\leq 1\). At any time \(t\) in a schedule, let \(F_{t}=\left\{j\ \mid\ C_{j}>t\text{ and }\forall j^{\prime}\text{ s.t. }(j^{\prime},j)\in E \colon C_{j^{\prime}}<t\right\}\) denote the set of unfinished jobs without unfinished predecessors in \(G\). We refer to such jobs as _front jobs_. In the online setting, a job is revealed to the algorithm once all predecessors have been completed. The algorithm is completely oblivious to \(G\), and, in particular, it does not know whether a front job has successors. Thus, at any time \(t\) an algorithm only sees jobs \(j\in F_{t}\) with weights \(w_{j}\) but _not_ their processing times \(p_{j}\). Note that the sets \(F_{t}\) heavily depend on an algorithm's actions. At the start time \(t=0\), an algorithm sees \(F_{0}\), and until the completion of the last job, it does not know the total number of jobs. An algorithm can at any time \(t\) only process front jobs, hence we further require that \(R_{j}^{t}=0\) for all \(j\in J\setminus F_{t}\). The objective of our problem is to minimize \(\sum_{j\in J}w_{j}C_{j}\). For a fixed instance, we denote the optimal objective value by \(\mathtt{Opt}\) and for a fixed algorithm, we denote its objective value by \(\mathtt{Alg}\).
We study different topologies of precedence graphs. In addition to general DAGs, we consider _in-forests_ resp. _out-forests_, where every node has at most one outgoing resp. incoming edge. Further, we study _chains_, which is a precedence graph that is an in-forest and an out-forest simultaneously. If an in- or out-forest has only one connected component, we refer to it as _in-_ and _out-tree_, respectively.
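The following snippet (an illustrative helper, not taken from the paper) classifies a given precedence DAG into the topologies used throughout: every node of an out-forest has at most one incoming edge, every node of an in-forest has at most one outgoing edge, and chains are both.

```python
def classify_precedence_graph(jobs, edges):
    """Classify a precedence DAG; edges are pairs (predecessor, successor).
    Acyclicity is assumed and not checked here."""
    in_deg = {j: 0 for j in jobs}
    out_deg = {j: 0 for j in jobs}
    for u, v in edges:
        out_deg[u] += 1
        in_deg[v] += 1
    is_out_forest = all(d <= 1 for d in in_deg.values())   # <= 1 incoming edge per node
    is_in_forest = all(d <= 1 for d in out_deg.values())   # <= 1 outgoing edge per node
    if is_in_forest and is_out_forest:
        return "chains"
    if is_out_forest:
        return "out-forest"
    if is_in_forest:
        return "in-forest"
    return "DAG"

print(classify_precedence_graph([1, 2, 3], [(1, 2), (2, 3)]))   # chains
```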
Two of the most studied prediction models are: a _full input prediction_, which is a prediction on the set of jobs with processing times and weights, and the complete precedence graph, and an _action prediction_, which is a prediction on a full priority order over all jobs predicted to be part of the instance (_static_) or a prediction on which job to schedule next whenever a machine idles (_adaptive_).
Both prediction models require a significant amount of information on the input or an optimal algorithm. This might be unrealistic or costly to obtain and/or not necessary. We aim for minimalistic extra information and quantify its power.
The set of front jobs \(F_{0}\) does not give sufficient information for obtaining a competitive ratio better than \(\Omega(n)\), as shown above. For a job \(v\in F_{0}\), we define the set \(S(v)\) consisting of \(v\) and its successors, and we let \(w(S(v))\coloneqq\sum_{u\in S(v)}w_{u}\). We consider various predictions on the set \(S(v)\):
_Weight predictions:_ Predictions \(\hat{W}_{v}\) on the total weight \(w(S(v))\) of each front job \(v\in F_{0}\).
_Weight order predictions:_ The _weight order_ \(\preceq_{0}\) over \(F_{0}\) sorts the jobs \(v\in F_{0}\) by non-increasing \(w(S(v))\), i.e., \(v\preceq_{0}u\) implies \(w(S(v))\geq w(S(u))\). We assume access to a prediction \(\widehat{\leq}_{0}\) on \(\preceq_{0}\).
_Average predictions:_ Predictions \(\hat{a}_{v}\) on the average weight \(a(S(v))=\frac{\sum_{u\in S(v)}w_{u}}{\sum_{u\in S(v)}p_{u}}\) of each front job \(v\in F_{0}\).
For each of these three models, we distinguish _static_ and _adaptive_ predictions. Static predictions refer to predictions only on the initial front jobs \(F_{0}\), and adaptive predictions refer to a setting where we receive access to a new prediction whenever a job becomes visible.
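To make the three models concrete, the following sketch (our own illustration, using plain adjacency lists as input encoding) computes the quantities \(w(S(v))\) and \(a(S(v))\) that weight and average predictions refer to.

```python
def prediction_targets(jobs, edges, w, p):
    """Compute w(S(v)) and a(S(v)) for every job v, where S(v) consists of v
    together with all of its successors in the precedence DAG."""
    succ = {j: [] for j in jobs}
    for u, v in edges:
        succ[u].append(v)

    def successors_of(v):
        seen, stack = {v}, [v]
        while stack:
            u = stack.pop()
            for x in succ[u]:
                if x not in seen:
                    seen.add(x)
                    stack.append(x)
        return seen

    total_weight, avg_weight = {}, {}
    for v in jobs:
        S = successors_of(v)
        total_weight[v] = sum(w[j] for j in S)
        avg_weight[v] = total_weight[v] / sum(p[j] for j in S)
    return total_weight, avg_weight
```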
### Our Results
Our results can be separated into two categories. First, we consider the problem of scheduling with online precedence constraints with access to additional _reliable_ information. In particular, we consider all the aforementioned prediction models and design upper and lower bounds for the online problem enhanced
with access to the respective additional information. We classify the power of the different models when solving the problem on different topologies.
For the second type of results, we drop the assumption that the additional information is accurate and turn our pure online results into learning-augmented algorithms. We define suitable error measures for the different prediction models to capture the accuracy of the predictions, and give more fine-grained competitive ratios depending on these measures. We also extend our algorithms to achieve robustness.
Next, we give an overview of our results for these categories. We state all results for the single machine setting but show in Appendix D that they extend to identical parallel machines.
**Reliable additional information.** Table 1 summarizes our results for the pure online setting enhanced with reliable additional information. Our main results are 4-competitive algorithms for chains and out-forests with weight predictions, and an \(H_{\omega}\)-competitive algorithm for out-forests with adaptive weight order predictions, where \(H_{k}\) is the \(k\)th harmonic number. The results show that additional information significantly improves the (worst-case) ratio compared to the setting with no predictions.
Our main non-clairvoyant algorithm, given correct weight predictions, has a competitive ratio of at most 4 for online out-forest precedence constraints on a single machine. This improves even for offline precedence constraints upon previous best-known bounds of 8 [11] and 10 [13] for this problem, although these bounds also hold in more general settings. To achieve this small constant, we generalize the Weighted Round Robin algorithm (WRR) [12, 13] for non-clairvoyant scheduling _without_ precedence constraints, which advances jobs proportional to their weight, to our setting. We handle each out-tree as a super-job and update its remaining weight when a sub-job completes. If the out-tree is a chain, this can be done even if only _static_ weight predictions are given. Otherwise, when an out-tree gets divided into multiple remaining out-trees, the distribution of the remaining weight is unknown, thus we have to rely on _adaptive_ predictions. Due to the increased dynamics of gaining partial weight of these super-jobs, the original analysis of WRR is not applicable. Instead, we use the dual-fitting technique, which has been previously used for offline precedence constraints [13]. While their analysis builds on offline information and is infeasible in our model, we prove necessary conditions on an algorithm to enable the dual-fitting, which are fulfilled even in our limited information setting. Surprisingly, we also show that
a more compact linear programming (LP) relaxation, which does not consider transitive precedences, is sufficient for our result. In particular, compared to the LP used in [11], it allows us to craft simple duals which do not involve gradient-type values of the algorithm's rates.

\begin{table}
\begin{tabular}{l l l} \hline \hline Prediction Model & Topology & Bound \\ \hline Actions & DAG & \(\Theta(1)\) \\ Input & DAG & \(\Theta(1)\) \\ Adaptive weights & Out-Forests & \(\Theta(1)\) \\ Adaptive weights & In-Trees & \(\Omega(\sqrt{n})\) \\ Static weights & Out-Trees & \(\Omega(n)\) \\ Static weights & Chains & \(\Theta(1)\) \\ Adaptive weight order & Out-Forests & \(\mathcal{O}(H_{\omega})\) \\ Static weight order & Chains & \(\mathcal{O}(H_{\omega}^{2}\sqrt{P})\) \\ Adaptive averages & Chains & \(\Omega(\sqrt{n})\) \\ No Prediction & Chains & \(\Omega(n)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of bounds on the competitive ratio given reliable information. We denote by \(P\) the total processing time and by \(H_{k}\) the \(k\)th harmonic number.
In the more restricted model of weight order predictions, WRR cannot be applied, as the rate computation crucially relies on _precise_ weight values. We observe, however, that WRR's rates at the start of an instance have the same ordering as the known chain order. We show that guessing rates for chains in a way that respects the ordering costs only a factor of at most \(H_{\omega}\) in the competitive ratio. If the weight order is adaptive, we show a competitive ratio of \(4\cdot H_{\omega}\). Otherwise, we give a worse upper bound and evidence that this might be best possible for this algorithm.
**Learning-augmentation.** We extend our algorithmic results by designing suitable error measures for the different prediction models and proving error-dependent competitive ratios. Finally, we show how existing techniques can be used to give these algorithms a robustness of \(\mathcal{O}(\omega)\) at the loss of only a constant factor in the error-dependent guarantee. Note that a robustness of \(\mathcal{O}(\omega)\) matches the lower bound for the online problem without access to additional information.
### Further Related Work
Scheduling jobs with precedence constraints to minimize the sum of (weighted) completion times has been one of the most studied scheduling problems for more than thirty years. The offline problem is known to be NP-hard, even for a single machine [15, 10], and on two machines, even when precedence constraints form chains [1, 20]. Several polynomial-time algorithms based on different linear programming formulations achieve an approximation ratio of \(2\) on a single machine, whereas special cases are even solvable optimally; we refer to [14, 1] for comprehensive overviews. For scheduling on \(m\) parallel identical machines, the best known approximation factor is \(3-1/m\)[13].
For scheduling with online precedence constraints, strong and immediate lower bounds rule out a competitive ratio better than \(\Omega(n)\) for the min-sum objective. Therefore, online scheduling has been mainly studied in a setting where jobs arrive online but, once a job arrives, its processing time, weight, and relation to other (already arrived) jobs are revealed [13, 15, 16, 1]. Similarly, we are not aware of any previous learning-augmented algorithm for online precedence constraints and/or weight predictions. Previous work on minimizing the total completion time focused on the setting where jobs either arrive online and/or are clairvoyant [13, 14, 15, 16, 17, 18].
## 2 Robustness via Time-Sharing
Before we move to concrete algorithmic results, we quickly argue that any \(\rho\)-competitive algorithm for scheduling with online precedence constraints of width \(\omega\) can be extended to a \(\mathcal{O}(\min\{\rho,\omega\})\)-competitive algorithm. In particular, if \(\rho\) depends on a prediction's quality, this ensures that this algorithm is robust against arbitrarily bad predictions.
To this end, consider the algorithm which at any time \(t\) shares the machine equally among all front jobs \(F_{t}\), i.e., gives every job \(j\in F_{t}\) rate \(R_{j}^{t}=\frac{1}{|F_{t}|}\geq\frac{1}{\omega}\). For a fixed job \(j\), compared to its completion time in a fixed optimal schedule, the completion time in the algorithm's schedule can be delayed by at most a factor of \(\omega\). We conclude:
**Proposition 2.1**.: _There is an \(\omega\)-competitive non-clairvoyant single-machine algorithm for minimizing the total weighted completion time of jobs with online precedence constraints._
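A minimal discrete-time simulation of this equal-sharing rule is given below (an illustration under the assumption of integer time steps; the job and edge encodings are our own and not from the paper).

```python
def equal_sharing_objective(jobs, edges, p, w, horizon=10**6):
    """Share the machine equally among all front jobs in every time step and
    return the resulting total weighted completion time."""
    preds = {j: set() for j in jobs}
    for u, v in edges:
        preds[v].add(u)
    done = {j: 0.0 for j in jobs}
    finished, objective = set(), 0.0
    for t in range(1, horizon + 1):
        front = [j for j in jobs if j not in finished and preds[j] <= finished]
        if not front:
            break
        rate = 1.0 / len(front)                  # rate R_j^t = 1 / |F_t|
        for j in front:
            done[j] += rate
            if done[j] >= p[j] - 1e-12:
                finished.add(j)
                objective += w[j] * t
    return objective
```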
We can now use the _time-sharing technique_ to combine this \(\omega\)-competitive algorithm with any other algorithm for scheduling online precedence constraints while retaining the better competitive ratio of both up to a factor of \(2\).
**Theorem 2.2** ([12, 13]).: _Given two deterministic algorithms with competitive ratios \(\rho_{\mathcal{A}}\) and \(\rho_{\mathcal{B}}\) for minimizing the total weighted completion time with online precedence constraints on identical machines, there exists an algorithm for the same problem with a competitive ratio of at most \(2\cdot\min\{\rho_{\mathcal{A}},\rho_{\mathcal{B}}\}\)._
In [12, 13] there is an additional monotonicity requirement which we claim to be unnecessary; see Appendix E.
## 3 Action Predictions
Action predictions give an optimal algorithm. Hence, following accurate predictions clearly results in an optimal solution. To define an error measure for erroneous static and adaptive action predictions, let \(\hat{\sigma}:J\rightarrow[n]\) be the order in which a fixed static or adaptive action prediction suggests to process jobs. In case of static action predictions, we receive the predicted order initially, meaning that it might predict a set of jobs \(\hat{J}\) different to the actual \(J\). During the analysis, we can simply remove the jobs \(\hat{J}\setminus J\) from \(\hat{\sigma}\) as they do not have any effect on the schedule for the actual instance. For the jobs in \(J\setminus\hat{J}\), we define the static action prediction algorithm to just append them to the end of the order \(\hat{\sigma}\) once they are revealed. Thus, we can still treat \(\hat{\sigma}\) as a function from \(J\) to \([n]\). We analyse an algorithm which follows a static or adaptive action prediction using the permutation error introduced in [13]. To this end, let \(\sigma:J\rightarrow[n]\) be the order of a fixed optimal solution for instance \(J\), and \(\mathcal{I}\left(J,\hat{\sigma}\right)=\{(j^{\prime},j)\in J^{2}\mid\sigma(j^ {\prime})<\sigma(j)\wedge\hat{\sigma}(j^{\prime})>\hat{\sigma}(j)\}\) be the set of inversions between the permutations \(\sigma\) and \(\hat{\sigma}\). Applying the analysis of [13] yields the following theorem.
**Theorem 3.1**.: _Given static or adaptive action predictions, there exists an efficient \(\mathcal{O}\left(\min\left\{1+\eta,\omega\right\}\right)\)-competitive non-clairvoyant algorithm for minimizing the total weighted completion time on a single machine with online precedence constraints, where \(\eta=\sum_{(j^{\prime},j)\in\mathcal{I}\left(J,\hat{\sigma}\right)}\left(w_{ j^{\prime}}p_{j}-w_{j}p_{j^{\prime}}\right)\)._
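For illustration, the error term \(\eta\) of Theorem 3.1 can be computed offline from the optimal order \(\sigma\) and the predicted order \(\hat{\sigma}\); both are encoded here, by assumption, as dictionaries over the same job set mapping jobs to positions. It is an analysis quantity only, since \(\sigma\) is unknown to the algorithm.

```python
def inversion_error(jobs, p, w, sigma_opt, sigma_hat):
    """eta = sum of (w_{j'} p_j - w_j p_{j'}) over all pairs (j', j) that the
    optimal order sigma_opt and the prediction sigma_hat rank differently."""
    eta = 0.0
    for j1 in jobs:          # j1 plays the role of j'
        for j2 in jobs:      # j2 plays the role of j
            if sigma_opt[j1] < sigma_opt[j2] and sigma_hat[j1] > sigma_hat[j2]:
                eta += w[j1] * p[j2] - w[j2] * p[j1]
    return eta
```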
## 4 Full Input Predictions
We can use full input predictions to compute static action predictions \(\hat{\sigma}\). In general, computing \(\hat{\sigma}\) requires exponential running time as the problem is NP-hard. For special cases, e.g., chains, there are efficient algorithms [10].
While following \(\hat{\sigma}\) allows us to achieve the guarantee of Theorem 3.1, the error \(\eta\) does not directly depend on the predicted input but on an algorithm which computed actions for that input. Thus, we aim at designing error measures depending directly on the "similarity" between the predicted and actual instance. As describing the similarity between two graphs is a notoriously difficult problem on its own, we leave open whether there is a meaningful error for general topologies. However, we give an error measure for chains. The key idea of this error is to capture additional cost that any algorithm pays due to both, _absent predicted_ weights and _unexpected actual_ weights. This is in the same spirit as the universal cover error for graph problems in [1]. Assuming that the predicted and actual instance only differ in the weights, our error \(\Lambda=\Gamma_{u}+\Gamma_{a}\) considers the optimal objective values \(\Gamma_{u}\) and \(\Gamma_{a}\) for the problem instances that use \(\{(w_{j}-\hat{w}_{j})_{+}\}_{j}\) and \(\{(\hat{w}_{j}-w_{j})_{+}\}_{j}\) as weights, respectively. Then, \(\Gamma_{u}\) and \(\Gamma_{a}\) measure the cost for _unexpected_ and _absent_ weights. In the appendix, we generalize this idea to also capture other differences of the predicted and actual chains and prove the following theorem.
**Theorem 4.1**.: _Given access to an input prediction, there exists an efficient algorithm for minimizing the total weighted completion time of unit-size jobs on a single machine with online chain precedence constraints with a competitive ratio of at most \(\mathcal{O}\left(\min\left\{1+\Lambda,\omega\right\}\right)\), where \(\Lambda=\Gamma_{u}+\Gamma_{a}\)._
## 5 Weight Value Predictions
We now switch to more problem-specific prediction models, starting with weight value predictions. We first prove strong lower bounds for algorithms with access to static weight predictions on out-trees and adaptive predictions on in-trees. Then, we give \(4\)-competitive algorithms for accurate static predictions on chains, and adaptive weight predictions on out-forest precedence constraints, and finally extend these results to obtain robust algorithms with error dependency.
The lower bound for out-trees adds a dummy root \(r\) to the pure online lower bound composed of \(\Omega(n)\) zero weight jobs, where exactly one hides a valuable job. In the static prediction setting we thus only receive a prediction for \(r\), which does not help any algorithm to improve.
**Observation 5.1**.: _Any algorithm which has only access to static weight predictions has a competitive ratio of at least \(\Omega(n)\), even if the precedence constraint graph is an out-tree._
For in-trees and adaptive weight predictions, we prove the following lower bound.
**Lemma 5.2**.: _Any algorithm which has only access to adaptive weight predictions has a competitive ratio of at least \(\Omega(\sqrt{n})\), even for in-tree precedence constraints._
Proof.: Consider an in-tree instance with unit-size jobs and root \(r\) of weight \(0\). There are \(\sqrt{n}\) chains of length \(2\) with leaf weights \(0\) and inner weights \(1\) which are connected to \(r\). Further, there are \(n-2\sqrt{n}-1\) leaves with weight \(0\), which are connected to a node \(v\) with weight \(1\), which itself is a child of \(r\). Note that the weight prediction for all potential front jobs except \(r\) is always \(1\). Thus, even the adaptive predictions do not help, and we can assume that the algorithm first processes the children of \(v\), giving a total objective of at least \(\Omega((n-2\sqrt{n}-1)^{2}+(n-2\sqrt{n}-1)\sqrt{n})=\Omega(n\sqrt{n})\), while processing the other leaves first yields a value of at most \(\mathcal{O}((2\sqrt{n})^{2}+(2\sqrt{n}+n-2\sqrt{n}))=\mathcal{O}(n)\).
### Algorithms for Reliable Information
We present algorithms assuming access to _correct_ static or adaptive weight predictions and prove their competitiveness on online chain and out-forest precedence constraints using a unified analysis framework. This uses a dual-fitting argumentation inspired by an analysis of an algorithm for _known_ precedence constraints [1]. The framework only requires a condition on the rates at which an algorithm processes front jobs, hence it is independent of the considered prediction model. Let \(U_{t}\) refer to the set of unfinished jobs at time \(t\), i.e., \(U_{t}=\bigcup_{v\in F_{t}}S(v)\). Denote by \(w(J^{\prime})\) the total weight of jobs in a set \(J^{\prime}\). We write \(W(t)\) for \(w(U_{t})\).
**Theorem 5.3**.: _If an algorithm for online out-forest precedence constraints satisfies at every time \(t\) and \(j\in F_{t}\) that \(w(S(j))\leq\rho\cdot R_{j}^{t}\cdot W(t)\), where \(R_{j}^{t}\) is the processing rate of \(j\) at time \(t\), it is at most \(4\rho\)-competitive for minimizing the total weighted completion time on a single machine._
We first present algorithms for weight predictions and derive results using Theorem 5.3, and finally prove the theorem.
```
0: Chains \(\mathcal{C}\), initial total weight \(W_{c}\) for each \(c\in\mathcal{C}\).
1:\(t\gets 0\) and \(W_{c}(t)\gets W_{c}\) for every \(c\in\mathcal{C}\).
2:while\(U_{t}\neq\emptyset\)do
3: Process the front job of every chain \(c\) at rate \(\frac{W_{c}(t)}{\sum_{c^{\prime}\in\mathcal{C}}W_{c^{\prime}}(t)}\)
4:\(t\gets t+1\)
5: If a job \(j\) in chain \(c\) finished, \(W_{c}(t)\gets W_{c}(t)-w_{j}\)
6:endwhile
7: Schedule remaining jobs in an arbitrary order.
```
**Algorithm 1** Weighted Round Robin on Chains
**Static Weight Values for Chains.** We give an algorithm for correct static weight predictions. As Observation 5.1 rules out well-performing algorithms for out-tree precedence constraints with static weight predictions, we focus on chains. Correct static weight predictions mean access to the total weight \(W_{c}\) of every chain \(c\) in the set of chains \(\mathcal{C}\).
Algorithm 1, essentially, executes a classical weighted round robin algorithm where the rate at which the front job of a chain \(c\) is executed at time \(t\) is proportional to the total weight of unfinished jobs in that chain, \(W_{c}(t)\). As this definition is infeasible for unfinished chains with \(W_{c}(t)=0\), we process these in an arbitrary order in the end. As they have no weight, this does not negatively affect the objective.
Despite initially only having access to the weights \(W_{c}(t)\) for \(t=0\), the algorithm can compute \(W_{c}(t)\) for any \(t>0\) by subtracting the weight of finished jobs of \(c\) from the initial \(W_{c}\) (cf. Line 5). Thus, \(W_{c}(t)=w(S(j))\) holds for any time \(t\) and every \(j\in F_{t}\), where \(c\) is the corresponding chain of job \(j\). Further, \(W(t)=\sum_{c}W_{c}(t)\). We conclude that, for any \(t\) and \(j\in F_{t}\), it holds \(R_{j}^{t}=\frac{w(S(j))}{W(t)}\). Using Theorem 5.3 with \(\rho=1\), we derive the following result:
**Theorem 5.4**.: _Given correct weight predictions, Algorithm 1 is a non-clairvoyant \(4\)-competitive algorithm for minimizing the total weighted completion time of jobs with online chain precedence constraints on a single machine._
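The following Python sketch simulates Algorithm 1 in discrete time (the encoding of chains as lists of job identifiers, the list of predicted chain weights, and the numerical tolerances are our own assumptions); with correct predictions \(\hat{W}_{c}=W_{c}\) it realizes exactly the rates analyzed above.

```python
def wrr_on_chains(chains, p, w, W_hat):
    """Weighted Round Robin on chains: process the front job of every active
    chain c at rate proportional to its remaining (predicted) weight W_hat[c]."""
    k = len(chains)
    pos = [0] * k                                   # index of the current front job
    done = {j: 0.0 for c in chains for j in c}
    W = [W_hat[c] for c in range(k)]                # remaining predicted chain weights
    t, objective = 0, 0.0
    while True:
        active = [c for c in range(k) if pos[c] < len(chains[c]) and W[c] > 1e-12]
        if not active:
            break
        t += 1
        total = sum(W[c] for c in active)
        for c in active:
            j = chains[c][pos[c]]
            done[j] += W[c] / total
            if done[j] >= p[j] - 1e-12:
                objective += w[j] * t
                W[c] -= w[j]                        # weight update as in Line 5
                pos[c] += 1
    for c in range(k):                              # leftover (zero-weight) jobs
        for j in chains[c][pos[c]:]:
            t += p[j]
            objective += w[j] * t
    return objective
```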
**Adaptive Weight Values for Out-Forests.** Observation 5.1 states that static weight predictions are not sufficient to obtain \(\mathcal{O}(1)\)-competitive algorithms for out-forests. The reason is that we, in contrast to chains, cannot recompute \(\hat{W}_{j}\) whenever a new front job \(j\) appears. For adaptive predictions, however, we do not need to recompute \(\hat{W}_{j}\), as we simply receive a new prediction. Thus, we can process every front job \(j\in F_{t}\) with rate \(R_{j}^{t}=\frac{\hat{W}_{j}}{\sum_{j^{\prime}\in F_{t}}\hat{W}_{j^{\prime}}}\). For correct predictions, Theorem 5.3 directly implies the following.
**Theorem 5.5**.: _Given correct adaptive weight predictions, there exists a non-clairvoyant \(4\)-competitive algorithm for minimizing the total weighted completion time of jobs with online out-forest precedence constraints on a single machine._
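A sketch of the resulting rate computation, together with a check of the sufficient condition of Theorem 5.3 (with \(\rho=1\) for correct predictions), is shown below; the function names and input encodings are our own.

```python
def adaptive_wrr_rates(front_jobs, W_hat):
    """Process every front job j at rate W_hat[j] / (sum of all front predictions),
    as used for Theorem 5.5."""
    total = sum(W_hat[j] for j in front_jobs)
    return {j: W_hat[j] / total for j in front_jobs}

def satisfies_condition(front_jobs, rates, subtree_weight, remaining_weight, rho=1.0):
    """Check w(S(j)) <= rho * R_j^t * W(t) for all front jobs at one time step."""
    return all(subtree_weight[j] <= rho * rates[j] * remaining_weight + 1e-12
               for j in front_jobs)
```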
**Full proof of Theorem 5.3.** Fix an algorithm satisfying the conditions of Theorem 5.3. Let Alg be the objective value of the algorithm's schedule for a fixed instance. We introduce a linear programming relaxation similar to the one in [11] for our problem on a machine running at lower speed \(\frac{1}{\alpha}\), for some \(\alpha\geq 1\). Let \(\textsc{Opt}_{\alpha}\) denote the optimal objective value for the problem with speed \(\frac{1}{\alpha}\). As the completion time of every job is linear in the machine speed, we have \(\textsc{Opt}_{\alpha}\leq\alpha\cdot\textsc{Opt}\). The variable \(x_{j,t}\) denotes the fractional assignment of job \(j\) at time \(t\). The first constraint ensures that every job receives enough
amount of processing to complete, the second constraint restricts the available rate per time to \(\frac{1}{\alpha}\), and the final constraint asserts that no job can be completed before its predecessors.
\[\min \sum_{j,t}w_{j}\cdot t\cdot\frac{x_{j,t}}{p_{j}}\] ( \[\mathrm{LP}_{\alpha}\] ) s.t. \[\sum_{t}\frac{x_{j,t}}{p_{j}}\geq 1 \forall j\] \[\sum_{j}\alpha\cdot x_{j,t}\leq 1 \forall t\] \[\sum_{s\leq t}\frac{x_{j,s}}{p_{j}}\geq\sum_{s\leq t}\frac{x_{j^ {\prime},s}}{p_{j^{\prime}}} \forall t,\forall(j,j^{\prime})\in E\] \[x_{j,t}\geq 0 \forall j,t\]
The dual of (\(\mathrm{LP}_{\alpha}\)) can be written as follows.
\[\max \sum_{j}a_{j}-\sum_{t}b_{t}\] ( \[\mathrm{LDP}_{\alpha}\] ) s.t. \[\sum_{s\geq t}\left(\sum_{j^{\prime}:(j,j^{\prime})\in E}c_{s,j \to j^{\prime}}-\sum_{j^{\prime}:(j^{\prime},j)\in E}c_{s,j^{\prime} \to j}\right) \tag{1}\] \[\leq\alpha\cdot b_{t}\cdot p_{j}-a_{j}+w_{j}\cdot t \forall j,t\] \[a_{j},b_{t},c_{t,j\to j^{\prime}}\geq 0 \forall t,\forall(j,j^{\prime})\in E\]
Let \(\kappa>1\) be a constant which we fix later. We define a variable assignment for (\(\mathrm{LDP}_{\alpha}\)) as follows: \(\bar{a}_{j}=\sum_{s\geq 0}\bar{a}_{j,s}\) for every job \(j\), where \(\bar{a}_{j,s}=w_{j}\) if \(s\leq C_{j}\) and \(\bar{a}_{j,s}=0\) otherwise, \(\bar{b}_{t}=\frac{1}{\kappa}\cdot W(t)\) for every time \(t\), and \(\bar{c}_{t,j^{\prime}\to j}=w(S(j))\) if \(j,j^{\prime}\in U_{t}\), and \(\bar{c}_{t,j^{\prime}\to j}=0\) otherwise, for every time \(t\) and edge \((j^{\prime},j)\in E\).
We show in the following that the variables \((\bar{a}_{j},\bar{b}_{t},\bar{c}_{t,j\to j^{\prime}})\) define a feasible solution for (\(\mathrm{LDP}_{\alpha}\)) and achieve an objective value close to Alg. Weak duality then implies Theorem 5.3. First, consider the objective value.
**Lemma 5.6**.: \(\sum_{j}\bar{a}_{j}-\sum_{t}\bar{b}_{t}=(1-\frac{1}{\kappa})\textsc{Alg}\)_._
Proof.: Note that \(\bar{a}_{j}=w_{j}C_{j}\), and thus \(\sum_{j}\bar{a}_{j}=\textsc{Alg}\). Also, since the weight \(w_{j}\) of a job \(j\) is contained in \(W(t)\) if \(t\leq C_{j}\), we conclude \(\sum_{t}\bar{b}_{t}=\frac{1}{\kappa}\textsc{Alg}\).
Second, we show that the duals are feasible for (\(\mathrm{LDP}_{\alpha}\)).
**Lemma 5.7**.: _Assigning \(a_{j}=\bar{a}_{j}\), \(b_{t}=\bar{b}_{t}\) and \(c_{t,j\to j^{\prime}}=\bar{c}_{t,j\to j^{\prime}}\) is feasible for (\(\mathrm{LDP}_{\alpha}\)) if \(\alpha\geq\kappa\rho\) and \(\frac{w(S(j))}{W(t)}\leq\rho R_{j}^{t}\) for any time \(t\) for \(j\in F_{t}\) and the algorithm's rates \(R_{j}^{t}\)._
Proof.: Since our defined variables are non-negative by definition, it suffices to show that this assignment satisfies (1). Fix a job \(j\) and a time \(t\geq 0\). By observing that \(\bar{a}_{j}-t\cdot w_{j}\leq\sum_{s\geq t}\bar{a}_{j,s}\), it suffices to verify
\[\sum_{s\geq t}\left(\bar{a}_{j,s}+\sum_{(j,j^{\prime})\in E}\bar{c}_{s,j\to j^{\prime}}-\sum_{(j^{\prime},j)\in E}\bar{c}_{s,j^{\prime}\to j}\right)\leq\alpha\bar{b}_{t}p_{j}. \tag{2}\]
To this end, we consider the terms of the left side for all times \(s\geq t\) separately. For any \(s\) with \(s>C_{j}\), the left side of (2) is zero, because \(\bar{a}_{j,s}=0\) and \(j\notin U_{s}\).
Otherwise, if \(s\leq C_{j}\), let \(t_{j}^{*}\) be the first point in time after \(t\) when \(j\) is available, and let \(s\in[0,t_{j}^{*})\). Then, \(j\in U_{s}\), and since each vertex in an out-forest has at most one direct predecessor, there must be a unique job \(j_{1}\in U_{s}\) with \((j_{1},j)\in E\). Thus, \(\overline{c}_{s,j_{1}\to j}=w(S(j))\) and \(\overline{c}_{s,j\to j^{\prime}}=w(S(j^{\prime}))\) for all \((j,j^{\prime})\in E\). Observe that in out-forests, we have \(S(j^{\prime})\cap S(j^{\prime\prime})=\emptyset\) for all \(j^{\prime}\neq j^{\prime\prime}\) with \((j,j^{\prime}),(j,j^{\prime\prime})\in E\). This implies \(\sum_{(j,j^{\prime})\in E}\overline{c}_{s,j\to j^{\prime}}=w(S(j))-w_{j}\) and \(\sum_{(j,j^{\prime})\in E}\overline{c}_{s,j\to j^{\prime}}-\sum_{(j_{1},j)\in E }\overline{c}_{s,j_{1}\to j}=-w_{j}\). Hence,
\[\overline{a}_{j,s}+\sum_{(j,j^{\prime})\in E}\overline{c}_{s,j\to j^{\prime} }-\sum_{(j^{\prime},j)\in E}\overline{c}_{s,j^{\prime}\to j}\leq w_{j}-w_{j}=0.\]
Therefore, proving (2) reduces to proving
\[\sum_{s=t_{j}^{*}}^{C_{j}}\left(w_{j}+\sum_{(j,j^{\prime})\in E}\overline{c}_{ s,j\to j^{\prime}}-\sum_{(j^{\prime},j)\in E}\overline{c}_{s,j^{\prime}\to j} \right)\leq\alpha\overline{b}_{t}p_{j}. \tag{3}\]
Now, let \(s\in[t_{j}^{*},C_{j})\). There cannot be an unfinished job preceding \(j\), thus \(\sum_{(j^{\prime},j)\in E}\overline{c}_{s,j^{\prime}\to j}=0\). Observe that if there is a job \(j^{\prime}\in U_{s}\) with \((j,j^{\prime})\in E\), the fact that \(j\in U_{s}\) implies \(j^{\prime}\in U_{s}\), and thus \(\overline{c}_{s,j\to j^{\prime}}=w(S(j^{\prime}))\) by definition. Using again the fact that the sets \(S(j^{\prime})\) are pairwise disjoint for all direct successors \(j^{\prime}\) of \(j\), i.e., for all \((j,j^{\prime})\in E\), this yields \(\sum_{(j,j^{\prime})\in E}\overline{c}_{s,j\to j^{\prime}}=w(S(j))-w_{j}\), and further gives
\[w_{j}+\sum_{(j,j^{\prime})\in E}\overline{c}_{s,j\to j^{\prime}}-\sum_{(j^{ \prime},j)\in E}\overline{c}_{s,j^{\prime}\to j}=w(S(j)).\]
Thus, the left side of (3) is equal to \(\sum_{s=t_{j}^{*}}^{C_{j}}w(S(j))\).
The facts that \(W(t_{1})\geq W(t_{2})\) at any \(t_{1}\leq t_{2}\) and that \(j\) is processed by \(R_{j}^{t^{\prime}}\) units at any time \(t^{\prime}\in[t_{j}^{*},C_{j}]\) combined with the assumption \(\frac{w(S(j))}{W(t)}\leq\rho\cdot R_{j}^{t}\) imply the following:
\[\sum_{s=t_{j}^{*}}^{C_{j}}\frac{w(S(j))}{W(t)}\leq\sum_{s=t_{j}^{*}}^{C_{j}} \frac{w(S(j))}{W(s)}\leq\sum_{s=t_{j}^{*}}^{C_{j}}\rho\cdot R_{j}^{s}\leq\rho \cdot p_{j}.\]
Rearranging it, using the definition of \(\overline{b}_{t}\) and \(\alpha\geq\kappa\rho\) gives
\[\sum_{s=t_{j}^{*}}^{C_{j}}w(S(j))\leq\rho\cdot p_{j}\cdot W(t)=\rho\kappa \cdot p_{j}\cdot\overline{b}_{t}\leq\alpha\cdot p_{j}\cdot\overline{b}_{t},\]
which implies (3) and thus proves the statement.
Proof of Theorem 5.3.: We set \(\alpha=\rho\kappa\). Weak LP duality, Lemma 5.7, and Lemma 5.6 imply
\[\rho\kappa\cdot\textsc{Opt}\geq\textsc{Opt}_{\rho\kappa}\geq\sum_{j} \overline{a}_{j}-\sum_{t}\overline{b}_{t}=\left(1-\frac{1}{\kappa}\right) \cdot\textsc{Alg.}\]
Choosing \(\kappa=2\), we conclude that \(\textsc{Alg}\leq 4\rho\cdot\textsc{Opt}\).
### Learning-Augmented Algorithms
In this section, we extend the algorithms presented in Section 5.1 to achieve a smooth error-dependency in the case of inaccurate predictions, while preserving constant consistency. Further, we use the time-sharing technique (cf. Section 2) to ensure a robustness of \(\mathcal{O}(\omega)\).
**Static Weight Predictions for Chains.** Here, the main challenges are as follows: we only have access to the potentially wrong predictions \(\hat{W}_{c}\) on the total chain weight for all \(c\in\mathcal{C}\) and, therefore, we execute Algorithm 1 using \(\hat{W}_{c}\) instead of \(W_{c}\). In particular, the weight of a chain \(c\) might be _underpredicted_, \(\hat{W}_{c}<W_{c}\), or _overpredicted_, \(\hat{W}_{c}>W_{c}\). This means that \(\sum_{c}\hat{W}_{c}\) may not be the accurate total weight of the instance and that the recomputation of \(W_{c}(t)\) in Line 5 may be inaccurate. In Appendix B.1, we show how to encode the error due to underpredicted chains in an instance \(\mathcal{C}_{u}\) and the error due to overpredicted chains in an instance \(\mathcal{C}_{o}\), similar to an error-dependency proposed in [1] for online set cover. We prove the following result:
**Theorem 5.8**.: _For minimizing the total weighted completion time of jobs with online chain precedence constraints on a single machine, there is a non-clairvoyant algorithm with predicted chain weights with a competitive ratio of at most_
\[\mathcal{O}(1)\cdot\min\left\{1+\frac{\textsc{Opt}(\mathcal{C}_{o})+\omega \cdot\textsc{Opt}(\mathcal{C}_{u})}{\textsc{Opt}},\omega\right\}.\]
**Adaptive Weight Predictions for Out-Forests.** To capture the quality of an adaptive prediction, we intuitively need to measure its quality over the whole execution. To this end, we use the maximal distortion factor of the weight predictions of every possible front job, which in fact can be any job in \(J\). We prove in Appendix B.2:
**Theorem 5.9**.: _For minimizing the total weighted completion time on a single machine with online out-forest precedence constraint and adaptive weight predictions, there is a non-clairvoyant algorithm with a competitive ratio of at most_
\[\mathcal{O}(1)\cdot\min\left\{\max_{v\in J}\frac{\hat{W}_{v}}{w(S(v))}\cdot\max_{v\in J}\frac{w(S(v))}{\hat{W}_{v}},\omega\right\}.\]
## 6 Weight Order Predictions
We consider static and adaptive weight order predictions. As strong lower bounds hold for in-trees, even for the more powerful adaptive weight predictions (cf. Lemma 5.2), we focus on chains and out-forest precedence constraints.
Further, we introduce an error measure for wrongly predicted orders. A natural function on orders is the _largest inversion_, i.e., the maximum distance between the position of a front job in an order prediction \(\widehat{\leq}_{t}\) and in the true order \(\leq_{t}\). However, if all out-trees have almost the same weight, just perturbed by some small constant, this function indicates a large error for the reverse order, although it will arguably perform nearly as well as the true order. To mitigate this overestimation, we first introduce \(\epsilon\)-approximate inversions. Formally, for every precision constant \(\epsilon>0\), we define
\[\mathcal{L}(\epsilon)=\max_{t,j\in F_{t}}\left|\left\{i\in F_{t}\middle|\frac{ w(S(j))}{1+\epsilon}\geq w(S(i))\wedge i\widehat{\leq}_{t}j\right\}\right|.\]
Note that \(\mathcal{L}(\epsilon)\geq 1\) for every \(\epsilon>0\), because \(\widehat{\leq}_{t}\) is reflexive. We define the _\(\epsilon\)-approximate largest inversion_ error as \(\max\{1+\epsilon,\mathcal{L}(\epsilon)\}\). We show performance guarantees depending on this error which hold for any \(\epsilon>0\). Therefore, we intuitively get a pareto frontier between the precision \((1+\epsilon)\) and \(\mathcal{L}(\epsilon)\), the largest distance of inversions which are worse than the precision. A configurable error with such properties has been applied to other learning-augmented algorithms [1, 1].
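The inner maximization of \(\mathcal{L}(\epsilon)\) at a fixed time step can be computed as follows (an illustrative helper with our own input encoding: predicted positions of the front jobs and their true subtree weights). Taking the maximum over all time steps, and then the maximum with \(1+\epsilon\), yields the \(\epsilon\)-approximate largest inversion error.

```python
def approx_inversions_at_t(front_jobs, pos_hat, subtree_weight, eps):
    """For one time step t, return the maximum over front jobs j of the number of
    front jobs i with w(S(j)) / (1 + eps) >= w(S(i)) that are predicted no later than j."""
    worst = 0
    for j in front_jobs:
        count = sum(1 for i in front_jobs
                    if subtree_weight[j] / (1 + eps) >= subtree_weight[i]
                    and pos_hat[i] <= pos_hat[j])
        worst = max(worst, count)
    return worst
```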
### Adaptive Weight Order
We introduce Algorithm 2, which exploits access to the adaptive order \(\widehat{\leq}_{t}\). In a sense, the idea of the algorithm is to emulate Algorithm 1 for weight predictions. Instead of having access to the total remaining weight of every out-tree to compute the rates, Algorithm 2 uses \(\widehat{\leq}_{t}\) to approximate the rates. For every front job \(j\in F_{t}\), let \(i_{j}\) be the position of \(j\) in \(\widehat{\leq}_{t}\). Recall that \(H_{k}\) denotes the \(k\)th harmonic number.
**Theorem 6.1**.: _For any \(\epsilon>0\), Algorithm 2 has a competitive ratio of at most \(4H_{\omega}\cdot\max\{1+\epsilon,\mathcal{L}(\epsilon)\}\) for minimizing the total weighted completion time on a single machine with online out-forest precedence constraints._
Proof.: We first observe that the rates of the algorithm are feasible, because \(\sum_{j\in F_{t}}\frac{1}{H_{|F_{t}|}\cdot i_{j}}=\frac{H_{|F_{t}|}}{H_{|F_{t}|}}=1\).
Fix a time \(t\) and an \(\epsilon>0\). Assume that \(j_{1}\widehat{\leq}_{t}\ldots\widehat{\leq}_{t}j_{|F_{t}|}\), and fix a front job \(j_{i}\in F_{t}\). The algorithm processes \(j_{i}\) at time \(t\) with rate \(R^{t}_{j_{i}}=(H_{|F_{t}|}\cdot i)^{-1}\geq(H_{\omega}\cdot i)^{-1}\). Note that showing \(\frac{w(S(j_{i}))}{W(t)}\leq H_{\omega}\cdot\max\{1+\epsilon,\mathcal{L}(\epsilon)\}\cdot R^{t}_{j_{i}}\) implies the theorem via Theorem 5.3. Assume otherwise, i.e., \(\frac{w(S(j_{i}))}{W(t)}>\frac{1}{i}\cdot\max\{1+\epsilon,\mathcal{L}(\epsilon)\}\). For the sake of readability, we define \(K_{>}=\{k\in[i-1]\mid w(S(j_{k}))>\frac{w(S(j_{i}))}{1+\epsilon}\}\) and \(K_{\leq}=\{k\in[i]\mid w(S(j_{k}))\leq\frac{w(S(j_{i}))}{1+\epsilon}\}.\) Since in an out-forest the sets \(S(j)\) are pairwise disjoint for all front jobs \(j\in F_{t}\),
\[1\geq\sum_{k\in[i]}\frac{w(S(j_{k}))}{W(t)}\geq\sum_{k\in K_{>}}\frac{w(S(j_{k}))}{W(t)}+\sum_{k\in K_{\leq}}\frac{w(S(j_{k}))}{W(t)}.\]
Consider the second sum. First, observe that this sum has at most \(\mathcal{L}(\epsilon)\) many terms, including the one for \(j_{i}\), and that each such term is at most \(\frac{w(S(j_{i}))}{W(t)}\). Then, observe that every term in the first sum is at least \(\frac{w(S(j_{i}))}{(1+\epsilon)W(t)}\). Thus, we can further lower bound the sum of the two sums by
\[\frac{1}{1+\epsilon}\sum_{k\in K_{>}}\frac{w(S(j_{i}))}{W(t)}+\frac{1}{\mathcal{L}(\epsilon)}\sum_{k\in K_{\leq}}\frac{w(S(j_{i}))}{W(t)}\geq\frac{1}{\max\{1+\epsilon,\mathcal{L}(\epsilon)\}}\sum_{k\in[i]}\frac{w(S(j_{i}))}{W(t)}>\sum_{k=1}^{i}\frac{1}{i}=1.\]
This is a contradiction.
Using this theorem, we conclude the following corollary.
**Corollary 6.2**.: _There exists a non-clairvoyant weight-oblivious algorithm for the problem of minimizing the total weighted completion time of \(n\) jobs on a single machine with a competitive ratio of at most \(\mathcal{O}(\log n)\) when given access to the order of the jobs' weights._
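The rates used in Algorithm 2 and in the proof above depend only on the predicted positions and always sum to one, as the following sketch illustrates (helper names are our own).

```python
def harmonic(k):
    return sum(1.0 / i for i in range(1, k + 1))

def order_based_rates(front_jobs_in_predicted_order):
    """The job at position i of the predicted weight order receives rate
    1 / (H_{|F_t|} * i); these rates sum to exactly 1."""
    k = len(front_jobs_in_predicted_order)
    H_k = harmonic(k)
    return {j: 1.0 / (H_k * (i + 1))
            for i, j in enumerate(front_jobs_in_predicted_order)}

print(sum(order_based_rates(list("abcde")).values()))   # 1.0 (up to rounding)
```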
### Static Weight Order
If we only have access to \(\widehat{\leq}_{0}\), a natural approach would be to compute the initial rates as used in Algorithm 2 and just not update them. As Observation 5.1 rules out well-performing algorithms for out-trees, we focus on chains. Even for chains, we show that this algorithm has a competitive ratio of at least \(\Omega(\omega\cdot H_{\omega})\).
**Lemma 6.3**.: _The variant of Algorithm 2 that computes the rates using \(\widehat{\leq}_{0}\) instead of \(\widehat{\leq}_{t}\) is at least \(\Omega(\omega\cdot H_{\omega})\)-competitive, even if \(\widehat{\leq}_{0}\) equals \(\preceq_{0}\)._
Proof.: Consider an instance with \(\omega\) chains, each with a total weight of one. Then, \(\leq_{0}\) is just an arbitrary order of the chains. Recall that the algorithm starts processing the chains \(c\) with rate \((H_{\omega}\cdot i_{c})^{-1}\), where \(i_{c}\) is the position of \(c\) in the order. We define the first \(\omega-1\) chains to have their total weight of one at the very first job and afterwards only jobs of weight zero. Chain \(\omega\), the slowest chain, has its total weight on the last job. We define the chains \(c\) to contain a total of \(d\cdot H_{\omega}\cdot i_{c}\) jobs with unit processing times, for some common integer \(d\). This means that the algorithm finishes all chains at the same time. The optimal solution value for this instance is \(\omega\cdot(\omega+1)+\omega-1+d\cdot H_{\omega}\cdot\omega\), where \(\omega\cdot(\omega+1)\) is the optimal sum of completion times for the first \(\omega-1\) chains, \(d\cdot H_{\omega}\cdot\omega\) is the cost for processing the last chain, and \(\omega-1\) is the cost for delaying the last chains by the \(\omega-1\) time units needed to process the first jobs of the first \(\omega-1\) chains. The solution value of the algorithm is at least \(d\cdot H_{\omega}^{2}\cdot\omega^{2}\) as this is the cost for just processing the last chain. Thus, for large \(d\), the competitive ratio tends to \(H_{\omega}\cdot\omega\).
However, the lower bound instance of the lemma requires \(\omega\) to be "small" compared to the number of jobs, in case of unit jobs, or to \(P\coloneqq\sum_{j}p_{j}\), otherwise. We exploit this to prove the following theorem in Appendix C.
**Theorem 6.4**.: _For any \(\epsilon>0\), Algorithm 2 has a competitive ratio of at most \(\mathcal{O}(H_{\omega}^{2}\sqrt{P}\cdot\max\{1+\epsilon,\mathcal{L}(\epsilon)\})\) when computing rates with \(\widehat{\leq}_{0}\) instead of \(\widehat{\leq}_{t}\) at any time \(t\). For unit jobs, it is \(\mathcal{O}(H_{\omega}^{2}\sqrt{n}\cdot\max\{1+\epsilon,\mathcal{L}(\epsilon)\})\)-competitive._
## 7 Average Predictions
Recall that average predictions give access to predicted values \(\hat{a}_{v}\) on \(a(S(v))=(\sum_{u\in S(v)}w_{u})/(\sum_{u\in S(v)}p_{u})\) for each \(v\in F_{t}\). We show the following lower bound for chains with unit jobs, where average predictions coincide with the average weight of the jobs in the respective chain. The lower bound exploits that we can append jobs of weight zero to a chain in order to manipulate the average weight of the chain until all chains have the same average.
**Lemma 7.1**.: _Any algorithm which has only access to correct adaptive average predictions is at least \(\Omega(\sqrt{n})\)-competitive even for chain precedence constraints with unit jobs._
Proof.: Consider an instance composed of \(\sqrt{n}\in\mathbb{N}\) chains of unit jobs, where the first two jobs of the first chain have weights \(1\) resp. \(n-\sqrt{n}\), followed by \(n-\sqrt{n}-1\) zero weight jobs. The other \(\sqrt{n}-1\) chains are single jobs with weight \(1\). For an algorithm, all chains look identical since the first jobs have weight \(1\) and the average of every chain is equal to \(1\). Therefore, an adversary can ensure that the algorithm processes the first chain last, giving an objective value of \(\sum_{i=1}^{\sqrt{n}}i+(\sqrt{n}+1)(n-\sqrt{n})=\Omega(n\sqrt{n})\), while a solution which schedules the heavy weight job initially achieves an objective value of at most \(1+2(n-\sqrt{n})+\sum_{i=1}^{\sqrt{n}-1}(3+i)=\mathcal{O}(n)\). The adaptivity of the predictions does not help for this lower bound as the algorithm would only receive meaningful updates once it finishes the first job of the first chain, which is too late.
Final Remarks
We initiated the study of learning-augmented algorithms for scheduling with online precedence constraints by considering a hierarchy of prediction models based on their entropy. For several models of the hierarchy, we were able to show that the predicted information is sufficient to break lower bounds for algorithms without predictions. We hope that our approach leads to more discussions on the identification of the "right" prediction model in learning-augmented algorithm design. As a next research step, we suggest investigating the missing bounds for our prediction models, e.g., an upper bound for average predictions.
|
2310.15695 | Lie minimal Weingarten surfaces | We consider Lie minimal surfaces, the critical points of the simplest Lie
sphere invariant energy, in Riemannian space forms. These surfaces can be
characterized via their Euler-Lagrange equations, which take the form of
differential equations of the principal curvatures. Surfaces with constant mean
curvature that satisfy these equations turn out to be rotational in their space
form. We generalize in flat ambient space: here surfaces where the principal
curvatures satisfy an affine relationship as well as elliptic linear Weingarten
surfaces are rotational as well. | Joseph Cho, Masaya Hara, Denis Polly, Tomohiro Tada | 2023-10-24T10:06:30Z | http://arxiv.org/abs/2310.15695v1 | # Lie minimal Weingarten Surfaces
###### Abstract.
We consider Lie minimal surfaces, the critical points of the simplest Lie sphere invariant energy, in Riemannian space forms. These surfaces can be characterized via their Euler-Lagrange equations, which take the form of differential equations of the principal curvatures. Surfaces with constant mean curvature that satisfy these equations turn out to be rotational in their space form. We generalize in flat ambient space: here surfaces where the principal curvatures satisfy an affine relationship as well as elliptic linear Weingarten surfaces are rotational as well.
Key words and phrases:Lie minimal surface, minimal surface, constant mean curvature surface, linear Weingarten surface 2020 Mathematics Subject Classification: Primary 53A10; Secondary 53A40, 53C42
## 1. Introduction
The Willmore functional was first considered by Germain [14] as a surface analogue of the bending energy of curves whose minimizers are the elastic curves of Bernoulli and Euler. The critical points of the Willmore functional under compactly supported variations are called _constrained Willmore surfaces_, and they have been widely studied (see, for example, [4, 6]), in part due to the now-resolved Willmore conjecture [19, 26]. In particular, the Willmore functional is conformally invariant, and is in fact the simplest variational problem that can be considered in conformal geometry, or Mobius geometry [25]. On the other hand, those surfaces not determined by invariants of surfaces in a geometry are called _deformable surfaces_, and Cartan showed that _isothermic surfaces_, those surfaces that admit conformal curvature line coordinates, are exactly the deformable surfaces in Mobius geometry [8]. Interestingly, constant mean curvature surfaces in space forms are important examples that are both constrained Willmore and isothermic [24, 25]. Thus, certain curvature properties, a space form notion, play an important role in the consideration of both minimality and deformability in Mobius geometry.
Similar considerations can be made in other sphere geometries. In the context of Laguerre geometry, the simplest variational problem revolves around a Laguerre invariant functional called the _Weingarten functional_; the critical points of the Weingarten functional under compactly supported variations are called _\(L\)-minimal surfaces_[2] (see also [20]). On the other hand, the deformable surfaces in Laguerre geometry are exactly _\(L\)-isothermic surfaces_[1], those surfaces with curvature line coordinates that are conformal with respect to the third fundamental form. Again, certain curvature properties play a crucial role in determining the surfaces with both minimality and deformability, as minimal surfaces in Euclidean space are both \(L\)-minimal and \(L\)-isothermic [2, 20].
In this paper, we focus on Lie sphere geometry [16], the sphere geometry that includes both Mobius geometry and Laguerre geometry as its subgeometries, and consider Lie minimal surfaces that satisfy certain curvature properties in space forms.
First considered by Blaschke [3], _Lie minimal surfaces_ are surfaces that are the critical points of the simplest energy \(L_{\mathrm{Lie}}\), invariant under Lie sphere transformations, under compactly supported variations. This class of surfaces has attracted interest recently, notably from two different perspectives: [13] showed that they constitute an integrable system (an integrable reduction of the Gauss-Codazzi equations of the Lie sphere frame), while [5] demonstrated that they have harmonic conformal Gauss maps (a congruence of Lie cyclides).
These two characterizations are purely Lie sphere geometric in nature. However, given a surface in a Riemannian space form, the energy \(L_{\mathrm{Lie}}\) can be computed using the principal curvatures of the space form projection [13]. The goal of this paper is to gain insights into the geometry of Lie minimal surfaces whose principal curvatures satisfy additional properties, namely, relationships of the Weingarten type. We will show that affine Weingarten and elliptic linear Weingarten surfaces are Lie minimal if and only if they are rotational in their space forms in the sense of [11].
In Section 2 we will lay the groundwork for our investigations. Starting from the expression of \(L_{\mathrm{Lie}}\) via principal curvatures of the space form projection, we will develop the Euler-Lagrange equations for Lie minimal surfaces in space forms (Lemma 2.9). These differential equations for the principal curvatures are then used in Sections 3 and 4 to investigate Lie minimal surfaces with additional curvature conditions.
As minimal surfaces in Euclidean space are both constrained Willmore and L-minimal, while cmc surfaces in space forms are constrained Willmore, the starting points of our investigations are minimal Lie minimal surfaces or, more generally, Lie minimal surfaces with constant mean curvature \(H=\frac{1}{2}(k_{1}+k_{2})\). In contrast to the Mobius or Laguerre geometric cases, we show that these surfaces must be rotational within their space form in the sense of [11] (Theorem 3.2). Since rotational cmc surfaces in Riemannian space forms have been completely classified (most recently in [23]), this provides a complete classification of all Lie minimal cmc surfaces.
Following this, we consider two more classes of surfaces with constrained principal curvatures in Section 4: on the one hand, we consider surfaces in \(\mathbb{R}^{3}\) where the principal curvatures satisfy an affine linear relationship of the form
\[xk_{1}+yk_{2}=z.\]
This extends the class of cmc surfaces, as these correspond to the special case \(x=y\). We call this class the class of _affine Weingarten surfaces_. We will demonstrate that Lie minimal affine Weingarten surfaces are always rotational in \(\mathbb{R}^{3}\) (Theorem 4.3). The classification of rotational affine Weingarten surfaces of, e.g., [18] thus provides a complete classification of this class as well.
On the other hand, we consider the class of _linear Weingarten_ surfaces, examples of deformable surfaces in Lie sphere geometry [7]. This class extends the class of cmc surfaces to those where there is an affine linear relationship between the mean curvature \(H\) and the Gauss curvature \(K=k_{1}k_{2}\), that is
\[aK+2bH+c=0.\]
According to Bonnet's theorem (Proposition 4.1), all parallel surfaces of a linear Weingarten surface are again linear Weingarten. Since the parallel transformation preserves rotational surfaces and Lie minimality (a Lie sphere invariant notion), we can prove that all Lie minimal surfaces that are parallel to a cmc surface are rotational. In the case of Euclidean ambient geometry, this comprises the class of linear Weingarten surfaces with \(b^{2}-ac>0\), dubbed elliptic linear Weingarten surfaces in [17].
## 2. Preliminaries
In this section we describe the setup of our investigations. The main goal is to fix the notation we will use for the theory of surfaces in space forms. In Subsection 2.2 we will define the notion of Lie minimality. The main result of this section is Lemma 2.9, which we use to prove that rotational surfaces are always Lie minimal and which we will use again in later sections.
Let \(M_{\kappa}\) be a Riemannian space form with constant sectional curvature \(\kappa\). We think of \(M_{\kappa}\) as either \(\mathbb{R}^{3}\) (\(\kappa=0\)) or a three dimensional (Riemannian) quadric in \(\mathbb{R}^{4}\) (\(\kappa=1\)) or \(\mathbb{R}^{3,1}\) (\(\kappa=-1\)). Let \(X:\Sigma\to M_{\kappa}\) be an umbilic-free immersion with _curvature line_ coordinates \((u,v)\), that is, the first and second fundamental form read as
\[\operatorname{I}(u,v)=E\operatorname{d}\!u^{2}+G\operatorname{d}\!v^{2}\quad \text{and}\quad\operatorname{II}(u,v)=\operatorname{L}\!\operatorname{d}\!u^ {2}+\operatorname{N}\!\operatorname{d}\!v^{2}. \tag{2.1}\]
We call the coordinates \((u,v)\)_isothermic_ if they are additionally conformal, that is, there exists a function \(\sigma\) such that
\[\operatorname{I}(u,v)=e^{2\sigma}(\operatorname{d}\!u^{2}+\operatorname{d}\!v ^{2})\quad\text{and}\quad\operatorname{II}(u,v)=e^{2\sigma}(k_{1}\operatorname {d}\!u^{2}+k_{2}\operatorname{d}\!v^{2}), \tag{2.2}\]
where \(k_{1},k_{2}\) denote the principal curvatures. We use these to define the _mean_ and _(extrinsic) Gauss curvature_ by
\[H=\frac{k_{1}+k_{2}}{2},\quad K=k_{1}k_{2},\]
respectively. As we are assuming umbilic-free, we have \(k_{1}\neq k_{2}\).
We briefly review the structure equations for surfaces in space forms.
**Proposition 2.1** ([12, SS65]).: _Let \(X:\Sigma\to M_{\kappa}\) be a surface with curvature line coordinates \((u,v)\) and fundamental forms as in (2.1). Then the Codazzi equations are_
\[\frac{k_{1,v}}{k_{2}-k_{1}}=\bigl{(}\log\sqrt{E}\bigr{)}_{v}\quad\text{and} \quad\frac{k_{2,u}}{k_{1}-k_{2}}=\bigl{(}\log\sqrt{G}\bigr{)}_{u}. \tag{2.3}\]
_If \((u,v)\) are conformal coordinates in addition (so that they are isothermic coordinates) with fundamental forms as in (2.2), then the Gauss equation takes the form_
\[\sigma_{uu}+\sigma_{vv}+(\kappa+K)e^{2\sigma}=0. \tag{2.4}\]
_while the Codazzi equations simplify to_
\[\frac{k_{1,v}}{k_{2}-k_{1}}=\bigl{(}\log\sqrt{E}\bigr{)}_{v}=\sigma_{v}\quad \text{and}\quad\frac{k_{2,u}}{k_{1}-k_{2}}=\bigl{(}\log\sqrt{G}\bigr{)}_{u}= \sigma_{u}. \tag{2.5}\]
As rotational surfaces in various space forms are central to our investigations, let us introduce this notion: consider \(M_{\kappa}\) as a quadric in the appropriate ambient space, equipped with the proper inner product (\(\mathbb{R}^{3}\) can be viewed as \(\{p:(v,p)=1\}\)
with \(v\in\mathbb{R}^{4}\) any non-zero vector). Isometries of \(M_{\kappa}\) are then those orthogonal transformations of the ambient space that fix the space form. We call a surface _rotational_ if it is invariant under a 1-parameter subgroup \(\rho\) of isometries that act as the identity on a 2-plane\({}^{1}\) \(\Pi\) in \(\mathbb{R}^{4}\). For hyperbolic space forms, where \(\kappa<0\), the definition amounts to three different types of rotations, depending on the signature of the metric induced on \(\Pi\) (for an extensive treatment of rotations in hyperbolic space forms see [11]). The next example gives explicit parametrizations of rotational surfaces along with their principal curvatures:
Footnote 1: Note that for \(\mathbb{R}^{3}\), the plane that is fixed by \(\rho\) must contain \(v\).
_Example 2.2_.: For brevity we assume that the induced metric on \(\Pi^{\perp}\) is Riemannian. In this case, the term _surface of revolution_ is used more frequently. We can parametrize rotational surfaces in \(M_{\kappa}\) via the action of \(\rho\) on a (geodesic) planar profile curve \(\gamma=(r,0,h,k)\) (\(k\equiv 1\) for \(\kappa=0\)). From the parametrization
\[X(u,v)=\rho(u)(r(v),0,h(v),k(v)), \tag{2.6}\]
and the fact that \(\rho\) is a 1-parameter subgroup, one quickly deduces that I and II take the form (2.2) with (we use \({}^{\prime}\) to denote \(\partial_{v}\))
\[e^{2\sigma}=r^{2},\ k_{1}=\frac{kh^{\prime}-hk^{\prime}}{r^{2}},\ k_{2}= \frac{1}{r^{3}}\begin{vmatrix}r&r^{\prime}&r^{\prime\prime}\\ h&h^{\prime}&h^{\prime\prime}\\ k&k^{\prime}&\kappa k^{\prime\prime}\end{vmatrix},\]
given a suitable parametrization of \(\gamma\). Similar expressions for the other types of rotations in hyperbolic space can be obtained.
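As an illustration of these formulas, the following short SymPy computation is our own sketch for the Euclidean case \(\kappa=0\) with a generic (not necessarily isothermic) profile parametrization; the overall signs of the curvatures depend on the chosen unit normal. It confirms that \((u,v)\) are curvature line coordinates and that both principal curvatures of a surface of revolution depend on \(v\) only:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
r, h = sp.Function('r')(v), sp.Function('h')(v)

# surface of revolution in R^3 (kappa = 0): rotate the profile curve (r(v), 0, h(v)) about the z-axis
X = sp.Matrix([r*sp.cos(u), r*sp.sin(u), h])

Xu, Xv = X.diff(u), X.diff(v)
N = Xu.cross(Xv)
N = N / sp.sqrt(N.dot(N))                                # unit normal (orientation fixed by the order u, v)

E, F, G = Xu.dot(Xu), Xu.dot(Xv), Xv.dot(Xv)             # first fundamental form
L, M, NN = Xu.diff(u).dot(N), Xu.diff(v).dot(N), Xv.diff(v).dot(N)   # second fundamental form

print(sp.simplify(F), sp.simplify(M))                    # 0 0: (u, v) are curvature line coordinates

k1, k2 = sp.simplify(L/E), sp.simplify(NN/G)             # principal curvatures (valid since F = M = 0)
print(sp.simplify(k1.diff(u)), sp.simplify(k2.diff(u)))  # 0 0: k_1 and k_2 depend on v only
```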
Note that for a rotational surface \(k_{1,u}=0\). Thus, rotational surfaces are an example of channel surfaces as defined next.
**Definition 2.3**.: A surface in a space form \(M_{\kappa}\) is a _channel surface_ if one of its principal curvatures is constant along its principal direction.
_Remark 2.4_.: According to Joachimsthal's theorem, this definition is equivalent to the more usual definition of channel surfaces as envelopes of 1-parameter families of spheres [3].
### Linear Weingarten surfaces
Next, we introduce the class of linear Weingarten surfaces. There are two different conventions for which surfaces to call linear Weingarten: either those for which the Gauss and mean curvature satisfy a linear relationship, or those where the two principal curvatures do. We call the latter "affine Weingarten" to distinguish these two surface classes.
**Definition 2.5**.: A surface in a space form \(M_{\kappa}\) is called
* _linear Weingarten_ if there exists a non-trivial triple of constants \(a,b,c\) such that (LW) \[aK+2bH+c=0.\]
* _affine Weingarten_ if there exists a non-trivial triple of constants \(x,y,z\) such that (AW) \[xk_{1}+yk_{2}+z=0.\]
_Remark 2.6_.: The only surfaces that are linear Weingarten and affine Weingarten are surfaces of constant mean curvature (cmc): set \(a=0\) in (LW) and \(x=y\) in (AW).
We call a linear Weingarten surface \(X:\Sigma\to M_{\kappa}\)_tubular_ if \(ac-b^{2}=0\). This is equivalent to one of the principal curvatures of \(X\) being constant. Therefore, tubular linear Weingarten surfaces are also affine Weingarten (\(y=0\)) and channel surfaces. We will from now on assume further that the surfaces we consider are non-tubular.
Regarding channel linear or affine Weingarten surfaces, we have the following result:
**Proposition 2.7**.: _Let a surface be a non-tubular channel surface. If either_
* _it is a linear Weingarten surface in a space form_ \(M_{\kappa}\)_, or_
* _it is an affine Weingarten surface in_ \(\mathbb{R}^{3}\)_,_
_then the surface must be rotational._
Proof.: The first statement is proven in, for instance, [15].
For the second statement, note that by (AW) \(k_{1}=k_{1}(u)\) implies that \(k_{2}\) is a function of \(u\) alone as well. The Codazzi equations (2.5) then imply \(E=E(u)\) and \(G=U(u)V(v)\) for suitable functions \(U,V\) of one variable. Thus by [9], we have that the surface admits isothermic coordinates. More precisely, we can set \(r=\sqrt{E}\) and define a function \(h\) (of \(u\)) such that \(h_{u}=Ek_{2}\). The Gauss-Codazzi equations then imply that the fundamental forms are
\[\mathrm{I}(u,v)=r^{2}(\mathrm{d}u^{2}+\mathrm{d}v^{2})\quad\text{and}\quad \mathrm{II}(u,v)=r^{2}k_{1}\,\mathrm{d}u^{2}+h_{u}\,\mathrm{d}v^{2},\]
with \(k_{1}=\frac{r_{u}^{2}-r_{uu}r}{h_{u}r^{2}}\). A quick computation shows that the surface of revolution with \(r\) and \(h\) chosen such that (2.6) is isothermic (\(r^{2}=r_{u}^{2}+h_{u}^{2}\)) has the same fundamental forms, and the claim thus follows from the fundamental theorem.
### Lie minimal surfaces
Next, we define Lie minimal surfaces \(X:\Sigma\to M_{\kappa}\).
**Definition 2.8** ([3, 5, 13, 21]).: The critical points of a functional
\[L_{Lie}[X]:=\int_{\Sigma}\frac{k_{1,u}k_{2,v}}{(k_{1}-k_{2})^{2}}\,\mathrm{d}u \wedge\mathrm{d}v \tag{2.7}\]
with respect to compactly supported variations are called _Lie minimal surfaces_.
The functional (2.7) is introduced in [3] as the simplest Lie invariant integral. Thus, Lie sphere transformations map Lie minimal surfaces to Lie minimal surfaces.
We now state a characterization of Lie minimal surfaces in terms of a pair of differential equations that the principal curvatures of \(X\) have to satisfy.
**Lemma 2.9**.: _Let \(X:\Sigma\to M_{\kappa}\) be a curvature line parametrization of a surface in a space form. Then \(X\) is Lie minimal if and only if the principal curvatures satisfy_
\[(k_{2}-k_{1})k_{1,uv}+2k_{1,u}k_{1,v}=0\quad\text{and}\quad(k_{1}-k_{2})k_{2, uv}+2k_{2,u}k_{2,v}=0. \tag{2.8}\]
Proof.: Consider a compactly supported variation of the surface \(X\), with variation parameter \(\varepsilon\) independent of \(u\) and \(v\), that fixes the boundary, and suppose that the principal curvatures \(\hat{k}_{1},\hat{k}_{2}\) at \(\varepsilon\) are given by
\[k_{1}(u,v) \mapsto\hat{k}_{1}(u,v)=k_{1}(u,v)+\varepsilon h_{1}(u,v)+\mathcal{O}( \varepsilon^{2})\] \[k_{2}(u,v) \mapsto\hat{k}_{2}(u,v)=k_{2}(u,v)+\varepsilon h_{2}(u,v)+ \mathcal{O}(\varepsilon^{2})\]
for suitably chosen functions \(h_{1},h_{2}:\Sigma\to\mathbb{R}\) such that, for \(i=1,2\), \(h_{i}(u,v)=0\) outside of the compact support of the variation. Setting
\[f=f[k_{1},k_{2},k_{1,u},k_{2,v}]=\frac{k_{1,u}k_{2,v}}{(k_{1}-k_{2})^{2}},\]
then
\[\left.\frac{\mathrm{d}L}{\mathrm{d}\varepsilon}\right|_{\varepsilon=0} =\left.\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\int_{\Sigma}f[ \hat{k}_{1}(u,v),\hat{k}_{2}(u,v),\hat{k}_{1,u}(u,v),\hat{k}_{2,v}(u,v)]\, \mathrm{d}u\wedge\mathrm{d}v\right|_{\varepsilon=0}\] \[=\left.\int_{\Sigma}\left(\frac{\partial f}{\partial k_{1}} \frac{\partial\hat{k}_{1}}{\partial\varepsilon}+\frac{\partial f}{\partial k_ {2}}\frac{\partial\hat{k}_{2}}{\partial\varepsilon}+\frac{\partial f}{ \partial k_{1,u}}\frac{\partial\hat{k}_{1,u}}{\partial\varepsilon}+\frac{ \partial f}{\partial k_{2,v}}\frac{\partial\hat{k}_{2,v}}{\partial \varepsilon}\right)\mathrm{d}u\wedge\mathrm{d}v\right|_{\varepsilon=0}\] \[=\int_{\Sigma}\left(\frac{\partial f}{\partial k_{1}}h_{1}+ \frac{\partial f}{\partial k_{2}}h_{2}+\frac{\partial f}{\partial k_{1,u}}h_{ 1,u}+\frac{\partial f}{\partial k_{2,v}}h_{2,v}\right)\mathrm{d}u\wedge \mathrm{d}v\] \[=\int_{\Sigma}\left(\left(\frac{\partial f}{\partial k_{1}}-\frac {\partial}{\partial u}\frac{\partial f}{\partial k_{1,u}}\right)h_{1}+\left( \frac{\partial f}{\partial k_{2}}-\frac{\partial}{\partial v}\frac{\partial f }{\partial k_{2,v}}\right)h_{2}\right)\mathrm{d}u\wedge\mathrm{d}v,\]
using the compact support condition. To find the critical points, we want \(\mathrm{d}L/\left.\mathrm{d}\varepsilon\right|_{\varepsilon=0}\) to vanish for arbitrary \(h_{1}\) and \(h_{2}\), which leads to the stationary conditions
\[\frac{\partial f}{\partial k_{1}}-\frac{\partial}{\partial u}\frac{\partial f}{\partial k_{1,u}}=0, \tag{2.9}\]
\[\frac{\partial f}{\partial k_{2}}-\frac{\partial}{\partial v}\frac{\partial f}{\partial k_{2,v}}=0. \tag{2.10}\]
These equations are called the _Euler-Lagrange (differential) equations_.
Because
\[\frac{\partial f}{\partial k_{1}} =-\frac{2k_{1,u}k_{2,v}}{\left(k_{1}-k_{2}\right)^{3}},\] \[\frac{\partial f}{\partial k_{2}} =\frac{2k_{1,u}k_{2,v}}{\left(k_{1}-k_{2}\right)^{3}},\] \[\frac{\partial}{\partial u}\frac{\partial f}{\partial k_{1,u}} =\frac{k_{2,uv}}{\left(k_{1}-k_{2}\right)^{2}}-\frac{2k_{2,v} \left(k_{1,u}-k_{2,u}\right)}{\left(k_{1}-k_{2}\right)^{3}},\quad\text{and}\] \[\frac{\partial}{\partial v}\frac{\partial f}{\partial k_{2,v}} =\frac{k_{1,uv}}{\left(k_{1}-k_{2}\right)^{2}}-\frac{2k_{1,u} \left(k_{1,v}-k_{2,v}\right)}{\left(k_{1}-k_{2}\right)^{3}},\]
from (2.9) we have
\[0 =-\frac{2k_{1,u}k_{2,v}}{\left(k_{1}-k_{2}\right)^{3}}-\frac{k_{2, uv}}{\left(k_{1}-k_{2}\right)^{2}}+\frac{2k_{2,v}\left(k_{1,u}-k_{2,u}\right)}{ \left(k_{1}-k_{2}\right)^{3}}\] \[\therefore\quad\frac{\left(k_{1}-k_{2}\right)k_{2,uv}+2k_{2,u}k_{2, v}}{\left(k_{1}-k_{2}\right)^{3}}=0. \tag{2.11}\]
Similarly, from (2.10) we have
\[0 =\frac{2k_{1,u}k_{2,v}}{\left(k_{1}-k_{2}\right)^{3}}-\frac{k_{1,uv}}{(k_{1}-k_{2})^{2}}+\frac{2k_{1,u}\left(k_{1,v}-k_{2,v}\right)}{\left(k_{1}-k_{2}\right)^{3}}\] \[\therefore\quad\frac{(k_{2}-k_{1})k_{1,uv}+2k_{1,u}k_{1,v}}{(k_{1}-k_{2})^{3}}=0. \tag{2.12}\]
As we are assuming the surface is umbilic-free, the numerators of (2.11) and (2.12) must vanish, which yields the desired equations (2.8).
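The Euler-Lagrange computation above can also be reproduced mechanically. The following SymPy sketch (our own check, not part of the original derivation) uses `sympy.calculus.euler.euler_equations` and verifies that, after clearing the factor \((k_{1}-k_{2})^{3}\), each resulting equation coincides, up to sign, with one of the two equations in (2.8):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

u, v = sp.symbols('u v')
k1, k2 = sp.Function('k1')(u, v), sp.Function('k2')(u, v)

# Lagrangian density of (2.7): f = k1_u k2_v / (k1 - k2)^2
f = k1.diff(u) * k2.diff(v) / (k1 - k2)**2

# the two equations claimed in (2.8)
eq1 = (k2 - k1)*k1.diff(u, v) + 2*k1.diff(u)*k1.diff(v)
eq2 = (k1 - k2)*k2.diff(u, v) + 2*k2.diff(u)*k2.diff(v)

for el in euler_equations(f, [k1, k2], [u, v]):
    cleared = sp.simplify(el.lhs * (k1 - k2)**3)          # clear the denominator (k1 - k2)^3
    matches = [sp.simplify(cleared - e) == 0 or sp.simplify(cleared + e) == 0 for e in (eq1, eq2)]
    print(matches)                                         # exactly one True per Euler-Lagrange equation
```

Varying with respect to \(k_{1}\) reproduces the equation for \(k_{2}\) and vice versa, in line with the computation above.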
**Corollary 2.10**.: _Every rotational surface in \(M_{\kappa}\) is Lie minimal._
Proof.: As we have seen in Example 2.2, \(k_{1,u}=k_{2,u}=0\) for rotational surfaces, hence, (2.8) is satisfied.
## 3. Lie minimal cmc surfaces
In this section we consider the relationship between Lie minimal and cmc \(H\) (or minimal) surfaces. Recall that every cmc (or minimal) surface admits isothermic coordinates, which we will denote by \((u,v)\).
**Lemma 3.1**.: _Let \(X:\Sigma\to M_{\kappa}\) be Lie minimal with cmc \(H\). Then, the principal curvatures of \(X\) satisfy_
\[k_{1}-k_{2}=\alpha(u)\beta(v),\]
_where \(\alpha\) (resp. \(\beta\)) is a function only depending on \(u\) (resp. \(v\))._
Proof.: Let \(k_{1}-k_{2}>0\) without loss of generality. Since \(X\) is Lie minimal, a simple calculation using Lemma 2.9 shows
\[(\log(k_{1}-k_{2}))_{uv}=\frac{H_{u}H_{v}}{(k_{1}-k_{2})^{2}},\]
which vanishes because \(H\) is constant. We conclude that
\[\log(k_{1}-k_{2}) =\tilde{\alpha}(u)+\tilde{\beta}(v)\] \[\therefore\quad k_{1}-k_{2} =\alpha(u)\beta(v),\]
where \(\alpha\) (resp. \(\beta\)) is a function only depending on \(u\) (resp. \(v\)) and \(e^{\tilde{\alpha}(u)}=\alpha(u)\) (resp. \(e^{\tilde{\beta}(v)}=\beta(v)\)).
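A quick SymPy sanity check of this step is the following sketch of ours, under the assumption of constant mean curvature \(H_{0}\); once \(H\) is constant, only the first equation of (2.8) is needed, as the second then follows from it:

```python
import sympy as sp

u, v, H0 = sp.symbols('u v H0')
k1 = sp.Function('k1')(u, v)
k2 = 2*H0 - k1                                   # constant mean curvature: (k1 + k2)/2 = H0

# first equation of (2.8), solved for the mixed derivative k1_uv
k1_uv = 2*k1.diff(u)*k1.diff(v)/(k1 - k2)

expr = sp.log(k1 - k2).diff(u).diff(v)           # (log(k1 - k2))_{uv}
expr = expr.subs(sp.Derivative(k1, u, v), k1_uv)
print(sp.simplify(expr))                         # 0, so log(k1 - k2) splits as alpha~(u) + beta~(v)
```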
**Theorem 3.2**.: _Let \(X:\Sigma\to M_{\kappa}\) be a constant mean curvature \(H\) surface. Then, \(X\) is Lie minimal if and only if it is a rotational surface._
Proof.: It follows from Corollary 2.10 that every rotational surface is Lie minimal.
Suppose conversely that \(X\) is Lie minimal with cmc \(H\) and is non-tubular. By solving the Codazzi equations (2.5), we find
\[\sigma=\frac{1}{2}\left(\log\gamma-\log(k_{1}-k_{2})\right)\quad\therefore\quad e ^{2\sigma}=\frac{\gamma}{k_{1}-k_{2}},\]
where \(\gamma\) is some positive constant. Because of Lemma 3.1, we can write this as
\[\sigma=\frac{1}{2}\left(\log\gamma-\left(\tilde{\alpha}+\tilde{\beta}\right) \right)\quad\text{and}\qquad e^{2\sigma}=\frac{\gamma}{\alpha\beta},\]
and using the Gauss equation (2.4), we have
\[-\frac{1}{2}\left(\tilde{\alpha}^{\prime\prime}+\tilde{\beta}^{\prime\prime} \right)+\left(\kappa+K\right)e^{2\sigma}=0.\]
Now we differentiate both sides with respect to \(u\) and \(v\) and obtain
\[0=\frac{\alpha^{\prime}\beta^{\prime}}{4\alpha^{2}\beta^{2}}(\kappa+K).\]
Thus there are two possibilities. On the one hand, if \(\alpha^{\prime}\beta^{\prime}=0\), then without loss of generality, \(\alpha\) is constant and \(\sigma,k_{1},k_{2}\) only depend on \(v\). The surface is thus a channel linear Weingarten surface and by Proposition 2.7, rotational.
On the other hand, if \(\kappa+K=0\), we have
\[(k_{1},k_{2})=\left(H+\sqrt{H^{2}+\kappa},H-\sqrt{H^{2}+\kappa}\right),\]
where we assumed \(k_{1}-k_{2}>0\) without loss of generality. Since both principal curvatures are constant, the surface is tubular, which contradicts our assumption that \(X\) is non-tubular.
A complete classification of Lie minimal cmc surfaces is thus provided by explicit parametrizations of all rotational cmc surfaces (see for instance [23]).
**Corollary 3.3**.: _Let \(X:\Sigma\to\mathbb{R}^{3}\) be Lie minimal and minimal and not part of a plane. Then, \(X\) is (part of) the catenoid._
Proof.: If \(X\) is not restricted to a plane it is rotational according to Theorem 3.2. Since the catenoid is the only rotational minimal surface in \(\mathbb{R}^{3}\) this finishes the proof.
## 4. Lie minimal Weingarten surfaces
In this section we study Lie minimal linear and affine Weingarten surfaces in \(\mathbb{R}^{3}\). Many of our results extend to other space forms, but in order to give a streamlined presentation we restrict ourselves to \(\kappa=0\).
First, we consider the class of linear Weingarten surfaces. For a surface \(X:\Sigma\to\mathbb{R}^{3}\) satisfying (LW), we define \(\Delta=b^{2}-ac\). The surface \(X^{t}=X+tN\), where \(N\) denotes the Gauss map of \(X\), is called the _parallel surface at distance \(t\)_. It is well-known that obtaining parallel surfaces amounts to applying certain Lie sphere transformations [10, Sec 4.4], often referred to as _parallel transformations_.
**Proposition 4.1** (Bonnet's theorem [22, Section 3.4]).: _Every surface parallel to a linear Weingarten surface is again linear Weingarten satisfying the linear Weingarten condition_
\[a^{t}K^{t}+2b^{t}H^{t}+c^{t}=0,\]
_with_
\[\Delta^{t}=(b^{t})^{2}-a^{t}c^{t}=\Delta.\]
_Every linear Weingarten surface with \(\Delta>0\) is parallel to either_
* _a surface of positive constant Gauss curvature and hence a pair of cmc surfaces, or_
* _a minimal surface._
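Proposition 4.1 can be checked symbolically. The following SymPy computation is our own sketch; it assumes the standard relation \(k_{i}^{t}=k_{i}/(1-tk_{i})\) for the principal curvatures of the parallel surface (valid for a compatible choice of unit normal) and the candidate coefficients \(a^{t}=a+2bt+ct^{2}\), \(b^{t}=b+ct\), \(c^{t}=c\):

```python
import sympy as sp

a, b, c, t, k1, k2 = sp.symbols('a b c t k1 k2')

# principal curvatures of the parallel surface X^t = X + tN
k1t, k2t = k1/(1 - t*k1), k2/(1 - t*k2)

# candidate coefficients of the parallel linear Weingarten relation
at, bt, ct = a + 2*b*t + c*t**2, b + c*t, c

lw   = a*k1*k2 + b*(k1 + k2) + c          # a K + 2 b H + c with K = k1 k2 and 2H = k1 + k2
lw_t = at*k1t*k2t + bt*(k1t + k2t) + ct

# lw_t is proportional to lw, so parallel surfaces of linear Weingarten surfaces are linear Weingarten
print(sp.simplify(lw_t*(1 - t*k1)*(1 - t*k2) - lw))     # 0

# the quantity Delta = b^2 - ac is preserved under the parallel transformation
print(sp.simplify(bt**2 - at*ct - (b**2 - a*c)))        # 0
```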
As mentioned in Subsection 2.2, Lie minimality is preserved under Lie sphere transformations, and hence also under parallel transformations, which are exactly the Lie sphere transformations that map a surface to one of its parallel surfaces. Thus, we arrive at the following theorem.
**Theorem 4.2**.: _Every non-tubular Lie minimal linear Weingarten surface in \(\mathbb{R}^{3}\) with \(\Delta>0\) is a surface of revolution._
Proof.: Assume that \(X:\Sigma\to\mathbb{R}^{3}\) is a Lie minimal linear Weingarten surface with \(\Delta>0\). Then, there is a value \(t_{0}\) such that the parallel surface \(X^{t_{0}}\) is Lie minimal and cmc. According to Theorem 3.2, \(X^{t_{0}}\) is rotational and thus has a rotational Gauss map \(N\). Therefore, \(X=X^{t_{0}}-t_{0}N\) is rotational as well.
Finally, we consider the case of affine Weingarten surfaces. Recall that a surface \(X:\Sigma\to\mathbb{R}^{3}\) is called affine Weingarten if there are constants \(x,y,z\) such that
\[xk_{1}+yk_{2}+z=0. \tag{4.1}\]
We obtain the following theorem.
**Theorem 4.3**.: _Let \(X:\Sigma\to\mathbb{R}^{3}\) be non-tubular affine Weingarten. Then it is Lie minimal if and only if it is a surface of revolution._
Proof.: Since every surface of revolution is Lie minimal (Corollary 2.10), let us assume that \(X\) is Lie minimal and an affine Weingarten surface. From Lemma 2.9, we have
\[((x+y)k_{1}+z)k_{1,uv}-2xk_{1,u}k_{1,v} =0\] \[\text{and}\quad((x+y)k_{1}+z)k_{1,uv}-2yk_{1,u}k_{1,v} =0\] \[\therefore\quad(x-y)((x+y)k_{1}+z)k_{1,uv} =0.\]
There are three cases to consider: for \(x-y=0\), the surface is Lie minimal cmc and thus rotational.
If \(k_{1,uv}=0\), then \(k_{1,u}k_{1,v}=0\) by Lemma 2.9, since \(X\) is Lie minimal. Thus, \(X\) is a channel surface and, according to Proposition 2.7, a surface of revolution.
Finally, if \((x+y)k_{1}+z=0\), both principal curvatures are constant. Thus, \(X\) is tubular, which contradicts our assumptions.
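The elimination of \(k_{2}\) in this proof can be verified with a short SymPy computation (a sketch of ours, assuming \(x,y\neq 0\); the case \(y=0\) corresponds to tubular surfaces and is excluded above):

```python
import sympy as sp

u, v = sp.symbols('u v')
x, y, z = sp.symbols('x y z')
k1 = sp.Function('k1')(u, v)
k2 = -(x*k1 + z)/y                              # affine Weingarten relation x k1 + y k2 + z = 0

# Euler-Lagrange equations (2.8) with k2 eliminated
el1 = (k2 - k1)*k1.diff(u, v) + 2*k1.diff(u)*k1.diff(v)
el2 = (k1 - k2)*k2.diff(u, v) + 2*k2.diff(u)*k2.diff(v)

eqA = ((x + y)*k1 + z)*k1.diff(u, v) - 2*x*k1.diff(u)*k1.diff(v)   # first displayed equation
eqB = ((x + y)*k1 + z)*k1.diff(u, v) - 2*y*k1.diff(u)*k1.diff(v)   # second displayed equation

print(sp.simplify(-y*el1 - eqB))                # 0: the first equation of (2.8) gives eqB
print(sp.simplify(-(y**2/x)*el2 - eqA))         # 0: the second equation of (2.8) gives eqA

# combining both displayed equations yields the stated conclusion
print(sp.simplify(x*eqB - y*eqA - (x - y)*((x + y)*k1 + z)*k1.diff(u, v)))   # 0
```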
## 5. Future work
We have seen in Theorem 4.2 that linear Weingarten surfaces in \(\mathbb{R}^{3}\) parallel to cmc surfaces are Lie minimal if and only if they are rotational. This result, of course, extends to the other ambient space forms. Since linear Weingarten surfaces belong to the realm of Lie sphere geometry, in the sense that they are those \(\Omega\)-surfaces that have constant conserved quantities (see [7]), it seems a worthy goal to employ Lie sphere geometric methods to investigate Lie minimal linear Weingarten surfaces. This shall be the subject of a future project.
**Acknowledgements.** This paper is based on the fourth author's Master thesis, the goal of which was to investigate Lie minimal surfaces with additional curvature properties. The authors would like to express their gratitude to Professor Wayne Rossman for his support and continued interest in this project. Further, we thank Professor Udo Hertrich-Jeromin for fruitful discussions at Kobe University. This work was done while the third author was a JSPS International Research Fellow (Graduate School of Science, Kobe University) and has been supported by the JSPS Grant-in-Aid for JSPS Fellows 22F22701. |
2302.05697 | Dark Matter Searches with Top Quarks | Collider signatures with top quarks provide sensitive probes of dark matter
(DM) production at the Large Hadron Collider (LHC). In this article, we review
the results of DM searches in final states with top quarks conducted by the
ATLAS and CMS Collaborations at the LHC, including the most recent results on
the full LHC Run 2 dataset. We highlight the complementarity of DM searches in
final states with top quarks with searches in other final states in the
framework of various simplified models of DM. A re-interpretation of a DM
search with top quarks in the context of an effective-field theory description
of scalar dark energy is also discussed. Finally, we give an outlook on the
potential of DM searches with top quarks in LHC Run 3, at the high-luminosity
LHC, and possible future colliders. In this context, we highlight new benchmark
models that could be probed by existing and future searches as well as those
that predict still uncovered signatures of anomalous top-quark production and
decays at the LHC. | J. Katharina Behr, Alexander Grohsjean | 2023-02-11T13:53:16Z | http://arxiv.org/abs/2302.05697v1 | # Dark Matter Searches with Top Quarks
###### Abstract
Collider signatures with top quarks provide sensitive probes of dark matter (DM) production at the Large Hadron Collider (LHC). In this article, we review the results of DM searches in final states with top quarks conducted by the ATLAS and CMS Collaborations at the LHC, including the most recent results on the full LHC Run 2 dataset. We highlight the complementarity of DM searches in final states with top quarks with searches in other final states in the framework of various simplified DM models. A re-interpretation of a DM search with top quarks in the context of an effective-field theory description of scalar dark energy is also discussed. Finally, we give an outlook on the potential of DM searches with top quarks in LHC Run 3, at the high-luminosity LHC, and possible future colliders. In this context, we highlight new benchmark models that could be probed by existing and future searches as well as those that predict still uncovered signatures of anomalous top-quark production and decays at the LHC.
top quark; dark matter; WIMP; LHC
## 1 Introduction
The particle nature of dark matter (DM) is one of the major puzzles in modern particle physics, despite long-standing evidence for its existence. As early as 1884, Lord Kelvin realised that the mass of the Milky Way derived from the velocity dispersion of the stars orbiting its centre is very different from the mass of the visible stars. He considered the majority of stars in our galaxy to be dark bodies. 140 years later overwhelming astronomical and cosmological evidence has been accumulated for the existence of Dark Matter (DM) across different scales, ranging from the rotational velocity of stars in ultra-faint galaxies over gravitational lensing effects to precision measurements of the cosmic microwave background [1, 2, 3, 4, 5].
It is well established that 85% of the matter in our Universe consists of DM. The dominant part of DM must be stable with a lifetime much longer than the age of the Universe. The fact that DM was already produced in the early Universe may provide a clue to non-gravitational interactions. At the same time, the feature that
DM must form cosmological structures consistent with current observations allows setting a limit on the strength of DM interactions with SM particles and with itself. It is clear that none of the Standard Model particles is consistent with all of these observations.
One of the highly-motivated theory paradigms for DM is the so-called WIMP (weakly interacting massive particle) paradigm, associated with the WIMP miracle [6]. Assuming DM to be produced via the freeze-out mechanism, one can achieve the observed relic density when the DM mass is close to the electroweak scale and when the DM coupling to Standard Model particles is of the order of the weak interaction. Consequently, DM particles could be produced and studied at the Large Hadron Collider (LHC) [7].
Besides producing DM under controlled experimental conditions, the LHC would also provide access to the particles mediating the interactions between DM and the Standard Model. A DM mediator produced in proton-proton (\(pp\)) collisions could decay to DM particles. Such _invisible decays_ could only be inferred via the presence of missing transverse momentum, \(p_{\rm T}^{\rm miss}\), in the detector. However, a DM mediator decaying back into SM particles (_visible decays_) would provide direct access to its properties. DM searches at the LHC explore both avenues. To detect the invisible decays of a mediator, it is mandatory to produce the mediator in association with SM particles. In this review article, we will focus on the associated production with top quarks and more generally on the role of top quarks in the quest for DM. Best suited to study DM in top quark channels are the two general purpose detectors ATLAS [8] and CMS [9].
Discovered in 1995 at the Fermilab Tevatron collider [10, 11], the top quark is the heaviest of all known elementary particles. In the case of a DM mediator with Yukawa-like couplings, the top quark would be ideal for discovery. Moreover, the top quark would allow for a first characterization of the dark sector. Due to its short lifetime, the top quark fully transmits its spin information to the decay particles. In turn, this allows inferring the spin of the mediator for both the associated production of top quarks and DM as well as for the decay of a mediator to a top-quark pair.
Another major unknown in the physics of our universe, besides the particle nature of DM, is the origin of its accelerating expansion [12, 13], which is usually attributed to the presence of a yet unknown repulsive force, referred to as dark energy (DE). If DE is a scalar field, it may be possible to produce it at the LHC. Like DM, DE would escape the detector unnoticed. DM searches with top quarks could be sensitive to DE production, as shown in Sections 2.4 and 4.4 of this review.
This article is structured as follows. After a detailed discussion of the underlying DM models in Section 2, we will focus on the experimental signatures of DM searches involving top quarks at the LHC in Section 3. In Section 4, current highlights and results from DM searches at the LHC are summarised. We conclude with a discussion of uncovered signatures and models, followed by an outlook on prospects for discovering DM at future colliders in Sections 5 and 6.
## 2 Models with BSM signatures involving top quarks
Collider searches for DM are usually interpreted in the context of so-called _simplified models_, which contain a minimal set of new particles and couplings. Most of these models contain only a single Dirac DM particle and a single mediator particle. They are characterised by a minimal set of free parameters, namely the masses of the DM and mediator particles and the couplings of the mediator to the SM and dark sector. Simplified models provide a convenient framework to compare searches in different final states and among different experiments. In the following, the simplified models used for the interpretation of DM searches involving top quarks are described. Additionally, an effective-field theory (EFT) description of scalar DE is introduced.
### Vector and axial-vector mediators
#### 2.1.1 Flavour-conserving interaction
A mediator with flavour-universal couplings to the SM quarks and leptons, respectively, is predicted in a simplified model that describes a flavour-conserving interaction between a fermionic WIMP DM particle \(\chi\) and the SM fermions [14]. It is based on a simple extension of the SM by a new \(U(1)\) gauge symmetry under which \(\chi\) as well as some of the SM fermions are charged, thus allowing the mediator to couple to the SM sector. The interaction described by this gauge group is mediated by the \(s\)-channel exchange of a new, electrically neutral spin-1 particle \(Z^{\prime}\) with either vector or axial-vector couplings to the DM and SM fields. It will be referred to as _vector mediator_ or _axial-vector mediator_ in the following.
The model contains five free parameters [14]: the masses of the mediator, \(m_{Z^{\prime}}\), and the DM particle, \(m_{\chi}\), as well as the quark-flavour universal coupling \(g_{q}\) of the mediator to quarks, the lepton-flavour universal coupling \(g_{\ell}\) of the mediator to leptons, and the coupling \(g_{X}\) of the mediator to DM.
The mediator can decay either invisibly into a \(\chi\bar{\chi}\) pair or visibly into a fermion-anti-fermion \(f\bar{f}\) pair, as illustrated schematically by the left and right diagrams, respectively, in Figure 1. The former process can be detected as a \(p_{\rm T}^{\rm miss}+X\) signature in the presence of initial-state radiation (ISR), where \(X\) can be a gluon, photon, or vector boson, depending on the type of ISR, while the latter process results in a resonant enhancement in the invariant mass spectrum of the \(f\bar{f}\) pair.
Constraints on this model are derived in various parameter planes, including the \((m_{Z^{\prime}},m_{\chi})\) plane for fixed couplings \(g_{q}\), \(g_{\ell}\), \(g_{X}\)[15] and as upper limits on \(g_{q}\) as a function of \(m_{Z^{\prime}}\), as shown in Section 4.1.1.
#### 2.1.2 Flavour-changing interaction
DM signatures with top quarks are predicted in simplified models containing a vector mediator \(Z^{\prime}_{\rm VFC}\) with a flavour-changing coupling \(V_{ut}\) to the top and up quark. This type of model, referred to as _VFC model_ in the following, is motivated, for example, by scenarios with DM in a hidden sector that only interacts with the SM sector via a flavour-changing coupling of a \(Z^{\prime}\) boson [17, 16]. The dominant production and decay modes of the VFC model are shown in Figure 2. The mediator can be produced on-shell in association with a single top or anti-top (left diagram) and decay either invisibly into DM or visibly into a top and up quark. The former decay results in a \(p_{\rm T}^{\rm miss}\)+\(t\) signature, often referred to as _mono-top_. The latter decay yields a characteristic final state with two top quarks (\(tt\)) or two anti-top quarks (\(\bar{t}\bar{t}\)), referred to as same-sign \(tt\). This signature can be easily distinguished from the more abundant \(t\bar{t}\) production via SM processes by the sign of the lepton charges in fully leptonic decays.

Figure 1: Schematic representation of the dominant production and decay modes of the simplified model with an \(s\)-channel vector or axial-vector mediator \(Z^{\prime}\)[15].

Similar \(tt/\bar{t}\bar{t}\) final states arise from the other two diagrams in Figure 2, which represent the \(t\)-channel exchange of the \(Z^{\prime}_{\rm VFC}\) mediator.
The VFC model is fully characterised by four free parameters: the mass of the mediator, \(m_{Z^{\prime}_{\rm VFC}}\), the mass of the DM particle, \(m_{\chi}\), the coupling of the mediator to DM, \(g_{\chi}\), and the flavour-changing coupling, \(g_{ut}\) [18]. The DM mass has no significant impact on the collider phenomenology of the VFC model if \(2m_{\chi}<m_{Z^{\prime}_{\rm VFC}}\), and is fixed to a value of 1 GeV for existing collider searches [15]. Constraints on the VFC model are accordingly derived in several parameter planes involving the remaining free parameters (or dependent parameters): \(m_{Z^{\prime}_{\rm VFC}}\), \(g_{ut}\), and the invisible branching ratio \(\mathcal{BR}(\chi\bar{\chi})\) of the mediator.
### Scalar and pseudoscalar mediators
A preferred coupling of DM to top quarks is predicted in simplified models containing a spin-0 mediator with Yukawa-like couplings to SM fermions. The mediator can be either a scalar (\(\phi\)) or pseudoscalar (\(a\)). These models can be straightforwardly embedded in ultra-violet (UV) complete theories with extended Higgs sectors, such as Two-Higgs-Doublet Models (2HDMs, see also Section 2.3). Assuming Yukawa-like couplings allows this class of models to satisfy strong constraints from flavour precision measurements. The dynamics of flavour violation are completely determined by the structure of the ordinary fermion Yukawa couplings, which is referred to as _Minimal Flavour Violation (MFV)_[19].
The simplified models described in this section can be broadly categorised into models with a colour-neutral and a colour-charged interaction. An overview of the models falling into each category can be found in Ref. [15] and references therein. Two representative benchmark models used by the ATLAS and CMS Collaborations are presented in the following.
#### 2.2.1 Colour-neutral interaction
A colour-neutral interaction between a SM and a DM particle is described by a simplified model with a neutral, scalar or pseudoscalar mediator [20, 14] with Yukawa-like couplings to the SM fermions. The model has four free parameters: the mass of the DM particle, \(m_{\chi}\), the mass of the mediator, \(m_{\phi/a}\), the coupling of the mediator to DM, \(g_{\chi}\), and the coupling of the mediator to SM fermions. The latter is parameterised by a flavour-universal coupling constant \(g_{q}\equiv g_{u}=g_{d}=g_{\ell}\), which modifies the SM-like Yukawa coupling of the mediator to fermions [20], thus satisfying the requirements of MFV. It should be noted that couplings to leptons are explicitly included in the model but in practice the related signatures play no significant role in the parameter space accessible to collider searches [14]. Couplings to vector bosons \(W,Z\) are not included in this simplified model [20]. The Yukawa-like couplings imply that the mediator is mostly produced via loop-induced gluon fusion via a heavy-quark dominated loop or in association with heavy-flavour quarks, mostly top quarks. Additionally, visible decays of the mediator preferentially result in heavy quarks.

Figure 2: Schematic representation of the dominant production and decay modes of the VFC model [15].

The dominant production and decay modes of the mediator with heavy-flavour quarks in the final state are shown in Figure 3. These are (from left to right):
* visible decay of a mediator produced via gluon-fusion to heavy-flavour quarks, resulting in a resonant \(t\bar{t}\) or \(b\bar{b}\) signal;
* associated production of a mediator that decays either visibly or invisibly with heavy-flavour quarks, leading to a \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+t\bar{t}/b\bar{b}\) signature in the case of invisible mediator decay or characteristic fully visible \(t\bar{t}t\bar{t}\), \(t\bar{t}b\bar{b}\), \(b\bar{b}b\bar{b}\) signatures;
* associated production of an invisibly decaying mediator with a top quark and a light (\(d,u,s,c\)) quark, leading to a \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tj\) signature;
* associated production of an invisibly decaying mediator with a top quark and a \(W\) boson, resulting in a \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tW\) signature.
Additional signatures not shown here include \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+\)jet and \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+V/h\) production.
It should be noted that, while the Yukawa-like coupling structure implies a greater importance of signatures involving top quarks rather than bottom quarks in the final state, signatures involving bottom quarks are still relevant as some UV completions of this simplified model involve a parameter modifying the relative importance of the couplings to up- and down-type quarks. In these UV completions, signatures involving bottom quarks can be more sensitive than signatures involving top quarks if the couplings to up-type quarks are suppressed.
#### 2.2.2 Colour-charged interaction
A colour-charged interaction between the SM quarks and DM is described in a class of simplified models containing a scalar, colour-triplet mediator particle. This type of simplified models is inspired by the Minimal Supersymmetric Standard Model (MSSM) [21, 22] with first- and second-generation squarks and neutralino DM [15]. The mediator couplings to quarks and DM in the simplified models, however, can differ from those of the MSSM, leading to additional production diagrams.
Different models of colour-charged mediators, differing by the mediator couplings to quarks, have been probed at the LHC. These include a model with preferred couplings of the mediator to the first and second quark generation, a model with preferred mediator couplings to bottom quarks, and a model with preferred mediator couplings to top quarks. Only the latter will be discussed in this review. The concrete realisation of this model is documented in Ref. [17]. It contains a new \(\mathrm{SU}(2)_{\mathrm{L}}\) singlet field that couples to right-handed quarks. The mediator corresponding to this field is produced from a down-type quark-anti-quark pair and decays to a top quark and a DM particle, as illustrated in Figure 4. This model can be related to the MSSM if an additional R-parity violating interaction of the top squark with the down-type quarks is assumed [15].
Figure 3: Schematic representation of the dominant production and decay modes with heavy-flavour quarks in the final state in the simplified model with a scalar (\(\phi\)) or pseudoscalar (\(a\)) mediator [15].
The free parameters of this model are the mass of the DM particle, \(m_{\chi}\), the mass of the mediator, \(m_{\eta_{t}}\), the \(t\)-DM coupling strength of the mediator, \(\lambda_{t}\), and the coupling strength of the mediator to down-type quarks, \(g_{ds}\).
### Extended Higgs sectors
Extended Higgs sectors are predicted by a range of BSM theories, such as supersymmetry [23], certain classes of axion models [26], or theories predicting additional sources of CP violation in the Higgs sector to explain the observed baryon asymmetry in the universe [24, 25]. Extensions of the SM Higgs sector by a second complex SU(2) doublet, referred to as Two-Higgs-Doublet Models (2HDMs), are among the simplest and most studied models with an extended Higgs sector, historically due to their strong motivation from supersymmetry. In the past years, 2HDMs have also received considerable attention from the DM community as a means of embedding the simplified, mediator-based, models described in the previous sections in the context of a UV-complete and renormalisable framework with a broader collider phenomenology. Models of DM based on a 2HDM with a vector [29], pseudoscalar [31, 32], and scalar [30] mediator have been proposed. Concrete realisations of the former two have been used as benchmark models by the LHC experiments. Models with vector mediators are not discussed in this review as final states with top quarks do not play a dominant role in their phenomenology. Models with a pseudoscalar mediator, on the other hand, feature a rich phenomenology involving relevant signatures with top quarks due to the Yukawa-type coupling of the mediator to SM fermions. Pseudoscalar mediators are also particularly interesting to study at the LHC as they are not strongly constrained by direct-detection experiments, because the DM-nucleon scattering cross-section for pseudoscalar couplings is strongly suppressed at tree-level by the momentum transfer in the non-relativistic limit [33]. A concrete realisation of a 2HDM with a pseudoscalar mediator that is used as a benchmark model by the LHC experiments is described in Section 2.3.1.
#### 2.3.1 2HDM with a pseudoscalar mediator
A 2HDM with a pseudoscalar mediator \(a\)[31], referred to as 2HDM+\(a\) in the following, is a more complex simplified model that embeds the phenomenology of the simplified models with a colour-neutral pseudoscalar mediator (Section 2.2.1) in a more complete model with a second complex SU(2) doublet. The 2HDM in this model has a CP-conserving potential with a softly broken \(\mathbb{Z}_{2}\) symmetry [28]. Its Higgs sector contains five Higgs bosons: two scalars, \(h\) and \(H\), a pseudoscalar, \(A\), and two charged Higgs bosons \(H^{\pm}\). The alignment limit is assumed, meaning that one of the two scalars of the model is identified with the 125 GeV Higgs boson discovered in 2012. Furthermore, the Yukawa structure of the 2HDM is of type-II [27], meaning that couplings of the additional Higgs bosons to top quarks are preferred over those to other fermions at low values of the ratio of the two vacuum expectation values, \(\tan\beta\), one of the model parameters with the biggest impact on the collider phenomenology of the model. The pseudoscalar mediator \(a\) mixes with the pseudoscalar \(A\) of the 2HDM with mixing angle \(\theta\).

Figure 4: Schematic representation of \(p_{\rm T}^{\rm miss}\) +\(t\) production via a colour-changing scalar mediator \(\eta_{t}\)[15].
The phenomenology of the 2HDM+\(a\) is fully defined by 14 free parameters, making it considerably more complex than the simplified models described in the previous sections. These parameters are: the masses \(m_{h}\), \(m_{H}\) and \(m_{A}\) of the neutral Higgs bosons; the masses \(m_{H^{\pm}}\) of the charged Higgs bosons; the mass \(m_{a}\) of the mediator; the mass \(m_{X}\) of the DM particle; the coupling \(y_{X}\) between DM and the mediator; the three quartic couplings \(\lambda_{\rm P1}\), \(\lambda_{\rm P2}\), \(\lambda_{3}\) of the mediator to the SU(2) fields; the vacuum expectation value (VEV) \(v\) of the electroweak sector; the ratio \(\tan\beta=\frac{v_{2}}{v_{1}}\) of the VEVs of the two Higgs fields; the mixing angle \(\alpha\) between the two scalar Higgs bosons \(h\) and \(H\); and the mixing angle \(\theta\) between the pseudoscalar Higgs boson \(A\) and the mediator \(a\).
The choice of the alignment limit (\(\cos(\beta-\alpha)\)=0) implies \(m_{h}=125\) GeV and \(v=246\) GeV. The DM-mediator coupling is set to unity (\(y_{X}=1.0\)) without significant impact on the phenomenology of the model. The setting \(\lambda_{3}=3\) is chosen to ensure the stability of the Higgs potential in the mass ranges of interest for the heavy Higgs bosons [15]. Furthermore, the choice \(\lambda_{\rm P1}=\lambda_{\rm P2}=\lambda_{3}=3\) maximises the tri-linear couplings between the CP-even and CP-odd neutral states [15]. Finally, the choice \(m_{A}=m_{H}=m_{H^{\pm}}\) ensures compatibility of the model predictions with flavour constraints [31] and additionally simplifies the phenomenology of the model [15].
With these constraints, the remaining 2HDM+\(a\) parameter space can be described by the following five parameters: \(m_{A}\), \(m_{a}\), \(m_{X}\), \(\sin\theta\), and \(\tan\beta\). Representative benchmark scans of this parameter space have been defined by the LHC Dark Matter Working Group [34] with the aim to highlight different aspects of the phenomenology of this benchmark model and the interplay between searches targeting different signal processes across this parameter space. Additional benchmark scans have been defined in Ref. [35].
The 2HDM+\(a\) predicts a rich phenomenology with a diverse range of final states. The dominant processes leading to final states with top quarks are shown in Figure 5, along with the leading diagrams for the resonant production of an invisibly decaying mediator with a Higgs or \(Z\) boson, leading to \(p_{\rm T}^{\rm miss}\) +\(h\) and \(p_{\rm T}^{\rm miss}\) +\(Z\) final states, respectively, which are among the most sensitive probes of the 2HDM+\(a\). A full overview of the phenomenology of the 2HDM+\(a\) can be found in Refs. [31, 34].
Figure 5: Schematic representation of relevant production and decay modes with top quarks leading to either top quarks in the final state or \(p_{\rm T}^{\rm miss}\) +\(h/Z\) signatures. From left to right: resonant production of a neutral scalar or pseudoscalar particle \(H/A/a\) decaying to \(t\bar{t}\) or \(b\bar{b}\); associated production with \(b\bar{b}\) or \(t\bar{t}\) of a single \(H/A/a\) decaying either visibly to heavy flavour or invisibly to DM; associated production of a top quark and a charged Higgs boson decaying to a \(W\) boson and an invisibly decaying mediator \(a\); resonant \(A/H\) production with subsequent decay to a \(Z/h\) boson and an invisibly decaying mediator \(a\)[15].
### EFT model of scalar dark energy
Searches for DM signatures involving top quarks provide a powerful tool to probe models of scalar DE. The first re-interpretation of DM searches in the context of DE, which relied on the analysis of 36 fb\({}^{-1}\) of LHC Run 2 data [15], used an EFT implementation [37] of the Horndeski theories [36] to describe DE production at the LHC [15]. The latter introduce a new scalar field, \(\phi_{\rm DE}\), corresponding to DE, that couples to gravity.
The EFT model contains two classes of operators: operators that are invariant under a shift symmetry \(\phi_{\rm DE}\to\phi_{\rm DE}+\mathrm{constant}\) and operators that break this symmetry. The former contain only derivative couplings of the DE field to SM fermions, as direct Yukawa-type interactions break the shift symmetry. The latter induce direct couplings of the DE field to the SM fermions, such as Yukawa-type interactions, and are subject to tight experimental constraints [38].
Only shift-symmetric operators of the EFT model have been considered for the DE re-interpretation of LHC DM searches [15]. The model under consideration contains nine such operators, \(\mathcal{O}_{i}^{(d)}\), where \(d\) denotes the dimensionality of the operator. This leads to nine possible terms in the Lagrangian, each suppressed by powers of a characteristic energy scale \(M_{i}^{d-4}\), according to the operator's dimensionality:
\[\mathcal{L}=\mathcal{L}_{\rm SM}+\sum_{i=1}^{9}c_{i}\mathcal{L}_{i}=\mathcal{ L}_{\rm SM}+\sum_{i=1}^{9}\frac{c_{i}}{M_{i}^{d-4}}\mathcal{O}_{i}^{(d)},\]
where the \(c_{i}\) denote the Wilson coefficients.
Only the phenomenology of the two leading, i.e. least suppressed, terms has been considered by the LHC experiments so far. These are of dimension eight and can be expressed in terms of the conformal anomaly, \(T_{\nu}^{\nu}\) (\(=m\bar{\psi}\psi\) for a Dirac field), and the energy-momentum tensor of the SM Lagrangian \(T^{\mu\nu}\) as follows:
\[\mathcal{L}_{1} = \frac{\partial_{\mu}\phi_{\rm DE}\partial^{\mu}\phi_{\rm DE}}{M_{1}^{4}}T_{\nu}^{\nu}\] \[\mathcal{L}_{2} = \frac{\partial_{\mu}\phi_{\rm DE}\partial_{\nu}\phi_{\rm DE}}{M_{2}^{4}}T^{\mu\nu}.\]
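As a quick consistency check of the power counting (our own bookkeeping, using the standard mass-dimension assignments), \([\phi_{\rm DE}]=1\) and \([T^{\mu\nu}]=[T_{\nu}^{\nu}]=4\) give

\[[\partial_{\mu}\phi_{\rm DE}\,\partial^{\mu}\phi_{\rm DE}\,T_{\nu}^{\nu}]=[\partial_{\mu}\phi_{\rm DE}\,\partial_{\nu}\phi_{\rm DE}\,T^{\mu\nu}]=2+2+4=8,\]

so each term is indeed of dimension eight and suppressed by \(M_{i}^{d-4}=M_{i}^{4}\), matching the factors in the expressions above.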
The coupling described by the first term, \(\mathcal{L}_{1}\), is proportional to the mass of the SM fermions to which the DE field couples, thus making collider signatures involving top quarks a sensitive probe of DE. A schematic representation of DE production at the LHC via this operator is shown in Figure 6. It describes the radiation of a pair of DE particles off a final-state top quark from SM \(t\bar{t}\) production, leading to a \(p_{\rm T}^{\rm miss}\) +\(t\bar{t}\) signature.
Figure 6: Schematic representation of the leading process of DE production in association with a \(t\bar{t}\) pair in an EFT model of scalar DE via the operator \(\mathcal{L}_{1}\)[15].

The second operator, \(\mathcal{L}_{2}\), involves derivatives of the SM fields and is therefore proportional to their momenta. Final states involving high-momentum intermediate states, of which a DE pair is radiated off, provide the best sensitivity to this operator. At a hadron collider like the LHC, the most likely high-momentum intermediate state particles are hadronically interacting particles, such as gluons, leading to characteristic \(p_{\rm T}^{\rm miss}\) +jet signatures as the smoking-gun signatures for DE production.
Constraints on the EFT model of DE have been derived using searches for both \(p_{\rm T}^{\rm miss}\) +\(t\bar{t}\) (\(\mathcal{L}_{1}\) term) and \(p_{\rm T}^{\rm miss}\) +jet signatures [15] (\(\mathcal{L}_{2}\) term). Only the former are discussed in this review. It should be noted that additional signatures, such as \(p_{\rm T}^{\rm miss}\) +\(t\) production, are predicted based on the sub-leading operators. The exploration of these additional signatures and possible re-interpretations of further DM searches in the context of DE is left to future work.
## 3 Experimental signatures
Searches for DM in \(pp\) collisions involving single or multiple top quarks can be broadly split into two categories: searches for large \(p_{\rm T}^{\rm miss}\) and searches for a DM mediator decaying into SM particles. The two classes rely on different analysis techniques. Common to all searches is a detailed exploration of the top quark decay. Due to the almost diagonal structure of the CKM matrix and in particular \(V_{tb}\) being close to one, the top quark decays almost 100% of the time into a bottom quark and a \(W\) boson. The \(W\) boson itself decays with about 30% probability into a charged lepton, i.e. an electron, muon, or tau, and the corresponding neutrino, or into two quarks otherwise. Similar to DM particles, neutrinos can only be inferred from missing transverse momentum in the detector. Events with two top quarks or with a single top quark and a \(W\) boson are typically categorised in three orthogonal channels based on the lepton (\(\ell=e,\mu\), including decays via \(\tau\) leptons, i.e. \(\tau\to\mathrm{e},\tau\to\mu\)) multiplicity in the final state. 0-lepton (\(0\ell\)) final states arise in events in which both \(W\) bosons decay hadronically; 1-lepton (\(1\ell\)) final states arise in events in which one \(W\) boson decays hadronically, the other leptonically; 2-lepton (\(2\ell\)) final states arise if both \(W\) bosons decay leptonically.
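The rough size of these three channels follows from simple combinatorics. The short Python sketch below is only an illustration using the inclusive leptonic branching fraction of roughly one third quoted above; the \(e/\mu\)-based lepton definitions used in the actual analyses give smaller leptonic fractions:

```python
# rough bookkeeping (assumed round numbers, for illustration only)
br_lep = 0.33                     # probability for one W boson to decay leptonically
br_had = 1.0 - br_lep             # probability for one W boson to decay hadronically

fractions = {
    "0-lepton": br_had**2,                 # both W bosons decay hadronically
    "1-lepton": 2 * br_lep * br_had,       # exactly one leptonic decay (factor 2: either W)
    "2-lepton": br_lep**2,                 # both W bosons decay leptonically
}
for channel, frac in fractions.items():
    print(f"{channel}: ~{frac:.0%}")       # roughly 45%, 44%, 11%
```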
When top quarks recoil against significant \(p_{\rm T}^{\rm miss}\) or result from the decay of a very heavy resonance, they are highly Lorentz boosted and their decay products become highly collimated. In the case of hadronic top-quark decays, this means that the particle showers from the three final-state quarks can no longer be reconstructed as three separate small-radius (small-\(R\)) jets (_resolved decay_) but merge into a single large-radius (large-\(R\)) jet with characteristic substructure (_merged decay_). Merged top-quark decays are identified using dedicated _top tagging_ algorithms.
### Final states with invisible decays
#### 3.1.1 \(p_{\rm T}^{\rm miss}\) +\(t\)
Searches for the production of large \(p_{\rm T}^{\rm miss}\) in association with a single top quark have been conducted by both the ATLAS [39] and CMS [40] Collaborations.
The ATLAS Collaboration has performed a \(p_{\rm T}^{\rm miss}\) +\(t\) search targeting merged hadronic top-quark decays using 139 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV \(pp\) collision data [39]. Events are required to have \(p_{\rm T}^{\rm miss}>250\) GeV and contain at least one large-\(R\) (anti-\(k_{t}\) [44], \(R=1.0\)) jet with transverse momentum \(350<p_{T}<2500\) GeV and mass \(40<m<600\) GeV. Additionally, the selected jet must be identified as a top-quark candidate via a dedicated top-tagging algorithm [41], which relies on a deep neural network (DNN) that uses jet kinematics and substructure variables as input [41, 42]. The working point for the top tagging algorithm chosen for this analysis corresponds to a 50% top tagging efficiency.
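As an illustration only, the following Python sketch mimics the baseline requirements quoted above on a toy event record; the event structure and field names are hypothetical and do not correspond to the actual ATLAS analysis software:

```python
def passes_baseline(event):
    """Toy p_T^miss + t (hadronic) baseline selection, thresholds taken from the text (GeV)."""
    if event["met"] <= 250.0:                       # p_T^miss > 250 GeV
        return False
    for jet in event["large_R_jets"]:               # anti-kt R = 1.0 jets
        if (350.0 < jet["pt"] < 2500.0
                and 40.0 < jet["mass"] < 600.0
                and jet["top_tagged"]):             # DNN top tagger, 50% efficiency working point
            return True                             # at least one top-quark candidate
    return False

# example usage with a mock event
event = {"met": 410.0,
         "large_R_jets": [{"pt": 520.0, "mass": 173.0, "top_tagged": True}]}
print(passes_baseline(event))   # True
```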
Dedicated signal regions targeting resonant DM production via a colour-charged scalar mediator (Section 2.2.2) and non-resonant DM production via a vector mediator with a \(V_{ut}\) coupling (Section 2.1.2) are defined based on the output score of XGBoost classifiers [43] that are trained on several event observables. Control regions are defined to constrain the dominant backgrounds from \(t\bar{t}\) and \(V\)+jets production.
A similar search has been performed by the CMS Collaboration [40]. Different from the ATLAS analysis, the result is based on data recorded in 2016 only, corresponding to an integrated luminosity of 36 fb\({}^{-1}\). To identify the hadronically decaying top quark, CA15 jets are used. CA15 jets are clustered from particle flow candidates using the Cambridge-Aachen algorithm [44] with a distance parameter of 1.5. The CA15 jets must have a transverse momentum \(p_{T}>250\) GeV, \(|\eta|<2.4\) and an invariant mass of 110 GeV \(<\) m \(<\) 210 GeV. Furthermore, several substructure observables, like the N-subjettiness [45] or so-called energy-correlation functions [46, 47], are combined in a boosted decision tree (BDT) [48] to distinguish top quark jets from the hadronisation products of single light quarks or gluons. At 50% signal efficiency, the BDT background acceptance is 4.7%. The dominant backgrounds from \(t\bar{t}\) and single vector bosons (\(Z\), \(W\), \(\gamma\)) are constrained using dedicated control regions. The signal is probed in distributions of the missing transverse momentum \(p_{\mathrm{T}}^{\mathrm{miss}}\), considering two signal regions corresponding to a BDT output between 0.1 and 0.45 and above 0.45, respectively.
The summary plots for the benchmark model with a colour-charged scalar mediator in Section 4.2.2, which show the interplay between the \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+t\) and same-sign \(tt\) (Section 3.2.1) searches, are based on an earlier search of the ATLAS Collaboration using 36 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV \(pp\) collision data [49]. This analysis statistically combines the results from two orthogonal channels, targeting semi-leptonic and hadronic top-quark decays, respectively.
#### 3.1.2 \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tW\) and \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tj\)
Like the \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+t\) searches described in Section 3.1.1, searches for \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tW\) target events with single top quarks produced in association with large \(p_{\mathrm{T}}^{\mathrm{miss}}\) but additionally require the existence of a second visible object. This can be either a \(W\) boson or a hadronic jet. The resulting signatures are referred to as \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tW\) and \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tj\), respectively. It should be noted that searches in these final states are not orthogonal to the \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+t\) searches discussed in Section 3.1.1 as the latter do not veto the presence of additional visible objects in the event and hence implicitly include \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tj\) and \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tW\) signatures.
While \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+t\) searches are traditionally used to constrain resonant DM production via a colour-charged scalar mediator and non-resonant DM production via a vector mediator with a flavour-violating \(V_{ut}\) coupling, as explained in Section 3.1.1, \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tW\) searches in particular are used to probe the 2HDM\(+a\) (Section 2.3.1) and, more recently, also simplified models with a scalar or pseudoscalar mediator (Section 2.2.1).
Simplified models with a scalar or pseudoscalar mediator predict both \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tW\) and \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tj\) production, as illustrated by the two right-most diagrams in Figure 3. The corresponding signal cross-sections are, up to mediator masses of 200 GeV, smaller than those of the dominant \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+t\bar{t}\) production mode discussed in Section 3.1.3. Therefore, \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tW\) and \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tj\) searches have not been used by the ATLAS Collaboration to constrain these simplified models. However, with the increased sensitivity of recent searches, single-top associated production becomes increasingly relevant, and a first search including \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tW\) and \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+t\bar{t}\) signatures has been performed by the CMS Collaboration [50], as further discussed in Section 3.1.4.
\(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tW\) and \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tj\) production is also predicted in the 2HDM\(+a\). Compared to simplified models with a single (pseudo)scalar mediator, this model contains additional production modes, illustrated for example by the third diagram in Figure 5, which lead to higher predicted signal cross-sections for \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tW\) and \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tj\) production. A search for \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tW\) and \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tj\) signatures, optimised specifically for 2HDM\(+a\) signal processes, has been conducted by the ATLAS Collaboration [51] using 139 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV \(pp\) collision data. The search considers events with one or two leptons (\(e\),\(\mu\)), at least one \(b\)-tagged jet, and significant \(p_{\mathrm{T}}^{\mathrm{miss}}\) in three orthogonal categories. Two of them target \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tW\) production in final states with one or two leptons, while the third channel targets \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+tj\) production in final states with exactly one lepton. The search has been extended in the context of a preliminary analysis of the same dataset [52] to include
events with highly energetic \(W\) boson decays in final states with zero leptons or one lepton. These provide additional sensitivity for large masses of the charged Higgs bosons. The newly added zero- and improved one-lepton channels are statistically combined with the two-lepton channel of Ref. [51].
#### 3.1.3 \(p_{\mathrm{T}}^{\mathrm{miss}}\) +\(t\bar{t}\)
Searches for DM or DE production in association with a \(t\bar{t}\) pair target final states characterised by sizeable \(p_{\mathrm{T}}^{\mathrm{miss}}\) and the presence of the \(t\bar{t}\) decay products.
The CMS Collaboration has released a search for DM in association with a \(t\bar{t}\) pair using 137 \(\mathrm{fb}^{-1}\) of data recorded at \(\sqrt{s}\) = 13 TeV between 2016 and 2018 [53]. The analysis combines previous searches in final states with 0 [54], 1 [55] or 2 [56] leptons. While the primary target of the analyses is top squark production, a re-interpretation of the combined result in a simplified DM model with scalar mediators is provided.
A central feature of the analysis in the 0-lepton channel is an advanced jet-tagging algorithm identifying hadronically decaying top quarks and \(W\) bosons with low and high Lorentz boost. For the highly Lorentz-boosted regime, the DeepAK8 algorithm [57] is used, whereas in the resolved regime the DeepResolved algorithm [55] is employed to tag top quarks in the intermediate transverse-momentum range from 150 to 450 GeV. The analysis includes a total of 183 non-overlapping signal regions. The contribution of each SM background process is estimated through measurements of event rates in dedicated background control samples that are translated into predicted event counts in the corresponding signal region with the aid of MC simulation.
The key requirements in the 1-lepton channel are exactly one lepton and \(p_{\mathrm{T}}^{\mathrm{miss}}>250\) GeV. Moreover, the transverse mass computed from the lepton and the missing transverse momentum is required to be larger than 150 GeV to reduce the dominant background from SM \(t\bar{t}\) and \(W\)+jets production, for which the transverse mass has a natural cutoff at the mass of the \(W\) boson. The SM production of dileptonic \(t\bar{t}\) events, where one of the leptons is lost, is the largest remaining background. It is estimated through a set of dedicated control regions and reduced by using the modified topness variable [55]. The 1-lepton channel also exploits the jet-tagging algorithms used in the 0-lepton channel to identify hadronic top-quark decays. In order to enhance the sensitivity to different signal scenarios, including the case of small missing transverse momentum, events are categorised into a total of 39 non-overlapping signal regions.
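As an illustration of the transverse-mass requirement, a generic sketch (not the analysis implementation; the example kinematics are invented) reads:

```python
import math

def transverse_mass(lep_pt, lep_phi, met, met_phi):
    """m_T = sqrt(2 * pT(lep) * pT^miss * (1 - cos(dphi))), in GeV.

    For semileptonic ttbar and W+jets events the W -> l nu decay gives
    a kinematic endpoint near the W-boson mass, so requiring
    m_T > 150 GeV strongly suppresses these backgrounds while keeping
    signal events with genuine additional pT^miss from DM.
    """
    dphi = abs(lep_phi - met_phi)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.sqrt(2.0 * lep_pt * met * (1.0 - math.cos(dphi)))

# A typical W -> l nu configuration stays below ~80 GeV ...
print(transverse_mass(40.0, 0.3, 45.0, 2.2))   # ~ 69 GeV
# ... and would be rejected by the m_T > 150 GeV requirement.
```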
The search in the 2-lepton channel explores orthogonal signal regions based on the flavour of the leptons and three characteristic observables: the so-called missing transverse momentum significance [58] and two specific definitions of the stransverse mass [56, 59]. The \(p_{\mathrm{T}}^{\mathrm{miss}}\) significance is given by the ratio of the \(p_{\mathrm{T}}^{\mathrm{miss}}\) to its resolution and is particularly powerful in suppressing events where detector effects and misreconstruction of particles from pileup interactions are the main source of missing transverse momentum. The key feature of the stransverse mass using leptons (lepton and \(b\)-quark jets) is that it retains a kinematic endpoint at the \(W\)-boson (top-quark) mass for SM background events from the leptonic decays of two \(W\) bosons (top quarks). The dominant backgrounds arise from \(t\bar{t}\) and \(t\bar{t}+Z\) production as well as single-top-quark production in the \(Wt\) channel. After a veto of the \(Z\)-boson mass window, i.e. requiring \(|m_{\ell\ell}-m_{Z}|>15\) GeV, Drell-Yan production represents only a minor source of background.
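Two of the ingredients of the 2-lepton selection can be sketched as follows. This is a toy version only: the significance threshold is an invented placeholder, and the actual analysis uses the object-based significance of Ref. [58] and the stransverse-mass variables rather than this simplified ratio.

```python
M_Z = 91.19  # GeV

def passes_dilepton_requirements(m_ll, same_flavour, met, met_resolution,
                                 min_met_significance=5.0):
    """Toy version of the Z-boson veto and the pT^miss-significance cut.

    - Z veto: reject same-flavour pairs with |m_ll - m_Z| < 15 GeV,
      suppressing Drell-Yan production.
    - pT^miss significance: approximated here as pT^miss divided by its
      resolution; events dominated by mismeasured or pileup-induced
      pT^miss tend to have low values of this ratio.
    """
    if same_flavour and abs(m_ll - M_Z) < 15.0:
        return False
    return met / met_resolution > min_met_significance
```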
A similar search using 139 \(\mathrm{fb}^{-1}\) of LHC data has been released by the ATLAS Collaboration, exploring separately the 0-lepton [60], 1-lepton [61], and 2-lepton [62] channels. All three final states have subsequently been combined into a single result [63]. In this context, the 0-lepton channel search has been further optimised through an improved selection of triggers targeting \(b\)-jets. Searches for \(p_{\mathrm{T}}^{\mathrm{miss}}\) +\(tW\) (Section 3.1.2) production have not been included in this combination as their datasets are not orthogonal by construction to those of the \(p_{\mathrm{T}}^{\mathrm{miss}}\) +\(t\bar{t}\) searches. Including them in a statistical combination is left to future publications. While the \(p_{\mathrm{T}}^{\mathrm{miss}}\) +\(t\bar{t}\) searches discussed above have so far been interpreted only in simplified models with a scalar or pseudoscalar mediator (see Section 4.2.1), earlier searches, based on smaller datasets, have already been used to constrain a 2HDM with a pseudoscalar mediator (Section 4.3.1) and a model of scalar DE (Section 2.4).
#### 3.1.4 \(p_{\rm T}^{\rm miss}\) +\(tW\), \(p_{\rm T}^{\rm miss}\) +\(tj\) and \(p_{\rm T}^{\rm miss}\) +\(t\bar{t}\)
A first result exploring topologies of single-top-quark and top-quark-pair associated production has been released by the CMS Collaboration [64]. The analysis uses 36 fb\({}^{-1}\) of data recorded in 2016 at 13 TeV and combines multiple selection categories in final states with 0 or 1 lepton. In the 1-lepton channel, the dominant background is suppressed using a similar strategy as the one discussed in Section 3.1.3, while in the 0-lepton channel, the dominant background is reduced by a cut on the missing transverse momentum, the ratio of the leading-jet transverse momentum over the total hadronic transverse energy in the event, and the minimum opening angle between the missing transverse momentum and the two leading jets. To enhance the sensitivity to single-top-quark associated production, events are separated according to the number of identified \(b\)-quark jets. Events with a single \(b\)-tagged jet are further split into events with a central or forward jet. The categorisation in terms of forward jets allows a further enhancement of single-top-quark (\(t/\bar{t}\)+DM) \(t\)-channel events. This production mode leads to final states with one top quark and an additional jet, which tends to be in the forward region of the detector, while the additionally produced \(b\) quark is typically low in transverse momentum and therefore not reconstructed. The key observable of this search is the \(p_{\rm T}^{\rm miss}\) spectrum, explored in a combined fit to different orthogonal signal regions. Overall, data are found to be in good agreement with the expected SM background. Due to the combination of single-top-quark and \(t\bar{t}\) associated production, this analysis was able to derive the most stringent limits from LHC data on spin-0 mediators at that time.
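The categorisation described above can be summarised schematically as follows. This is a toy sketch; the forward-jet boundary of \(|\eta|>2.4\) is an assumption for illustration and need not match the value used in the analysis.

```python
def dm_top_category(n_btags, jet_etas):
    """Assign an event to a b-tag/forward-jet category.

    Events are split by the number of b-tagged jets; single-b events
    are further divided according to whether an additional forward jet
    is present, which enhances t-channel single-top + DM production.
    """
    if n_btags >= 2:
        return "2b"
    if n_btags == 1:
        if any(abs(eta) > 2.4 for eta in jet_etas):
            return "1b_forward"
        return "1b_central"
    return "0b"

print(dm_top_category(1, [0.4, 3.1]))  # -> "1b_forward"
```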
### Final states without invisible decays
#### 3.2.1 Same-sign \(t\bar{t}\)
Events with a same-sign \(t\bar{t}\) pair are identified via the leptonic decays of the \(W\) bosons from the two top quarks. They are required to contain two same-sign charged leptons, at least one \(b\)-jet, and significant \(p_{\rm T}^{\rm miss}\) from the two neutrinos resulting from the leptonic \(W\) boson decays.
A search in same-sign \(t\bar{t}\) events has been conducted by the ATLAS Collaboration, using 36 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV data [18]. The signal region of this search is defined by requiring the presence of two positively charged leptons (\(e\),\(\mu\)) and at least one \(b\)-jet. Additionally, the scalar sum of the transverse momenta of all selected objects in the event, \(H_{\rm T}\), is required to be large (\(H_{\rm T}>750\) GeV). Further requirements on the \(p_{\rm T}^{\rm miss}\) and the angular separation of the two leptons are imposed. The signal region is split into three orthogonal channels based on the lepton flavour (\(ee\), \(e\mu\), \(\mu\mu\)). The main backgrounds of this search are estimated using MC simulation, while the sub-dominant background from fake leptons is estimated using data-driven techniques.
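A minimal sketch of the signal-region logic reads as follows; the \(p_{\rm T}^{\rm miss}\) and angular-separation thresholds are placeholders and do not reproduce the published cut values.

```python
def same_sign_tt_signal_region(leptons, n_bjets, ht, met, dphi_ll):
    """Toy same-sign tt selection and lepton-flavour channel assignment.

    Requires exactly two positively charged light leptons, at least
    one b-jet and HT > 750 GeV; met > 100 GeV and |dphi_ll| > 2.5 stand
    in for the additional pT^miss and angular requirements.
    """
    pos = [l for l in leptons
           if l["flavour"] in ("e", "mu") and l["charge"] > 0]
    if len(pos) != 2 or n_bjets < 1 or ht <= 750.0:
        return None
    if met <= 100.0 or abs(dphi_ll) <= 2.5:
        return None
    return "".join(sorted(l["flavour"] for l in pos))  # "ee", "emu", "mumu"

leptons = [{"flavour": "e", "charge": +1}, {"flavour": "mu", "charge": +1}]
print(same_sign_tt_signal_region(leptons, 1, 820.0, 140.0, 2.9))  # -> "emu"
```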
#### 3.2.2 \(t\bar{t}\)
A search for resonant \(t\bar{t}\) production in the \(0\ell\) channel has been conducted by the ATLAS Collaboration using 139 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV data [65]. This search targets heavy vector and axial-vector resonances (including DM mediators) with masses \(>1.4\) TeV, resulting in two merged top-quark decays. Merged top-quark decays are identified using a deep-neural-network (DNN) based top tagger trained on the distributions of various characteristic jet and jet-substructure variables to distinguish top-quark jets from light-quark- and gluon-initiated jets. SM \(t\bar{t}\) production constitutes the main, irreducible background to this search, followed by strong multi-jet production. The background spectrum is derived from data by fitting a smoothly falling
function to the reconstructed \(m_{t\bar{t}}\) distribution, similar to the approach classically chosen in di-jet resonance searches.
A larger range of resonance masses has been probed by a search for resonant \(t\bar{t}\) production in the \(1\ell\) channel, conducted by the ATLAS Collaboration on 36 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV data [66]. This search targets both _merged_ and _resolved_ hadronic top-quark decays and is sensitive to resonance masses just above the \(t\bar{t}\) kinematic threshold (\(>2m_{\rm top}\)). The main, irreducible background from SM \(t\bar{t}\) production, as well as most other, smaller backgrounds, are estimated using MC simulation. Data-driven corrections are applied to the MC simulation of the \(W\)+jets background. The small background from strong multi-jet production is estimated with a fully data-driven approach.
A first search for heavy spin-1 resonances combining final states with 0, 1 and 2 leptons has been performed by the CMS Collaboration using data recorded at \(\sqrt{s}=\)13 TeV and corresponding to a total integrated luminosity of 35.9 fb\({}^{-1}\)[67]. The analysis utilises reconstruction techniques that are optimised for top quarks with high Lorentz boosts, which requires the use of non-isolated leptons partially overlapping with \(b\)-quark jets and jet substructure techniques for top-quark tagging. Except for the QCD multijet background in the 0-lepton channel, the shapes of all backgrounds are estimated from MC simulation. The signal strength is extracted from the distributions of the reconstructed invariant mass of the top quark pair for the 0- and 1-lepton channels and from the sum of missing transverse energy and the transverse momenta of all jets and leptons in the 2-lepton channel.
Interference effects between the resonant signal and background processes are not taken into account in the searches discussed above as they are negligible for spin-1 and spin-2 particles. However, this is not true for scalar and pseudoscalar resonances, such as additional heavy Higgs bosons, which are produced from \(gg\) initial states via heavy quark loops. The process \(gg\to A/H\to t\bar{t}\) interferes strongly with the irreducible background from SM \(t\bar{t}\) production, which is dominated by \(gg\to t\bar{t}\). Interference effects significantly distort the resonance lineshape from a Breit-Wigner peak to a characteristic peak-dip or even more complicated structures. The treatment of these effects is non-trivial and requires dedicated analysis methods, in particular in the statistical analysis. Searches for heavy scalars and pseudoscalars have been conducted by both the ATLAS [68] and CMS Collaborations [69] in the \(1\ell\) and \(1\ell+2\ell\) channels, respectively. These searches are sensitive to the production of scalar and pseudoscalar DM mediators. However, due to the strong model-dependence of the interference patterns, no dedicated interpretation of these results in the context of DM models exists to date. An approximate re-interpretation of the results in Ref. [68] in the context of the 2HDM+\(a\) (Section 2.3.1) can be found in Ref. [31].
#### 3.2.3 \(t\bar{t}t\bar{t}\)
Final states with four top quarks (\(t\bar{t}t\bar{t}\)) can arise from non-resonant processes predicted in the SM, but are also predicted in BSM models in which a heavy resonance is produced in association with a \(t\bar{t}\) pair and subsequently decays to \(t\bar{t}\). Four-top final states are particularly relevant in searches for heavy scalars and pseudoscalars, as the signal-background interference is negligible for associated production with \(t\bar{t}\) compared to loop-induced production from \(gg\) initial states (Section 3.2.2). It should be noted, though, that the production cross-section for associated production is significantly lower than for loop-induced production.
Four-top final states are characterised by a high object multiplicity. Orthogonal signal regions can be defined based on the multiplicity of leptons (\(e,\mu\)) in the final state, which corresponds to the number of top quarks with a leptonically decaying \(W\) boson.
The ATLAS Collaboration has recently found evidence (4.3 \(\sigma\) observed, 2.4 \(\sigma\) expected significance) for four-top quark production in a search focusing on the multi-lepton final state conducted on 139 fb\({}^{-1}\) of
\(\sqrt{s}=13\) TeV \(pp\) collision data [70]. The result is consistent with the SM prediction for four-top production within 1.7\(\sigma\). A subsequent dedicated search for BSM four-top production on the same dataset specifically targets \(t\bar{t}\)-associated production of heavy scalar or pseudoscalar Higgs bosons \(A/H\) decaying to \(t\bar{t}\) (\(t\bar{t}A/H\to t\bar{t}t\bar{t}\)) [71]. It is based on and extends the analysis strategy of Ref. [70] to increase the sensitivity to \(A/H\) production. In both the SM and BSM searches, events are required to contain either a same-sign lepton pair or at least three leptons. A multivariate discriminant based on a boosted decision tree (BDT) is used to separate SM four-top production from other background processes, using event-level information such as jet and \(b\)-jet multiplicity as well as additional kinematic variables. The BSM search relies on a second BDT to subsequently distinguish between BSM and SM four-top production. This second BDT is parameterised as a function of the mass of the heavy Higgs boson by introducing the mass as a labelled input in the training [72]. The main, irreducible backgrounds arise from associated production of a \(t\bar{t}\) pair with a boson and additional jets (\(t\bar{t}\)\(+\)\(W\)+jets, \(t\bar{t}\)\(+\)\(Z\)+jets, \(t\bar{t}\)\(+\)\(h\)+jets). They are estimated using MC simulations, with additional data-driven corrections applied in the case of \(t\bar{t}\)\(+\)\(W\)+jets production. Smaller, reducible backgrounds arise mostly from \(t\bar{t}\)\(+\)jets and \(tW\)+jets production with mis-identified charge or fake/non-prompt leptons. These smaller backgrounds are estimated from data using dedicated control regions. No significant excess of events over the SM prediction is observed in the BSM four-top search and the results are interpreted in the context of a type-II 2HDM. No dedicated interpretation in the context of DM models has been performed. The constraints on the type-II 2HDM with \(m_{A}=m_{H}\), however, indicate that this search can improve upon the current four-top constraints on the 2HDM+\(a\) parameter space included in the latest 2HDM+\(a\) summary plots of Ref. [81] (Section 4.3.1), which are based on a search in the single-lepton channel using 36 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV data [73].
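The mass-parameterised classifier of Ref. [72] can be illustrated with a generic sketch, using scikit-learn as a stand-in for the BDT implementation actually used; all array and function names here are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def train_parameterised_bdt(X_sig, m_sig, X_bkg, seed=0):
    """Train a classifier with the heavy-Higgs mass as a labelled input.

    Signal events carry their generated A/H mass as an extra feature;
    background events are assigned masses drawn randomly from the
    signal spectrum so that the mass column alone is not discriminating.
    At evaluation time the same classifier is applied for any mass
    hypothesis by setting the mass column to the tested value.
    """
    rng = np.random.default_rng(seed)
    m_bkg = rng.choice(m_sig, size=len(X_bkg))
    X = np.vstack([np.column_stack([X_sig, m_sig]),
                   np.column_stack([X_bkg, m_bkg])])
    y = np.concatenate([np.ones(len(X_sig)), np.zeros(len(X_bkg))])
    return GradientBoostingClassifier().fit(X, y)
```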
The CMS Collaboration has reported an observed (expected) significance for \(t\bar{t}\)\(t\bar{t}\) of 2.6 \(\sigma\) (2.7 \(\sigma\)) in the multi-lepton channel using 137 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV \(pp\) collision data [75]. The search relies on a new multivariate classifier to maximize the sensitivity to the SM \(t\bar{t}\)\(t\bar{t}\) signal. As in the equivalent ATLAS search, the main backgrounds from \(t\bar{t}\)\(+\)boson+jets production are estimated using MC simulations. Data-driven corrections are applied in the cases of \(t\bar{t}\)\(+\)\(W\)+jets and \(t\bar{t}\)\(+\)\(Z\)+jets production. Backgrounds arising from charge mis-identification or fake/non-prompt leptons are estimated from data. This result has been used to constrain scalar and pseudoscalar production in 2HDMs as well as in the simplified DM model with a scalar or pseudoscalar mediator (Section 2.2.1). No dedicated interpretation for the 2HDM+\(a\) is available, although the constraints on type-II 2HDMs suggest that the search will also constrain the 2HDM+\(a\) parameter space.
The searches described above have been optimised for non-resonant \(t\bar{t}t\bar{t}\) production and/or production of heavy scalar or pseudoscalar resonances, including resonance masses below 1 TeV. An additional search targeting top-philic vector and axial-vector (\(Z^{\prime}\)) resonances with masses \(>1\) TeV has been conducted by the ATLAS Collaboration. The preliminary result relies on 139 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV data [79]. Unlike other searches in the \(t\bar{t}t\bar{t}\) final state, this search is designed to reconstruct the BSM resonance explicitly from a pair of re-clustered jets identified as merged top quarks. The results can in principle be used to constrain purely top-philic vector or axial-vector mediators to which classic \(t\bar{t}\) resonance searches, which assume \(Z^{\prime}\) production from light-quark or gluon initial states (Section 3.2.2), may not be sensitive. A dedicated interpretation of this search in the context of DM models is left to future work.
#### 3.2.4 \(tbH^{\pm}(tb)\)
Final states with two top and two bottom quarks are sensitive to the associated production of a charged Higgs boson \(H^{\pm}\) with a top and a bottom quark (\(tb\)) and its subsequent decay to \(tb\).
The ATLAS Collaboration has published a search for \(tbH^{\pm}(tb)\) production using 139 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV data [76]. It targets charged Higgs boson masses in the range 0.2-2.0 TeV. Events are required to contain exactly one electron or muon to suppress the large backgrounds from strong multi-\((b)\)-jet production. The
selected events are further classified according to the number of reconstructed jets and the number of \(b\)-jets among them. A neural network is used to enhance the separation between signal and background. The dominant background for this search is composed of \(t\bar{t}\)+jets events as well as single-top production in the \(Wt\) channel. The backgrounds are modelled using MC simulations with additional data-driven corrections derived in a dedicated control region.
A search for charged Higgs bosons decaying into a top and a bottom quark in the 0-lepton final state has been performed by the CMS Collaboration using proton-proton collision data at \(\sqrt{s}=13\) TeV recorded in 2016 [77]. Two different scenarios have been studied: the associated production with a top and a bottom quark, and the \(s\)-channel production of a charged Higgs boson. The results are combined with a search in final states with one or two leptons [78]. For production in association with a top quark, upper limits at the 95% confidence level on the product of the charged Higgs boson production cross-section and branching fraction, ranging from 9.25 pb to 0.005 pb, are obtained for charged Higgs masses in the range of 0.2 to 3 TeV. While there is no DM interpretation of the result by the CMS Collaboration, the result from ATLAS was interpreted in a 2HDM+\(a\) scenario, as further detailed in Section 4.3.1.
## 4 Results
### Vector and axial-vector mediators
#### 4.1.1 Flavour-conserving interaction
Strong constraints on visible decays of an axial-vector (Figure 7) or vector (Figure 8) mediator \(Z^{\prime}\) are obtained from a variety of resonance and related searches that probe mediator masses in the range between 50 GeV [80] and 5000 GeV [81].
The latest constraints on axial-vector mediators released by the ATLAS Collaboration and based on data from \(pp\) collisions at \(\sqrt{s}=13\) TeV are shown in Figure 7. The coupling of the mediator to leptons is set to zero (\(g_{\ell}=0\)), while the coupling to DM is set to unity (\(g_{\chi}=1.0\)) and the DM mass is taken to be 10 TeV to kinematically suppress invisible mediator decays and highlight the interplay of constraints on visible mediator decays.
In the high mediator-mass range, the main sensitivity comes from two searches for di-jet resonances, referred to as _di-jet_ and _di-jet angular_. The former aims to identify local resonant enhancements in the di-jet invariant mass spectrum and targets narrow mediator widths. The latter, for which no results on the full LHC Run 2 dataset are available, relies on the di-jet angular separation to identify broader mediator widths that cannot be probed by the search in the invariant mass spectrum. Neither of the searches imposes quark-flavour-specific selection requirements; hence both are sensitive to all possible hadronic decays of the mediator.
Searches for \(t\bar{t}\) resonances, which rely on top-quark identification algorithms to identify specifically the decays of the mediator to top quarks, have a slightly lower expected sensitivity to the coupling \(g_{q}\) than di-jet searches, although the observed limit is stronger than that from the di-jet search in some small regions of the mediator mass where the di-jet observed limit fluctuates upward. The use of top-quark identification allows for a stronger suppression of SM backgrounds compared to di-jet and also di-\(b\)-jet searches, in particular the background from strong multi-jet production. This effect partially compensates for the disadvantage of probing only roughly \(\frac{1}{6}\) of the hadronic mediator decays.
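The quoted fraction of roughly \(\frac{1}{6}\) follows from counting quark flavours. As a back-of-the-envelope check (assuming tree-level partial widths for a purely axial coupling and treating the five lighter quarks as massless), the top-quark share of hadronic decays approaches \(\frac{1}{6}\) once phase-space suppression becomes negligible:

```python
def tt_fraction_of_hadronic_decays(m_med, m_top=172.5):
    """Fraction of hadronic axial-vector mediator decays into ttbar.

    Uses Gamma(Z' -> q qbar) proportional to g_q^2 * m_Z' * beta^3 with
    beta = sqrt(1 - 4 m_q^2 / m_Z'^2); the coupling cancels in the
    ratio and the five lighter quarks are treated as massless.
    """
    if m_med <= 2.0 * m_top:
        return 0.0
    beta3 = (1.0 - 4.0 * m_top**2 / m_med**2) ** 1.5
    return beta3 / (5.0 + beta3)

for m in (500.0, 1000.0, 3000.0):
    print(m, tt_fraction_of_hadronic_decays(m))
# tends towards 1/6 ~ 0.167 for mediator masses well above 2*m_top
```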
In Figure 8, constraints on vector mediators in the plane of the DM and the mediator mass from the CMS Collaboration [80] are shown. In contrast to Figure 7, results from visible and invisible decays are summarised. While searches with invisible final states are only possible when the mediator mass is larger than about twice the DM mass, the sensitivity of searches for visible decays depends on the DM mass only through the
width of the mediator. When the decay channel to DM particles opens up, the width of the mediator increases and resonance searches become less sensitive. The best sensitivity to vector mediators from \(p_{\rm T}^{\rm miss}\) searches is provided by DM searches with initial-state radiation either from a gluon/quark jet or from the hadronic decay of a vector boson [82]. Searches with visible final states achieve the best sensitivity down to 50 GeV when looking for a large-radius jet that recoils against the mediator [83]. At high mass, the strongest constraints are obtained from di-jet searches [84]. The searches discussed in Section 3.2.2 probing vector mediators decaying into \(t\bar{t}\) are not shown as no dedicated interpretation of these results in models of DM was performed by the CMS Collaboration. However, the interpretation of the searches in generic vector-particle models shows comparable sensitivity between the results released by the ATLAS and CMS Collaborations.
#### 4.1.2 Flavour-changing interaction
The strongest constraints on the VFC model are obtained from searches targeting same-sign \(tt\) and \(p_{\rm T}^{\rm miss}\)\(+\)\(t\) production on 36 fb\({}^{-1}\) of \(pp\) collision data [15]. Results for two representative parameter planes are shown in Figure 9.
The left plot of Figure 9 shows a scan in the mediator mass versus the flavour-changing coupling \(g_{ut}\), while fixing the remaining two parameters at \(m_{\chi}=1\) GeV and \(g_{X}=1\). The \(p_{\rm T}^{\rm miss}\)\(+\)\(t\) search provides stronger constraints on \(g_{ut}\) at lower mediator masses, excluding \(g_{ut}\) down to 0.07 at 1 TeV, while the same-sign \(tt\) search is more sensitive for mediator masses \(>1.6\) TeV, still excluding \(g_{ut}>0.3\) at 3 TeV. Mediator masses below 1 TeV have been probed by the CMS Collaboration at \(\sqrt{s}\) = 13 TeV and are shown in Figure 10. The \(p_{\rm T}^{\rm miss}\)\(+\)\(t\) search discussed in Section 3.1.1 is able to exclude couplings as low as 0.03 for mediator masses of 200 GeV. The right plot of Figure 9 shows a scan in the invisible branching ratio of the mediator \(\mathcal{BR}(\chi\chi)\) and the coupling \(g_{ut}\). The constraints derived from the same-sign \(tt\) search exhibit only a weak dependence on \(\mathcal{BR}(\chi\chi)\) due to the fact that the sensitivity of this process is dominated by the \(t\)-channel exchange of the mediator (middle and right diagrams in Figure 2). This process is only indirectly sensitive to \(g_{X}\) through the total width of the mediator in the \(t\)-channel exchange. The same-sign \(tt\) analysis hence dominates the sensitivity at low values of \(g_{X}\) (and hence low values of \(\mathcal{BR}(\chi\chi)\)), while the \(p_{\rm T}^{\rm miss}\)\(+\)\(t\) analysis dominates the sensitivity at large values of \(\mathcal{BR}(\chi\chi)\), excluding \(g_{ut}\) down to almost 0.06 at \(\mathcal{BR}(\chi\chi)=1\).
### Scalar and pseudoscalar mediators
#### 4.2.1 Colour-neutral interaction
Simplified models with a colour-neutral scalar or pseudoscalar mediator have been constrained by searches targeting invisible mediator decays at the ATLAS and CMS experiments using data from \(pp\) collisions at \(\sqrt{s}=13\) TeV. The most recent constraints from the CMS Collaboration based on \(p_{\rm T}^{\rm miss}\)\(+\)\(t\bar{t}\) events are shown in Figure 11, while Figure 12 shows the most recent summary from the ATLAS Collaboration.
Up to now, only \(t\bar{t}\)-associated DM production has been probed by the CMS Collaboration using the full Run 2 dataset of 137 fb\({}^{-1}\)[53]. The interpretation of this analysis in simplified models of scalar and pseudoscalar mediators is shown in Figure 11. Assuming a mediator coupling of 1 to DM and SM particles, masses up to 400 GeV and 420 GeV can be excluded for scalar and pseudoscalar mediators, respectively. While the sensitivities of the 0- and 1-lepton channels are comparable, the sensitivity of the 2-lepton channel is significantly weaker. The sensitivity of this channel could be further enhanced by exploiting information sensitive to the spin of the mediator, which has not been done in this analysis. The exclusion limits for pseudoscalar mediators can be further extended up to 470 GeV by \(p_{\rm T}^{\rm miss}\)\(+\)jet searches [82].
The results shown in Figure 12 are obtained from analyses targeting \(p_{\rm T}^{\rm miss}\)\(+\)\(t\bar{t}\), \(p_{\rm T}^{\rm miss}\)\(+\)\(tW\), \(p_{\rm T}^{\rm miss}\)\(+\)\(tj\), \(p_{\rm T}^{\rm miss}\)\(+\)\(b\bar{b}\), and \(p_{\rm T}^{\rm miss}\)\(+\)jet production using the full ATLAS Run 2 dataset of 139 fb\({}^{-1}\)[81]. The sensitivity across most of the mediator mass region is dominated by a statistical combination of three searches for \(p_{\rm T}^{\rm miss}\)\(+\)\(t\bar{t}\) production
Figure 8: 95% CL observed and expected exclusion regions on vector mediators in the DM-mediator mass plane from searches with visible and invisible final states released by the CMS Collaboration [80]. Exclusions are computed for a lepto-phobic scenario with \(g_{l}=0\), a universal quark coupling of \(g_{q}=0.25\) and a DM coupling of \(g_{\text{DM}}=1.0\).
Figure 7: Upper limits at 95% CL on the coupling \(g_{q}\) of the mediator to quarks in a simplified model with a vector or axial-vector mediator obtained from different types of resonance searches using data from \(pp\) collisions at \(\sqrt{s}=13\) TeV. The DM mass is \(m_{\chi}=10\) TeV and its coupling to the mediator \(g_{X}=1\)[81].
in the 0-, 1-, and 2-lepton channels (Section 3.1.3). In the scenario with a scalar mediator, the statistical combination of the \(p_{\rm T}^{\rm miss}\)\(+t\bar{t}\) searches provides the strongest constraints across the probed mediator mass range, while in the pseudoscalar case, the dominant constraints for \(m_{\phi/a}>300\) GeV are obtained from \(p_{\rm T}^{\rm miss}\)\(+\)jet searches. Searches targeting the \(p_{\rm T}^{\rm miss}\)\(+\)\(b\bar{b}\) signature provide significantly weaker constraints on this model. However, as explained in Section 2.2.1, in UV completions of the simplified model the couplings to up-type quarks can be suppressed compared to those to down-type quarks, making \(p_{\rm T}^{\rm miss}\)\(+\)\(b\bar{b}\) searches a relevant complement to \(p_{\rm T}^{\rm miss}\)\(+\)\(t\bar{t}\) searches. Searches targeting DM production with a single top quark (\(p_{\rm T}^{\rm miss}\)\(+\)\(tj\) and \(p_{\rm T}^{\rm miss}\)\(+\)\(tW\), see Section 3.1.2) have a similar sensitivity as the individual searches for \(p_{\rm T}^{\rm miss}\)\(+\)\(t\bar{t}\) production. They have not been included in the statistical combination as they are not orthogonal to the searches in the \(p_{\rm T}^{\rm miss}\)\(+\)\(t\bar{t}\) final states by construction.
Figure 10: Exclusion limits for the VFC model in the two-dimensional plane spanned by the mediator mass and the coupling between the mediator and quarks released by the CMS Collaboration [40]. The observed exclusion range is shown as a yellow solid line, while the yellow dashed lines show the cases in which the predicted cross-section is shifted by the assigned theoretical uncertainty. The expected exclusion range is indicated by a black solid line; the experimental uncertainties are shown as black dashed lines.
Figure 11: Expected (dashed line) and observed (solid line) upper limits at the 95% CL on the ratio of the excluded and predicted cross-section at leading-order for a DM particle with a mass of 1 GeV as a function of the mediator mass for a scalar (left) and pseudoscalar (right) mediator [53]. The green and yellow bands represent the regions containing 68 and 95%, respectively, of the distribution of limits expected under the background-only hypothesis. The mediator couplings are set to 1.
Figure 12: Upper limits at 95% CL on the production of a scalar \(\phi\) (left) and pseudoscalar \(a\) (right) mediator as a function of the mediator mass [81]. The limits are expressed in terms of the ratio of the excluded cross-section and the cross-section calculated for a coupling assumption of \(g=g_{q}=g_{\chi}=1.0\). The latter was calculated at NLO for the \(p_{\rm T}^{\rm miss}+t\bar{t}\) signatures and at LO for the \(p_{\rm T}^{\rm miss}+tW/tj\) and \(p_{\rm T}^{\rm miss}+j\) signatures.
If \(m_{\phi/a}>2\cdot m_{t}\), searches targeting visible mediator decays to top quarks are also sensitive to the production of scalar or pseudoscalar mediators. Two different modes can contribute: gluon-induced mediator production and production of a mediator in association with \(t\bar{t}\). Searches targeting both modes have been performed, as discussed in Sections 3.2.2 and 3.2.3, respectively. However, only the results of a search for four-top production conducted by the CMS Collaboration have been interpreted in the context of simplified models with a scalar or pseudoscalar mediator. The results are shown in Figure 13 as upper limits on the cross-section of associated production of the mediator with top quarks times the branching ratio of the mediator decay to \(t\bar{t}\). Masses between 350 GeV and 450 (510) GeV for a scalar (pseudoscalar) mediator are excluded.
It should be noted that the re-interpretation of the results from searches targeting gluon-induced mediator production is significantly more involved than for the case of associated production due to the presence of strong signal-background interference (Section 3.2.2). The resulting interference patterns are highly model-dependent, which means that a re-interpretation in the context of a different model requires generating the model-specific interference pattern and subsequently re-running the full profile-likelihood fit.
#### 4.2.2 Colour-charged interaction
Models in which the colour-charged mediator decays to a top quark and a DM particle are constrained by the searches in \(p_{\rm T}^{\rm miss}\)\(+t\) final states discussed in Section 3.1.1. Mediator masses up to 5 TeV can be excluded by the ATLAS Collaboration for coupling strength values \(\lambda_{t}=0.4\) and \(g_{ds}=0.6\) assuming a DM mass \(m_{X}=10\) GeV [39].
Figure 13: Upper limits at 95% CL on the production of a scalar (left, called \(H\) here instead of \(\phi\)) and pseudoscalar (right, called \(A\) here instead of \(a\)) mediator as a function of the mediator mass [81]. The limits are expressed in terms of an upper limit on the production cross-section times the branching ratio of the mediator to \(t\bar{t}\) and compared to the cross-section calculated at LO for a coupling assumption of \(g=g_{q}=g_{X}=1.0\) (here denoted as: \(g_{\rm SM}=g_{\rm DM}=1.0\)).
Results with a mixed scalar and pseudoscalar coupling to both SM quarks as well as DM and top quarks are provided by the CMS Collaboration [40]. Assuming a coupling of 0.1 to SM quarks and of 0.2 to DM and top quarks, mediators with masses up to 3.3 TeV can be excluded for a dark matter mass of 100 GeV.
### Extended Higgs sectors
#### 4.3.1 2HDM with a pseudoscalar mediator
Constraints on the 2HDM+\(a\) are derived from a variety of searches targeting different production and decay modes of the mediator and the additional Higgs bosons. The most comprehensive summary of constraints has been released by the ATLAS Collaboration [81]. These summary plots are based on results obtained with the partial or full Run 2 datasets. Not all of the latest searches on the full Run 2 dataset have been re-interpreted in the context of the 2HDM+\(a\). Updated summary plots will be released in the near future.
The constraints are evaluated as a function of the free parameters of the model described in Section 2.3.1. Two representative parameter scans in the (\(m_{a}\),\(m_{A}\)) and the (\(m_{a}\),\(\tan\beta\)) plane highlighting the interplay of signatures involving top quarks with other types of signatures are shown in Figure 14. The constraints for other benchmark scans can be found in Ref. [81].
The sensitivity in the (\(m_{a}\),\(m_{A}\)) plane for \(\tan\beta=1\), \(\sin\theta=0.35\), and \(m_{A}=m_{H}=m_{H^{\pm}}\) is largely dominated by searches directly targeting the production of an invisibly decaying mediator with a Higgs or \(Z\) boson, leading to \(p_{\rm T}^{\rm miss}\) +\(h\) and \(p_{\rm T}^{\rm miss}\) +\(Z\) signatures. These processes are dominated by diagrams involving the resonant production of a neutral Higgs boson \(H\) or \(A\) that decays to \(ah\) or \(aZ\), respectively. The sensitivity from searches for \(p_{\rm T}^{\rm miss}\) +\(tW\) production, which can also proceed resonantly via a charged Higgs boson (Section 2.3.1), is sub-dominant in this parameter region.
Constraints that are largely complementary to those from \(p_{\rm T}^{\rm miss}\) +\(X\) searches are obtained from a search targeting resonant associated production of a charged Higgs boson \(H^{\pm}\) with a top-bottom quark pair (\(tbH^{\pm}\)) with subsequent decay to a top-bottom quark pair \(tb\). These constraints exhibit only a weak dependence on the mediator mass \(m_{a}\) as this signature does not involve production of a mediator at leading order and is
Figure 14: Regions in the 2HDM+\(a\) parameter space excluded at 95% CL by several individual searches targeting different signatures and a statistical combination of \(p_{\rm T}^{\rm miss}\) +\(Z(\ell\ell)\) and \(p_{\rm T}^{\rm miss}\) +\(h(b\bar{b})\) searches. The results are shown in the (\(m_{a}\),\(m_{A}\)) plane (left) and the (\(m_{a}\),\(\tan\beta\)) plane (right). In the former case, \(\tan\beta=1\), while in the latter case, \(m_{A}=600\) GeV. In both cases, the conditions \(\sin\theta=0.35\) and \(m_{A}=m_{H}=m_{H^{\pm}}\) are imposed. All results are based on either the full 139 fb\({}^{-1}\) of \(pp\) collision data at \(\sqrt{s}=13\) TeV or a subset of that dataset amounting to 36 fb\({}^{-1}\)[81].
hence only indirectly dependent on the mediator mass via its effect on the branching ratio to \(tb\) compared to those for other decays, such as \(H^{\pm}\to aW^{\pm}\), \(AW^{\pm}\), \(HW^{\pm}\).
Searches targeting resonant production of the neutral Higgs bosons \(A/H\), either via gluon fusion or \(t\bar{t}\)-associated production, and their decay to \(t\bar{t}\), leading to \(t\bar{t}\) and \(t\bar{t}t\bar{t}\) final states, respectively, are expected to also provide constraints complementary to those from \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+X\) searches in this parameter region, given that the choice \(\tan\beta=1\) favours the coupling of those Higgs bosons to top quarks. No constraints from \(A/H(t\bar{t})\) have been derived for the 2HDM\(+a\) yet due to the presence of strong, model-dependent interference effects that make a straightforward re-interpretation of these searches in the context of other benchmark models difficult, as explained in Section 4.2.1. A search targeting \(t\bar{t}A/H(t\bar{t})\) production has been used to constrain the 2HDM\(+a\) parameter space (see below). It is based on 36 fb\({}^{-1}\) of LHC Run 2 data and not sensitive at \(\tan\beta=1\), as shown in Figure 14 (right plot). The results of a search for \(t\bar{t}A/H(t\bar{t})\) production in multi-lepton final states using 139 fb\({}^{-1}\) of LHC Run 2 data indicate that \(A/H\) masses up to 700 GeV could be excluded in the 2HDM\(+a\) for the value of \(\tan\beta\) under consideration here [71].
In the (\(m_{a}\),\(\tan\beta\)) plane with \(m_{A}=m_{H}=m_{H^{\pm}}=600\) GeV (right plot in Figure 14), the sensitivity is again dominated by the statistical combination of the \(p_{\rm T}^{\rm miss}\)\(+h(b\bar{b})\) and \(p_{\rm T}^{\rm miss}\)\(+Z(\ell\ell)\) searches and the search for \(tbH^{\pm}(tb)\) production, which provide complementary constraints in this region of parameter space. Low values of \(\tan\beta\) are fully excluded by the search for charged Higgs bosons decaying to \(tb\). The constraints from the search targeting \(t\bar{t}t\bar{t}\) production on 36 fb\({}^{-1}\) of LHC Run 2 data are also shown. While they are notably weaker than the constraints from the charged-Higgs-boson search, which relies on the full Run 2 dataset amounting to 139 fb\({}^{-1}\), the results from the search for \(t\bar{t}A/H(t\bar{t})\) on 139 fb\({}^{-1}\) of LHC Run 2 data [71] (Section 3.2.3) indicate that this final state may provide exclusion power comparable to the charged-Higgs-boson search if re-interpreted in the context of this model.
Searches for \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+t\bar{t}\) production, which dominate the sensitivity to the simplified model with a colour-neutral scalar or pseudoscalar mediator (Section 4.2.1), only weakly constrain the benchmark scenarios [34, 35] probed at the LHC. It should, however, be noted that the \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+t\bar{t}\) constraints shown in Figure 14 are based on only 36 fb\({}^{-1}\) of LHC Run 2 data and the sensitivity is mainly limited by low event rates. Hence, significantly stronger constraints are expected from a re-interpretation of searches using the full 139 fb\({}^{-1}\) of LHC Run 2 data [63]. The sensitivity of the \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+t\bar{t}\) final state is expected to become comparable to that of searches in the \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+h\) and \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+Z\) final states for an integrated luminosity of 300 fb\({}^{-1}\), expected to be available after the end of LHC Run 3 (2022-2025) [31]. In this context, it should be noted that the cross-section for \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+t\bar{t}\) production is suppressed by \(\sin^{2}\theta\), making this process more sensitive for large values of \(\sin\theta\)[31]. Furthermore, for \(m_{a}>2\cdot m_{t}\), visible mediator decays to \(t\bar{t}\) are possible, reducing the invisible branching ratio \(a\to\chi\chi\) and hence the sensitivity of the \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+t\bar{t}\) searches [31].
### Scalar DE EFT model
Searches in the \(p_{\mathrm{T}}^{\mathrm{miss}}\)\(+t\bar{t}\) final state have been used to constrain the \(\mathcal{L}_{1}\) operator in the EFT model of scalar DE (Section 2.4). Results from three independent analyses, each targeting a different \(t\bar{t}\) decay mode (0-, 1-, 2-lepton channels), have been used. No statistical combination was performed. Instead, the constraint from the analysis yielding the smallest CLs value for a given signal hypothesis was re-interpreted in the EFT model of DE. The strongest constraints arise from searches in the 0- and 1-lepton channels, with both contributing roughly equally.
The constraints are derived as a function of the effective coupling \(g_{*}\) associated with the UV completion of the EFT model and the effective mass scale \(M_{1}\). It is assumed that the EFT is valid for momentum transfers \(Q_{\mathrm{tr}}<g_{*}M\)[15]. For events failing this requirement, a conservative approach to correct the final limits based on the fraction of valid events, referred to as iterative rescaling [14], is applied.
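One way to read the iterative-rescaling procedure is sketched below; the callables `limit_on_scale` and `valid_fraction` are hypothetical stand-ins for the analysis-specific limit-setting machinery and truth-level inputs, so this is a toy illustration rather than the published implementation.

```python
def iteratively_rescaled_limit(limit_on_scale, valid_fraction, g_star,
                               tol=1e-3, max_iter=50):
    """Toy version of the iterative-rescaling procedure.

    `limit_on_scale(signal_scale)` is assumed to return the excluded
    mass scale M when the signal yield is multiplied by `signal_scale`;
    `valid_fraction(m, g_star)` returns the fraction of signal events
    with momentum transfer Q_tr < g_star * M.  The signal is rescaled
    to its valid fraction and the limit recomputed until the excluded
    mass scale stops changing.
    """
    m = limit_on_scale(1.0)               # naive limit using all events
    for _ in range(max_iter):
        frac = valid_fraction(m, g_star)
        m_new = limit_on_scale(frac)      # limit from the rescaled signal
        if abs(m_new - m) < tol * max(m, 1e-9):
            return m_new
        m = m_new
    return m
```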
The regions excluded at 95% CL are shown in Figure 15. Mass scales \(M_{1}<200\) GeV are excluded for \(g_{*}>\pi^{2}\). The sensitivity of the \(p_{\rm T}^{\rm miss}\)\(+t\bar{t}\) signature to smaller effective couplings \(g_{*}\) is limited by the EFT validity criterion, as \(t\bar{t}\) pair production typically involves large momentum transfers.
## 5 Discussion
A variety of searches targeting top-quark production in association with DM or via visible decays of mediator particles have been conducted by the ATLAS and CMS Collaborations. No significant deviation from the SM prediction has been observed. Therefore the results are used to constrain DM in a variety of simplified models as well as scalar DE described in an EFT model. Signatures involving top quarks often provide sensitivity in parameter regions not covered by other DM searches, underlining their importance as sensitive probes of DM at colliders. They provide a particularly relevant probe of models involving new particles with Yukawa-like interactions, which imply preferred couplings to top quarks.
It should be noted that many of the results and summary plots presented in this review are preliminary, as various searches on the full LHC Run 2 collision dataset are still on-going. Furthermore, not all of the existing results have been interpreted in relevant benchmark models. Further results of DM searches with top quarks are expected to be released by both collaborations in the near future.
## 6 Outlook
### LHC Run 3
The non-observation of WIMP DM at the LHC and various direct detection experiments to date has prompted the particle physics community to place a stronger focus on models and searches for non-WIMP DM as well as uncovered DM signatures at the LHC that can be probed during LHC Run 3 (2022-2025) and/or via re-interpretations of existing searches on LHC Run 2 data. A few notable examples involving signatures with top quarks are given in the following.
Figure 15: Regions in the plane of the effective coupling \(g_{*}\) associated with the UV completion of the EFT model and the effective mass scale \(M_{1}\) for the \(\mathcal{L}_{1}\) operator excluded at 95% CL by searches in the \(p_{\rm T}^{\rm miss}\)\(+t\bar{t}\) final state [15].
#### 6.1.1 ALPs
Axions and axion-like particles (ALPs) [88, 89] have received increasing attention in recent years. A novel strategy to search for ALPs and, more generally, pseudo-Nambu-Goldstone bosons (pNGBs) at the LHC has been proposed in Ref. [90], focusing on non-resonant searches that would be sensitive to ALPs produced as an off-shell \(s\)-channel mediator. It is motivated by the fact that the pNGB nature of ALPs implies that their couplings to the SM are dominantly derivative, which leads to a cross-section enhancement for non-resonant ALP production at centre-of-mass energies \(\sqrt{\hat{s}}\gg m_{a}\), where \(m_{a}\) denotes the mass of the ALP. The focus of recent studies has been on constraining the ALP-boson (\(W\), \(Z\), \(h\), \(g\), \(\gamma\)) couplings via non-resonant \(ZZ\), \(\gamma\gamma\), and \(gg\)[90], non-resonant \(ZZ\) and \(Zh\)[91], and non-resonant \(WW\) and \(Z\gamma\)[92] production. The ALP-fermion coupling can be predominantly probed via non-resonant \(t\bar{t}\) production (illustrated by the left diagram in Figure 16) due to the Yukawa-like structure of the ALP-fermion couplings. No public results exist to date but studies are on-going.
The ALP-fermion coupling can also be probed in \(p_{\rm T}^{\rm miss}\)\(+t\bar{t}\) final states. These are sensitive to \(t\bar{t}\)-associated production of a single ALP with couplings to quarks derived from couplings to the bosonic sector and proportional to the fermion mass [93]. It should be noted that the \(p_{\rm T}^{\rm miss}\) distribution predicted for this signal process is softer on average than that predicted by, e.g., stop production in supersymmetric models, emphasising the importance of keeping the \(p_{\rm T}^{\rm miss}\) threshold low in future searches.
Novel detector signatures involving exotic top-quark decays are predicted in models with flavour-violating ALPs [94], which are motivated by \(t\)-channel dark-sector models [95] or Froggatt-Nielsen models of flavour [96]. These models predict flavour-violating decays of the top quark to an up-type quark and an ALP, with the ALP decaying predominantly to hadrons, either promptly or with a long lifetime. Precision measurements of single-top-quark production can constrain the parameter space of such models for prompt ALP decays to jets and for detector-stable ALPs. Displaced detector signatures are predicted for non-prompt ALP decays within the detector volume. A novel search has been proposed [94] focusing on exotic top-quark decays from SM \(t\bar{t}\) production (right diagram in Figure 16), where one of the top quarks decays into an up-type quark and an ALP, which in turn decays into a displaced narrow jet within the calorimeter volume. This and other signatures involving long-lived particles (LLPs) in top-quark decays have not yet been probed in dedicated searches at the LHC. They remain an exciting prospect for the analysis of LHC Run 3 data within the currently fast-growing field of LLP searches at the LHC, a field that benefits in particular from novel trigger and reconstruction algorithms deployed by the ATLAS and CMS experiments for Run 3 data taking.
Figure 16: Schematic representation of non-resonant \(t\bar{t}\) production via an off-shell \(s\)-channel ALP (left, [97]) and SM \(t\bar{t}\) production with subsequent decay of one of the top quarks to an up-type quark and a long-lived ALP (right, [94]).
#### 6.1.2 Composite pseudo Nambu-Goldstone Bosons
Signatures with top quarks can also be used to probe still viable WIMP models in which WIMP DM is made up of composite pNGBs [99]. In these models, both the SM Higgs boson and DM emerge from a TeV-scale strongly-coupled sector as pNGBs and the SM-DM interaction is provided by higher-dimensional derivative couplings with the Higgs fields, which leads to a strong suppression of the DM scattering rates against SM particles. Thus, these models evade the strong constraints from direct detection experiments, making collider searches particularly relevant. The pNGB DM contains additional interactions with the SM sector, besides the derivative Higgs portal, with preferential couplings to third-generation fermions being well-motivated [99]. If couplings to top quarks are preferred over couplings to bottom quarks, e.g. in the case of Yukawa-type couplings, pNGB models can be probed at the LHC via associated production of pNGB DM with \(t\bar{t}\) or a single top quark, i.e. in \(p_{\rm T}^{\rm miss}\)\(+t\bar{t}\) or \(p_{\rm T}^{\rm miss}\)\(+t\)\(+X\) final states. Two possible production modes of pNGB leading to \(p_{\rm T}^{\rm miss}\)\(+tW\) final states via the Higgs portal and direct DM-top interactions are shown in Figure 17. Searches in these final states are complementary to searches for invisible Higgs boson decays in vector-boson fusion (VBF) production as they are sensitive to pNGB interactions with fermions not accessible via the latter. Re-interpretations of existing \(p_{\rm T}^{\rm miss}\)\(+t\bar{t}\) and \(p_{\rm T}^{\rm miss}\)\(+tW\) searches as well as possible optimisations of future searches for pNGB production could be interesting to explore during LHC Run 3.
#### 6.1.3 Dark Mesons
Final states with multiple top quarks are predicted in models with a strongly coupled dark sector consisting of composite particles that carry electroweak but no colour charges [98]. These models not only address the hierarchy problem but can also provide a DM candidate in the form of a composite meson whose decays are suppressed via an automatic accidental symmetry.
The most promising target for collider searches is the dark meson sector, consisting of dark vector mesons \(\rho_{D}\) and dark pions \(\pi_{D}\)[98]. Signatures with multiple top or bottom quarks are predicted if a pair of dark pions with gauge-phobic couplings to the SM is produced from the decay of a resonantly produced \(\rho_{D}\) (\(pp\to\rho_{D}\to\pi_{D}\pi_{D}\)). The dark pions then decay predominantly into third-generation fermions, with decays to \(t\bar{t}\) (\(tb\)) dominating the branching fraction for \(\pi_{D}^{0}\) (\(\pi_{D}^{\pm}\)) if the pion mass is above the \(t\bar{t}\) (\(tb\)) production threshold. Depending on the charge of the intermediate \(\rho_{D}\), different final states involving third-generation quarks are possible: \(b\bar{b}t\bar{b}\), \(t\bar{t}b\bar{b}\), \(t\bar{t}t\bar{b}\).
Existing searches in multi-top final states only weakly constrain the parameter space of these models [98]. This is due to the fact that small masses of the \(\rho_{D}\) and \(\pi_{D}\) are still viable, which means that the SM fermions in the final state tend to be rather soft. In searches at \(\sqrt{s}=13\) TeV, in particular, higher thresholds are
Figure 17: Schematic representation of \(p_{\rm T}^{\rm miss}\)\(+tW\) production via DM-Higgs operators (left) and DM-top operators (right) in an EFT of composite pNGBs [99].
imposed on the energy/momenta of the final-state objects or their vector sum. In order to probe dark pions, or, more generically, similar strongly-coupled models, dedicated searches targeting final states with a high multiplicity of low-momentum objects compatible with the decays of one or several low-momentum top quarks are needed.
### HL-LHC and HE-LHC
The physics potential for DM searches involving top quarks during the high-luminosity phase of the LHC (HL-LHC, starting 2028) and the perspectives for a possible future high-energy LHC (HE-LHC) have been studied in the context of a 2019 CERN Yellow Report [100]. The final HL-LHC dataset is expected to amount to an integrated luminosity of 3000 fb\({}^{-1}\) at a centre-of-mass energy \(\sqrt{s}=14\) TeV. The HE-LHC scenario relies on the assumption of a possible further upgrade of the LHC to a 27 TeV \(pp\) collider with a final integrated luminosity of 15,000 fb\({}^{-1}\).
Sensitivity studies have been performed for the \(p_{\rm T}^{\rm miss}\) +\(t\bar{t}\), \(p_{\rm T}^{\rm miss}\) +\(tW\), \(p_{\rm T}^{\rm miss}\) +\(t\), \(t\bar{t}\), and \(t\bar{t}t\bar{t}\) signatures within various benchmark models, including simplified models with a scalar or pseudoscalar mediator (Section 2.2.1), simplified models with a vector mediator with a flavour-changing coupling to the top and up quark (Section 2.1.2), and the 2HDM+\(a\) (Section 2.3.1). These studies are mostly based on the analysis tools and strategies used for the analysis of the partial LHC Run 2 dataset (2015-2016). They do not include further improvements, such as new machine-learning-based tools or background-estimation strategies, implemented for the later analyses of the full LHC Run 2 dataset. A full review of the results of these sensitivity studies across the different final states and models is beyond the scope of this article, but a few general observations can be made. Overall, both the increase in integrated luminosity (HL-LHC) and centre-of-mass energy (HE-LHC) lead to a significant sensitivity increase across the different final states. For example, the mass range for a (pseudo)scalar mediator expected to be excluded by \(p_{\rm T}^{\rm miss}\) +\(t\bar{t}\) searches in the simplified model of Section 2.2.1 with \(g=g_{q}=g_{\chi}=1.0\) (compare Figure 12) is expected to increase by a factor of two for the HL-LHC compared to the expected sensitivity for LHC Run 3, and by another factor of two for the HE-LHC compared to the HL-LHC.
The sensitivity of most of the searches is limited by the systematic uncertainties on the main (often irreducible) background processes, for example \(t\bar{t}+V\) in the case of \(p_{\rm T}^{\rm miss}\) +\(t\bar{t}\) searches. In \(t\bar{t}\) final states, these typically arise from two sources: firstly, uncertainties related to reconstructed objects, such as the energy scale for hadronic jets, and, secondly, uncertainties arising from the modelling of SM processes, such as missing higher-order corrections. These uncertainties can vary between a few percent and a few tens of percent, depending on the process and kinematic region. The former are expected to decrease with increasing integrated luminosity as the statistical uncertainties on the measurements from which they are derived are reduced accordingly. A further reduction of these uncertainties can be expected due to the development of better and more refined calibration methods. The latter can be reduced significantly through profiling in a likelihood fit to data if appropriate background-enriched control regions are defined. Improved theoretical predictions, for example for differential cross-sections at higher orders in perturbation theory, can also significantly boost the sensitivity of many searches.
In the case of the HE-LHC, in addition to the improvements due to the larger integrated luminosity, the larger centre-of-mass energy provides access to mediator masses beyond the kinematic reach of the (HL-)LHC and to processes with small signal cross-sections.
### FCC-hh
Similar considerations as for the HE-LHC apply to the case of a potential future hadron collider operating at centre-of-mass energies beyond that of the LHC and HE-LHC. The most prominent example is that of the
FCC-hh, the Future Circular Collider in its operation mode as a hadron collider with a centre-of-mass energy of \(\sqrt{s}=100\) TeV [101]. Few dedicated studies regarding the sensitivity of DM searches with top quarks at the FCC-hh exist. For example, in Ref. [102] the sensitivity of the 2-lepton \(p_{\rm T}^{\rm miss}\)\(+t\bar{t}\) final state to Higgs portal models and their extensions is discussed. In general, a significant increase in the accessible mass range of both mediators and DM particles is expected, as well as a significant increase in the sensitivity to smaller DM-SM couplings, rendering detector signatures involving decays of long-lived particles away from the interaction point highly relevant. Moreover, top quarks appearing in the final states of FCC-hh collisions can be extremely boosted, underlining the need for high-resolution detectors to identify very collimated decays, as well as the use of advanced pattern recognition methods for top-quark tagging. A particularly interesting observation is the fact that associated production of a single Higgs boson with \(t\bar{t}\) becomes the dominant Higgs boson production mode at Higgs boson transverse momenta of 1-2 TeV and above, a kinematic regime that would be well-populated at the FCC-hh [103]. According to initial studies [103], searches for invisible Higgs boson decays in this production mode would feature a very low background contamination (\(S/B\sim 1\)) and hence provide excellent sensitivity to Higgs portal models with small couplings. The corresponding final state would be \(p_{\rm T}^{\rm miss}\)\(+t\bar{t}\) with highly boosted top quarks.
### Future \(e^{+}e^{-}\) colliders
No studies of DM searches with top quarks exist for future \(e^{+}e^{-}\) colliders, such as the International Linear Collider (ILC) [104], the Compact Linear Collider (CLIC) [105, 106], the Future Circular Collider FCC-ee [107, 108], and the Circular Electron-Positron Collider (CEPC) [109, 110]. This can be mostly attributed to the fact that these machines are primarily designed for Higgs boson and top quark precision measurements rather than a broad range of BSM (including DM) searches, and that their foreseen centre-of-mass energies are in many cases below or close to the \(t\bar{t}\) production threshold. For example, operation modes at \(\sqrt{s}=240\) GeV (250 GeV), i.e. around the maximum of the \(Zh\) production cross-section, are foreseen for the FCC-ee and the CEPC (ILC). Additional operation modes in the range 350-365 GeV (FCC-ee, CEPC) and 380 GeV (CLIC) are foreseen for top quark precision measurements. Higher centre-of-mass energies of 1 TeV (ILC) and 1-3 TeV (CLIC) could be possible for the linear \(e^{+}e^{-}\) machines to allow for a wider range of BSM searches. Hence direct DM production in association with at least one top quark, leading to \(p_{\rm T}^{\rm miss}\)\(+t\bar{t}\) and \(p_{\rm T}^{\rm miss}\)\(+t\)\(+X\) final states, while in principle possible, is kinematically limited by the available centre-of-mass energy. Nevertheless, the foreseen precision scans of the \(t\bar{t}\) production threshold at the FCC-ee could in principle be sensitive to anomalous resonant or non-resonant \(t\bar{t}\) production linked with DM or DM mediators, as well as anomalous top-quark decays. Further studies are needed to understand the prospects for DM searches with top quarks at future \(e^{+}e^{-}\) colliders.
### Conclusion
Collider signatures with top quarks provide sensitive probes of DM predicted by a wide range of models, and possibly even to DE signatures. Searches targeting top-quark production in association with DM or via visible decays of mediator particles have been performed by the ATLAS and CMS Collaborations, with many searches on the full LHC Run 2 collision data still on-going. As shown in this review, DM searches involving top quarks often provide sensitivity in parameter regions not covered by other DM searches, underlining their importance as sensitive probes of DM at colliders. The upcoming LHC Run 3 opens up further opportunities to improve upon existing results or to explore new signatures, for example involving top quarks in association with long-lived particle signatures.
## Acknowledgements
K.B. thanks the Helmholtz Association for the support through the "Young Investigator Group" initiative. The authors acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306.
|
2305.19219 | An Improved Two-Particle Self-Consistent Approach | The two-particle self-consistent approach (TPSC) is a method for the one-band
Hubbard model that can be both numerically efficient and reliable. However,
TPSC fails to yield physical results deep in the renormalized classical regime
of the bidimensional Hubbard model where the spin correlation length becomes
exponentially large. We address the limitations of TPSC with improved
approaches that we call TPSC+ and TPSC+SFM. In this work, we show that these
improved methods satisfy the Mermin-Wagner theorem and the Pauli principle. We
also show that they are valid in the renormalized classical regime of the 2D
Hubbard model, where they recover a generalized Stoner criterion at zero
temperature in the antiferromagnetic phase. We discuss some limitations of the
TPSC+ approach with regards to the violation of the f-sum rule and conservation
laws, which are solved within the TPSC+SFM framework. Finally, we benchmark the
TPSC+ and TPSC+SFM approaches for the one-band Hubbard model in two dimensions
and show how they have an overall better agreement with available diagrammatic
Monte Carlo results than the original TPSC approach. | C. Gauvin-Ndiaye, C. Lahaie, Y. M. Vilk, A. -M. S. Tremblay | 2023-05-30T17:06:16Z | http://arxiv.org/abs/2305.19219v1 | # An Improved Two-Particle Self-Consistent Approach
###### Abstract
The two-particle self-consistent approach (TPSC) is a method for the one-band Hubbard model that can be both numerically efficient and reliable. However, TPSC fails to yield physical results deep in the renormalized classical regime of the bidimensional Hubbard model where the spin correlation length becomes exponentially large. We address the limitations of TPSC with improved approaches that we call TPSC+ and TPSC+SFM. In this work, we show that these improved methods satisfy the Mermin-Wagner theorem and the Pauli principle. We also show that they are valid in the renormalized classical regime of the 2D Hubbard model, where they recover a generalized Stoner criterion at zero temperature in the antiferromagnetic phase. We discuss some limitations of the TPSC+ approach with regards to the violation of the f-sum rule and conservation laws, which are solved within the TPSC+SFM framework. Finally, we benchmark the TPSC+ and TPSC+SFM approaches for the one-band Hubbard model in two dimensions and show how they have an overall better agreement with available diagrammatic Monte Carlo results than the original TPSC approach.
## I Introduction
As one of the simplest models to encapsulate the effect of strong correlations in electronic systems, the Hubbard model has been used to study quantum materials such as the cuprates [1] and the newly discovered nickelate superconductors [2]. The accurate description of realistic systems often requires the use of extensions of the Hubbard model to the multi-orbital case. For instance, the cuprates seem to be described more accurately by the three-band VSA [3] Emery-Hubbard model [4], strontium ruthenate by a three-band Hund's metal with strong spin-orbit coupling [5], and the nickelates by a model that takes into account at least one correlated band and a charge reservoir [6].
One of the challenges in studying strongly correlated electron systems is that even the simpler one-band Hubbard model has no exact analytical solution for dimensions other than \(1\)[7] and infinity [8]. The numerical solution of the model in finite dimensions \(d>1\) can be achieved through approximate methods such as dynamical mean-field theory (DMFT) [8; 9; 10], its cluster extensions [11; 12; 13] and diagrammatic extensions [14], or through numerically exact quantum or diagrammatic Monte Carlo simulations [15; 16; 17]. Many of the methods for the one-band Hubbard model have recently been reviewed and benchmarked extensively for the \(2D\) weak-coupling case at half-filling [18]. At low temperatures, multiple methods face challenges even in the weak-coupling regime. For instance, finite cluster sizes limit the use of determinantal quantum Monte Carlo (DQMC) and cluster extensions of DMFT when the correlation length becomes large, while diagrammatic Monte Carlo (DiagMC) becomes limited by convergence issues. Diagrammatic extensions of DMFT such as the dynamical vertex approximation (D\(\Gamma\)A) can be used in a wider range of low temperatures. However, such methods are computationally expensive, which limits their application to extended, multi-orbital Hubbard-like models.
The two-particle self-consistent approximation (TPSC) is a conserving, non-perturbative method for the Hubbard model that can be derived from a Luttinger-Ward functional [19; 20]. In this approximation, the double occupancy is calculated self-consistently from the local spin and charge sum rules. The TPSC approximation introduces RPA-like spin and charge susceptibilities with renormalized vertices \(U_{sp}\) and \(U_{ch}\). These susceptibilities and vertices are then used to compute the self-energy.
The TPSC approximation is the first method that predicted the opening of an antiferromagnetic pseudogap in the \(2D\) Hubbard model at weak coupling, and that quantitatively associated the phenomenon with the Vilk criterion [19; 21; 18]. That criterion states that an antiferromagnetic pseudogap opens up when the spin correlation length exceeds the thermal de Broglie wave length.
Though the TPSC approximation was first formulated in the context of the one-band Hubbard model, it has since been extended to the multi-orbital case [22; 23]. The flexibility of this method, its low computational cost, as well as the fact that it respects the Mermin-Wagner theorem, the Pauli principle and conservation laws, make it attractive for applications to real materials and for extensions such as out-of-equilibrium calculations [24].
However, the TPSC approximation has many limitations, such as the fact that it is only valid in the weak to intermediate interaction regime of the Hubbard model. It is also not valid deep in the renormalized classical regime in \(2D\). Though it agrees quantitatively with benchmarks at high temperatures [25; 19; 20] and can give a qualitative description of the crossover to the renormalized classical regime, it overestimates spin fluctuations at low temperatures, leading some of its predictions such as the self-energy and the double occupancy to deviate significantly from the benchmarks [18].
In this work, we introduce an improved version of TPSC, which we call TPSC+. In addition to better agreement with benchmarks, TPSC+ has the advantage that in two dimensions it is valid deep in the pseudogap regime, all the way to the zero-temperature long-range ordered antiferromagnet. Very few approximations [26; 27; 28] can achieve this. The TPSC
approach with an additional level of self-consistency, TPSC+, was first discussed in Ref. [18], but we provide here an extended discussion of its properties. In Sec. (II), we discuss the model and obtain equations for the self-energy and generalized susceptibilities using the functional derivative approach. Next, we review the TPSC equations in Sec. (III) and give an overview of the method's main properties. We introduce the TPSC+ formalism in Sec. (IV). We also discuss the limitations of the method, more precisely how it violates spin and charge conservation laws and the f-sum rule. We show that a variant of the TPSC+ approximation, the TPSC+SFM method, can mitigate these limitations. Finally, we show the application of the TPSC+ and TPSC+SFM approximations to the \(2D\) Hubbard model in Sec. (V), where we provide comparisons to DiagMC [29, 18] and CDet [30] benchmarks. We show that the TPSC+ and the TPSC+SFM approximations are valid in the weak to intermediate regime of the \(2D\) Hubbard model and that they outperform the original TPSC approximation at low temperatures, while maintaining low computational costs.
## II Model and exact results
We start with the definition of the Hubbard model in Sec. (II.1). In Sec. (II.2) and Sec. (II.3), we recall the general functional derivative approach of Martin and Schwinger [31, 32] that allows us to find exact results and to set up the TPSC approach in the following section (Sec. (III)).
### Hubbard model
We study the one-band Hubbard model in dimension two or more
\[H=\sum_{\mathbf{k},\sigma}\epsilon_{\mathbf{k}}c^{\dagger}_{\mathbf{k}\sigma }c_{\mathbf{k}\sigma}+U\sum_{i}n_{i\uparrow}n_{i\downarrow}, \tag{1}\]
where \(c^{(\dagger)}_{\mathbf{k}\sigma}\) annihilates (creates) an electron of spin \(\sigma\) and wave vector \(\mathbf{k}\), \(n_{i\sigma}\) counts the number of electrons of spin \(\sigma\) at site \(i\), \(U\) is the on-site repulsive interaction, and \(\epsilon_{\mathbf{k}}\) is the bare band dispersion with wave vector \(\mathbf{k}\). Working in units where \(\hbar=k_{B}=1\), this dispersion is defined as
\[\sum_{\mathbf{k},\sigma}\epsilon_{\mathbf{k}}c^{\dagger}_{\mathbf{k}\sigma}c_ {\mathbf{k}\sigma}=\sum_{i,j,\sigma}t_{ij}c^{\dagger}_{i\sigma}c_{j\sigma}, \tag{2}\]
where \(t_{ij}\) is the hopping amplitude between sites \(i\) and \(j\). Throughout this paper, we focus on the \(2D\) square lattice with first neighbour hopping \(t\) only and set the lattice spacing to \(a=1\), corresponding to the dispersion \(\epsilon_{\mathbf{k}}=-2t(\cos(k_{x})+\cos(k_{y}))\). Though most of the results shown are at half-filling (\(n=1\)), we also show some benchmarks away from half-filling in Sec. (V.2). We set \(t=1\) as the unit of energy. The benchmarks in Sec. (V.2) are provided in two dimensions.
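As a concrete starting point for the calculations discussed below, the following minimal Python sketch (not part of the original text) tabulates the dispersion on a discretized Brillouin zone and checks that \(\mu=0\) corresponds to half-filling, \(n=1\), as dictated by particle-hole symmetry.

```python
import numpy as np

t, T = 1.0, 0.2            # hopping (unit of energy) and temperature
Nk = 64                    # linear size of the k-grid
k = 2 * np.pi * np.arange(Nk) / Nk
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = -2 * t * (np.cos(kx) + np.cos(ky))   # square-lattice dispersion with first-neighbour hopping

def density(mu):
    """Density n = (2/N) sum_k f(eps_k - mu), including both spin projections."""
    return 2 * np.mean(1.0 / (np.exp((eps - mu) / T) + 1.0))

print(density(0.0))        # -> 1.0: mu = 0 gives half-filling by particle-hole symmetry
```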
### Self-energy in the Hubbard model
In this section, we derive an expression for the self-energy of the Hubbard model using the source field approach [31, 32]. We start by defining a partition function \(Z\) in the presence of a source field \(\phi\)
\[Z[\phi]=\langle T_{\tau}e^{-c^{\dagger}_{\sigma}(\bar{1})c_{\sigma}(\bar{2}) \phi_{\sigma}(\bar{1},\bar{2})}\rangle, \tag{3}\]
where \(T_{\tau}\) is the time-ordering operator and \(\langle O\rangle=\mathrm{Tr}[Oe^{-\beta(H-\mu N)}]/\mathrm{Tr}[e^{-\beta(H-\mu N )}]\) is the thermodynamic average of the operator \(O\) in the the grand-canonical ensemble. Moreover, we introduce the notation \((\mathbf{r}_{1},\tau_{1})\equiv(1)\). The bar denotes a sum over the spin, position and imaginary time. For instance we have, explicitly,
\[c_{\bar{\sigma}}(\bar{1})\phi(\bar{1},2)= \tag{4}\] \[\int_{0}^{\beta}d\tau_{1}\sum_{\mathbf{r}_{1}}\sum_{\sigma_{1}}c_ {\sigma_{1}}(\mathbf{r}_{1},\tau_{1})\phi_{\sigma_{1},\sigma_{2}}(\mathbf{r}_ {1},\tau_{1},\mathbf{r}_{2},\tau_{2}).\]
We use the partition function Eq. (3) to define the Green's function in the presence of a source field
\[\mathcal{G}_{\sigma}(1,2)_{\phi}=-\frac{\delta\ln Z[\phi]}{\delta\phi_{\sigma}(2,1)}=-\langle c_{\sigma}(1)c^{\dagger}_{\sigma}(2)\rangle_{\phi}, \tag{5}\]
where the symbol \(\delta\) denotes a functional derivative. Higher-order correlation functions are obtained from additional functional derivatives. Also, the thermodynamic average of an operator \(O\) in the presence of the source field \(\phi\) is defined as
\[\langle O\rangle_{\phi}=\frac{\langle T_{\tau}e^{-c^{\dagger}_{\sigma}(\bar{ 1})c_{\sigma}(\bar{2})\phi_{\sigma}(\bar{1},\bar{2})}O\rangle}{Z[\phi]}. \tag{6}\]
The Green's function of Eq. (5) is related to the usual Green's function by setting the source field \(\phi\) to \(0\).
From the equations of motion for the Green's function in the presence of a source field, we obtain the Green's function through the Dyson equation and an exact expression (Schwinger-Dyson) for the self-energy [32]
\[\mathcal{G}^{-1}_{\sigma}(1,2)=\mathcal{G}^{0^{-1}}_{\sigma}(1,2)-\phi(1,2)- \Sigma_{\sigma}(1,2)_{\phi}, \tag{7}\]
\[\Sigma_{\sigma}(1,\bar{2})_{\phi}\mathcal{G}_{\sigma}(\bar{2},2)_{\phi}=U \langle T_{\tau}c^{\dagger}_{\sigma}(2)c^{\dagger}_{-\sigma}(1^{+})c_{- \sigma}(1)c_{\sigma}(1)\rangle_{\phi}. \tag{8}\]
We use the notation \((1^{+})=(\mathbf{r}_{1},\tau_{1}+0^{+})\).
### Spin and charge irreducible vertices
In the source field approach, the generalized susceptibilities are
\[\chi_{+}(1,3;2,4)=\lim_{\phi\to 0}\sum_{\sigma,\sigma^{\prime}}- \frac{\delta\mathcal{G}_{\sigma}(1,3)_{\phi}}{\delta\phi_{\sigma^{\prime}}(4,2)}, \tag{9}\] \[\chi_{-}(1,3;2,4)=\lim_{\phi\to 0}\sum_{\sigma,\sigma^{\prime}}- \sigma\sigma^{\prime}\frac{\delta\mathcal{G}_{\sigma}(1,3)_{\phi}}{\delta\phi_{ \sigma^{\prime}}(4,2)}, \tag{10}\]
where the spin index \(\sigma\) is equal to \(\pm 1\) when used as a variable in the sum. The previous equations are obtained directly from the definition of \(\mathcal{G}_{\sigma}(1,2)_{\phi}\) given in Eq. (5), which indeed leads to
\[\frac{\delta\mathcal{G}_{\sigma}(1,3)_{\phi}}{\delta\phi_{\sigma^{ \prime}}(4,2)} =\mathcal{G}_{\sigma}(1,3)_{\phi}\mathcal{G}_{\sigma^{\prime}}(2,4)_{\phi}\] \[-\langle T_{\tau}c_{\sigma}(1)c_{\sigma}^{\dagger}(3)c_{\sigma^{ \prime}}(2)c_{\sigma^{\prime}}^{\dagger}(4)\rangle_{\phi}. \tag{11}\]
From spin rotational invariance, Eq. (9) and Eq. (10) can be written as
\[\chi_{+}(1,3;2,4) =\lim_{\phi\to 0}-2\left[\frac{\delta\mathcal{G}_{\uparrow}(1,3)_{ \phi}}{\delta\phi_{\uparrow}(4,2)}+\frac{\delta\mathcal{G}_{\uparrow}(1,3)_{ \phi}}{\delta\phi_{\downarrow}(4,2)}\right], \tag{12}\] \[\chi_{-}(1,3;2,4) =\lim_{\phi\to 0}-2\left[\frac{\delta\mathcal{G}_{\uparrow}(1,3)_{ \phi}}{\delta\phi_{\uparrow}(4,2)}-\frac{\delta\mathcal{G}_{\uparrow}(1,3)_{ \phi}}{\delta\phi_{\downarrow}(4,2)}\right]. \tag{13}\]
The relationship between the generalized susceptibilities defined in Eq. (10) and Eq. (9) and the spin and charge susceptibilities is
\[\chi_{ch,sp}(1,2)=\chi_{+,-}(1,1^{+};2,2^{+}). \tag{14}\]
Expanding the equations for the generalized susceptibilities using spin rotational invariance and the definition of the Green's function in the presence of a source field Eq. (7), we obtain
\[\chi_{ch}(1,2) =-2\mathcal{G}_{\sigma}(1,2)\mathcal{G}_{\sigma}(2,1)\] \[+\mathcal{G}_{\sigma}(1,\bar{3})U_{ch}(\bar{3},\bar{4};\bar{5}, \bar{6})\chi_{ch}(\bar{5},\bar{6};2^{+},2)\mathcal{G}_{\sigma}(\bar{4},1^{+}), \tag{15}\] \[\chi_{sp}(1,2) =-2\mathcal{G}_{\sigma}(1,2)\mathcal{G}_{\sigma}(2,1)\] \[-\mathcal{G}_{\sigma}(1,\bar{3})U_{sp}(\bar{3},\bar{4};\bar{5}, \bar{6})\chi_{sp}(\bar{5},\bar{6};2^{+},2)\mathcal{G}_{\sigma}(\bar{4},1^{+}), \tag{16}\]
where the irreducible spin and charge vertices are defined as
\[U_{sp}(\bar{3},\bar{4};\bar{5},\bar{6}) =\frac{\delta\Sigma_{\uparrow}(\bar{3},\bar{4})}{\delta\mathcal{G }_{\downarrow}(\bar{5},\bar{6})}-\frac{\delta\Sigma_{\uparrow}(\bar{3},\bar{4 })}{\delta\mathcal{G}_{\uparrow}(\bar{5},\bar{6})}, \tag{17}\] \[U_{ch}(\bar{3},\bar{4};\bar{5},\bar{6}) =\frac{\delta\Sigma_{\uparrow}(\bar{3},\bar{4})}{\delta\mathcal{G }_{\uparrow}(\bar{5},\bar{6})}+\frac{\delta\Sigma_{\uparrow}(\bar{3},\bar{4 })}{\delta\mathcal{G}_{\downarrow}(\bar{5},\bar{6})}. \tag{18}\]
Once we set the source field to zero, these expressions for \(U_{sp}\) and \(U_{ch}\) can, in general, be functions of three imaginary-time differences (frequencies) and of three position differences (wave vectors). Assuming, as we do in the next section, that they are local, i.e. delta functions in all time and position differences, we only need to determine two scalars, \(U_{sp}\) and \(U_{ch}\).
Finally, we note that in the time and space invariant case, the spin and charge susceptibilities obey the exact local sum rules
\[\chi_{sp}(\mathbf{r}=0,\tau=0) =\frac{T}{N}\sum_{\mathbf{q},iq_{n}}\chi_{sp}(\mathbf{q},iq_{n}), \tag{19}\] \[=n-2\langle n_{\uparrow}n_{\downarrow}\rangle, \tag{20}\]
\[\chi_{ch}(\mathbf{r}=0,\tau=0) =\frac{T}{N}\sum_{\mathbf{q},iq_{n}}\chi_{ch}(\mathbf{q},iq_{n}), \tag{21}\] \[=n+2\langle n_{\uparrow}n_{\downarrow}\rangle-n^{2}, \tag{22}\]
where we use the Fourier transforms with \(q_{n}=2n\pi T\) as bosonic Matsubara frequencies, with \(\mathbf{q}\) as wave vectors in the Brillouin zone, with \(N\) as the total number of sites in the system and \(T\) the temperature. These expressions for the local spin and charge susceptibilities are obtained by enforcing the Pauli principle through \(\langle n_{\sigma}^{2}\rangle=\langle n_{\sigma}\rangle\).
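Note that the double occupancy cancels in the sum of Eq. (20) and Eq. (22), so that the two local sum rules together are fixed to \(2n-n^{2}\) by the Pauli principle alone. A minimal sketch, with purely illustrative values of \(n\) and \(\langle n_{\uparrow}n_{\downarrow}\rangle\), makes this explicit.

```python
def local_sum_rules(n, docc):
    """Right-hand sides of the local spin (Eq. 20) and charge (Eq. 22) sum rules."""
    chi_sp_loc = n - 2 * docc
    chi_ch_loc = n + 2 * docc - n ** 2
    return chi_sp_loc, chi_ch_loc

n, docc = 1.0, 0.18                       # illustrative half-filled values
sp_loc, ch_loc = local_sum_rules(n, docc)
# The sum sp_loc + ch_loc equals 2n - n^2 regardless of the double occupancy (Pauli principle)
print(sp_loc, ch_loc, sp_loc + ch_loc, 2 * n - n ** 2)
```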
## III Tpsc
In this section, we recall some of the main properties of the TPSC approach. This method, which was first developed for the one-band Hubbard model, is valid in the weak to intermediate coupling regime. It is a conserving approach that respects both the Mermin-Wagner theorem and the Pauli principle [19]. The starting point of the TPSC approach for the Hubbard model is the Schwinger-Dyson self-energy defined in Eq. (8). We first impose that Eq. (8) is satisfied exactly at equal time and position, namely
\[\Sigma_{\sigma}(1,\bar{2})_{\phi}\mathcal{G}_{\sigma}(\bar{2},1^{+})_{\phi}=U \langle n_{\uparrow}(1)n_{\downarrow}(1)\rangle_{\phi}. \tag{23}\]
Next, we consider a Hartree-Fock like factorization of Eq. (8) when the point \(2\) is different from the point \(1\). We perform the factorization by introducing a functional \(A_{\phi}\)
\[\Sigma_{\sigma}^{(1)}(1,\bar{2})_{\phi}\mathcal{G}_{\sigma}^{(1)}(\bar{2},2)_{ \phi}=A_{\phi}\mathcal{G}_{-\sigma}^{(1)}(1,1^{+})_{\phi}\mathcal{G}_{\sigma}^ {(1)}(1,2)_{\phi}. \tag{24}\]
The superscript \({}^{(1)}\) denotes the first level of approximation. The TPSC ansatz postulates that Eq. (23) and Eq. (24) must be satisfied simultaneously, in the spirit of Singwi [33] and Hedeyati-Vignale [34]. This means that the functional \(A_{\phi}\) must be defined as
\[A_{\phi}=U\frac{\langle n_{\uparrow}(1)n_{\downarrow}(1)\rangle_{\phi}}{ \langle n_{\uparrow}(1)\rangle_{\phi}\langle n_{\downarrow}(1)\rangle_{\phi}}. \tag{25}\]
We now compute the irreducible spin vertex defined in Eq. (17) using the first level of approximation for the self-energy. Setting the source field to zero after functional differentiation, we find
\[U_{sp}(\bar{3},\bar{4};\bar{5},\bar{6})=A_{\phi=0}\delta(\bar{3}-\bar{5}) \delta(\bar{3}-\bar{6})\delta(\bar{3}-\bar{4}). \tag{26}\]
This leads to the following expression for the vertex \(U_{sp}\), which is local in space and time
\[U_{sp}=U\frac{\langle n_{\uparrow}n_{\downarrow}\rangle}{\langle n_{\uparrow} \rangle\langle n_{\downarrow}\rangle}. \tag{27}\]
Assuming that \(U_{ch}\) is also local, we obtain RPA-like expressions for the spin and charge susceptibilities from Eq. (15) and Eq. (16)
\[\chi_{sp}(\mathbf{q},iq_{n})=\frac{\chi^{(1)}(\mathbf{q},iq_{n})}{1-\frac{U_{sp}} {2}\chi^{(1)}(\mathbf{q},iq_{n})}, \tag{28}\]
\[\chi_{ch}(\mathbf{q},iq_{n})=\frac{\chi^{(1)}(\mathbf{q},iq_{n})}{1+\frac{U_{ch}} {2}\chi^{(1)}(\mathbf{q},iq_{n})}. \tag{29}\]
The bubble \(\chi^{(1)}\), evaluated at the first level of approximation, is
\[\chi^{(1)}(\mathbf{q},iq_{n})=-2\frac{T}{N}\sum_{\mathbf{k},ik_{n}}\mathcal{G}^{( 1)}_{\sigma}(\mathbf{k},ik_{n})\mathcal{G}^{(1)}_{\sigma}(\mathbf{k}+\mathbf{q },ik_{n}+iq_{n}), \tag{30}\]
where \(k_{n}=(2n+1)\pi T\) are fermionic Matsubara frequencies and \(\mathbf{k}\) are wave vectors in the first Brillouin zone.
The TPSC approach solves the Hubbard model through the self-consistency of the ansatz that leads to the definition of \(U_{sp}\) (Eq. (27)) and the sum rules for the spin and charge susceptibilities. Indeed, comparing the spin susceptibility sum rule Eq. (20) and the TPSC equation for the spin susceptibility Eq. (28), we find
\[\frac{T}{N}\sum_{\mathbf{q},iq_{n}}\frac{\chi^{(1)}(\mathbf{q},iq_ {n})}{1-\frac{U_{sp}}{2}\chi^{(1)}(\mathbf{q},iq_{n})} =n-2\langle n_{\uparrow}n_{\downarrow}\rangle, \tag{31}\] \[=n-2\frac{U_{sp}}{U}\langle n_{\uparrow}\rangle\langle n_{ \downarrow}\rangle, \tag{32}\]
where the second line comes from Eq. (27), which defines \(U_{sp}\) from the double occupancy \(\langle n_{\uparrow}n_{\downarrow}\rangle\).
We first solve Eq. (32) self-consistently for \(U_{sp}\) and the double occupancy. Then, given the double occupancy, the sum rule on the charge susceptibility
\[\frac{T}{N}\sum_{\mathbf{q},iq_{n}}\frac{\chi^{(1)}(\mathbf{q},iq_{n})}{1+ \frac{U_{ch}}{2}\chi^{(1)}(\mathbf{q},iq_{n})}=n+2\langle n_{\uparrow}n_{ \downarrow}\rangle-n^{2}. \tag{33}\]
gives us the value of \(U_{ch}\). Since the expressions for the local spin and charge sum rules used within TPSC enforce the Pauli principle, the method itself respects it. Moreover, since the self-energy at the first level of approximation, \(\Sigma^{(1)}_{\sigma}(1,2)=U_{sp}n_{-\sigma}\delta(\mathbf{r}_{1}-\mathbf{r}_{2})\delta(\tau_{1}-\tau_{2})\), is a constant, the bubble \(\chi^{(1)}\) entering the TPSC equations reduces to the noninteracting Lindhard function, and \(\Sigma^{(1)}\) is absorbed in the definition of the chemical potential.
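In practice, Eq. (32) is a one-dimensional root-finding problem for \(U_{sp}\) in the interval \(0<U_{sp}<2/\max_{\mathbf{q}}\chi^{(1)}(\mathbf{q},0)\) allowed by the Mermin-Wagner theorem. The sketch below illustrates the procedure at half-filling (where \(\Sigma^{(1)}\) is absorbed into \(\mu=0\)); it uses a coarse \(\mathbf{k}\)-grid and a crude Matsubara cutoff for brevity, whereas a production code treats the high-frequency tail of the susceptibilities analytically.

```python
import numpy as np
from scipy.optimize import brentq

t, T, U, n_fill = 1.0, 0.25, 2.0, 1.0
Nk, Nw = 16, 64                      # k-grid linear size and bosonic-frequency cutoff (crude)
k = 2 * np.pi * np.arange(Nk) / Nk
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = -2 * t * (np.cos(kx) + np.cos(ky))
f = 1.0 / (np.exp(eps / T) + 1.0)

def chi1(iqn, qi, qj):
    """Bubble chi^(1)(q, iq_n) of Eq. (30) after the analytic fermionic Matsubara sum."""
    eps_q = np.roll(np.roll(eps, -qi, axis=0), -qj, axis=1)
    f_q = 1.0 / (np.exp(eps_q / T) + 1.0)
    den = iqn + eps - eps_q
    reg = np.abs(den) > 1e-12
    val = np.where(reg, (f - f_q) / np.where(reg, den, 1.0), -f * (1.0 - f) / T)
    return -2.0 * np.mean(np.real(val))

# chi^(1) on the (iq_n >= 0, q) grid; chi(q, -iq_n) = chi(q, iq_n) is used in the frequency sum
qn = 2.0 * np.pi * T * np.arange(Nw)
chi = np.array([[[chi1(1j * w, qi, qj) for qj in range(Nk)] for qi in range(Nk)] for w in qn])

def spin_sum_rule(Usp):
    """Left minus right-hand side of Eq. (32) at half-filling (n = 1)."""
    chi_sp = chi / (1.0 - 0.5 * Usp * chi)
    lhs = T * (np.mean(chi_sp[0]) + 2.0 * np.sum(np.mean(chi_sp[1:], axis=(1, 2))))
    rhs = n_fill - 2.0 * (Usp / U) * (n_fill / 2) ** 2
    return lhs - rhs

Usp_max = 2.0 / chi[0].max()                          # Mermin-Wagner bound, Eq. (52)
Usp = brentq(spin_sum_rule, 1e-6, 0.9999 * Usp_max)
print(f"U_sp = {Usp:.3f}, double occupancy = {Usp / U * (n_fill / 2) ** 2:.4f}")
```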
Within TPSC, in analogy with what is done for the electron gas, at the second level of approximation the self-energy is influenced by the spin and charge fluctuations. It enters the interacting Green's function through \(\mathcal{G}^{(2)}=\left(\mathcal{G}^{(1)^{-1}}-\Sigma^{(2)}\right)^{-1}\) and is given by
\[\Sigma^{(2)}_{\sigma}(\mathbf{k},ik_{n}) =Un_{-\sigma}+\frac{T}{N}\frac{U}{8}\sum_{\mathbf{q},iq_{n}}[3U_{ sp}\chi_{sp}(\mathbf{q},iq_{n})\] \[+\,U_{ch}\chi_{ch}(\mathbf{q},iq_{n})]\,\mathcal{G}^{(1)}_{\sigma }(\mathbf{k}+\mathbf{q},ik_{n}+iq_{n}). \tag{34}\]
This expression for the self-energy Eq. (34) contains the contribution from the longitudinal [19] and transverse [35] channels.
This form satisfies exactly the Galitski-Migdal equation
\[Tr\left[\Sigma^{(2)}\mathcal{G}^{(1)}\right]=U\langle n_{\uparrow}n_{\downarrow}\rangle, \tag{35}\]
demonstrating consistency between one- and two-particle quantities. However, within TPSC, the equality in Eq. (35) is not satisfied if one uses the interacting Green's function \(\mathcal{G}^{(2)}\) instead of the non-interacting one \(\mathcal{G}^{(1)}\). The deviation between the trace of \(\Sigma^{(2)}\mathcal{G}^{(i)}\) with the noninteracting (\(i=1\)) and the interacting (\(i=2\)) Green's functions can be used as an internal consistency check of the approach [19].
## IV TPSC+
The main aim of the TPSC+ approach is to improve the results deep in the renormalized classical regime where, as will be shown in Sec. (IV.3), the TPSC approach fails. We start by introducing the formulation of two variants of the TPSC+ approach in Sec. (IV.1), called TPSC+ and TPSC+SFM. Then, we show that the methods respect the Mermin-Wagner theorem in Sec. (IV.2), that they are valid in the renormalized classical regime in Sec. (IV.3), and that they recover a generalized Stoner criterion with a renormalized interaction in the antiferromagnetic phase in Sec. (IV.4). Moreover, we show that the TPSC+ approach is consistent with respect to one- and two-particle quantities in Sec. (IV.5). In Sec. (IV.6), we show that the methods predict an antiferromagnetic pseudogap in the \(2D\) Hubbard model. Finally, we comment on the limitations of the TPSC+ approach in Sec. (IV.7), where we show that it violates spin and charge conservation laws as well as the f-sum rule, whereas its variant TPSC+SFM does not.
### Formulation of the approach
The two TPSC+ approaches introduced here are based on the same considerations we introduced for the TPSC approach. In the TPSC+ approach, the self-energy \(\Sigma^{(2)}\) that enters \(\mathcal{G}^{(2)}\) is defined as in Eq. (34). However, the spin and charge susceptibilities are no longer defined with the non-interacting susceptibility \(\chi^{(1)}\), but instead as
\[\chi_{sp}(\mathbf{q},iq_{n}) =\frac{\chi^{(2)}(\mathbf{q},iq_{n})}{1-\frac{U_{sp}}{2}\chi^{(2) }(\mathbf{q},iq_{n})}, \tag{36}\] \[\chi_{ch}(\mathbf{q},iq_{n}) =\frac{\chi^{(2)}(\mathbf{q},iq_{n})}{1+\frac{U_{ch}}{2}\chi^{(2) }(\mathbf{q},iq_{n})}. \tag{37}\]
The spin and charge irreducible vertices are computed in the same way as in the TPSC approach, namely through the self-consistency with the local sum rules and the TPSC ansatz \(U_{sp}=U\langle n_{\uparrow}n_{\downarrow}\rangle/\langle n_{\uparrow}\rangle \langle n_{\downarrow}\rangle\). The distinction between TPSC, TPSC+ and TPSC+SFM comes from the asymmetric form of the partially-dressed susceptibility \(\chi^{(2)}\) that we consider here. In TPSC+, it is defined as
\[\chi^{(2)}_{\mathrm{TPSC+}}(\mathbf{q},iq_{n})= -\frac{T}{N}\sum_{\mathbf{k},ik_{n}}\left(\mathcal{G}^{(2)}_{ \sigma}(\mathbf{k},ik_{n})\mathcal{G}^{(1)}_{\sigma}(\mathbf{k}+\mathbf{q},ik_ {n}+iq_{n})\right.\] \[+\,\left.\mathcal{G}^{(2)}_{\sigma}(\mathbf{k},ik_{n})\mathcal{G} ^{(1)}_{\sigma}(\mathbf{k}-\mathbf{q},ik_{n}-iq_{n})\right). \tag{38}\]
In TPSC+SFM, the partially-dressed susceptibility takes a different form
\[\chi^{(2)}_{\rm TPSC+SFM}({\bf q},iq_{n})=\begin{cases}-\frac{T}{N}\sum_{{\bf k },ik_{n}}\left(\tilde{\mathcal{G}}^{(2)}_{\sigma}({\bf k}-{\bf q},ik_{n}) \mathcal{G}^{(1)}_{\sigma}({\bf k},ik_{n})+\tilde{\mathcal{G}}^{(2)}_{\sigma} ({\bf k}+{\bf q},ik_{n})\mathcal{G}^{(1)}_{\sigma}({\bf k},ik_{n})\right),&q_{n }=0\\ \chi^{(1)}({\bf q},iq_{n})&q_{n}\neq 0.\end{cases} \tag{39}\]
In Eq. (38) and Eq. (39), the Green's function \(\mathcal{G}^{(1)}\) is the noninteracting Green's function, and the Green's function \(\mathcal{G}^{(2)}\) is the interacting Green's function that includes the complete TPSC self-energy defined previously in Eq. (34). The Green's function \(\tilde{\mathcal{G}}^{(2)}\) is an interacting Green's function in which the self-energy only contains the contribution from the longitudinal spin fluctuations. Hence, the distinction between the interacting Green's functions \(\mathcal{G}^{(2)}\) and \(\tilde{\mathcal{G}}^{(2)}\) is
\[\mathcal{G}^{(2)} \Rightarrow\Sigma^{(2)}({\bf k},ik_{n})=\frac{T}{N}\frac{U}{8} \sum_{{\bf q},iq_{n}}\left[3U_{sp}\chi_{sp}({\bf q},iq_{n})+U_{ch}\chi_{ch}({ \bf q},iq_{n})\right]\mathcal{G}^{(1)}_{\sigma}({\bf k}+{\bf q},ik_{n}+iq_{n}). \tag{40}\] \[\tilde{\mathcal{G}}^{(2)} \Rightarrow\tilde{\Sigma}^{(2)}({\bf k},ik_{n})=\frac{T}{N}\frac {U}{4}\sum_{{\bf q},iq_{n}}U_{sp}\chi_{sp}({\bf q},iq_{n})\mathcal{G}^{(1)}_{ \sigma}({\bf k}+{\bf q},ik_{n}+iq_{n}). \tag{41}\]
In the Green's functions \(\mathcal{G}^{(2)}\) and \(\tilde{\mathcal{G}}^{(2)}\), the chemical potential is chosen so that the total density \(n\) is kept constant. This means that the partially-dressed susceptibilities \(\chi^{(2)}\) calculated from TPSC+ and TPSC+SFM obey the same sum rule as the noninteracting correlation function \(\chi^{(1)}\)
\[\chi^{(1)}({\bf r}=0,\tau=0) =\chi^{(2)}({\bf r}=0,\tau=0),\] \[=n-\frac{n^{2}}{2}. \tag{42}\]
We note that the final Green's function obtained in the TPSC+SFM approach is still the one that includes both spin and charge fluctuations, as defined in Eq. (34). Only the self-energy used in the calculation of the partially-dressed susceptibility takes the form defined in Eq. (41). We also remark that the discontinuity in the partially-dressed susceptibility introduced in Eq. (39) could be an issue for the analytic continuation to real frequencies.
Both TPSC+ approaches are self-consistent in two ways: (a) The self-consistency between \(U_{sp}\) and the double occupancy through the sum rule and the TPSC ansatz is still present in the extended approaches, and (b) The self-energy \(\Sigma^{(2)}_{\sigma}\), the Green's function \(\mathcal{G}^{(2)}_{\sigma}\) and the partially-dressed susceptibility \(\chi^{(2)}\) all depend self-consistently on each other and can be calculated through an iterative process.
This approach is analogous to the pairing approximation (\(GG_{0}\) theory) for the pair susceptibility introduced by Kadanoff and Martin [36; 37; 38]. Though we will rigorously justify the approach in the following sections, we now provide a phenomenological justification for the use of a partially-dressed susceptibility \(\chi^{(2)}\), which was also introduced in Appendix D.7 of Ref. [18]. As detailed in Section II.3, in the source field approach the susceptibilities are obtained from functional derivatives of the Green's function. Using the identity \(G(1,\bar{3})G^{-1}(\bar{3},2)=\delta(1-2)\), these susceptibilities can be written as
\[\frac{\delta G}{\delta\phi}=-G\frac{\delta G^{-1}}{\delta\phi}G, \tag{43}\]
where \(\frac{\delta G^{-1}}{\delta\phi}\) is the vertex. Assuming a quasiparticle picture, the Green's function can be expressed as a function of the quasiparticle weight \(Z\) and the non-interacting Green's function \(G_{0}\), namely \(G=ZG_{0}\). Hence, in this approximation, the functional derivative of the Green's function becomes analogous to susceptibilities introduced in Eqs. 36, 37 and 38,
\[\frac{\delta G}{\delta\phi}=-G_{0}\frac{\delta G_{0}^{-1}}{\delta\phi}G. \tag{44}\]
This is reminiscent of the cancellation between quasiparticle renormalization in the Green function and in the vertex that occurs in Landau Fermi liquid theory.
### Mermin-Wagner theorem
Here, we show that the TPSC+ approach and the TPSC+SFM variant respect the Mermin-Wagner theorem and more specifically that it prevents antiferromagnetic phase transitions at finite temperatures in two dimensions. The proof is the same as in the case of the TPSC approach detailed in Ref. [19]. We consider a regime where the spin correlation length is large. In this regime, the spin susceptibility can be expanded in an Ornstein-Zernicke form at zero frequency and for wave vectors near the antiferromagnetic wave vector \({\bf Q}\)
\[\chi_{sp}({\bf q}\simeq{\bf Q},0)\simeq\frac{2}{U_{sp}\xi_{0}^{2}}\frac{1}{q^ {2}+\xi_{sp}^{-2}}, \tag{45}\]
where the bare particle-hole correlation length is defined as
\[\xi_{0}^{2}=\frac{-1}{2\chi^{(2)}({\bf Q},0)}\left.\frac{\partial^{2}\chi^{(2)} ({\bf q},0)}{\partial q_{x}^{2}}\right|_{{\bf q}={\bf Q}}, \tag{46}\]
and the spin correlation length is
\[\xi_{sp}=\xi_{0}\sqrt{\frac{U_{sp}}{\frac{2}{\chi^{(2)}(\mathbf{Q},0)}-U_{sp}}}. \tag{47}\]
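To illustrate Eqs. (46) and (47) numerically, one can extract \(\xi_{0}\) from a finite-difference second derivative of the static bubble around \(\mathbf{Q}\) and then evaluate \(\xi_{sp}\) for a value of \(U_{sp}\) just below the bound \(2/\chi(\mathbf{Q},0)\). In the sketch below the bare static bubble is used as a stand-in for \(\chi^{(2)}(\mathbf{q},0)\), and the chosen \(U_{sp}\) is purely illustrative.

```python
import numpy as np

t, T = 1.0, 0.2
Nk = 128
k = 2 * np.pi * np.arange(Nk) / Nk
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = -2 * t * (np.cos(kx) + np.cos(ky))
f = 1.0 / (np.exp(eps / T) + 1.0)

def chi_static(qi, qj):
    """Static bubble chi(q, 0); here the bare bubble stands in for chi^(2)(q, 0)."""
    eps_q = np.roll(np.roll(eps, -qi, axis=0), -qj, axis=1)
    f_q = 1.0 / (np.exp(eps_q / T) + 1.0)
    den = eps - eps_q
    reg = np.abs(den) > 1e-12
    val = np.where(reg, (f - f_q) / np.where(reg, den, 1.0), -f * (1.0 - f) / T)
    return -2.0 * np.mean(val)

iQ = Nk // 2                         # Q = (pi, pi)
dq = 2 * np.pi / Nk                  # grid spacing used in the finite difference
chi_Q = chi_static(iQ, iQ)
d2chi = (chi_static(iQ + 1, iQ) - 2 * chi_Q + chi_static(iQ - 1, iQ)) / dq ** 2
xi0 = np.sqrt(-d2chi / (2 * chi_Q))                  # Eq. (46)

Usp = 0.98 * 2.0 / chi_Q                             # illustrative U_sp just below the bound
xisp = xi0 * np.sqrt(Usp / (2.0 / chi_Q - Usp))      # Eq. (47)
print(f"xi_0 = {xi0:.3f}, xi_sp = {xisp:.2f} (lattice spacings)")
```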
The sum rule for the spin susceptibility can be rewritten as
\[n-2\langle n_{\uparrow}n_{\downarrow}\rangle-C=\frac{2}{U_{sp}\xi_{0}^{2}}\sum_ {\mathbf{q}}\frac{1}{q^{2}+\xi_{sp}^{-2}}, \tag{48}\]
where the constant \(C\) contains all the non-zero Matsubara frequency contributions to the sum rule. Hence, the left-hand side of the previous equation is finite. We now focus on the right-hand side and transform the sum into an integral
\[\sum_{\mathbf{q}}\chi_{sp}(\mathbf{q}\simeq\mathbf{Q},0)\simeq\frac{2}{U_{sp} \xi_{0}^{2}}\int\frac{d^{d}q}{(2\pi)^{d}}\frac{1}{q^{2}+\xi_{sp}^{-2}}, \tag{49}\]
where \(d\) is the spatial dimension. A more detailed analysis with power-law corrections can be found in Ref. [39].
We first consider the \(d=3\) case, where \(d^{3}q=q^{2}\sin\theta dqd\theta d\phi\). Assuming that an antiferromagnetic order is possible, we take the spin correlation length \(\xi_{sp}\) to infinity. We obtain
\[\int d^{3}q\frac{1}{q^{2}+\xi_{sp}^{-2}}\propto\int_{0}^{\Lambda}dq, \tag{50}\]
where \(\Lambda\) is a finite cutoff to take into account the regime of validity of the Ornstein-Zernicke form of the spin susceptibility. Hence, in \(d=3\), both sides of Eq. (48) take finite values even in the limit where \(\xi_{sp}\rightarrow\infty\), which means that an antiferromagnetic phase transition is possible at finite temperature. The same is true for \(d>3\).
The \(d=2\) case, for which \(d^{2}q=qdqd\phi\), is different. Indeed, taking \(\xi_{sp}\rightarrow\infty\) in \(d=2\) yields
\[\int d^{2}q\frac{1}{q^{2}+\xi_{sp}^{-2}}\propto\int_{0}^{\Lambda}\frac{dq}{q}, \tag{51}\]
which diverges logarithmically at \(q\to 0\) and leads to a contradiction in Eq. (48) because its left-hand side remains finite. Hence, in \(d=2\), the spin correlation length obtained from the TPSC+ approach never reaches an infinite value at finite temperature, in agreement with the Mermin-Wagner theorem.
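The dimensional argument can be checked directly: the \(d=2\) integral grows without bound (logarithmically in \(\xi_{sp}\)), while the \(d=3\) integral saturates. A minimal sketch, with an arbitrary cutoff \(\Lambda=1\):

```python
from scipy.integrate import quad

Lambda = 1.0                              # cutoff of the Ornstein-Zernicke regime (arbitrary)

def i2d(xi):
    # d = 2: integral of q dq / (q^2 + xi^-2) = 0.5 ln(1 + Lambda^2 xi^2), unbounded as xi grows
    return quad(lambda q: q / (q ** 2 + xi ** -2), 0.0, Lambda)[0]

def i3d(xi):
    # d = 3: integral of q^2 dq / (q^2 + xi^-2) stays bounded by Lambda as xi -> infinity
    return quad(lambda q: q ** 2 / (q ** 2 + xi ** -2), 0.0, Lambda)[0]

for xi in (1e1, 1e3, 1e5):
    print(f"xi_sp = {xi:8.0f}:  2D integral = {i2d(xi):7.3f},  3D integral = {i3d(xi):.3f}")
```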
### Validity in the renormalized classical regime
The onset of the renormalized-classical regime is signaled by the characteristic spin fluctuation frequency \(\omega_{SF}\propto\xi_{sp}^{-2}\) becoming smaller than the temperature \(T\)[19; 40]. In this regime the spin correlation length grows exponentially. At lower temperature, the Vilk criterion becomes satisfied. The various crossovers associated with these phenomena have been thoroughly discussed in Ref. [18]. One of the main limitations of the TPSC approach reviewed in Sec. (III) is that it is not valid deep in the renormalized classical regime of the \(2D\) Hubbard model. In this section, we discuss how the TPSC+ methods are valid in this regime whereas TPSC is not.
We consider the half-filled case of the \(2D\) Hubbard model with nearest-neighbor hopping only. As shown in Sec. (IV.2), the TPSC and the TPSC+ approaches satisfy the Mermin-Wagner theorem, constraining the spin susceptibility at \(\mathbf{Q}=(\pi,\pi)\) to finite values at finite temperature in \(2D\). With the RPA-like spin susceptibilities of TPSC and TPSC+, this imposes a condition on the value of \(U_{sp}\)
\[U_{sp}<\frac{2}{\chi^{(i)}(\mathbf{Q},0)}, \tag{52}\]
where \(i=1\) (\(i=2\)) in the case of the TPSC (TPSC+) approach.
We first consider the TPSC case. In the specific example mentioned earlier, the zero-frequency Lindhard function at \(\mathbf{Q}=(\pi,\pi)\) is
\[\chi^{(1)}(\mathbf{Q},0)=\int d\epsilon\rho(\epsilon)\frac{\tanh(\epsilon/2T)}{ \epsilon}, \tag{53}\]
where the density of states \(\rho(\epsilon)\) is
\[\rho(\epsilon)=\frac{1}{2\pi^{2}t}K\left[\sqrt{1-\left(\frac{\epsilon}{4t} \right)^{2}}\right], \tag{54}\]
with \(K\) the complete elliptic integral of the first kind [41]. The integral Eq. (53) can be solved numerically as a function of the temperature \(T\). In Fig. 1, we show that there is a divergence in \(\chi^{(1)}(\mathbf{Q},0)\) as \(T\) goes to \(0\) both from the numerical integration of Eq. (53) (panel (a), red curve) and from the numerical evaluation of Eq. (30) (panels (b) and (c), red dots). This divergence can also be shown from an analytic evaluation of Eq. (53). Indeed, in the specific case we study, there is a van Hove singularity in the density of states at the Fermi level (\(\epsilon_{F}=0\)). More specifically, the dominant contribution to \(K\) is
\[K\left[\sqrt{1-x^{2}}\right]\rightarrow-\ln x,\ \ \ x\to 0. \tag{55}\]
The integral Eq. (53) can then be written, with \(u=\epsilon/2T\),
\[\chi^{(1)}(\mathbf{Q},0) \sim\left[-\int_{1}^{\Lambda/2T}du\frac{\ln u}{u}+C\right],\] \[\sim-\ln^{2}\frac{\Lambda}{2T}, \tag{56}\]
where \(C\) includes the less singular terms that contribute to the integral, and \(\Lambda\) is a high energy cutoff. Hence, the Lindhard function diverges as the square of a logarithm at \(T\to 0\) for this model.
From the bound on \(U_{sp}\) emerging due to the Mermin-Wagner theorem Eq. (52), we conclude that, in the TPSC approach, the spin vertex \(U_{sp}\) must go to \(0\) as \(T\) goes to \(0\) in order to respect the Mermin-Wagner theorem. In panel (a) of Fig. 2, we show \(U_{sp}\) as a function of the temperature for \(U=1,\ 2,\ 3\) and \(4\) obtained from the TPSC calculation for
the \(2D\) Hubbard model with nearest-neighbor hopping only. All cases show a drop in the value of \(U_{sp}\) as the temperature goes to zero. This drop signals the entry in the renormalized classical regime. Moreover, since the value of \(U_{sp}\) is then limited by the value of the non-interacting Lindhard function, the spin vertex in this regime becomes independent of \(U\).
We now show that the TPSC+ approach does not encounter this issue. We first need to evaluate \(\chi^{(2)}(\mathbf{Q},0)\) in the renormalized classical regime where spin fluctuations are strong, starting with the computation of the self-energy that enters the interacting Green's function \(\mathcal{G}^{(2)}\). We only consider the contributions from spin fluctuations and use the Ornstein-Zernicke form of the spin susceptibility. This approximation is valid for both the TPSC+ and the TPSC+SFM methods. In 2 dimensions, we find
\[\Sigma(\mathbf{k},ik_{n})\simeq\frac{3UT}{4\xi_{0}^{2}}\int\frac{d^{2}q}{(2 \pi)^{2}}\frac{1}{q^{2}+\xi_{sp}^{-2}}\frac{1}{ik_{n}-\epsilon_{\mathbf{k+Q+q} }+\mu^{(1)}}. \tag{57}\]
Since the spin correlation length \(\xi_{sp}\) is very large in this regime, the term \(1/(q^{2}+\xi_{sp}^{-2})\) is non-negligible only in the limit where \(q\) is small. We also note that, for values of the wave vector that lie outside of the Fermi surface, the term \(\epsilon_{\mathbf{k+Q+q}}-\mu^{(1)}\) is non-zero. To compute \(\chi^{(2)}\), we must perform a sum over the whole Brillouin zone. For all these reasons, we can neglect the \(\mathbf{q}\) dependence of \(\epsilon_{\mathbf{k+Q+q}}\) in the above expression. This leads to
\[\Sigma(\mathbf{k},ik_{n})=\frac{\Delta^{2}}{ik_{n}-\epsilon_{\mathbf{k+Q}}+ \mu^{(1)}}, \tag{58}\]
where \(\Delta^{2}\) is defined as the temperature dependent quantity
\[\Delta^{2}=\frac{3UT}{4\xi_{0}^{2}}\int\frac{d^{2}q}{(2\pi)^{2}}\frac{1}{q^{2 }+\xi_{sp}^{-2}}. \tag{59}\]
We now obtain an expression for \(\chi^{(2)}\) in the limit \(T\to 0\) in the renormalized classical regime
\[\chi^{(2)}(\mathbf{Q},0) =-\frac{T}{N}\sum_{\mathbf{k},ik_{n}}\left[\mathcal{G}^{(2)}( \mathbf{k},ik_{n})\mathcal{G}^{(1)}(\mathbf{k+Q},ik_{n})\right.\] \[+\left.\mathcal{G}^{(2)}(\mathbf{k},ik_{n})\mathcal{G}^{(1)}( \mathbf{k-Q},ik_{n})\right], \tag{60}\] \[=-\frac{T}{N}\sum_{\mathbf{k},ik_{n}}\frac{2}{(ik_{n}-\varepsilon _{\mathbf{k+Q}}^{(1)})(ik_{n}-\varepsilon_{\mathbf{k}}^{(2)})-\Delta^{2}}, \tag{61}\]
where we used the equality \(\epsilon_{\mathbf{q+Q}}=\epsilon_{\mathbf{q-Q}}\), and defined \(\varepsilon_{\mathbf{k}}^{(i)}=\epsilon_{\mathbf{k}}-\mu^{(i)}\). The sum over discrete Matsubara frequencies can be performed analytically. We find
\[\chi^{(2)}(\mathbf{Q},0)=-\frac{2}{N}\sum_{\mathbf{k}}\frac{f(E_{\mathbf{k}}^{ +})-f(E_{\mathbf{k}}^{-})}{E_{\mathbf{k}}^{+}-E_{\mathbf{k}}^{-}}, \tag{62}\]
where \(f(\epsilon)\) is the Fermi-Dirac distribution. The energies \(E_{\mathbf{k}}^{\pm}\) are defined as
\[E_{\mathbf{k}}^{\pm}=\frac{1}{2}\left(\varepsilon_{\mathbf{k}}^{(2)}+ \varepsilon_{\mathbf{k+Q}}^{(1)}\pm\sqrt{\left(\varepsilon_{\mathbf{k}}^{(2)} -\varepsilon_{\mathbf{k+Q}}^{(1)}\right)^{2}+4\Delta^{2}}\right). \tag{63}\]
So far, this is a general result that can be applied to cases outside of perfect nesting. We transform the sum over \(\mathbf{k}\) into an integral and obtain, for the case of perfect nesting where \(\mu^{(1)}=\mu^{(2)}=0\),
\[\chi^{(2)}(\mathbf{Q},0)=\int d\epsilon\rho(\epsilon)\frac{\tanh(\sqrt{ \epsilon^{2}+\Delta^{2}}/2T)}{\sqrt{\epsilon^{2}+\Delta^{2}}}, \tag{64}\]
which is the analogue of Eq. (53). Like for \(\chi^{(1)}(\mathbf{Q},0)\), we solve Eq. (64) for \(\chi^{(2)}(\mathbf{Q},0)\) through a numerical integration and vary the value of \(\Delta\). Panel (a) of Fig. 1 compares our results for \(\chi^{(1)}(\mathbf{Q},0)\) and \(\chi^{(2)}(\mathbf{Q},0)\) from numerical integration. The presence of the self-energy through \(\Delta^{2}\) suppresses the divergence at low temperature progressively as we increase \(\Delta\). As \(T\) decreases, \(\chi^{(2)}(\mathbf{Q},0)\) saturates at a finite value instead of diverging as \(\chi^{(1)}\) does. Consequently, the criterion Eq. (52) can be satisfied with a finite, non-zero value of \(U_{sp}\) in the TPSC+ approach since the correlation function \(\chi^{(2)}\) remains finite.
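The numerical integrations behind panel (a) of Fig. 1 can be reproduced schematically as follows. The value \(\Delta=0.5\) below is an arbitrary illustration rather than the self-consistent value of Eq. (59), and the integrable logarithmic van Hove singularity of \(\rho(\epsilon)\) at \(\epsilon=0\) is handled by splitting the integration interval at zero.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

t = 1.0

def dos(e):
    """Density of states of Eq. (54); scipy's ellipk takes the parameter m = k^2."""
    return ellipk(1.0 - (e / (4 * t)) ** 2) / (2 * np.pi ** 2 * t)

def chi_Q(T, Delta=0.0):
    """Eq. (53) for Delta = 0 and Eq. (64) for Delta > 0."""
    def integrand(e):
        E = np.sqrt(e ** 2 + Delta ** 2)
        return dos(e) * np.tanh(E / (2 * T)) / E
    return quad(integrand, -4 * t, 4 * t, points=[0.0], limit=200)[0]

for T in (0.5, 0.2, 0.05):
    print(f"T = {T:4.2f}:  chi1(Q,0) = {chi_Q(T):6.2f}   chi2(Q,0) [Delta = 0.5] = {chi_Q(T, 0.5):5.2f}")
```

As \(T\) is lowered, the first column keeps growing (as the square of a logarithm) while the second saturates, which is the content of panel (a) of Fig. 1.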
In panel (b) of Fig. 1, we show our results for \(\chi^{(1)}(\mathbf{Q},0)\), and also for \(\chi^{(2)}(\mathbf{Q},0)\) obtained with TPSC+ calculations. Panel (c) of the same figure shows the results obtained with TPSC+SFM calculations. Though we were not able to obtain converged results at very low temperatures, we see that both the TPSC+ and the TPSC+SFM forms of \(\chi^{(2)}(\mathbf{Q},0)\) are suppressed increasingly as \(U\) increases. This is consistent with the behavior seen in panel (a) of Fig. 1 from the analytic, approximate expression of \(\chi^{(2)}\).
We show our calculated values of \(U_{sp}\) with TPSC+ in panel (b) of Fig. 2, as a function of the temperature for \(U=1,\ 2,\ 3\) and \(4\) for the \(2D\) Hubbard model with nearest-neighbor hopping only. Panel (c) of the same figure shows the results from TPSC+SFM calculations. In contrast to the TPSC case, no sharp decrease of the value of \(U_{sp}\) obtained with TPSC+ and
Figure 1: Lindhard functions \(\chi^{(1)}(\mathbf{Q},0)\) (in red) and \(\chi^{(2)}(\mathbf{Q},0)\) (in blue and green) as a function of the temperature \(T\). Lines in the panel (a) are obtained from the numerical integration of Eq. (53) for \(i=1,\ \Delta=0\) and of Eq. (64) for \(i=2,\ \Delta>0\). Dots in the panels (b) and (c) are obtained from the numerical calculation of (b) Eq. (38) for TPSC+ and (c) Eq. (39) for TPSC+SFM. The dashed lines are an interpolation. The calculations are done for the \(2D\) square lattice, at half-filling, with nearest-neighbour hopping only.
TPSC+SFM can be seen at low temperatures, and \(U_{sp}\) remains strongly \(U\)-dependent. The slight downturn observed for all values of \(U\) at low temperatures can be understood through the proportionality between the spin vertex and the double occupancy imposed by the TPSC ansatz. In the weak correlation regime of the \(2D\) Hubbard model, the double occupancy decreases as the temperature goes down towards zero: growing antiferromagnetic correlations increase the local moment, which in turn suppresses the double occupancy [18, 19, 42, 43].
A comparison of the double occupancies obtained with the TPSC, TPSC+ and TPSC+SFM approaches is shown in Fig. 3. In panel (a), the double occupancy calculated with TPSC drops sharply towards zero for all values of \(U\) considered. The drop occurs at higher temperature as \(U\) increases, following the entry into the renormalized classical regime. Comparison [44] with DCA calculations [45] confirms this trend; however, the drop all the way towards zero is unphysical. In contrast, the double occupancy computed with TPSC+ and TPSC+SFM, shown in panels (b) and (c), exhibits the expected physical decrease as the temperature is lowered, in a less pronounced way than in the TPSC case.
### Renormalized Stoner criterion
One of the main advantages of the TPSC+ methods is that in _two dimensions_ they give results for the paramagnetic pseudogap phase all the way to, and including, the zero-temperature long-range antiferromagnetic phase.
In this section then, we focus on the generalized Stoner criterion that leads to the AFM phase transition in the TPSC approach. At the Neel temperature, which is \(T_{N}=0\) in \(d=2\) but might be finite in higher dimensions, the spin susceptibility diverges according to the criterion
\[\frac{2}{U_{sp}}=\chi^{(2)}(\mathbf{Q},0). \tag{65}\]
Substituting the criterion Eq. (65) in the expression we obtained for \(\chi^{(2)}(\mathbf{Q},0)\) in the previous section, Eq. (62), we find the generalized Stoner criterion obtained from the TPSC+ and TPSC+SFM approaches at \(T_{N}\)
\[\frac{1}{U_{sp}}=-\frac{1}{N}\sum_{\mathbf{k}}\frac{f(E_{\mathbf{k}}^{+})-f(E _{\mathbf{k}}^{-})}{E_{\mathbf{k}}^{+}-E_{\mathbf{k}}^{-}} \tag{66}\]
with \(E_{\pm}\) given by Eq. (63).
In the specific case of two dimensions, where \(T_{N}=0\), this becomes
\[\frac{1}{U_{sp}}=-\frac{1}{N}\sum_{\mathbf{k}}\frac{\theta(-E_{\mathbf{k}}^{+} )-\theta(-E_{\mathbf{k}}^{-})}{E_{\mathbf{k}}^{+}-E_{\mathbf{k}}^{-}}, \tag{67}\]
where \(\theta(x)\) is the Heaviside function. Both results are analogous to the mean-field Hartree-Fock gap equation in the antiferromagnetic state [46], but with the renormalized spin vertex \(U_{sp}\) instead of the bare \(U\). From the definition of the energies \(E_{\mathbf{k}}^{\pm}\) in Eq. (63), the antiferromagnetic gap is \(2\Delta\) in specific cases where \(\mu^{(1)}=\mu^{(2)}=\epsilon_{k_{F}}=0\), such as the \(2D\) square lattice at half-filling with first-neighbor hopping only.
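A minimal sketch of the resulting zero-temperature gap equation at perfect nesting, where Eq. (67) reduces to \(1/U_{sp}=(1/N)\sum_{\mathbf{k}}1/(2\sqrt{\epsilon_{\mathbf{k}}^{2}+\Delta^{2}})\), is given below; the value of \(U_{sp}\) is illustrative and not obtained from the sum rules.

```python
import numpy as np
from scipy.optimize import brentq

t = 1.0
Nk = 256
k = 2 * np.pi * np.arange(Nk) / Nk
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = -2 * t * (np.cos(kx) + np.cos(ky))

def gap_equation(Delta, Usp):
    """Eq. (67) at perfect nesting: 1/U_sp - (1/N) sum_k 1 / (2 sqrt(eps_k^2 + Delta^2))."""
    return 1.0 / Usp - np.mean(1.0 / (2.0 * np.sqrt(eps ** 2 + Delta ** 2)))

Usp = 2.0                                              # illustrative renormalized spin vertex
Delta = brentq(gap_equation, 1e-6, 8.0 * t, args=(Usp,))
print(f"antiferromagnetic gap 2*Delta = {2 * Delta:.3f} t for U_sp = {Usp}")
```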
### Consistency between one- and two-particle properties
Consistency between one- and two-particle properties can be verified through the Galitski-Migdal equation Eq. (23) which relates the trace of \(\Sigma\mathcal{G}\), one-particle quantities, to the double occupancy. We consider this in the Matsubara-frequency and wave vector domain at the second level of approximation of the TPSC+ approach
\[\Sigma_{\sigma}^{(2)}(1,\bar{2})\mathcal{G}_{\sigma}^{(2)}(\bar{2},1^{+})= \frac{T}{N}\sum_{\mathbf{k},ik_{n}}\Sigma_{\sigma}^{(2)}(\mathbf{k},ik_{n}) \mathcal{G}_{\sigma}^{(2)}(\mathbf{k},ik_{n}). \tag{68}\]
Figure 2: Irreducible spin vertex \(U_{sp}\) obtained from (a) the TPSC, (b) TPSC+ and (c) the TPSC+SFM calculations, for \(U=1,\ 2,\ 3\) and \(4\). The TPSC approach predicts an unphysical drop of \(U_{sp}\) as the temperature goes to zero. In contrast, the values of \(U_{sp}\) obtained with TPSC+ and TPSC+SFM show no significant decrease at low temperatures in the domain of convergence of the methods. The calculations are done for the \(2D\) square lattice, at half-filling, with nearest-neighbour hopping only.
Figure 3: Double occupancy \(D=\langle n_{\uparrow}n_{\downarrow}\rangle\) obtained from (a) the TPSC, (b) TPSC+ and (c) TPSC+SFM calculations, for \(U=1,\ 2,\ 3\) and \(4\). The TPSC approach predicts an unphysical drop of \(D\) as the temperature goes to zero. In contrast, the double occupancy computed with TPSC+ and TPSC+SFM does not decrease significantly with the temperature in the domain of convergence of the methods. The calculations are done for the \(2D\) square lattice, at half-filling, with nearest-neighbour hopping only.
Inserting the equation for the self-energy and using \(\chi_{sp,ch}(\mathbf{q},iq_{n})=\chi_{sp,ch}(-\mathbf{q},-iq_{n})\), we obtain
\[\Sigma^{(2)}_{\sigma}(1,\bar{2})\mathcal{G}^{(2)}_{\sigma}(\bar{2},1^{+})=\frac{Un^{2}}{4}\] \[-\frac{UT}{16N}\sum_{\mathbf{q},iq_{n}}\left[3U_{sp}\chi_{sp}( \mathbf{q},iq_{n})+U_{ch}\chi_{ch}(\mathbf{q},iq_{n})\right]\chi^{(2)}( \mathbf{q},iq_{n}), \tag{69}\]
which can be solved using the relations
\[\chi_{sp}(\mathbf{q},iq_{n})-\chi^{(2)}(\mathbf{q},iq_{n})=\frac {U_{sp}}{2}\chi_{sp}(\mathbf{q},iq_{n})\chi^{(2)}(\mathbf{q},iq_{n}), \tag{70}\] \[\chi^{(2)}(\mathbf{q},iq_{n})-\chi_{ch}(\mathbf{q},iq_{n})=\frac {U_{ch}}{2}\chi_{ch}(\mathbf{q},iq_{n})\chi^{(2)}(\mathbf{q},iq_{n}). \tag{71}\]
Substituting these relations in Eq. (69) and using the sum rules for \(\chi_{sp,ch}\) and \(\chi^{(2)}\), we find that
\[\Sigma^{(2)}_{\sigma}(1,\bar{2})\mathcal{G}^{(2)}_{\sigma}(\bar{2},1^{+})=U \langle n_{\uparrow}n_{\downarrow}\rangle, \tag{72}\]
which is the exact result expected from Eq. (23). Hence, at the second level of approximation, the TPSC+ approach shows consistency between single-particle properties such as the self-energy and the Green's function and two-particle properties like the double occupancy. This is another improvement over the original TPSC approach, in which this consistency exists with the trace of \(\Sigma^{(2)}\mathcal{G}^{(1)}\) instead of that of \(\Sigma^{(2)}\mathcal{G}^{(2)}\).
This is not true for the case of the TPSC+SFM approach: the traces of the product of the self-energy and the Green's function, both the non-interacting \(\mathcal{G}^{(1)}\) and interacting \(\mathcal{G}^{(2)}\), are not expected to yield the exact expected result Eq. (23). However, we show in Fig. 4 that the deviation between the expected result and the trace with the interacting Green's function remains of the order of a few percent, and that the deviation is smaller for the trace with the interacting Green's function than with the non-interacting one.
### Pseudogap in the \(2d\) Hubbard model
The pseudogap from antiferromagnetic fluctuations in the weak correlation regime of the Hubbard model has been observed with multiple numerical methods [18], though it was first predicted by the TPSC approach [19]. Here, we show that the same phenomenology is obtained with the TPSC+ approach.
The antiferromagnetic pseudogap appears in the renormalized classical regime, where the spin fluctuations are dominant. In this regime, we once again use the Ornstein-Zernicke form of Eq. (45) for the spin susceptibility and, therefore, the self-energy of Eq. (57).
As described in Refs. [47] and [48], we evaluate the self-energy at the hot spots \(\mathbf{k}_{F}\) and at zero frequency \(\omega=i0^{+}\). To do so, we change the variables from \(\mathbf{q}-\mathbf{Q}\rightarrow\mathbf{q}\) and approximate \(\epsilon_{\mathbf{k}_{F}+\mathbf{q}}\) by \(\mathbf{v}_{\mathbf{k}_{F}}\cdot\mathbf{q}\) with the Fermi velocity at the hot spots connected by \(\mathbf{Q}\) to the Fermi wave vector we are interested in. We obtain
\[\Sigma^{R}_{cl}(\mathbf{k}_{F},0)=\frac{3UT}{4\xi_{0}^{2}}\int\frac{d^{2}q}{(2\pi)^{2}}\frac{1}{q_{\perp}^{2}+q_{\parallel}^{2}+\xi_{sp}^{-2}}\frac{1}{i0^{+}-q_{\parallel}v_{F}}, \tag{73}\]
where the wave vector \(\mathbf{q}\) is separated in components parallel (\(q_{\parallel}\)) and perpendicular (\(q_{\perp}\)) to the Fermi velocity. The integration can then be performed in the complex plane, [47; 19] leading to the following imaginary part of the self-energy
\[\mathrm{Im}\Sigma_{cl}(\mathbf{k}_{F},0)=-\frac{3UT}{16\xi_{0}^{2}}\frac{\xi_ {sp}}{\xi_{th}}, \tag{74}\]
with \(\xi_{th}=\frac{v_{F}}{\pi T}\) the thermal de Broglie wavelength. As a reminder, the spectral weight \(A(\mathbf{k},\omega)\) is written as a function of the self-energy
\[A(\mathbf{k},\omega)=-2\frac{\mathrm{Im}\Sigma(\mathbf{k},\omega)}{(\omega- \epsilon_{\mathbf{k}}+\mu-\mathrm{Re}\Sigma(\mathbf{k},\omega))^{2}+(\mathrm{ Im}\Sigma(\mathbf{k},\omega))^{2}}. \tag{75}\]
At the Fermi level, if the absolute value of the imaginary part of the self-energy is large, the spectral weight is suppressed. Conversely, a small imaginary part of the self-energy leads to a large value of the spectral weight. Hence, from Eq. (74), the TPSC+ approach predicts a suppression of the spectral weight \(A(\mathbf{k}_{F},0)\) when the antiferromagnetic spin correlation length \(\xi_{sp}\) becomes larger than the thermal de Broglie wavelength \(\xi_{th}\), which is known as the Vilk criterion. This phenomenology corresponds to the appearance of a pseudogap from antiferromagnetic spin fluctuations. Moreover, the same arguments detailed in Ref. [19] can be used to show that two peaks appear at finite frequency in the spectral weight when the Vilk criterion is satisfied.
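The suppression mechanism can be made concrete with a few lines: once \(\xi_{sp}\) exceeds \(\xi_{th}\), \(|\mathrm{Im}\Sigma|\) at the hot spot grows linearly with the ratio \(\xi_{sp}/\xi_{th}\) and the spectral weight of Eq. (75) at \(\omega=0\) collapses. In the sketch below, \(\mathrm{Re}\Sigma\) is neglected, the hot spot is taken to sit on the Fermi surface (\(\epsilon_{\mathbf{k}_{F}}=\mu\)), and the parameter values are illustrative rather than taken from a converged calculation.

```python
import numpy as np

U, T, xi0, vF = 2.0, 0.1, 1.0, 2.0            # illustrative parameters
xi_th = vF / (np.pi * T)                       # thermal de Broglie wavelength

def im_sigma_hot_spot(xi_sp):
    """Eq. (74): classical spin-fluctuation contribution to Im Sigma at the hot spot."""
    return -3.0 * U * T / (16.0 * xi0 ** 2) * (xi_sp / xi_th)

def spectral_weight_fermi(xi_sp):
    """Eq. (75) at omega = 0 and k = k_F, neglecting Re Sigma and with eps_kF - mu = 0."""
    im_s = im_sigma_hot_spot(xi_sp)
    return -2.0 * im_s / (im_s ** 2)           # reduces to 2 / |Im Sigma|

for ratio in (0.1, 1.0, 10.0):                 # xi_sp / xi_th across the Vilk criterion
    print(f"xi_sp/xi_th = {ratio:5.1f} -> A(k_F, 0) = {spectral_weight_fermi(ratio * xi_th):8.2f}")
```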
In Fig. 5, we show the imaginary part of the self-energy evaluated at the antinodal point \(\mathbf{k}=(\pi,0)\) as a function of the Matsubara frequency for different temperatures, at half-filling and \(U=2\). The calculations were performed on the
Figure 4: Relative deviation between the traces \(\mathrm{Tr}\Sigma^{(2)}\mathcal{G}^{(i)}\) and the expected exact result of Eq. (23) from the TPSC+SFM approach. On the left, the trace is computed with the non-interacting Green’s function \(\mathcal{G}^{(1)}\), whereas on the right, it is computed with the interacting Green’s function \(\mathcal{G}^{(2)}\). The calculations are done for the \(2D\) square lattice, at half-filling, with nearest-neighbour hopping only.
square lattice with nearest-neighbour hopping only. This figure shows numerically that all three TPSC methods can indeed show the opening of a pseudogap at weak-coupling in the \(2D\) Hubbard model, though at different temperatures.
### Limitations: conservation laws and f-sum rule
We now turn to some of the limitations of the TPSC+ approach. More specifically, we show that this method does not respect conservation laws and the f-sum rule.
In the Hubbard model, the f-sum rule for the spin and charge susceptibilities is [19]
\[\int\frac{d\omega}{\pi}\omega\chi^{\prime\prime}_{sp,ch}(\mathbf{q},\omega)= \frac{1}{N}\sum_{\mathbf{k},\sigma}\left(\epsilon_{\mathbf{k}+\mathbf{q}}+ \epsilon_{\mathbf{k}-\mathbf{q}}-2\epsilon_{\mathbf{k}}\right)n^{(2)}_{ \mathbf{k},\sigma}, \tag{76}\]
where \(n^{(2)}_{\mathbf{k},\sigma}\) is the spin- and momentum-resolved distribution function (the Fermi function in the non-interacting case) computed from the interacting Green's function. We now show that this sum rule is satisfied to some level within the original TPSC approach [19] and the TPSC+SFM modification, but is violated within the TPSC+ approach.
We start from the spectral representation of the susceptibilities
\[\chi_{sp,ch}(\mathbf{q},iq_{n}) =\int\frac{d\omega}{\pi}\frac{\chi^{\prime\prime}_{sp,ch}(\mathbf{ q},\omega)}{\omega-iq_{n}},\] \[=\frac{1}{q_{n}^{2}}\int\frac{d\omega}{\pi}\frac{\omega\chi^{ \prime\prime}_{sp,ch}(\mathbf{q},\omega)}{1+(\omega/q_{n})^{2}}, \tag{77}\]
which, at high frequency, reduces to
\[\chi_{sp,ch}(\mathbf{q},iq_{n})\simeq\frac{1}{q_{n}^{2}}\int\frac{d\omega}{ \pi}\omega\chi^{\prime\prime}_{sp,ch}(\mathbf{q},\omega). \tag{78}\]
Hence, to determine if the TPSC approaches satisfy the f-sum rule, we calculate the coefficients of the \(1/q_{n}^{2}\) term of the high-frequency expansion of the spin (or charge) susceptibility. Indeed, as seen from Eq. (78), these coefficients correspond to the left-hand side of the f-sum rule Eq. (76). We denote the coefficients with \(\alpha^{(i)}\), where \(i\) stands for TPSC, TPSC+ or TPSC+SFM, first recalling that
\[\chi_{sp,ch}(\mathbf{q},iq_{n})=\frac{\chi^{(i)}(\mathbf{q},iq_{n})}{1\mp \frac{U_{sp,ch}}{2}\chi^{(i)}(\mathbf{q},iq_{n})}. \tag{79}\]
Since the non-interacting and partially dressed susceptibilities \(\chi^{(i)}\) also behave like \(1/q_{n}^{2}\) at high frequency, only the numerators of the spin and charge susceptibilities Eq. (79) contribute to the coefficients \(\alpha^{(i)}\). We now compute the high-frequency expansion of \(\chi^{(i)}\) from the spectral representation of the Green's functions. With \(A^{(i)}_{\sigma}\) the spectral weight and \(i=1,\ 2\) denoting the noninteracting and interacting cases respectively, we find
\[\chi^{(i)}(\mathbf{q},iq_{n}) =-\frac{T}{2N}\sum_{k,\sigma}\mathcal{G}^{(1)}_{\sigma}(k)\mathcal{G}^{(i)}_{\sigma}(k+q)+[q\leftrightarrow-q],\] \[=-\frac{T}{2N}\sum_{k,\sigma}\int\frac{d\omega d\omega^{\prime}}{(2\pi)^{2}}\frac{A^{(1)}_{\sigma}(\mathbf{k},\omega)A^{(i)}_{\sigma}(\mathbf{k}+\mathbf{q},\omega^{\prime})}{(ik_{n}-\omega)(ik_{n}+iq_{n}-\omega^{\prime})}+[q\leftrightarrow-q],\] \[=-\frac{1}{N}\sum_{\mathbf{k},\sigma}\int\frac{d\omega d\omega^{\prime}}{(2\pi)^{2}}\frac{A^{(1)}_{\sigma}(\mathbf{k},\omega)A^{(i)}_{\sigma}(\mathbf{k}+\mathbf{q},\omega^{\prime})}{q_{n}^{2}+(\omega-\omega^{\prime})^{2}}\,(f(\omega)-f(\omega^{\prime}))(\omega-\omega^{\prime})+[\mathbf{q}\leftrightarrow-\mathbf{q}]. \tag{80}\]
From this, we obtain the coefficients \(\alpha^{(i)}\)
\[\alpha^{(i)}= -\frac{1}{2N}\sum_{\mathbf{k},\sigma}\int\frac{d\omega d\omega^{ \prime}}{(2\pi)^{2}}A^{(1)}_{\sigma}(\mathbf{k},\omega)A^{(i)}_{\sigma}( \mathbf{k}+\mathbf{q},\omega^{\prime})\] \[\times(f(\omega)-f(\omega^{\prime}))(\omega-\omega^{\prime})+[ \mathbf{q}\leftrightarrow-\mathbf{q}]. \tag{81}\]
Figure 5: Imaginary part of the self-energy computed with (a) TPSC, (b) TPSC+ and (c) TPSC+SFM at the antinodal point \(\mathbf{k}=(\pi,0)\). The calculations are done for the \(2D\) square lattice, at half-filling, with nearest-neighbour hopping only, with \(U=2\). All three methods predict the opening of a pseudogap as the temperature decreases, as shown by the value of the imaginary part of the self-energy at \(\omega_{0}\) that becomes more negative than the value at \(\omega_{1}\).
The following identities allow us to simplify these results [19, 49, 50]
\[\int\frac{d\omega}{2\pi}A^{(i)}_{\sigma}(\mathbf{k},\omega) =1, \tag{82}\] \[\int\frac{d\omega}{2\pi}A^{(i)}_{\sigma}(\mathbf{k},\omega)f( \omega) =n^{(i)}_{\mathbf{k},\sigma},\] (83) \[\int\frac{d\omega}{2\pi}A^{(i)}_{\sigma}(\mathbf{k},\omega)\omega =\epsilon_{\mathbf{k}}-\mu^{(i)}+U\frac{n}{2}\delta_{i,2},\] (84) \[\frac{1}{N}\sum_{\mathbf{k}}\int\frac{d\omega}{2\pi}A^{(i)}_{ \sigma}(\mathbf{k},\omega)\omega f(\omega) =U\langle n_{\uparrow}n_{\downarrow}\rangle\delta_{i,2}\] \[+\frac{1}{N}\sum_{\mathbf{k}}(\epsilon_{\mathbf{k}}-\mu^{(i)})n ^{(i)}_{\mathbf{k},\sigma}. \tag{85}\]
Here, \(\mu^{(i)}\) is the chemical potential, and \(n^{(i)}_{\mathbf{k},\sigma}\) is the spin- and momentum-resolved particle distribution function computed with the non-interacting (\(i=1\)) or interacting (\(i=2\)) Green's function. We also note that Eq. (85) is only valid at level \(i=2\) when \(\mathrm{Tr}[\Sigma^{(2)}\mathcal{G}^{(2)}]=U\langle n_{\uparrow}n_{\downarrow}\rangle\), which is true in the case of TPSC+. From this, we find that the coefficient of the \(1/q_{n}^{2}\) term is
\[\alpha^{(i)}= \frac{1}{2N}\sum_{\mathbf{k},\sigma}(\epsilon_{\mathbf{k}+ \mathbf{q}}+\epsilon_{\mathbf{k}-\mathbf{q}}-2\epsilon_{\mathbf{k}})(n^{(1)} _{\mathbf{k},\sigma}+n^{(i)}_{\mathbf{k},\sigma})\] \[+U\frac{n^{2}}{2}\delta_{i,2}-2U\langle n_{\uparrow}n_{\downarrow }\rangle\delta_{i,2}. \tag{86}\]
For TPSC, with \(i=1\), we obtain
\[\alpha^{(\mathrm{TPSC})}=\frac{1}{N}\sum_{\mathbf{k},\sigma}(\epsilon_{ \mathbf{k}+\mathbf{q}}+\epsilon_{\mathbf{k}-\mathbf{q}}-2\epsilon_{\mathbf{k} })n^{(1)}_{\mathbf{k},\sigma}. \tag{87}\]
The coefficient for TPSC+SFM is identical to that of TPSC since, at high frequency, the spin and charge susceptibilities are calculated from the non-interacting Lindhard function \(\chi^{(1)}\) (see Eq. (39)). This term differs from the right-hand side of the f-sum rule Eq. (76) in the particle distribution function: TPSC and TPSC+SFM satisfy the f-sum rule at the non-interacting level, but not at the interacting level.
For TPSC+, where \(i=2\), we find instead
\[\alpha^{(\mathrm{TPSC+})} =\frac{1}{2N}\sum_{\mathbf{k},\sigma}(\epsilon_{\mathbf{k}+ \mathbf{q}}+\epsilon_{\mathbf{k}-\mathbf{q}}-2\epsilon_{\mathbf{k}})(n^{(1)} _{\mathbf{k},\sigma}+n^{(2)}_{\mathbf{k},\sigma})\] \[+U\frac{n^{2}}{2}-2U\langle n_{\uparrow}n_{\downarrow}\rangle. \tag{88}\]
Compared with the right-hand side of Eq. (76), the coefficient for TPSC+ deviates from the expected value of the f-sum rule by the term \(U\frac{n^{2}}{2}-2U\langle n_{\uparrow}n_{\downarrow}\rangle\) at \(\mathbf{q}=0\). It is important to note that the f-sum rule is satisfied exactly at \(\mathbf{q}=0\) by both TPSC and TPSC+SFM.
We now assess the deviation of the coefficients Eq. (87) and Eq. (88) from the f-sum rule numerically. In practice, we evaluate: (a) the coefficients Eq. (87) and Eq. (88), (b) the expected value of the f-sum rule from the right-hand side of Eq. (76), and (c) the left-hand side of Eq. (76). This last result is obtained from a derivative in imaginary time of the spin susceptibility,
\[\int\frac{d\omega}{\pi}\omega\chi^{\prime\prime}_{sp,ch}(\mathbf{q},\omega) =\lim_{\eta\to 0}T\sum_{iq_{n}}(e^{-iq_{n}\eta}-e^{iq_{n}\eta})iq_{n}\chi_{sp,ch}(\mathbf{q},iq_{n}), \tag{89}\] \[=-2\left.\frac{\partial\chi_{sp,ch}(\mathbf{q},\tau)}{\partial\tau}\right|_{\tau=0^{+}}, \tag{90}\]
which we compute numerically using finite differences.
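Once \(\chi_{sp}(\mathbf{q},\tau)\) is available on an imaginary-time grid, the derivative in Eq. (90) reduces to a one-sided finite difference at \(\tau\to 0^{+}\). The following minimal sketch shows the estimate we have in mind; the synthetic exponential form of \(\chi(\mathbf{q},\tau)\) is used only to check the finite-difference formula and is not a physical susceptibility.

```python
import numpy as np

def fsum_lhs_from_tau(chi_tau, tau):
    """Left-hand side of the f-sum rule via Eq. (90):
    -2 * d chi(q, tau)/d tau evaluated at tau -> 0^+ (forward finite difference)."""
    dchi = (chi_tau[1] - chi_tau[0]) / (tau[1] - tau[0])
    return -2.0 * dchi

# Toy check: chi(q, tau) = A * exp(-w * tau) gives -2 * dchi/dtau|_{0+} = 2 * A * w.
tau = np.linspace(0.0, 1.0, 2001)
A, w = 0.7, 3.0
chi_tau = A * np.exp(-w * tau)
print(fsum_lhs_from_tau(chi_tau, tau), "expected:", 2.0 * A * w)
```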
In Fig. 6, we show the relative deviation between the coefficients computed with Eq. (90) and the expected value of the f-sum rule (right-hand side of Eq. (76)). More specifically, we evaluate this deviation at \(\mathbf{q}=(2\pi/N,0)\) with \(N=256\) the number of sites in the \(x\) direction, since we expect it to be largest at small values of \(\mathbf{q}\) where the f-sum rule should be zero. The small relative deviations shown in the panels (a) and (b) for TPSC and TPSC+SFM are due to the fact that both approaches satisfy the f-sum rule at the non-interacting level. In contrast, the large deviation seen in panel (c) for TPSC+ comes from the absolute value of the coefficient at \(\mathbf{q}=0\) for this method, \(U\frac{n^{2}}{2}-2U\langle n_{\uparrow}n_{\downarrow}\rangle\), which becomes more important as \(U\) increases.
Another property that follows from spin and charge conservation, is that the spin and charge susceptibilities evaluated at zero wave vector and finite Matsubara frequency should
Figure 6: Violation of the f-sum rule evaluated at \(\mathbf{q}=(2\pi/N,0)\) with \(N=256\) sites in the \(x\) direction and density \(n=1\) by (a) TPSC, (b) TPSC+SFM and (c) TPSC+ approaches for various values of \(U\) as a function of the temperature \(T\). More specifically, the plots show the relative deviation between the coefficient computed with the derivatives Eq. (90) and the f-sum rule \(\frac{1}{N}\sum_{\mathbf{k},\sigma}\left(\epsilon_{\mathbf{k}+\mathbf{q}}+ \epsilon_{\mathbf{k}-\mathbf{q}}-2\epsilon_{\mathbf{k}}\right)n^{(2)}_{\mathbf{ k},\sigma}\).
be zero: \(\chi_{sp,ch}({\bf q}=0,iq_{n}\neq 0)=0\). In TPSC, this is achieved because the Lindhard susceptibility is zero at wave-vector \({\bf q}=0\) for all non-zero Matsubara frequencies. This is also true in TPSC+SFM. However, it is not the case in TPSC+, where the partially dressed susceptibility \(\chi^{(2)}\) remains finite at \({\bf q}=0\) for non-zero Matsubara frequencies. We expect the largest deviation to occur at the \(n=1\) Matsubara frequency \(q_{1}\). Hence, in Fig. 7, we show for TPSC+ the value of the partially dressed susceptibility \(\chi^{(2)}({\bf q}=0,iq_{1})\) divided by \(\chi^{(2)}({\bf q}=0,iq_{0})\), the value at the Matsubara frequency \(q_{0}=0\), as a function of the temperature for \(U=1,\ 2,\ 3\) and \(4\), once again for the \(2D\) square lattice at half-filling with nearest-neighbor hopping only. The violation of the conservation laws increases as the temperature decreases and as \(U\) increases. Though the value at the \(n=1\) Matsubara frequency remains small for \(U\leq 2\) (one order of magnitude smaller than the value at zero Matsubara frequency), it reaches almost \(20\%\) of the value at the \(n=0\) Matsubara frequency at low temperatures for \(U=3\) and \(U=4\).
## V Results for the \(2d\) Hubbard model
Now that we have introduced the TPSC+ and the TPSC+SFM approaches and their theoretical basis, we apply them to the \(2D\) Hubbard model. The aim of this section is to benchmark the methods by comparing their results to available exact diagrammatic Monte Carlo results and to assess their regime of validity. We first benchmark the spin correlation length in the weak interaction regime at half-filling in Sec. (V.1). In Sec. (V.2), we benchmark the spin and charge susceptibilities away from half-filling. We end this section with the benchmark of the self-energy in Sec. (V.3), where we add a comparison to the self-energy obtained by second-order perturbation theory. All benchmarks provided here are for the \(2D\) Hubbard model with nearest-neighbor hopping only. We work in units where the hopping \(t=1\), the lattice spacing \(a=1\), Planck's constant \(\hbar=1\) and Boltzmann's constant \(k_{B}=1\). We do not include the factor \(1/2\) for the spin. Details of the implementation may be found in Appendix A.
### Spin correlation length at half-filling
In Fig. 8, we show the spin correlation length obtained for the \(2D\) Hubbard model at half filling with an interaction strength \(U=2\) as a function of the inverse temperature \(\beta=1/T\). Results from all three TPSC methods are compared to the DiagMC benchmark data obtained from Ref. [18]. Panel (a) shows the absolute value of the spin correlation length, while panel (b) shows the relative deviation between the three TPSC methods and the DiagMC data. The relative deviation is calculated as \(\Delta=(\xi_{sp,\ \text{TPSC}}-\xi_{sp,\ \text{DiagMC}})/\xi_{sp,\ \text{DiagMC}}\). We first note that all three TPSC methods yield accurate results at high temperatures (\(\beta\lesssim 7\)), at a much lower computational cost than DiagMC calculations. As was shown in Ref. [18], the spin correlation length obtained from the original TPSC approach deviates strongly from the benchmark data as the temperature decreases. In TPSC, this large deviation is due to the entry into the renormalized classical regime around \(T=0.1\), below which the method is not valid anymore. Both the TPSC+ and TPSC+SFM approaches offer a quantitative and qualitative improvement over the TPSC approach in the low-temperature regime of the weakly interacting \(2D\) Hubbard model. Quantitatively, the relative deviations from DiagMC reach \(1165\%\) for TPSC, \(33\%\) for TPSC+ and \(82\%\) for TPSC+SFM at \(\beta=10\).
Figure 8: Spin correlation length from TPSC (red squares), TPSC+ (blue triangles) and TPSC+SFM (green circles) calculations, compared to the DiagMC benchmark (black triangles) from Ref. [18]. The results are obtained as a function of the inverse temperature \(\beta\) for the half-filled \(2D\) Hubbard model on a square lattice with \(U=2\). Panel (a) shows the absolute value of the spin correlation length, while panel (b) shows the relative deviation between the data from all three TPSC methods and the DiagMC benchmark. The relative deviation is calculated as \(\Delta=(\xi_{sp,\ \text{TPSC}}-\xi_{sp,\ \text{DiagMC}})/\xi_{sp,\ \text{DiagMC}}\).
Figure 7: Ratio of \(\chi^{(2)}({\bf q}=0,iq_{1})/\chi^{(2)}({\bf q}=0,iq_{0})\) as a function of the temperature \(T\) obtained from TPSC+ calculations. The TPSC+ approach violates conservation laws, as shown by this non-zero ratio. The calculations are done for the \(2D\) square lattice, at half-filling, with nearest-neighbour hopping only.
### Weak to intermediate interaction regimes away from half-filling
#### v.2.1 Double occupancy
Fig. 9 shows the temperature and \(U\) dependence of the double occupancy computed with (a) TPSC, (b) TPSC+ and (c) TPSC+SFM at fixed density \(n=0.875\). We compare our results with CDet benchmark data [30]. The bottom panel of Fig. 9 shows the relative deviation of (d) TPSC, (e) TPSC+ and (f) TPSC+SFM with respect to the CDet benchmark. The relative deviation is calculated as \(\Delta=(D_{\rm{TPSC}}-D_{\rm{CDet}})/D_{\rm{CDet}}\). In the weak interaction regime (\(U\leq 3\)), all three TPSC methods yield quantitatively accurate double occupancies: the relative deviations with respect to the CDet benchmark reach at most \(5\%\) (in absolute value) at all temperatures. For the higher values of \(U\) considered (\(U=4\) and \(U=5\)), the TPSC results are more accurate than the TPSC+ and TPSC+SFM ones. The deviations obtained from TPSC do not exceed \(15\%\), while they reach almost \(35\%\) for TPSC+ and TPSC+SFM for \(U=5\). Qualitatively, the double occupancy should decrease slightly as the temperature is lowered, as seen from the CDet data. This behavior is captured qualitatively at \(U=4\) and \(U=5\) by TPSC+SFM, but not by TPSC+.
#### v.2.2 Spin susceptibility
We now study the spin susceptibility away from half filling. We first illustrate the maximal value of the spin susceptibility at fixed density \(n=0.875\) as a function of \(U\) in the weak to intermediate interaction regime (\(U\leq 5\)) in Fig. 10 from (a) TPSC, (b) TPSC+ and (c) TPSC+SFM, compared with the CDet benchmark data [30]. In Appendix B, we show extended results in the strong interaction regime (up to \(U=8\)), above the validity regime of all three TPSC approaches.
In the bottom panel of Fig. 10, we show the relative deviation between (d) TPSC, (e) TPSC+ and (f) TPSC+SFM and the CDet benchmark data, calculated as \(\Delta=(\chi^{\rm{max}}_{sp,\,{\rm{TPSC}}}-\chi^{\rm{max}}_{sp,\,{\rm{CDet}}} )/\chi^{\rm{max}}_{sp,\,{\rm{CDet}}}\).
The results of all three TPSC variations are in qualitative and quantitative agreement with the exact CDet results in the weakly interacting regime (\(U\leq 2\)), where the relative deviations with respect to the benchmark are below \(10\%\) (in absolute value). The deviations increase with the interaction \(U\) for all three TPSC methods. The TPSC+ and TPSC+SFM approaches offer a significant qualitative and quantitative improvement over the original TPSC approach at low temperatures (\(T\leq 0.2\)). In this temperature regime, the deviations between the TPSC+ and TPSC+SFM data and the CDet benchmark are at most \(25\%\), whereas they exceed \(50\%\) with TPSC. While TPSC+ is accurate at low temperatures, its deviation from the benchmark increases with temperature. The results from TPSC+SFM have the best overall qualitative and quantitative agreement with the benchmark data. In contrast, the maximal value of the spin susceptibility is systematically overestimated by TPSC and underestimated by TPSC+.
In Fig. 11, we show the maximal value of the spin susceptibility as a function of the density \(n\) and of \(U\) at fixed temperature \(T=0.2\). We first discuss the absolute TPSC results shown in panel (a) as well as their relative deviations with respect to the CDet benchmark shown in panel (d). The TPSC results for the maximal value of the spin susceptibility become more accurate as the density decreases, moving away from half-filling. More specifically, the relative deviation is below \(5\%\) at \(n=0.8\). This better agreement at low density is due to the Hartree decoupling used for the TPSC ansatz, which
Figure 10: Top panel: Semi-logarithmic plots of the maximal value of the spin susceptibility as a function of \(U\) for five temperatures at fixed density \(n=0.875\). Results are shown for (a) TPSC, (b) TPSC+ and (c) TPSC+SFM calculations in full lines. The dotted dashed lines are the CDet data from Ref. [30]. Bottom panel: Relative deviation between the CDet benchmark data and (d) TPSC, (e) TPSC+ and (f) TPSC+SFM results.
Figure 9: Top panel: Double occupancy as a function of \(T\) and \(U\) at fixed density \(n=0.875\). Full lines are obtained with (a) TPSC, (b) TPSC+ and (c) TPSC+SFM calculations. CDet data, shown as dashed lines, come from Ref. [30]. Bottom panel: Relative deviation between the CDet benchmark data and (d) TPSC, (e) TPSC+ and (f) TPSC+SFM data, shown as \(\Delta=(D_{\rm{TPSC}}-D_{\rm{CDet}})/D_{\rm{CDet}}\).
works best in the dilute limit. In contrast, when the density is closer to half filling (\(n=1\)), TPSC shows deviations that can exceed \(25\%\) even in the weakly interacting regime (\(U=2\)). We now turn to the TPSC+ and TPSC+SFM approach results shown in panels (b) and (c) of Fig. 11 respectively, while panels (e) and (f) show their deviations from the benchmark data. Both TPSC+ and TPSC+SFM tend to underestimate the maximal value of the spin susceptibility for these model parameters. Overall, TPSC+SFM offers the best improvement over the original TPSC approach. TPSC+SFM yields accurate results for low values of \(U\) (\(U\leq 3\)), with deviations from the CDet data that are below \(10\%\) in absolute value. The same is true of the TPSC+ results at slightly lower values of \(U\) (\(U\leq 2\)).
In summary, we conclude from both Fig. 10 and Fig. 11 that the maximal value of the spin susceptibility obtained from the TPSC+SFM approach is qualitatively and reasonably quantitatively accurate in the weak to intermediate coupling regime of the \(2D\) Hubbard model, away from half filling. In contrast, the TPSC results are accurate in the dilute limit and at high temperatures, while the TPSC+ results are accurate at low temperatures, away from half filling.
#### iv.2.3 Charge susceptibility
In Fig. 12, we show the maximal value of the charge susceptibility as a function of the temperature and of \(U\) at fixed density \(n=0.875\). The results from all three TPSC methods are in qualitative agreement with the CDet benchmark data. Quantitatively, the charge susceptibility is underestimated by all three TPSC approaches for all the parameters considered here. The deviations with respect to the CDet benchmark increase with \(U\) and as the temperature decreases for the three methods. The TPSC results have the strongest deviations (about \(30\%\) at the lowest temperature considered, \(T=0.067\), and \(U=5\)), while the TPSC+SFM results have the best overall agreement with the benchmark (about \(20\%\) at most).
### Benchmark of the self-energy
In Fig. 13, we show the imaginary part of the local self-energy as a function of the Matsubara frequencies for \(T=0.1\) in the dilute limit (\(n=0.4\) and \(n=0.8\)), and for the Hubbard interaction strengths \(U=2\) and \(U=4\). The results are obtained from TPSC, TPSC+ and TPSC+SFM calculations. We compare them to DiagMC benchmark data [29] and to the self-energy \(\Sigma_{2PT}\) obtained from second-order perturbation theory (2PT)
\[\Sigma_{2PT}(\mathbf{r},\tau)=U^{2}\mathcal{G}^{(0)}(\mathbf{r},\tau) \mathcal{G}^{(0)}(\mathbf{r},\tau)\mathcal{G}^{(0)}(-\mathbf{r},-\tau). \tag{91}\]
For these model parameters, there is no significant difference between the TPSC, TPSC+ and TPSC+SFM results. However, they differ significantly from the 2PT self-energy even in the low interaction case with density \(n=0.4\) and interaction \(U=2\). This highlights that the three TPSC approaches are non-trivial in their construction. The results obtained by the three TPSC approaches are accurate at low Matsubara frequencies. These methods can hence properly describe the Fermi liquid properties of the quasiparticles for these model parameters. This is in contrast with 2PT, which is inaccurate at low frequencies even in the dilute limit \(n=0.4\). In the latter case it reaches the correct high-frequency tail of the benchmark [29] at much higher frequency than the range
Figure 11: Top panel: Semi-logarithmic plots of the maximal value of the spin susceptibility as a function of \(U\) for five different values of filling \(n\) at fixed temperature \(T=0.2\). Results are shown from (a) TPSC, (b) TPSC+ and (c) TPSC+SFM calculations in full lines. The dotted dashed lines are the CDet data from Ref. [30]. Bottom panel: Relative deviation between the CDet benchmark data and (d) TPSC, (e) TPSC+ and (f) TPSC+SFM results.
Figure 12: Top panel: Semi-logarithmic plots of the maximal value of the charge susceptibility as a function of \(U\) for five different temperatures at fixed filling \(n=0.875\). Results are shown for (a) TPSC, (b) TPSC+ and (c) TPSC+SFM calculations in full lines. The dashed lines are the CDet data from Ref. [30]. Bottom panel: Relative deviation between the CDet benchmark data and the (d) TPSC, (e) TPSC+ and (f) TPSC+SFM results.
shown in Fig. 13. The high-frequency tail of the local part of the self-energy of all three TPSC methods deviates from the benchmark data. This is more pronounced in the \(U=4\) case than for \(U=2\). The reasons for the behavior of TPSC at high frequencies are discussed in Appendix E of Ref. [19]. Similar considerations apply to the other versions of TPSC.
In panels (a) and (b) of Fig. 14, we show the imaginary part of the local self-energy as a function of the Matsubara frequencies at half filling and with a Hubbard interaction strength of \(U=2\), for \(T=0.1\) and \(T=1\). We once again compare the three TPSC methods to the 2PT and benchmark [18] self-energies. Panels (c) and (d) of Fig. 14 show the relative deviation between all TPSC results and the DiagMC benchmark data.
All the TPSC methods have a good global behaviour. Their results at \(T=1\) are accurate, with a deviation to the DiagMC benchmark of at most \(2\%\). Whereas the TPSC+SFM approach yields the most accurate results for the spin and charge susceptibilities, as shown in Sec. (V.2.2) and Sec. (V.2.3), the \(T=0.1\) results shown in panels (a) and (c) of Fig. 14 show that the TPSC+ results for the local self-energy are the most accurate ones. Similar to the dilute case presented in Fig. 13, the high-frequency tail of the self-energy computed with all three TPSC methods is less accurate than that obtained with the second-order perturbation theory. Both TPSC+ and TPSC+SFM offer improved results over the original TPSC method at \(T=0.1\), with deviations to the DiagMC benchmark of the order of \(20\%\) at the lowest Matsubara frequency. In contrast, this deviation reaches about \(50\%\) with TPSC.
In Fig. 15, we focus on the low-temperature \(T=0.1\) case and show the imaginary part of the self-energy as a function of the Matsubara frequencies at two different wave vectors: the nodal point \(\mathbf{k}=(\pi/2,\pi/2)\), and the antinodal point \(\mathbf{k}=(\pi,0)\). These results are obtained at half filling and with a Hubbard interaction strength \(U=2\). In this model, DiagMC calculations show that the antiferromagnetic pseudogap should open at the antinode at the temperature \(T_{AN}^{*}=0.065\), and at the node at the temperature \(T_{N}^{*}=0.0625\)[18]. We first note that the 2PT self-energy is closer to that of Fermi liquid quasiparticles than the DiagMC exact results at both \(\mathbf{k}\)-points. The absolute value of the resulting deviation between the 2PT and DiagMC self-energies is of the order of \(20-30\%\) for the first Matsubara frequency. In contrast, all three TPSC approaches yield a self-energy with a smaller quasiparticle weight than the DiagMC results for both \(\mathbf{k}\)-points. The TPSC approach overestimates the temperatures \(T_{AN}^{*}\) and \(T_{N}^{*}\) at which the pseudogap opens at the antinode and at the node, respectively. This is seen in panels (a) and (b) of Fig. 15 by the value of the imaginary part of the self-energy at the first Matsubara frequency \(\omega_{0}\), which is more negative than that at the second Matsubara frequency \(\omega_{1}\). This leads to deviations between TPSC and DiagMC that exceed \(60\%\) at the first Matsubara frequency. As was anticipated in Fig. 5, TPSC+ and TPSC+SFM also overestimate the pseudogap temperatures, but in a less pronounced way than TPSC. At \(T=0.1\), the deviations between the TPSC+(SFM) and DiagMC self-energies are of the order of \(20\%\) (\(35\%\)). This is a significant improvement over the original TPSC approach. From this, we conclude that TPSC+ is better suited to describing the self-energy than TPSC+SFM, but TPSC+SFM gives a more accurate description of the spin and charge susceptibilities.
Figure 14: Top panel: Imaginary part of the local self-energy as a function of the Matsubara frequencies at half-filling \(n=1\), and \(U=2\). Calculations are done for temperatures (a) T=0.1 and (b) T=1.0. The plots show the data from TPSC (red squares), TPSC+(blue circles), TPSC+SFM (green triangles), and 2PT (purple pentagons) calculations. The benchmark data, in dashed black lines, is obtained from Ref. [18]. Bottom panel: relative deviation between the TPSC, TPSC+, TPSC+SFM and 2PT calculations and the benchmark at temperatures (c) T=0.1 and (d) T=1.0.
Figure 13: Imaginary part of the local self-energy as a function of the Matsubara frequencies at fixed temperature \(T=0.1\), away from half-filling. Calculations are done for (a) \(U=2\) and \(n=0.8\), (b) \(U=4\) and \(n=0.8\), (c) \(U=2\) and \(n=0.4\), and (d) \(U=4\) and \(n=0.4\). Results are shown for TPSC (red squares), TPSC+ (blue circles), TPSC+SFM (green triangles) and 2PT (purple pentagons) calculations. The benchmark data, in black dashed lines, is from Ref. [29].
### Summary
In this section, we benchmarked the TPSC, TPSC+ and TPSC+SFM methods against available exact diagrammatic Monte Carlo results for the \(2D\) Hubbard model. We showed that, out of the three TPSC methods, the TPSC+ approach yields the most accurate self-energy results, while the TPSC+SFM approach is the best at describing the spin and charge susceptibilities. By construction, the TPSC+ and TPSC+SFM approaches are not valid in the strongly interacting Hubbard model, just like TPSC. However, they extend the domain of validity of TPSC in the weak to intermediate correlation regime since they are valid at low temperatures in the renormalized classical regime of the \(2D\) Hubbard model. Future work will quantify this statement.
From our benchmark work, we conclude that all three TPSC approaches can be reliably used at high temperatures and low densities. At low temperatures and near half filling, the TPSC+ and TPSC+SFM approaches are more accurate than the original TPSC approach.
## VI Conclusion
In this work, we introduced two improved versions of the TPSC approximation for the one-band Hubbard model. We showed that both the TPSC+ and the TPSC+SFM approximations maintain some fundamental properties of TPSC: they satisfy the Pauli principle and the Mermin-Wagner theorem, and they predict the pseudogap induced by antiferromagnetic fluctuations in the weak-interaction regime of the \(2D\) Hubbard model. The improvements brought by TPSC+ and TPSC+SFM do not come at a significant computational cost. Moreover, these approximations are valid deep in the renormalized classical regime of the \(2D\) Hubbard model where TPSC fails. Both TPSC+ and TPSC+SFM hence extend the domain of validity of TPSC. From a quantitative point of view, we showed that TPSC+SFM leads to accurate values of the spin and charge susceptibilities in the doped \(2D\) Hubbard model over a wide range of temperatures and interaction strengths. For the same model, TPSC+ gives the most accurate results for the self-energy out of the three TPSC approximations considered here. However, TPSC+ does not satisfy the f-sum rule. Our comparisons to the second-order perturbation theory self-energy illustrate the non-trivial nature of the three TPSC approaches. Finally, our work, in line with previous benchmark efforts [18, 19, 20, 25, 51, 52, 53, 54], shows that TPSC and its variations are reliable methods to obtain qualitative results for the weak-interaction regime of the Hubbard model. From our benchmark work, we assess that all three TPSC methods are only valid in the weak to intermediate interaction regime of the Hubbard model and that they cannot capture the physics of the strong interaction regime (\(U\geq 5\)). Except deep in the renormalized classical regime, the quantitative improvements brought about by TPSC+ and TPSC+SFM over TPSC are similar to the ones brought about by the recently developed TPSC+DMFT approach [54].
_Acknowledgments._ We are grateful to Yan Wang for early collaborations and to Moise Rousseau for help with the convergence algorithm. We are especially grateful to the authors of Ref. [30], F. Simkovic, R. Rossi and M. Ferrero for sharing the CDet results that we used as benchmarks. We are also grateful to T. Schafer and the authors of Ref. [18] for making their benchmarks publicly available. This work has been supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant RGPIN-2019-05312 (A.-M.S. T.), by a Vanier Scholarship (C. G.-N.) from NSERC and by the Canada First Research Excellence Fund. Simulations were performed on computers provided by the Canadian Foundation for Innovation, the Ministere de l'Education des Loisirs et du Sport (Quebec), Calcul Quebec, and Compute Canada.
## Appendix A Details on the implementation of the TPSC+ algorithm
In Fig. 16, we show the workflow of the TPSC+ calculations. It starts by initializing the TPSCplus class with the given input parameters. Then begins the TPSC calculation, which we use as the first guess for the self-consistency loop. The implementation for the TPSC+SFM algorithm is similar to the one described in this appendix.
The TPSC and TPSC+ approaches allow one to choose the relevant size of the reciprocal-space grid. We use Fast Fourier transforms (FFT) repeatedly, so we choose the number of sites in one spatial direction \(n_{k}\) as a power of 2, because
Figure 15: Top panel: Imaginary part of the self-energy as a function of the Matsubara frequencies at half-filling \(n=1\), at temperature \(T=0.1\) and interaction \(U=2\), evaluated at (a) the antinode \(\mathbf{k}=(\pi,0)\) and (b) the node \(\mathbf{k}=(\pi/2,\pi/2)\). The plots show the data from TPSC (red squares), TPSC+ (blue circles), TPSC+SFM (green triangles), and 2PT (purple pentagons) calculations. DiagMC data, in dashed black lines, is obtained from Ref. [18]. Bottom panel: relative deviation between the TPSC, TPSC+ TPSC+SFM, and 2PT calculations and the DiagMC benchmark for (c) the antinode and (d) the node.
FFTs are most efficient in those cases. When \(1/\xi_{sp}\) becomes smaller than the k-space resolution, the results are no longer considered valid, because in this approximation of TPSC it is important that the self-energy be influenced by long-wavelength spin fluctuations. The number of k-points \(n_{k}\) should therefore be chosen large enough to resolve \(1/\xi_{sp}\), without over-using computational resources.
Let the total number of wave vectors be \(N=n_{k}^{2}\). The correlation functions and the self-energy are defined by convolutions in reciprocal space, which would require of order \(N^{2}\) operations for each physical quantity (susceptibilities or self-energy). It is computationally cheaper to obtain these quantities with Fast Fourier Transforms (FFT), each of which takes of order \(N\ln N\) operations. Although the quantities are ultimately needed in reciprocal space, the convolution theorem turns each convolution into a simple product in real space: one computes \(F(\vec{r},\tau)\propto f(\vec{r},\tau)g(\vec{r},-\tau)\) instead of \(F(\vec{q},iq_{n})\propto\sum_{k}f(\vec{k},ik_{n})g(\vec{k}+\vec{q},ik_{n}+iq_{n})\). In total, one physical quantity then costs about \(2N\ln N+N\) operations instead of \(N^{2}\).
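The FFT shortcut can be sketched in a few lines of numpy. The snippet below forms a bubble-like quantity from two real-space, imaginary-time propagators by simple multiplication and only uses FFTs to move between \(\mathbf{k}\)- and \(\mathbf{r}\)-space; it is purely schematic (random arrays stand in for the Green's functions, and the careful Matsubara/imaginary-time handling that our implementation delegates to the sparse-ir library is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)
nk, ntau = 64, 32                        # k-points per direction, imaginary-time points

# Stand-ins for G^(1)(k, tau) and G^(i)(k, tau) on a 2D lattice (schematic only).
g1_k = rng.standard_normal((nk, nk, ntau))
g2_k = rng.standard_normal((nk, nk, ntau))

# k -> r with one FFT per quantity: ~ N log N operations each.
g1_r = np.fft.ifft2(g1_k, axes=(0, 1))
g2_r = np.fft.ifft2(g2_k, axes=(0, 1))

# In real space / imaginary time the bubble is a simple product, F(r, tau) ~ f(r, tau) g(r, -tau);
# here g(r, -tau) is mimicked by reversing the tau axis (boundary conditions ignored).
chi_r = g1_r * g2_r[:, :, ::-1]

# r -> q with one more FFT; the reciprocal-space convolution is never performed explicitly.
chi_q = np.fft.fft2(chi_r, axes=(0, 1))
print(chi_q.shape)
```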
For the calculations of physical quantities, and for handling the many-body propagators with the transitions from real-space to reciprocal-space with Matsubara frequencies, we use sparse sampling and the IR decomposition from the sparse-ir library [55; 56; 57].
The convergence criterion, as shown in Fig. 16, is the Frobenius norm, calculated with the numpy function "numpy.linalg.norm()", of the difference between the current and previous iterations of the interacting Green's function \(\mathcal{G}^{(2)}\). When the criterion is met, the calculation ends. Otherwise, the Green's function used in the next iteration is a linear mixture of the current \((j+1)\) and previous \((j)\) Green's functions; the mixing fraction is denoted \(\alpha\) and improves convergence in difficult cases. In addition, we implemented a temperature ramp in which the converged solution at a given temperature is used as the initial guess for the next, lower temperature, instead of restarting from a fresh TPSC calculation. This helps convergence at low temperatures, although the improvement is not drastic. In even harder cases, at half-filling and at low temperature, we used the "Anderson acceleration" method [58; 59].
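A bare-bones version of this damped self-consistency loop is sketched below; `compute_g2` stands for one full TPSC+ update of the interacting Green's function and is a placeholder, not a function of the actual code.

```python
import numpy as np

def converge(g2_init, compute_g2, alpha=0.5, tol=1e-6, max_iter=200):
    """Damped fixed-point iteration with a Frobenius-norm stopping criterion."""
    g2_old = g2_init
    for it in range(max_iter):
        g2_new = compute_g2(g2_old)                  # one TPSC+ update (placeholder)
        if np.linalg.norm(g2_new - g2_old) < tol:    # Frobenius norm of the change
            return g2_new, it
        # Linear mixing: keep a fraction alpha of the new iterate, 1 - alpha of the old one.
        g2_old = alpha * g2_new + (1.0 - alpha) * g2_old
    raise RuntimeError("self-consistency loop did not converge")

# Toy usage: the 'update' is a contraction whose fixed point is the identity matrix.
target = np.eye(4)
g2, n_iter = converge(np.zeros((4, 4)), lambda g: 0.5 * (g + target))
print(n_iter, np.allclose(g2, target, atol=1e-5))
```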
We have compared computing times between TPSC and TPSC+ for the parameters of Fig. 13. The computing time for TPSC+ with a 256x256 system was roughly 9 seconds, and the computing time for TPSC with the same system size was roughly 3 seconds on a personal computer.
## Appendix B Benchmarks in the strong interaction regime
In this Appendix, we show results obtained with TPSC, TPSC+ and TPSC+SFM in the strong interaction regime of the \(2D\) Hubbard model. By construction, the three TPSC approaches are not intended to be valid in this regime of parameters. This is mainly due to the formulation of the TPSC ansatz Eq. (27), which is constructed through a Hartree-like decoupling.
We first illustrate the maximal value of the spin susceptibility in Fig. 17 as a function of the Hubbard interaction \(U\). The model parameters are (a) \(n=0.8\) and \(T=0.2\), (b) \(n=0.875\) and \(T=0.2\), and (c) \(n=0.875\) and \(T=0.1\). The TPSC, TPSC+ and TPSC+SFM results are compared to the CDet benchmark from Ref. [30]. The maximal value of the spin susceptibility increases with \(U\) until \(U\simeq 4\), and decreases as \(U\) increases above that point. This maximum near \(U=4\) corresponds to the onset of Heisenberg-like physics with a localization of magnetic moments. This behavior is completely
Figure 16: Algorithm’s workflow for the TPSC+ method. The input parameters are the Hubbard interaction strength (\(U\)), the temperature (\(T\)), the filling (\(n\)), the second and third neighbour hopping terms (\(t^{\prime}\),\(t^{\prime\prime}\)), and the size of one of the two dimensions in the square reciprocal space (\(nk\)). The variable \(\alpha\) helps convergence by adding damping to the iterative process; it allows one to choose the combination of the previous and present Green’s function that is input in the new calculation. The dotted lines represent Fast Fourier Transforms. The convergence criterion is the Frobenius matrix norm of the difference between the iterated Green’s functions at steps \(j\) and \(j+1\).
missed by the TPSC approach, especially near half filling and at low temperatures: the maximal value of the spin susceptibility only increases with \(U\). In contrast, the TPSC+ and TPSC+SFM results do not increase as much with \(U\) and even seem to reach a plateau near \(U=8\). However, the decrease expected from the CDet data is not seen with these methods above \(U=4\), which is a clear indication of the limited interaction regimes that they can reliably access.
We now consider the following parameter set: density \(n=0.8\), temperature \(T=0.1\), and Hubbard interaction strength \(U=5\). This value of \(U\) is slightly above the expected regime of validity of the TPSC methods. We still consider this case in order to study the momentum dependence of the spin and charge susceptibilities, which we can compare here to available CDet data [30].
In Fig. 18 and Fig. 19, we show the spin susceptibility along the diagonal and along the edge of the Brillouin zone, respectively, for the parameters listed above. Both paths reveal similar information: When compared to CDet, TPSC overestimates the value of the maxima, while TPSC+ and TPSC+SFM underestimate it. The positions of the maxima obtained with the TPSC approaches are slightly shifted with respect to the CDet ones. Still, the results obtained with the three TPSC approaches are in qualitative agreement with the CDet benchmark. The separation of the maximum into two peaks around \((\pi,\pi)\) is captured by these approaches.
In Fig. 20 and Fig. 21, we show the charge susceptibility along the edge and the diagonal of the Brillouin zone, respectively. All three TPSC approaches result in lower values of \(\chi_{ch}\) than CDet, although they have the right qualitative behavior. As seen in Fig. 12, TPSC+SFM is in better agreement with the exact value of \(\chi_{ch}\) than TPSC and TPSC+.
Figure 21: Charge susceptibility along the path (0,\(\pi\)) to (0,0) in the Brillouin zone at fixed temperature \(T=0.1\), interaction \(U=5\) and filling \(n=0.8\), from (a) TPSC, (b) TPSC+ and (c) TPSC+SFM calculations. The black lines with error bars are CDet data obtained from Ref. [30]. |
2305.17234 | Unidirectional flow of flat-top solitons | We numerically demonstrate the unidirectional flow of flat-top solitons when
interacting with two reflectionless potential wells with slightly different
depths. The system is described by a nonlinear Schr\"{o}dinger equation with
dual nonlinearity. The results show that for shallow potential wells, the
velocity window for unidirectional flow is larger than for deeper potential
wells. Wider flat-top solitons also have a narrower velocity window for
unidirectional flow than thinner flat-top solitons. | M. O. D. Alotaibi, L. Al Sakkaf, U. Al Khawaja | 2023-05-26T19:46:34Z | http://arxiv.org/abs/2305.17234v2 | # Unidirectional flow of flat-top solitons
###### Abstract
We numerically demonstrate the unidirectional flow of flat-top solitons when interacting with two reflectionless potential wells with slightly different depths. The system is described by a nonlinear Schrodinger equation with dual nonlinearity. The results show that for shallow potential wells, the velocity window for unidirectional flow is larger than for deeper potential wells. Wider flat-top solitons also have a narrower velocity window for unidirectional flow than thinner flat-top solitons.
## I Introduction
Solitons are a class of nonlinear waves that demonstrate remarkable stability properties as a result of the balance between nonlinear and dispersive effects within the medium [1]. They are found in a wide range of areas of physics, including hydrodynamic tsunamis [2], fiber optic communications [3], solid-state physics [4], and the dynamics of biological molecules [5]. In addition, solitons in optics can exhibit unidirectional flow when interacting with a specific type of potential [6; 7; 8]. The significance of unidirectional flow has appealed to both fundamental and applied scientists interested in developing new soliton-based technologies [9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. A novel type of soliton is the flat-top soliton (FTS), characterized by a flat-top profile with a finite width and a sharp edge [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. It is a solution of the nonlinear Schrodinger equation (NLSE) with dual nonlinearity. The flat-top shape arises because the competition between the cubic and quintic nonlinear terms in the NLSE imposes an upper limit on the wave field density; an increase in the total norm of the field therefore causes the width to expand while the amplitude remains constant. FTSs have been observed in various physical systems, such as Bose-Einstein condensates [30], optical fibers [31], and microresonators [32], and have potential applications in optical communication, all-optical switching, and frequency comb generation.
In this article, we study the interaction of FTSs with a particular class of potentials called Poschl-Teller potentials [33; 34]. A remarkable property of Poschl-Teller potentials is that they are 'reflectionless', which means that the scattering of an FTS is characterized by the absence of radiation. By considering a double-well reflectionless potential of this type with slightly different depths, we demonstrate numerically that a unidirectional flow of FTSs can be achieved. To the best of our knowledge, the unidirectional flow of FTSs has not been previously investigated. To confirm our findings, we performed the numerical calculations using two different methods, the split-step method and the power series expansion method [35]. Both methods give similar results.
We followed the calculations in Ref. [36], where a single parameter, \(\gamma\), controls the FTS width. Depending on this value, we obtain a bright soliton or FTSs with different widths. Next, we select parameters for the potential that result in a relatively large velocity window for the unidirectional flow of a bright soliton. Then, we fix the parameters of the potential and vary \(\gamma\) to obtain FTSs and investigate their unidirectional flow.
The paper is structured as follows. In Sec. II, we present our model and define the problem. In Sec. III, we perform numerical simulations to demonstrate that the unidirectional flow of FTS can be accomplished when it interacts with a reflectionless asymmetric double-well potential. In addition, we investigate the behavior of the velocity window for unidirectional flow as the FTS width and potential depth is altered. Finally, Sec. IV concludes with a summary of our findings.
## II Setup and theoretical model
The following dimensionless NLSE with cubic and quintic nonlinearity terms describes the dynamics of FTS,
\[i\frac{\partial}{\partial t}\psi\left(x,t\right)+g_{1}\frac{ \partial^{2}}{\partial x^{2}}\psi\left(x,t\right)+g_{2}|\psi\left(x,t\right)|^ {2}\psi\left(x,t\right)\] \[+g_{3}|\psi\left(x,t\right)|^{4}\psi\left(x,t\right)+V(x)\psi \left(x,t\right)=0, \tag{1}\]
where \(\psi\left(x,t\right)\) is a complex field, \(g_{1}\) represents the strength of dispersion, \(g_{2}\), and \(g_{3}\) characterize the strengths of the two nonlinearity terms, and \(V(x)\) is an external potential. The solution to Eq. (1) has the following form,
\[\psi\left(x,t\right)=\sqrt{\frac{2u_{0}}{g_{2}\sqrt{1+\gamma}}}\] \[\times\frac{1}{\sqrt{\frac{1-\sqrt{1+\gamma}}{2\sqrt{1+\gamma}} +\cosh^{2}\left[\sqrt{\frac{u_{0}}{g_{1}}}(x-x_{0}-v_{0}t)\right]}}e^{i\phi \left(x,t\right)}, \tag{2}\]
where \(u_{0}\), \(x_{0}\) and \(v_{0}\) are arbitrary parameters that represent the amplitude, peak position, and soliton velocity, respectively. The phase is \(\phi\left(x,t\right)=u_{0}t+v_{0}\left[2\left(x-x_{0}\right)-v_{0}t\right]/ \left(4g_{1}\right)\). The parameter \(\gamma=g_{3}/g_{30}\)
where \(g_{30}=3g_{2}^{2}/16u_{0}\), determines the solution profile. Depending on the value of \(\gamma\), we may obtain a bright soliton, kink soliton, thin-top soliton, or FTS [36].
In order to obtain FTS, \(\gamma\) should be in the range of \(-1<\gamma<0\). For \(\gamma=0\), the solution reduces to the bright soliton. As \(\gamma\) approaches -1, the width of the FTS increases such that at \(\gamma=-1\), a kink soliton is obtained, which may be regarded as a FTS with infinite width. It is thus convenient to adjust the width of the FTS using \(\gamma\) according to \(\gamma=\gamma_{i}=10^{-i}-1\) such that for \(i=0\), we obtain a bright soliton, \(\gamma_{0}=0\). In the numerical simulations, the machine precision sets a maximum limit on the width of the FTS. For \(i=16\), we acquire the widest FTS that can be simulated [36]. The potential, \(V(x)\), in Eq. (1) is an asymmetric double-well potential,
\[V\left(x\right)=V_{1}\text{sech}^{2}\left[\alpha_{1}\left(x-\beta\right) \right]+V_{2}\text{sech}^{2}\left[\alpha_{2}\left(x+\beta\right)\right], \tag{3}\]
where \(V_{1,2}\), \(\alpha_{1,2}\), and \(\beta\) are the potential depths, inverse widths, and locations of the double-well, respectively.
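For reference, the closed-form profile of Eq. (2) and the double-well potential of Eq. (3) are simple to tabulate on a grid, as shown in the sketch below. It uses the parameter values quoted in the figures (\(u_{0}=0.5\), \(g_{1}=0.5\), \(g_{2}=1\)); the function and variable names are ours and the snippet is only an illustration of the formulas.

```python
import numpy as np

def flat_top_soliton(x, gamma, u0=0.5, g1=0.5, g2=1.0, x0=-25.0, v0=0.0, t=0.0):
    """Flat-top soliton profile of Eq. (2); gamma in (-1, 0) gives a flat top."""
    amp = np.sqrt(2.0 * u0 / (g2 * np.sqrt(1.0 + gamma)))
    arg = np.sqrt(u0 / g1) * (x - x0 - v0 * t)
    denom = (1.0 - np.sqrt(1.0 + gamma)) / (2.0 * np.sqrt(1.0 + gamma)) + np.cosh(arg) ** 2
    phase = u0 * t + v0 * (2.0 * (x - x0) - v0 * t) / (4.0 * g1)
    return amp / np.sqrt(denom) * np.exp(1j * phase)

def double_well(x, V1=-4.0, V2=-4.3, alpha1=1.0, alpha2=1.0, beta=6.0):
    """Asymmetric reflectionless double well of Eq. (3)."""
    return V1 / np.cosh(alpha1 * (x - beta)) ** 2 + V2 / np.cosh(alpha2 * (x + beta)) ** 2

x = np.linspace(-60.0, 60.0, 4096)
gamma6 = 10.0 ** (-6) - 1.0                     # gamma_i = 10^(-i) - 1
psi0 = flat_top_soliton(x, gamma6, v0=0.15)
V = double_well(x)
print(np.sum(np.abs(psi0) ** 2) * (x[1] - x[0]))  # norm N of the initial soliton
```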
In Fig. 1, we plot the FTS profile for various \(\gamma\) values. In the rest of the paper, we use a FTS with a width corresponding to \(\gamma_{6}\) unless stated otherwise. The potential parameters, as well as the FTS used in the numerical calculations, are depicted in Fig. 2.
Following the standard practice of confirming results with two independent approaches, we use two numerical methods, namely the split-step method and the power series expansion method [35], to verify the unidirectional flow of the FTS. Both methods give similar results.
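The split-step propagation scheme is standard; a minimal split-step Fourier sketch for Eq. (1) is given below (Strang splitting, periodic boundaries). It is meant only to illustrate the structure of one time step with assumed, illustrative parameters; a production run would also require absorbing boundaries and convergence checks.

```python
import numpy as np

def split_step(psi, V, x, dt, nsteps, g1=0.5, g2=1.0, g3=0.0):
    """Strang split-step propagation of Eq. (1) with periodic boundary conditions."""
    psi = np.array(psi, dtype=complex)               # work on a complex copy
    k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    lin = np.exp(-1j * g1 * k ** 2 * dt)             # full dispersive step in k-space
    for _ in range(nsteps):
        # half nonlinear + potential step
        psi *= np.exp(0.5j * dt * (g2 * np.abs(psi) ** 2 + g3 * np.abs(psi) ** 4 + V))
        # full linear (dispersion) step
        psi = np.fft.ifft(lin * np.fft.fft(psi))
        # second half nonlinear + potential step
        psi *= np.exp(0.5j * dt * (g2 * np.abs(psi) ** 2 + g3 * np.abs(psi) ** 4 + V))
    return psi

# Illustrative run with a bright-soliton-like initial state (g3 = 0).
x = np.linspace(-60.0, 60.0, 2048)
psi0 = 1.0 / np.cosh(x + 25.0)
V = -4.0 / np.cosh(x - 6.0) ** 2 - 4.3 / np.cosh(x + 6.0) ** 2
psi_T = split_step(psi0, V, x, dt=5e-3, nsteps=2000)
print(np.sum(np.abs(psi_T) ** 2) * (x[1] - x[0]))    # the norm should be conserved
```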
## III Numerical results
To study the unidirectional flow of the FTS, the potential is fixed at the center, the FTS is launched from either side, and the scattered wave is observed. We set the FTS in motion with an initial center-of-mass velocity, \(v_{0}\), toward the potential region, and then calculate the reflection (R), transmission (T), and trapping (L) coefficients, collectively known as the transport coefficients. When we send the FTS from left to right, the transport coefficients are defined as
\[R= \frac{1}{N}\int_{-\infty}^{-\delta}|\psi\left(x,t\right)|^{2}dx, \tag{4}\] \[L= \frac{1}{N}\int_{-\delta}^{\delta}|\psi\left(x,t\right)|^{2}dx,\] \[T= \frac{1}{N}\int_{\delta}^{\infty}|\psi\left(x,t\right)|^{2}dx,\]
where \(\delta\) is the position of measurement of reflectance or transmission, set at a value slightly greater than the position of the potential boundary and \(N=\int_{-\infty}^{\infty}|\psi|^{2}dx\) is the normalization of the FTS. The \(R\) and \(T\) exchange roles if we send the FTS from right to left. The three coefficients must satisfy the conservation law \(R+T+L=1\).
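The transport coefficients of Eq. (4) are simple quadratures of the final density. A sketch of how they can be evaluated from a propagated wave function is given below; the variable names and the value of \(\delta\) are illustrative assumptions on our part.

```python
import numpy as np

def transport_coefficients(psi, x, delta):
    """Reflection, trapping and transmission coefficients of Eq. (4) for a left-incident soliton."""
    dx = x[1] - x[0]
    density = np.abs(psi) ** 2
    norm = np.sum(density) * dx
    R = np.sum(density[x < -delta]) * dx / norm
    L = np.sum(density[np.abs(x) <= delta]) * dx / norm
    T = np.sum(density[x > delta]) * dx / norm
    return R, L, T

# Sanity check with a density localized on the transmission side of the potential.
x = np.linspace(-60.0, 60.0, 4096)
psi = 1.0 / np.cosh(x - 30.0)
R, L, T = transport_coefficients(psi, x, delta=8.0)
print(f"R={R:.3f}  L={L:.3f}  T={T:.3f}  sum={R + L + T:.3f}")   # the sum must equal 1
```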
In Fig. 3, we plot an FTS with a width corresponding to \(\gamma_{6}\) in Fig. 1. We first send the FTS from the left at \(x=-25\) with a velocity of \(v_{0}=0.15\). As a result, the FTS interacts with the asymmetric potential described in Fig. 2, then reflects. But when we send the FTS from the right with \(x=25\) and \(v_{0}=-0.15\), we find that the FTS is transmitted over the potential. The next step is to determine whether this phenomenon occurs over a wide range of velocities. We do so by calculating the transport coefficients defined in Eq. (4).
In Fig. 4, we plot the transport coefficients for the FTS
Figure 1: Flat-top soliton profiles for different values of \(\gamma_{i}\) in Eq. (2). Here, \(\gamma_{i}=10^{-i}-1\). Other parameters are \(u_{0}=0.5\), \(g_{1}=0.5\), \(g_{2}=1\), \(x_{0}=-25\), and \(v_{0}=0\).
Figure 2: Flat-top soliton profile with a width corresponding to \(\gamma_{6}\) and a double-well potential. The asymmetric double-well potential parameters in Eq. (3) are \(V_{1}=-4\), \(V_{2}=-4.3\), \(\alpha_{1}=\alpha_{2}=1\), and \(\beta=6\). We use these configurations in the paper to study the unidirectional propagation of the FTS. The arrow shows the situation for a FTS approaching from the left side.
in Fig. 3. We see that there is a velocity window, \(0.14<v_{0}<0.15\), where the unidirectional flow exists. Depending on the FTS width, we may have different velocity windows for the unidirectional flow. Therefore, it is instructive to calculate the transport coefficients for a wide range of FTS widths.
Figure 5 shows the velocity window for unidirectional flow of FTSs with varying widths, with potential parameters similar to the ones in Fig. 2. For the bright soliton, \(\gamma_{0}\), the velocity window for unidirectional flow is \(0.019\). The velocity window shrinks as we increase the width, up to a FTS with a width of \(\gamma_{9}\). We found that FTSs with widths of \(\gamma_{10}\) and higher break into small parts; as a result, the velocity window is not calculated for these values.
A noteworthy observation is that the unidirectional flow
Figure 4: Transport coefficients for the FTS with a width corresponding to \(\gamma_{6}\) as shown in Fig. 3. (a) for a soliton starting from the right side and moving towards the potential well and (b) for a soliton starting from the left side of the potential well. The velocity window range for the unidirectional flow is between \(v=0.14\) and \(v=0.15\).
Figure 5: Velocity window for a unidirectional flow of FTS with different \(\gamma_{i}\) (width). Here, \(\gamma_{i}=10^{-i}-1\) such that \(\gamma_{0}=0\) corresponds to a bright soliton. We use potential parameters similar to the ones in Fig. 3.
Figure 3: Density plot of a unidirectional flow of a FTS interacting with an asymmetric double-well potential. In (a), the FTS set in motion from \(x_{0}=-25\) with velocity \(v_{0}=0.15\), and in (b), the FTS set in motion from \(x_{0}=25\) with velocity \(v_{0}=-0.15\). We use a FTS with a width corresponding to \(\gamma_{6}\) and potential parameters \(V_{1}=-4\), \(V_{2}=-4.3\), \(\alpha_{1}=\alpha_{2}=1\), and \(\beta=6\).
ceases to exist for specific FTSs, namely \(\gamma_{1}\), \(\gamma_{4}\), and \(\gamma_{8}\) as shown in Fig. 5. According to the results, there is a decrease in the velocity window as the width increases from \(\gamma_{0}\) to \(\gamma_{1}\). Eventually, the velocity window disappears completely at \(\gamma_{1}\), and then there is an increase in proximity to \(\gamma_{2}\), as illustrated in Fig. 6. Similar observations occur around \(\gamma_{4}\) and \(\gamma_{8}\). The existence of a velocity window for unidirectional flow is a well-established phenomenon, typically confined to a specific parameter space. Deviation from this region results in the cessation of unidirectional flow. The noteworthy observation is that the unidirectional flow characteristic resurfaces upon transitioning to a distinct parameter domain. A comprehensive examination could potentially elucidate the underlying physics of this phenomenon, which remains a topic for future investigation.
In Figure 7, we look at how increasing the potential depth affects the velocity window. Here we fix the FTS width by selecting a FTS with a width corresponding to \(\gamma_{3}\) and vary the depth of the asymmetric double-well potential. It is known that only a double-well potential with slightly different well depths can generate the unidirectional flow of a soliton [6]. Therefore, we calculate the velocity window for \(V_{1,2}=-1,-1.3\) in \(V_{a}\), \(V_{1,2}=-2,-2.3\) in \(V_{b}\), \(V_{1,2}=-3,-3.3\) in \(V_{c}\) and, lastly, \(V_{1,2}=-4,-4.3\) in \(V_{d}\), which is the case of Fig. 2. We find that the velocity window for shallow potential wells is larger than for deeper potential wells. The red line in Fig. 7 guides the eye and indicates the narrowing of the velocity window as the depth increases.
## IV Conclusions
Our numerical analysis of the dynamics of the one-dimensional NLSE with competing cubic-quintic nonlinearity terms and two asymmetric reflectionless potential wells demonstrates that the FTS can propagate in one direction only, for a specific velocity range and with specific parameters. To validate our findings, we employed two computational techniques, namely the split-step method and the power series expansion method, which yield comparable outcomes. In addition, fixing the potential parameters and using the bright soliton as a reference point, we examined the velocity range for the unidirectional flow of FTSs of varying widths. We show that, while the bright soliton has a relatively large velocity window for unidirectional flow, the FTSs have smaller ones; the velocity window decreases further as the FTS width increases. Furthermore, for specific FTS widths, the unidirectional flow ceases to exist, resulting in symmetric dynamics regardless of whether the FTS is introduced from the right or the left of the potential. Finally, upon fixing the width of the FTS and varying the depths of the potential, we observed that the range of unidirectional-flow velocities is greater for shallow potential wells than for deeper ones.
Subsequent research endeavors could expand upon our investigation by conducting an analytical analysis of the unidirectional propagation of FTSs, with the aim of further elucidating the underlying physics responsible for the cessation of unidirectional flow in specific FTS widths.
###### Acknowledgements.
M. O. D. Alotaibi is grateful to the Physics Department at the United Arab Emirates University for their hospitality during his visit.
Figure 6: Velocity window for a unidirectional flow of FTS for widths between \(\gamma_{0}=0\) and \(\gamma_{2}=0.99\) in Fig. 5. The velocity window for the unidirectional flow at \(\gamma_{1}=0.9\) disappears and reappears when moving in either direction away from \(\gamma_{1}\).
Figure 7: Velocity window for a unidirectional flow of FTS with a width corresponding to \(\gamma_{3}\). The potential parameters are similar to those given in Fig. 3, but with \(V_{1,2}=-1,-1.3\) in \(V_{a}\), \(V_{1,2}=-2,-2.3\) in \(V_{b}\), \(V_{1,2}=-3,-3.3\) in \(V_{c}\) and \(V_{1,2}=-4,-4.3\) in \(V_{d}\).
2307.07886 | When do tripdoublet states fluoresce? A theoretical study of copper(II)
porphyrin | Open-shell molecules rarely fluoresce, due to their typically faster
non-radiative relaxation rates compared to closed-shell ones. Even rarer is the
fluorescence from states that have two more unpaired electrons than the
open-shell ground state, for example tripdoublet states (a triplet excitation
antiferromagnetically coupled to a doublet state). The description of the
latter states by U-TDDFT is notoriously inaccurate due to large spin
contamination. In this work, we applied our spin-adapted TDDFT method, X-TDDFT,
and the static-dynamic-static second order perturbation theory (SDSPT2), to the
study of the excited states as well as their relaxation pathways of copper(II)
porphyrin; previous experimental works suggested that the photoluminescence of
some substituted copper(II) porphyrins originate from a tripdoublet state,
formed by a triplet ligand $\pi\to\pi^*$ excitation. Our results demonstrated
favorable agreement between the X-TDDFT, SDSPT2 and experimental excitation
energies, and revealed noticeable improvements of X-TDDFT compared to U-TDDFT,
suggesting that X-TDDFT is a reliable tool for the study of tripdoublet
fluorescence. Intriguingly, the aforementioned tripdoublet state is the lowest
doublet excited state and lies only slightly higher than the lowest quartet
state, which explains why the tripdoublet of copper(II) porphyrin is long-lived
enough to fluoresce; an explanation for this unusual state ordering is given.
Indeed, thermal vibration correlation function (TVCF)-based calculations of
internal conversion, intersystem crossing, and radiative transition rates
confirm that copper(II) porphyrin emits thermally activated delayed
fluorescence (TADF) and a small amount of phosphorescence at low temperature
(83 K), in accordance with experiment. The present contribution is concluded by
a few possible approaches of designing new molecules that fluoresce from
tripdoublet states. | Xingwen Wang, Chenyu Wu, Zikuan Wang, Wenjian Liu | 2023-07-15T21:19:09Z | http://arxiv.org/abs/2307.07886v1 | # When do tripdoublet states fluoresce? A theoretical study of copper(II) porphyrin
###### Abstract
Open-shell molecules rarely fluoresce, due to their typically faster non-radiative relaxation rates compared to closed-shell ones. Even rarer is the fluorescence from states that have two more unpaired electrons than the open-shell ground state, for example tripdoublet states (a triplet excitation antiferromagnetically coupled to a doublet state). The description of the latter states by U-TDDFT is notoriously inaccurate due to large spin contamination. In this work, we applied our spin-adapted TDDFT method, X-TDDFT, and the static-dynamic-static second order perturbation theory (SDSPT2), to the study of the excited states as well as their relaxation pathways of copper(II) porphyrin; previous experimental works suggested that the photoluminescence of some substituted copper(II) porphyrins originate from a tripdoublet state, formed by a triplet ligand \(\pi\rightarrow\pi^{*}\) excitation. Our results demonstrated favorable agreement between the X-TDDFT, SDSPT2 and experimental excitation energies, and revealed noticeable improvements of X-TDDFT compared to U-TDDFT, suggesting that X-TDDFT is a reliable tool for the study of tripdoublet fluorescence. Intriguingly, the aforementioned tripdoublet state is the lowest doublet excited state and lies only slightly higher than the lowest quartet state, which explains why the tripdoublet of copper(II) porphyrin is long-lived enough to fluoresce; an explanation for this unusual state ordering is given. Indeed, thermal vibration correlation function (TVCF)-based calculations of internal conversion, intersystem crossing, and radiative transition rates confirm that copper(II) porphyrin emits thermally activated delayed fluorescence (TADF) and a small amount of phosphorescence at low temperature (83 K), in accordance with experiment. The present contribution is concluded by a few possible approaches of designing new molecules that fluoresce from tripdoublet states.
## I Introduction
Fluorescence, while ubiquitous in organic and organometallic molecules, is in most cases observed in closed-shell systems. It is well-known that introducing an open-shell impurity, such as dioxygen[1], a stable organic radical[2] or a transition metal ion[3], frequently quenches the fluorescence of a closed-shell molecule[4]. One reason for this phenomenon is that the addition of an unpaired electron to a system typically introduces additional low-lying states, in particular charge transfer states that involve an electron being excited into or out of the new open-shell orbital (O). Moreover, while spin-conserving single excitations of a singlet reference determinant from closed-shell (C) to vacant-shell (V) orbitals, hereafter termed CV excitations following our previous works[5; 6; 7; 8], give rise to \(n_{\rm C}n_{\rm V}\) singlet excited states and \(n_{\rm C}n_{\rm V}\) triplet excited states (where \(n_{\rm C}\) and \(n_{\rm V}\) are the numbers of closed-shell and vacant-shell orbitals, respectively), with an \(M_{S}=1/2\) doublet determinant one obtains \(2n_{\rm C}n_{\rm V}\) excitations that are mixtures of doublets and quartets (the \(\Psi_{i}^{\bar{a}}\) and \(\Psi_{i}^{a}\) determinants in Figure 1; here orbitals without overbars denote \(\alpha\) orbitals, and those with overbars denote \(\beta\) ones). They can be linearly combined to make \(n_{\rm C}n_{\rm V}\) pure doublet states, but the other linear combination remains a mixture of doublet and quartet:
\[\Psi_{\rm singdoublet} = \frac{1}{\sqrt{2}}\left(\Psi_{i}^{a}+\Psi_{\bar{i}}^{\bar{a}} \right), \tag{1}\] \[\Psi_{\rm mixed} = \frac{1}{\sqrt{2}}\left(\Psi_{i}^{a}-\Psi_{\bar{i}}^{\bar{a}} \right). \tag{2}\]
In spin-adapted TDDFT methods, the latter are spin-adapted to give \(2n_{\rm C}n_{\rm V}\) pure doublet states and \(n_{\rm C}n_{\rm V}\) quartet states, by mixing with the \(n_{\rm C}n_{\rm V}\) spin flip-up excitations from the \(M_{S}=-1/2\) component of the reference determinant, i.e. the \(\Psi_{it}^{\bar{t}a}\) determinants in Figure 1[5; 6; 7]:
\[\Psi_{\rm tripdoublet} = \frac{1}{\sqrt{6}}\left(-\Psi_{i}^{a}+\Psi_{i}^{\bar{a}}+2\Psi_{ it}^{\bar{t}a}\right), \tag{3}\] \[\Psi_{\rm quartet} = \frac{1}{\sqrt{3}}\left(\Psi_{i}^{a}-\Psi_{i}^{\bar{a}}+\Psi_{ it}^{\bar{t}a}\right). \tag{4}\]
Note that both the "singdoublets" and "tripdoublets" are pure doublet states. While the singdoublets Eq. 1 (which we called the CV(0) states in our previous works[5; 6; 7]) are direct analogs of singlet excited states out of a singlet reference, the tripdoublets Eq. 3 (CV(1) states) do not have analogs in closed-shell systems, and create extra spin-allowed
non-radiative relaxation pathways compared to when the reference determinant is singlet. This further contributes to the short excited state lifetimes of doublet systems. As a consequence, doublet molecules (and open-shell molecules in general) are rarely fluorescent.
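To make Eqs. 1-4 concrete, the three \(M_{S}=1/2\) determinants can be treated as an abstract orthonormal basis and the spin-adapted combinations as coefficient vectors. The short sketch below (an illustration only, not part of the reported calculations; it ignores all Hamiltonian couplings) verifies their orthonormality, shows that the spin-mixed combination of Eq. 2 carries tripdoublet and quartet weights of 1/3 and 2/3 (the form written later in Eq. 5), and recovers the maximal spin-contamination error of 2 in \(\langle S^{2}\rangle\) for such a mixed state.

```python
import numpy as np

# Orthonormal basis of M_S = 1/2 determinants: (Psi_i^a, Psi_ibar^abar, Psi_it^(tbar a))
singdoublet = np.array([1, 1, 0]) / np.sqrt(2)   # Eq. 1, CV(0) state
mixed       = np.array([1, -1, 0]) / np.sqrt(2)  # Eq. 2, doublet/quartet mixture
tripdoublet = np.array([-1, 1, 2]) / np.sqrt(6)  # Eq. 3, CV(1) state
quartet     = np.array([1, -1, 1]) / np.sqrt(3)  # Eq. 4, M_S = 1/2 quartet

# The spin-adapted states form an orthonormal set
S = np.array([singdoublet, tripdoublet, quartet])
assert np.allclose(S @ S.T, np.eye(3))

# The "mixed" combination decomposes into 1/3 tripdoublet + 2/3 quartet by weight,
# i.e. the structure of Eq. 5 for a spin-contaminated tripdoublet
c_trip, c_quart = mixed @ tripdoublet, mixed @ quartet
print(c_trip**2, c_quart**2)              # ~0.333, ~0.667

# Expected <S^2>: doublet 0.75, quartet 3.75; the excess over 0.75 is exactly 2,
# the maximal spin-contamination error of a single excitation
s2_mixed = c_trip**2 * 0.75 + c_quart**2 * 3.75
print(s2_mixed - 0.75)                    # 2.0
```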
Still, there exist open-shell molecules that do fluoresce, which have found applications in e.g. organic light-emitting diodes (OLEDs)[9; 10]. However, their fluorescence usually originates from an excited state that has only one unpaired electron, i.e. a CO or OV excited state (where CO stands for a single excitation from a closed-shell orbital to an open-shell one; similarly for OV), instead of a CV excited state. This can be partly rationalized by approximating the excitation energies of the system by orbital energy differences. Under this approximation, there is at least one CO state and one OV state below any given CV state, since the lowest CV excitation energy is the sum of the excitation energies of a CO state and an OV state (Figure 1). Therefore, the lowest CV state tends not to be the lowest excited state of the system, and thus usually has more energetically accessible non-radiative relaxation pathways than the low-lying CO and OV states do, rendering fluorescence from CV states especially hard to achieve. To counter this, one may try to inhibit the non-radiative relaxation of the CV state to lower excited states. However, the sheer number of non-radiative relaxation pathways that one would have to inhibit poses a great challenge for designing an open-shell molecule that fluoresces from a CV state. Alternatively, one may design a system where the orbital energy difference approximation fails dramatically, allowing the lowest CV state to become the first excited state. In this case, the fluorescence from the CV state only needs to compete with the intersystem crossings (ISCs) to the lowest quartet state(s) and the internal conversion (IC) to the ground state, which are the only two energetically downhill non-radiative relaxation pathways available to the CV state. In particular, note that when the CV excitations shown in Figure 1 linearly combine to give singdoublets, tripdoublets and quartets via Eqs. 3-4, there is an energy splitting that usually places the quartet below the tripdoublet, and the tripdoublet below the singdoublet; while the former is a consequence of Hund's rule, the latter can be rationalized by applying Hund's rule after neglecting the coupling of the open-shell orbital to the closed-shell and vacant-shell ones. This gives tripdoublets a much greater chance than singdoublets of emitting fluorescence with an appreciable quantum yield. Nevertheless, the singdoublet-tripdoublet splitting appears to be small in general, compared with the orbital energy difference that one would have to overcome, which can amount to several eV. Hence, even fluorescence from tripdoublets proves to be scarce.
The present paper represents a preliminary attempt to unveil some of the factors that enable an open-shell molecule to fluoresce from a tripdoublet state, via a case study of copper(II) porphyrin complexes. Copper(II) porphyrin complexes, like most porphyrin complexes, show two intense visible absorption bands near 390-420 nm and 520-580 nm[11; 12]; they are conventionally termed the B and Q bands, respectively. Gouterman et al.[12] studied the luminescence of copper(II) porphyrin molecules in the solid state by exciting their Q bands, suggesting that the emission may originate from one of the two low-lying \(\pi\rightarrow\pi^{*}\) states, \({}^{2}\)T or \({}^{4}\)T (here the 2, 4 represent the overall spin multiplicity of the complex, and T denotes that the "local" spin multiplicity of the porphyrin ring is triplet). They speculated that a rapid equilibrium may exist between the \({}^{2}\)T and \({}^{4}\)T states. The equilibrium ratio of these two states is largely dependent on the energy gap (\(\Delta E_{\rm DQ}\)) between them and the temperature, via the Boltzmann distribution. The radiative transition from the \({}^{2}\)T state to the ground state is spin-allowed, making it much faster than the phosphorescence from the \({}^{4}\)T state. Thus, when \(\Delta E_{\rm DQ}\) is small and the temperature is high, the experimentally observed rapid emission is predominantly from the \({}^{2}\)T state. Conversely, when \(\Delta E_{\rm DQ}\) is large and the temperature is low, a slow emission attributed to the phosphorescence of the \({}^{4}\)T state was observed instead, due to the concentration of the \({}^{4}\)T state largely overwhelming that of the \({}^{2}\)T state. Thus, molecules such as copper 2,3,7,8,12,13,17,18-octaalkylporphyrin
Figure 1: Schematic depictions of closed-open (CO), open-vacant (OV), and closed-vacant (CV) excitations, and their approximate excitation energies as predicted from restricted open-shell Kohn-Sham (ROKS) orbital energy differences.
(CuOAP), which possess small \(\Delta E_{\rm DQ}\) values, exhibit luminescence primarily in the form of fluorescence from the \({}^{2}\)T state at liquid nitrogen temperature, whereas copper 5,10,15,20-tetraphenylporphyrin (CuTPP) with a larger \(\Delta E_{\rm DQ}\) mainly undergoes phosphorescence from the \({}^{4}\)T state at the same temperatures. The unsubstituted copper porphyrin (CuP) is the most interesting of all, as pure phosphorescence was observed at low temperatures (35 K), which gradually gives way to fluorescence when the temperature was elevated, eventually giving pure fluorescence at 143 K[13]. Similar results have been obtained by following works with different techniques and/or solvents[14; 15].
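Gouterman's rapid-equilibrium argument can be quantified with a few lines (a hedged sketch with illustrative gap values; it uses electronic Boltzmann statistics with spin degeneracies 2 and 4 only and neglects vibrational partition functions): the equilibrium \({}^{4}\)T/\({}^{2}\)T population ratio grows exponentially with \(\Delta E_{\rm DQ}/k_{\rm B}T\), which is why a small gap and a higher temperature favor emission from \({}^{2}\)T while a large gap at low temperature leaves essentially all population in \({}^{4}\)T.

```python
import numpy as np

K_B_CM = 0.695  # Boltzmann constant in cm^-1 per K

def quartet_to_doublet_ratio(gap_cm, temp_k, g_quartet=4, g_doublet=2):
    """Equilibrium population ratio N(4T)/N(2T) for a quartet lying gap_cm below the doublet.

    Electronic Boltzmann statistics only; vibrational contributions and the
    orbital degeneracies (identical for both states) are ignored.
    """
    return (g_quartet / g_doublet) * np.exp(gap_cm / (K_B_CM * temp_k))

for gap in (100, 300, 500):         # illustrative Delta E_DQ values, cm^-1
    for temp in (83, 300):          # liquid-nitrogen-like vs. room temperature, K
        print(gap, temp, quartet_to_doublet_ratio(gap, temp))
```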
The simple and intuitive picture has since been supplemented by subsequent works, which also excited the B band, and proposed that charge transfer (CT) states may play an important role in the relaxation of the initial bright state to the essentially dark \({}^{2}\)T state. Holten et al.[16] investigated the excited state relaxation processes of CuTPP and CuOEP at different temperatures and in different solvents, proposing possible pathways involving intermediate states that are probably ligand-to-metal CT (LMCT) states. This is supported by the gas-phase mass spectrometry experiments by Ha-Thi et al.[17], although the precise composition of the CT state remains uncertain. Understanding the excited-state relaxation pathways of copper porphyrins is crucial for gaining insights into their photophysical processes and controlling their optical properties. In particular, whether the CT state(s) (or any other excited states) lie below the \({}^{2}\)T state may have a profound influence on whether the \({}^{2}\)T state fluoresces or not, as follows from Kasha's rule. Meanwhile, the energy gap of the \({}^{2}\)T and \({}^{4}\)T states is important for the relative concentration of the two states, and therefore the relative intensities of fluorescence from the \({}^{2}\)T state and the phosphorescence from the \({}^{4}\)T state, i.e. whether the experimentally observed luminescence should be attributed to fluorescence or phosphorescence, or both.
Despite the importance of tripdoublet fluorescence and the long history of experimental studies of copper porphyrins, accurate computational studies of this system prove to be difficult, as traditional unrestricted single-reference methods like U-TDDFT suffer from severe spin contamination issues, leading to systematically underestimated excitation energies. In particular, tripdoublet states are the worst scenario for U-TDDFT, as the errors of the U-TDDFT \(\langle S^{2}\rangle\) values of tripdoublet states reach the theoretical maximum of singly excited states, i.e. 2, when the reference state itself is not spin-contaminated[5; 6; 7; 8; 18]. While multireference methods trivially solve the spin contamination problems, it is notoriously
difficult to obtain an accurate multireference description of the electronic structure of metalloporphyrins, due to the complex interplay between static and dynamic correlation. In this study, we employed the methods developed by our group, namely X-TDDFT[7; 8] and SDSPT2[19; 20] (static-dynamic-static second-order perturbation theory), to address these challenges and provide a rational description of the photophysical processes in copper porphyrin molecules. As the first rigorous spin-adapted TDDFT method[7], X-TDDFT gives spin-adapted excited states even when the reference state is open-shell, thereby generally giving better excitation energies, as well as better transition matrix elements involving the excited states. The recent development of the analytic gradient of X-TDDFT[8] allowed us to use X-TDDFT for excited state geometry optimization and seminumerical Hessian calculations as well. For vertical excitation calculations, we could afford to use SDSPT2, which also served as a reference for benchmarking X-TDDFT and U-TDDFT.
## II Computational details
All DFT, TDDFT, and SDSPT2 calculations were performed using a development version of the Beijing Density Functional (BDF) package[21; 22; 23; 24; 25]. Geometry optimizations were conducted using the PBE0[26; 27] functional and x2c-SVPall[28] basis set in the gas phase, including Grimme's D3 dispersion correction[29; 30], as implemented in the BDF software; relativistic effects were considered at the spin-free exact two component (sf-X2C) level[31; 32; 33; 34]. For transition metal complexes (especially when excited states are considered), the choice of the optimum functional may not be obvious. Herein, four different functionals
Figure 2: Molecular structures of CuP, CuOEP and CuTPP.
(BP86[35; 36; 37], B3LYP[38; 39], PBE0 and \(\omega\)B97X[40]) were benchmarked against SDSPT2 and experimental results, and the PBE0 functional was chosen based on its satisfactory and uniform accuracy (see Section III.1 for details). The orbital diagrams were drawn and visualized with VMD v.1.9.4[41], using cube files generated with the help of Multiwfn v.3.8(dev)[42].
The calculations of ISC rate constants were conducted by the ESD module of the ORCA program, version 5.0.4[43; 44; 45; 46], using the thermal vibration correlation function (TVCF) method based on a multimode harmonic oscillator model. Other rate constants involved in the excited state relaxation process were calculated by the MOMAP package, version 2022A[47; 48; 49], again using the TVCF method and a harmonic approximation of the potential energy surfaces. The default parameters of the two programs were used in all TVCF calculations, except for the "tmax" parameter in the MOMAP calculations (which controls the propagation time of the TVCF), which was set to 3000 fs. All necessary transition matrix elements, including the transition dipole moments, non-adiabatic coupling matrix elements (NACMEs)[50; 51; 52], spin-orbit coupling matrix elements (SOCMEs)[51; 53; 54], as well as the seminumerical Hessians necessary for the TVCF calculations, were calculated by BDF. Note however that all NACMEs were computed by U-TDDFT instead of X-TDDFT, since the theory of X-TDDFT NACMEs has not been developed yet; similarly, geometry optimization and frequency calculations of the \({}^{4}\)T\({}_{1}\) state were performed at the unrestricted Kohn-Sham (UKS) level, which is justified by the small spin contamination (\(\langle S^{2}\rangle\) deviation \(<0.1\)) of this state. The ALDA0 noncollinear exchange-correlation (XC) kernel[55] was used in all spin flip-up Tamm-Dancoff approximation (TDA) calculations (i.e. calculation of quartet states from a doublet reference), which has proven essential for obtaining correct spin state splittings[56]. Duschinsky rotation was considered whenever applicable. The Herzberg-Teller effect was only considered while calculating the radiative relaxation rates, but not the ISC rates, due to program limitations; however this should not change the qualitative conclusions of this paper, since all ISC processes whose Franck-Condon contributions are negligible or zero are expected to contribute negligibly to the photophysics of CuP. Although we have implemented the interface for calculating the Herzberg-Teller effect of phosphorescence by BDF and MOMAP, the computation of the geometric derivatives of the doublet-quartet transition dipole moments by finite differences proved to be numerically ill-behaved, as the \(M_{S}=\pm 1/2\) and \(M_{S}=\pm 3/2\) microstates of the \({}^{4}\)T state mix
strongly when the geometry is perturbed; note that this phenomenon seems to be related to the involvement of quartet states, since we have never observed similar behavior in triplet phosphorescence rate calculations. We thus estimated the total phosphorescence rate by assuming that the ratios of the Franck-Condon and Herzberg-Teller rates are the same for fluorescence and phosphorescence. This treatment is justified by the observation that the geometries and vibrational frequencies of the \({}^{2}\)T\({}_{1}\) and \({}^{4}\)T\({}_{1}\) states are very similar.
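The estimation just described amounts to a simple proportionality between Franck-Condon and total rates; a minimal sketch (with placeholder rate constants that are not the values computed in this work) is given below.

```python
def estimate_total_phosphorescence(k_phos_fc, k_fluor_fc, k_fluor_total):
    """Scale the Franck-Condon phosphorescence rate by the fluorescence total/FC enhancement.

    Assumes the Herzberg-Teller-to-Franck-Condon ratio is the same for
    fluorescence (2T1 -> 2S0) and phosphorescence (4T1 -> 2S0), as argued above.
    """
    return k_phos_fc * (k_fluor_total / k_fluor_fc)

# Placeholder rate constants in s^-1, for illustration only
print(estimate_total_phosphorescence(k_phos_fc=1.0, k_fluor_fc=2.0e3, k_fluor_total=3.0e3))
```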
The active space of the SDSPT2 calculations was selected through the iCAS (imposed automatic selection and localization of complete active spaces) method[57], and the orbitals were optimized using the iCISCF (iterative configuration interaction (iCI)-based multiconfigurational self-consistent field (SCF) theory) method[58], which provided a reference wavefunction for the SDSPT2 calculation. An active space of CAS(13,14) was used in this study. The B-band, Q-band and CT states involved in the excited state relaxation process mainly involve the Cu 3d and 4d orbitals, plus the four porphyrin \(\pi\) orbitals of the Gouterman four-orbital model[11], making a minimal active space of CAS(13,14). The chosen active space thus properly describes the primary excited states of interest for investigation. Expanding the active space further would result in unnecessary computational overhead without providing additional insights. All SDSPT2 calculations reported herein include the Pople correction.
## III Results and Discussion
### Absorption process
As is well-known, density functionals generally have difficulties with simultaneously describing local excitation (LE) and CT states with good accuracy. Since we could only afford to do the geometry optimizations and frequency calculations under the DFT and TDDFT levels, a suitable functional that qualitatively reproduces the SDSPT2 excitation energies has to be chosen by comparing the TDDFT vertical absorption energies of a few common functionals with SDSPT2 data. B3LYP and PBE0 are generally common choices for the excited states of metalloporphyrins, and BP86 is often used to optimize their ground-state structures. Pure functionals usually tend to underestimate excitation energies, but empirically, their description of the Q band (an LE state) is better than hybrid functionals, as will
be confirmed by our calculation results. As CT states are involved in the relaxation process of the excited states of copper porphyrin, range-separated hybrid functionals (which provide good descriptions of CT states in general) may prove to be suitable as well. These considerations gave a list of four representative functionals, BP86, B3LYP, PBE0 and \(\omega\)B97X, that were subjected to benchmark calculations.
Different functionals display distinct behaviors for the excitation energies of CuP compared to the results obtained from SDSPT2, as shown in Figure 4. The two characteristic absorption bands of the porphyrin molecule correspond to the \({}^{2}\)S\({}_{1}\) (Q band) and \({}^{2}\)S\({}_{2}\) (B band) states, which are the only bright states of most porphyrin complexes in the visible region. They are also the only excited states for which accurate experimental vertical absorption energies are available: in benzene they have been measured as 2.25 and 3.15 eV, respectively[12]. Moreover, the absorption energy of the \({}^{2}\)T\({}_{1}\) state has been measured by fluorescence excitation spectra experiments, but only for certain substituted porphyrins: for example, the \({}^{2}\)T\({}_{1}\) absorption energy of CuEtio (Etio = etioporphyrin I) was measured in _n_-octane as 1.81 eV, while the emission energy from the same state in the same solvent was 1.79 eV[12]. Assuming that the Stokes shift of the \({}^{2}\)T\({}_{1}\) state is independent of the porphyrin substituents, and combined with the experimental emission energy of the \({}^{2}\)T\({}_{1}\) state of CuP in the same solvent (1.88 eV)[12], we obtain an estimate of the experimental \({}^{2}\)T\({}_{1}\) absorption energy of CuP as 1.90 eV. Gratifyingly, the SDSPT2 excitation energies of all three states agree with the experimental values to within 0.2 eV, which is typical of the accuracy of SDSPT2[59] and confirms the suitability of SDSPT2 as a benchmark reference for CuP. The BP86 functional performs better for these two states, with results closer to the SDSPT2 calculations, suggesting its suitability for localized excitations in the porphyrin system. However, the BP86 functional performs poorly in describing the dark charge transfer (CT) states, significantly underestimating their energies, as expected. In contrast, the range-separated functional \(\omega\)B97X shows good agreement with the CT states compared to SDSPT2 results, accurately reproducing their energies. However, the \(\omega\)B97X functional's description of the LE states (\({}^{2}\)S\({}_{1}\) and \({}^{2}\)S\({}_{2}\)) is rather poor, with energies notably higher than the SDSPT2 results. The PBE0 and B3LYP functionals represent compromises between the two kinds of functionals and provide more accurate overall descriptions of the LE and CT states, giving results closer to the SDSPT2 calculations. Considering the overall performance in describing different states, the PBE0 functional slightly outperforms B3LYP,
leading to its selection for the remaining part of the present study.
The \({}^{2}\)S\({}_{1}\) and \({}^{2}\)S\({}_{2}\) states are almost spin-adapted states with minimal spin contamination, even at the U-TDDFT level (Table 1), since they are dominated by singdoublet excitations. As shown in Figure 4, both X-TDDFT and U-TDDFT provide similar descriptions for these two states; note however that functionals with large amounts of HF exchange generally overestimate the excitation energies of these two states, especially \({}^{2}\)S\({}_{2}\). At the TDDFT levels, the CT states are dominated by CO-type excitations (from \(\pi\) to 3d\({}_{x^{2}-y^{2}}\)), which are also spin-adapted. Both U-TDDFT and X-TDDFT show comparable performance in describing the CT states. However, both methods display large errors compared to SDSPT2 for the CT states. Table 1 presents the excitation energies and the corresponding dominant excited state compositions, computed at the ground state structure of CuP. It can be observed that the CT states are predominantly composed of double excitations, which are not accurately captured by single-reference methods. Despite this, functionals with large amounts of HF exchange still perform notably better, as is generally expected for CT states. The \({}^{2}\)T\({}_{1}\) and \({}^{2}\)T\({}_{2}\) states correspond to tripdoublet excitations (from \(\pi\) to \(\pi^{*}\)), and they suffer from significant spin contamination at the U-TDDFT level, since instead of pure doublets, U-TDDFT can only describe these tripdoublet states as a heavy mixture of doublets and quartets, e.g.:
\[\Psi(^{2}\text{T}_{1})^{\text{U-TDDFT}}\approx-\sqrt{\frac{1}{3}}\Psi(^{2} \text{T}_{1})^{\text{X-TDDFT}}+\sqrt{\frac{2}{3}}\Psi(^{4}\text{T}_{1},M_{S}=1 /2)^{\text{X-TDDFT}}, \tag{5}\]
as follows from Eqs. 2-4. U-TDDFT thus systematically underestimates the excitation energies of the \({}^{2}\)T\({}_{1}\) and \({}^{2}\)T\({}_{2}\) states, since the energies of quartets are in general lower than the corresponding tripdoublets, as discussed in the Introduction. In Section III.3 we will also see that part of the underestimation is due to the failure of U-TDDFT to reproduce the energy degeneracy of \(\Psi(^{4}\text{T}_{1},M_{S}=1/2)\) and \(\Psi(^{4}\text{T}_{1},M_{S}=3/2)\). On the other hand, X-TDDFT avoids spin contamination through implicitly incorporating extra double excitations necessary for spin-adapting the tripdoublet states (Eq. 3), and therefore performs systematically better than U-TDDFT for all the functionals studied herein. The improvements of the excitation energies (\(\sim\) 0.05 eV) may seem small, but have profound influences on the magnitude and even the sign of the \({}^{2}\)T\({}_{1}\)-\({}^{4}\)T\({}_{1}\) gap, and therefore on the ratio of fluorescence and phosphorescence emission, as will be detailed in Section III.3.
Already from the calculated absorption energies, one can draw some conclusions about the photophysical processes of CuP. The vertical absorption energies of the \({}^{2}\)T\({}_{1}\), \({}^{2}\)S\({}_{1}\), \({}^{2}\)CT\({}_{1}\)
\({}^{2}\)CT\({}_{2}\) an \({}^{2}\)S\({}_{2}\) states of CuP have the intriguing property of being roughly equidistant with very small spacings (0.2-0.4 eV), and the \({}^{2}\)T\({}_{2}\) state is furthermore nearly degenerate with \({}^{2}\)S\({}_{1}\). Therefore, once CuP is excited to the bright \({}^{2}\)S\({}_{1}\) or \({}^{2}\)S\({}_{2}\) states by visible light, the molecule is expected to undergo a cascade of ultrafast IC processes, all the way till the lowest doublet state, \({}^{2}\)T\({}_{1}\). The availability of an ultrafast IC cascade also means the ISC from these high-lying excited states are probably unimportant, especially considering that copper is a relatively light element. These findings are in qualitative agreement with the experimental observation that the \({}^{2}\)S\({}_{2}\) states of substituted copper(II) porphyrins relax to the \({}^{2}\)T\({}_{1}\) states in gas phase through a two-step process via the intermediacy of a CT state, with time constants 65 fs and 350-2000 fs, respectively, depending on the substituents[17]. In solution, the \({}^{2}\)S\({}_{1}\) state of Cu(II) protoporphyrin IX dimethyl ester was known to relax to \({}^{2}\)T\({}_{1}\) within 8 ps[14], and for CuTPP as well as CuOEP the same relaxation was also found to occur within the picosecond timescale[15]. Recently, the decay rates of the \({}^{2}\)S\({}_{1}\) state were measured as 50 fs and 80 fs for CuTPP and CuOEP, respectively, in cyclohexane[60]. The \({}^{2}\)S\({}_{1}\) state lifetime of CuP itself was also estimated, although indirectly from the natural width of the 0-0 peak of the Q band, as 30 fs[61]. Quantitative computation of these IC rates is however beyond the scope of the paper, as the narrow energy gaps and possible involvement of conical intersections probably necessitate nonadiabatic molecular dynamics simulations. Nevertheless, a \({}^{2}\)S\({}_{2}\rightarrow^{2}\)CT\(\rightarrow^{2}\)S\({}_{1}\)/\({}^{2}\)T\({}_{2}\rightarrow^{2}\)T\({}_{1}\) IC pathway can still be tentatively proposed based on the energy ordering alone. Finally, it is worth noting that the use of the accurate SDSPT2 method, as opposed to TDDFT, is crucial for obtaining a reliable estimate of the qualitative trend of the excited state energies. BP86 predicts that the CT states lie below the \({}^{2}\)T states, leading to a qualitatively wrong IC pathway; \(\omega\)B97X, on the other hand, grossly overestimates the energy of \({}^{2}\)S\({}_{2}\) and would underestimate its tendency to undergo IC to the CT states (Figure 4). While B3LYP and PBE0 predict reasonable excited state orderings, their accuracy for the CT states cannot be expected in advance without the input of a higher-level computational method, due to the lack of experimental data of the CT states as well as the presence of double excitation contributions in the CT states, which cannot be correctly described under the adiabatic TDDFT framework.
### Analysis of the equilibrium geometries of the first doublet excited state
Since all higher lying excited states are predicted to convert to \({}^{2}\)T\({}_{1}\) over a short timescale, to study the luminescence of CuP (and probably also other Cu(II) porphyrin complexes bearing alkyl or aryl substituents, given that these substituents do not change excitation energies drastically[12]), it should suffice to study the radiative and non-radiative processes starting from the \({}^{2}\)T\({}_{1}\) state. As \({}^{2}\)T\({}_{1}\) is the lowest doublet state, we expect that its lifetime is long enough for it to relax to its equilibrium structure, before any further transitions occur. Therefore, accurately predicting the equilibrium geometry of the \({}^{2}\)T\({}_{1}\) state is crucial for subsequent studies.
Some selected bond lengths for the optimized ground state and excited state structures are provided in Table 2. The difference in ground state bond lengths between the UKS and ROKS methods is extremely small (\(<0.0001\) A), as can be seen from their root mean square deviation (RMSD), which can be attributed to the extremely small UKS spin contamination of the ground state of CuP (\(\langle S^{2}\rangle_{\rm PBE0}=0.7532\)). The doubly degenerate \({}^{2}\)T\({}_{1}\) state, which belongs to the doubly degenerate E\({}_{u}\) irreducible representation (irrep) under the D\({}_{4h}\) group, undergoes Jahn-Teller distortion to give a D\({}_{2h}\) structure, where two of the opposing Cu-N
Figure 3: ROKS frontier molecular orbitals of CuP, computed at the sf-X2C-PBE0/x2c-SVPall level of theory.
bonds are elongated but the corresponding pyrrole rings remain almost intact, while the other two Cu-N bonds are almost unchanged but the corresponding pyrrole rings exhibit noticeable deformation. The U-TDDFT and X-TDDFT bond lengths of the \({}^{2}\)T\({}_{1}\) state show larger deviations than the UKS and ROKS ground state ones, with the largest deviation exceeding 0.001 A (the C-C (mn) bond), which is also reflected in the RMSD values. However, the structure differences are still small on an absolute scale. This suggests that the coupling of the unpaired Cu(II) d electron and the porphyrin triplet is weak, so that a reasonable tri-doublet state geometry is obtained even if this coupling is described qualitatively incorrectly (as in U-TDDFT). By contrast, our previous benchmark studies on small molecules (where the coupling between unpaired electrons is much larger) revealed that X-TDDFT improves the U-TDDFT bond lengths by 0.01-0.05 A on average, depending on the functional and
\begin{table}
\begin{tabular}{c c c c} \hline State & \(\Delta\)E & \(\Delta\langle\)S\({}^{2}\rangle\) & Dominant transitions \\ \hline \({}^{2}\)T\({}_{1}\) & 2.08 & 1.9994 & \(\pi\)(a\({}_{2u}\)) \(\rightarrow\)\(\pi^{*}\)(e\({}_{g}\)) 87.1\% \\ \({}^{2}\)T\({}_{2}\) & 2.30 & 1.9968 & \(\pi\)(a\({}_{1u}\)) \(\rightarrow\)\(\pi^{*}\)(e\({}_{g}\)) 86.7\% \\ \({}^{2}\)S\({}_{1}\) & 2.37 & 0.0031 & \(\pi\)(a\({}_{1u}\)) \(\rightarrow\)\(\pi^{*}\)(e\({}_{g}\)) 56.9\%, \(\pi\)(a\({}_{2u}\)) \(\rightarrow\)\(\pi^{*}\)(e\({}_{g}\)) 36.1\% \\ \({}^{2}\)CT\({}_{1}\) & 2.73 & 0.0101 & [\(\pi\)(a\({}_{2u}\)) \(\rightarrow\) Cu 3d\({}_{x^{2}-y^{2}}\)(b\({}_{1g}\)) + Cu 3d\({}_{xz}\)/3d\({}_{yz}\)(e\({}_{g}\)) \(\rightarrow\)\(\pi^{*}\)(e\({}_{g}\))] 51.0\%, \(\pi\)(a\({}_{2u}\)) \(\rightarrow\) Cu 3d\({}_{x^{2}-y^{2}}\)(b\({}_{1g}\)) 39.6\% \\ \({}^{2}\)CT\({}_{2}\) & 2.93 & 0.0064 & [\(\pi\)(a\({}_{1u}\)) \(\rightarrow\) Cu 3d\({}_{x^{2}-y^{2}}\)(b\({}_{1g}\)) + Cu 3d\({}_{xz}\)/3d\({}_{yz}\)(e\({}_{g}\)) \(\rightarrow\)\(\pi^{*}\)(e\({}_{g}\))] 42.4\%, \(\pi\)(a\({}_{1u}\)) \(\rightarrow\) Cu 3d\({}_{x^{2}-y^{2}}\)(b\({}_{1g}\)) 34.5\% \\ \({}^{2}\)S\({}_{2}\) & 3.30 & 0.0115 & \(\pi\)(a\({}_{2u}\)) \(\rightarrow\)\(\pi^{*}\)(e\({}_{g}\)) 52.5\%, \(\pi\)(a\({}_{1u}\)) \(\rightarrow\)\(\pi^{*}\)(e\({}_{g}\)) 31.8\% \\ \hline \end{tabular}
\end{table}
Table 1: The SDSPT2 excitation energies (in eV) computed at the sf-X2C-PBE0/x2c-SVPall ground state structure of CuP, along with the corresponding excited state compositions. \(\Delta\langle\)S\({}^{2}\rangle\): difference of the excited state’s \(\langle\)S\({}^{2}\rangle\) value with the ground state \(\langle\)S\({}^{2}\rangle\), computed at the U-TD-PBE0 level. Transitions in square brackets represent double excitations.
the molecule[7; 8].
### Relaxation Processes of the \({}^{2}\)T\({}_{1}\) state
As revealed by the above analyses, the relaxation process from high-lying excited states to the \({}^{2}\)T\({}_{1}\) state is rapid, and the only energetically accessible relaxation pathways are the radiative (fluorescence) and non-radiative (IC) relaxations from \({}^{2}\)T\({}_{1}\) to the ground state \({}^{2}\)S\({}_{0}\), as well as the ISC from \({}^{2}\)T\({}_{1}\) to \({}^{4}\)T\({}_{1}\). The \({}^{4}\)T\({}_{1}\) state can furthermore convert back to the \({}^{2}\)T\({}_{1}\) state through reverse ISC (RISC), or relax to the ground state via radiative (phosphorescence) or non-radiative (ISC) pathways (Figure 5).
Before we discuss the quantitative values of transition rates, we first analyze the relevant electronic states from the viewpoint of point group symmetry. The equilibrium structures of the \({}^{2}\)T\({}_{1}\) and \({}^{4}\)T\({}_{1}\) states are both distorted owing to the Jahn-Teller effect, and possess only D\({}_{2h}\) symmetry, compared to the D\({}_{4h}\) symmetry of the ground state equilibrium structure. The implications are two-fold: the double degeneracy of the \({}^{n}\)T\({}_{1}\)(\(n=2\) or \(4\)) state at the D\({}_{4h}\) geometry (where they both belong to the E\({}_{u}\) irrep) is lifted to give two adiabatic states, hereafter termed the \({}^{n}\)T\({}_{1}\)(1) and \({}^{n}\)T\({}_{1}\)(2) states, respectively, where \({}^{n}\)T\({}_{1}\)(1) is the state with the lower energy; and the potential energy surface of the \({}^{n}\)T\({}_{1}\)(1) state has two
Figure 4: Errors of different excited states of CuP with respect to SDSPT2 values. \({}^{a}\)TDA
chemically equivalent D\({}_{2h}\) minima, \({}^{n}\)T\({}_{1}\)(1)(X) and \({}^{n}\)T\({}_{1}\)(1)(Y), where different pairs of Cu-N bonds are lengthened and shortened (see the schematic depictions in Figure 5). Although \({}^{n}\)T\({}_{1}\)(1)(X) and \({}^{n}\)T\({}_{1}\)(1)(Y) are on the same adiabatic potential energy surface, their electronic wavefunctions represent different diabatic states, as they belong to the B\({}_{3u}\) and B\({}_{2u}\) irreps, respectively. The \({}^{n}\)T\({}_{1}\)(1)(X) structure is diabatically connected to \({}^{n}\)T\({}_{1}\)(2)(Y) (i.e. the
\({}^{n}\)T\({}_{1}\)(2) state at the equilibrium structure of \({}^{n}\)T\({}_{1}\)(1)(Y)) via a D\({}_{4h}\) conical intersection, while \({}^{n}\)T\({}_{1}\)(1)(Y) is diabatically connected to \({}^{n}\)T\({}_{1}\)(2)(X) via the same conical intersection. Thus, the \({}^{n}\)T\({}_{1}\)(2)(X) and \({}^{n}\)T\({}_{1}\)(2)(Y) states are expected to undergo ultrafast IC from the D\({}_{4h}\) conical intersection, to give the \({}^{n}\)T\({}_{1}\)(1)(Y) and \({}^{n}\)T\({}_{1}\)(1)(X) states as the main products, respectively. The direct transition from \({}^{n}\)T\({}_{1}\)(2) to states other than \({}^{n}\)T\({}_{1}\)(1) can therefore be neglected.
From the irreps of the electronic states, we conclude that certain ISC transitions are forbidden by spatial symmetry. These include the transitions between \({}^{2}\)T\({}_{1}\)(1)(X) and \({}^{4}\)T\({}_{1}\)(1)(X), between \({}^{2}\)T\({}_{1}\)(1)(Y) and \({}^{4}\)T\({}_{1}\)(1)(Y), and between any one of the \({}^{4}\)T\({}_{1}\)(1) structures and \({}^{2}\)S\({}_{0}\). All IC and radiative transitions, plus the ISC transitions between \({}^{2}\)T\({}_{1}\)(1)(X) and \({}^{4}\)T\({}_{1}\)(1)(Y) as well as between \({}^{2}\)T\({}_{1}\)(1)(Y) and \({}^{4}\)T\({}_{1}\)(1)(X), are symmetry allowed. While symmetry forbidden ISC processes can still gain non-zero rates from the Herzberg-Teller effect, we deem that the rates are not large enough to have any noticeable consequences. On one hand, the two symmetry forbidden ISC pathways between the \({}^{2}\)T\({}_{1}\)(1) and \({}^{4}\)T\({}_{1}\)(1) states are overshadowed by the two symmetry allowed ones, so that the total ISC rate between \({}^{2}\)T\({}_{1}\)(1) and \({}^{4}\)T\({}_{1}\)(1) is undoubtedly determined by the latter alone. The ISC from \({}^{4}\)T\({}_{1}\)(1) to \({}^{2}\)S\({}_{0}\), on the other hand, has to compete with the IC process from \({}^{2}\)T\({}_{1}\)(1) to \({}^{2}\)S\({}_{0}\) in order to affect the quantum yield or the dominant relaxation pathway of the system noticeably, but the latter process is both spin-allowed and spatial symmetry-allowed, while the former is forbidden in both aspects. We therefore neglect all ISC rates whose Franck-Condon contributions are zero by spatial symmetry.
We then calculated the rate constants for all transitions between \({}^{2}\)T\({}_{1}\), \({}^{4}\)T\({}_{1}\) and \({}^{2}\)S\({}_{0}\) whose rates are non-negligible, by the TVCF method. The rates (Figure 5) were calculated at 83 K, the temperature used in the quantum yield studies of Ref. [12]; the latter studies gave a luminescence quantum yield of 0.09, in a solvent mixture of diethyl ether, isopentane, dimethylformamide and ethanol. The accurate treatment of solvation effects is however complicated and beyond the scope of the paper, so that all transition rates were computed in the gas phase. Our calculated \(k_{\rm ISC}\) from \({}^{2}\)T\({}_{1}\) to \({}^{4}\)T\({}_{1}\) is only slightly larger than the \(k_{\rm IC}\) from \({}^{2}\)T\({}_{1}\) to \({}^{2}\)S\({}_{0}\), suggesting that treating the \({}^{2}\)T\({}_{1}\) and \({}^{4}\)T\({}_{1}\) states as a rapid equilibrium (as in, e.g. Ref. [62] and [13]) is not justified at least in the gas phase. At 83 K, the RISC from \({}^{4}\)T\({}_{1}\) to \({}^{2}\)T\({}_{1}\) is 11 % of the forward ISC rate. Both rates are in favorable agreement with the experimental values of Cu(II) protoporphyrin IX dimethyl ester in benzene,
\(k_{\rm ISC}=1.6\times 10^{9}\)s\({}^{-1}\) and \(k_{\rm RISC}=5.6\times 10^{8}\)s\({}^{-1}\), at room temperature[14]. Our computed ISC and RISC rates give a \({}^{2}\)T\({}_{1}\)-to-\({}^{4}\)T\({}_{1}\) equilibrium concentration ratio of 1:9.0 when \(k_{\rm IC}\) is neglected, but our kinetic simulation shows that the steady state concentration ratio is 1:14.9 when the latter is considered, further illustrating that treating the \({}^{2}\)T\({}_{1}\)-\({}^{4}\)T\({}_{1}\) interconversion as a fast equilibrium can lead to noticeable error. Nevertheless, the fluorescence rate of \({}^{2}\)T\({}_{1}\) still exceeds the phosphorescence rate of \({}^{4}\)T\({}_{1}\) by three orders of magnitude, which more than compensates for the low steady state concentration of \({}^{2}\)T\({}_{1}\). Similar conclusions could be derived from the rates reported in Ref. [62] (\(3.6\times 10^{3}\)s\({}^{-1}\) and \(8.3\times 10^{-1}\)s\({}^{-1}\), respectively), calculated from semiempirical exchange and SOC integrals and experimental absorption oscillator strengths, which agree surprisingly well with the rates that we obtained here. Kinetic simulation suggests that 99.6 % of the total luminescence at this temperature is contributed by fluorescence, and only 0.4 % is due to phosphorescence. This can be compared with the experimental finding by Bohandy and Kim[13] that the phosphorescence of CuP at 86 K is observable as a minor 0-0 peak besides the 0-0 fluorescence peak, with a fluorescence to phosphorescence ratio of about 5:1 to 10:1 (as estimated from Fig. 5 of Ref. [13]); however note that this study was performed in a triphenylene solid matrix.
The total luminescence quantum yield is predicted by our kinetic simulations to be \(1.9\times 10^{-5}\), three orders of magnitude smaller than the experimental quantum yield (0.09) in solution. We believe one possible reason is that the \({}^{2}\)T\({}_{1}\)-\({}^{4}\)T\({}_{1}\) gap of CuP is larger in solution than in the gas phase. This can already be seen from the experimental \({}^{2}\)T\({}_{1}\)-\({}^{4}\)T\({}_{1}\) 0-0 gaps of CuP in solid matrices with different polarities: the 0-0 gap was measured in polymethylmethacrylate as 500 cm\({}^{-1}\)[63], but 310-320 cm\({}^{-1}\) in \(n\)-octane[61] and 267 cm\({}^{-1}\) in triphenylene[13]. Therefore, the 0-0 gap in the gas phase is probably smaller than 267 cm\({}^{-1}\), and indeed, our X-TDDFT calculations predict an adiabatic \({}^{2}\)T\({}_{1}\)-\({}^{4}\)T\({}_{1}\) gap of 92 cm\({}^{-1}\) in the gas phase. The larger \({}^{2}\)T\({}_{1}\)-\({}^{4}\)T\({}_{1}\) gap in solution compared to the gas phase is expected to introduce a Boltzmann factor of \(\exp{(-(E_{\rm sol}-E_{\rm gas})/RT)}\) to \(k_{\rm RISC}\), while changing the other rates negligibly. Setting \(E_{\rm sol}=\)267 cm\({}^{-1}\) and \(E_{\rm gas}=\)92 cm\({}^{-1}\), we obtain a solution phase \(k_{\rm RISC}\) of \(9.02\times 10^{5}\)s\({}^{-1}\), from which kinetic simulations give a fluorescence-phosphorescence ratio of 12:1, in quantitative agreement with experiment[13]. Setting \(E_{\rm sol}=\)500 cm\({}^{-1}\) (as appropriate for the polar solvent used in Ref. [12]) gives \(k_{\rm RISC}=1.60\times 10^{4}\)s\({}^{-1}\), and a total luminescence quantum yield of \(1.1\times 10^{-4}\), with 18 % contribution from fluorescence and 82 % from phosphorescence. The remaining discrepancy (\(\sim\) 800x) of the experimental and
calculated quantum yields can be attributed to the restriction of the molecular vibrations of CuP by the low temperature (and thus viscous) solvent, which is expected to suppress the IC process significantly.
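The gap dependence of \(k_{\rm RISC}\) used above follows a single Boltzmann factor, so the quoted numbers can be cross-checked in a few lines (a consistency sketch assuming pure exponential scaling with the 0-0 gap at 83 K; the gas-phase value is back-extrapolated from the quoted solution-phase number rather than taken from Figure 5).

```python
import numpy as np

K_B_CM = 0.695      # Boltzmann constant, cm^-1 per K
TEMP = 83.0         # K
E_GAS = 92.0        # X-TDDFT adiabatic 2T1-4T1 gap in the gas phase, cm^-1

def rescale_k_risc(k_ref, gap_ref_cm, gap_new_cm, temp_k=TEMP):
    """Rescale a reverse-ISC rate from one 2T1-4T1 gap to another via exp(-dE/kT)."""
    return k_ref * np.exp(-(gap_new_cm - gap_ref_cm) / (K_B_CM * temp_k))

k_risc_267 = 9.02e5                               # s^-1, value quoted for a 267 cm^-1 gap
print(rescale_k_risc(k_risc_267, 267.0, 500.0))   # ~1.6e4 s^-1, matching the text
print(rescale_k_risc(k_risc_267, 267.0, E_GAS))   # implied gas-phase k_RISC, ~1.9e7 s^-1
```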
Interestingly, U-TDA completely fails to reproduce the qualitative picture of Figure 5 and predicts a \({}^{2}\)T\({}_{1}\)-\({}^{4}\)T\({}_{1}\) adiabatic gap of the wrong sign (-276 cm\({}^{-1}\)), violating Hund's rule. At first sight, this may seem surprising: since the U-TDA "tripdoublet state" is a mixture of the true tripdoublet state and the quartet state, the U-TDA \({}^{2}\)T\({}_{1}\) energy should lie between the energies of the true \({}^{2}\)T\({}_{1}\) state and the \({}^{4}\)T\({}_{1}\) state, which means that the U-TDA \({}^{2}\)T\({}_{1}\)-\({}^{4}\)T\({}_{1}\) gap should be smaller than the X-TDA gap but still have the correct sign. However, the U-TDA \({}^{2}\)T\({}_{1}\) state is contaminated by the \(M_{S}=1/2\) component of the \({}^{4}\)T\({}_{1}\) state (Eq. 5), while a spin flip-up U-TDA calculation of the \({}^{4}\)T\({}_{1}\) state gives its \(M_{S}=3/2\) component. The two spin components obviously have the same energy in the exact non-relativistic theory and in all rigorous spin-adapted methods, but not in U-TDA, even when the ground state is not spin-contaminated[55; 56]. This shows that the restoration of the degeneracy of spin multiplets by the random phase approximation (RPA) correction in X-TDDFT[7] indeed leads to qualitative improvement of the excitation energies, instead of being merely a solution to a conceptual problem. It also shows that estimating the tripdoublet energy by extrapolating from the energies of the spin-contaminated tripdoublet and the quartet by e.g. the Yamaguchi method[64] does not necessarily give a qualitatively correct estimate of the spin-pure tripdoublet energy. The inverted doublet-quartet gap introduces qualitative defects to the computed photophysics of CuP. Already when the doublet-quartet gap is zero, the Boltzmann factor is expected to raise the \(k_{\rm RISC}\) to \(9.24\times 10^{7}\)s\({}^{-1}\), reducing the ratio of phosphorescence in the total luminescence to 0.08 %. Further raising the quartet to reproduce the U-TDA doublet-quartet gap will reduce the \(k_{\rm ISC}\) to \(1.42\times 10^{6}\)s\({}^{-1}\), which reduces the ratio of phosphorescence to 0.0007 %. These values are obviously in much worse agreement with the experiments[13].
Finally, we briefly comment on the luminescence lifetimes. The luminescence of CuP is known to decay non-exponentially[63], so its luminescence lifetime can only be approximately determined. The luminescence lifetime of CuP has been determined as 400 \(\mu\)s[63] at 80 K in polymethylmethacrylate, and a biexponential decay with lifetimes of 155 and 750 \(\mu\)s was reported[12] at 78 K in methylphthalylethylglycolate. The same references also reported that the luminescence lifetimes of CuOEP and CuTPP are within the 50-800
\(\mu\)s range. However, in a room temperature toluene solution the luminescence lifetimes of CuOEP and CuTPP were reported to be 115 and 30 ns, respectively[65], and a few nanoseconds in the gas phase[17]. If we define the luminescence lifetime as the time needed for \(1-1/e\approx 63.2\%\) of the luminescence to be emitted, then kinetic simulations from our X-TDDFT rate constants give a gas-phase luminescence lifetime of 70 ns at 83 K, which is much shorter than the low-temperature condensed phase results but in very good agreement with the room-temperature solution phase experiments. The fact that the computed lifetime is one order of magnitude longer than the gas-phase experimental result is probably due to the \({}^{2}\)T\({}_{1}\) state being vibrationally excited in the experimental study. However, using the ISC and RISC rate constants consistent with the U-TDA doublet-quartet gap, one obtains a lifetime of 8.5 fs, which seems somewhat too short given that the highly vibrationally excited CuP molecules in Ref. [17], which carry all the excess thermal energy (\(>1\) eV) after the IC process from B band excitation, should have a much shorter lifetime than a "cold" CuP molecule at 83 K. Thus, our results suggest that X-TDDFT/X-TDA seems to give a more accurate luminescence lifetime than U-TDDFT/U-TDA, and also confirm that the discrepancy of the experimental and calculated quantum yields is probably due to suppression of the IC of \({}^{2}\)T\({}_{1}\) by the low temperature solvent.
### Discussions
As mentioned in the Introduction, the simple orbital energy difference model based on a restricted open-shell determinant (Figure 1) predicts that the lowest tripdoublet is at least the third lowest doublet excited state of any doublet molecule (as long as the ROKS ground state satisfies the _aufbau_ rule), since the tripdoublet is higher than at least one CO state and at least one OV state. It therefore comes as a surprise that the lowest three spin-conserving excited states of CuP are all tripdoublets (Table 1), not to mention that all of them are doubly degenerate, even though the ROKS ground state of CuP is indeed an _aufbau_ state (Figure 6). This suggests a failure of the ROKS orbital energy difference model.
To understand why the ROKS orbital energies fail qualitatively for describing the excited state ordering of CuP, despite that the X-TDDFT method (which uses the ROKS
determinant as the reference state) still gives reasonable excitation energies as compared to SDSPT2, we note that the \(\alpha\) and \(\beta\) Fock matrices of an ROKS calculation are in general not diagonal under the canonical molecular orbital (CMO) basis. Only the unified coupling
Figure 5: Radiative and non-radiative relaxation pathways of the \({}^{2}\)T\({}_{1}\) state. Both the \({}^{2}\)T\({}_{1}\) and \({}^{4}\)T\({}_{1}\) states are splitted by the Jahn-Teller effect to give two adiabatic states, labeled (1) and (2). Each of the (1) states have two equivalent D\({}_{2h}\) equilibrium structures, labeled (X) and (Y). The (2) states do not have equilibrium structures and are connected with the corresponding (1) states via conical intersections. The adiabatic excitation energies of the (1) states, as well as the energies of the (2) states at the equilibrium geometries of their corresponding (1) states, are shown on the left. The transition rates are calculated at 83 K in the gas phase. The forward and reverse ISC rates between \({}^{2}\)T\({}_{1}\)(1)(X) and \({}^{4}\)T\({}_{1}\)(1)(Y) are equal to those between \({}^{2}\)T\({}_{1}\)(1)(Y) and \({}^{4}\)T\({}_{1}\)(1)(X) by symmetry, but the former ISC processes are omitted for clarity. Transition rates that are obviously equal by symmetry reasons are shown only once.
operator \(\mathbf{R}\), assembled from blocks of the CMO Fock matrices,
\[\mathbf{R}=\left(\begin{array}{ccc}\frac{1}{2}(\mathbf{F}_{\mathrm{CC}\alpha}+ \mathbf{F}_{\mathrm{CC}\beta})&\mathbf{F}_{\mathrm{CO}\beta}&\frac{1}{2}( \mathbf{F}_{\mathrm{CV}\alpha}+\mathbf{F}_{\mathrm{CV}\beta})\\ \mathbf{F}_{\mathrm{OC}\beta}&\frac{1}{2}(\mathbf{F}_{\mathrm{OO}\alpha}+ \mathbf{F}_{\mathrm{OO}\beta})&\mathbf{F}_{\mathrm{OV}\alpha}\\ \frac{1}{2}(\mathbf{F}_{\mathrm{VC}\alpha}+\mathbf{F}_{\mathrm{VC}\beta})& \mathbf{F}_{\mathrm{VO}\alpha}&\frac{1}{2}(\mathbf{F}_{\mathrm{VV}\alpha}+ \mathbf{F}_{\mathrm{VV}\beta})\end{array}\right), \tag{6}\]
is diagonal[66]. Note that herein we have used the Guest-Saunders parameterization[67] of the diagonal blocks of \(\mathbf{R}\), which is the default choice of the BDF program, although our qualitative conclusions are unaffected by choosing other parameterizations. However, the leading term of the X-TDDFT calculation is not simply given by the eigenvalue differences
Figure 6: ROKS and UKS orbital energies of CuP at the X-TDDFT and U-TDDFT \({}^{2}\)T\({}_{1}\) equilibrium geometries, respectively, computed at the sf-X2C-PBE0/x2c-SVPall level of theory.
of \(\mathbf{R}\),
\[\Delta^{\prime}_{ia\sigma,jb\tau}=\delta_{\sigma\tau}\delta_{ij}\delta_{ab}(R_{ aa}-R_{ii}), \tag{7}\]
but rather from the \(\alpha\) and \(\beta\) Fock matrices themselves via
\[\Delta_{ia\sigma,jb\tau}=\delta_{\sigma\tau}(\delta_{ij}F_{ab\sigma}-\delta_{ ab}F_{ji\sigma}). \tag{8}\]
Here \(i,j\) represent occupied CMOs, \(a,b\) virtual CMOs, and \(\sigma,\tau\) spin indices. For the diagonal matrix element of an arbitrary single excitation, Eq. 8 and Eq. 7 differ by the following term:
\[\Delta_{ia\sigma,ia\sigma}-\Delta^{\prime}_{ia\sigma,ia\sigma}=\frac{1}{2} \left((F_{aa\sigma}-F_{aa\sigma^{\prime}})-(F_{ii\sigma}-F_{ii\sigma^{\prime} })\right), \tag{9}\]
where \(\sigma^{\prime}\) is the opposite spin of \(\sigma\). For a general hybrid functional, the Fock matrix element differences in Eq. 9 are given by (where \(p\) is an arbitrary CMO, \(c_{\mathrm{x}}\) is the proportion of HF exchange, and \(v^{\mathrm{xc}}\) is the XC potential)
\[F_{pp\beta}-F_{pp\alpha}=c_{\mathrm{x}}(pt|pt)+(v^{\mathrm{xc}}_{pp\beta}-v^{ \mathrm{xc}}_{pp\alpha}), \tag{10}\]
assuming, for the sake of simplicity, that there is only one open-shell orbital \(t\) in the reference state. Assuming that the XC potential behaves similarly to the exact exchange potential, the difference in Eq. 10 is positive, and should usually be largest when \(p=t\), while being small when \(p\) is spatially far from \(t\). The corollary is that the orbital energy difference approximation Eq. 7 should agree well with the X-TDDFT leading term Eq. 8 for CV excitations (where the difference is proportional to the small exchange integral \((pt|pt)\)), but underestimate the excitation energies of CO and OV excitations by a correction proportional to the large \((tt|tt)\) integral.
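A toy numerical sketch of Eqs. 7-10 (hypothetical orbital energies and exchange integrals, chosen only to mimic the qualitative situation described here, with the XC contribution of Eq. 10 folded into a single effective prefactor) illustrates this corollary: the CO and OV diagonal elements of Eq. 8 are pushed above the ROKS orbital-energy differences of Eq. 7 by roughly \(\frac{1}{2}(tt|tt)\)-type corrections, while the CV element is essentially unchanged, so the lowest CV (tripdoublet) excitation can drop below the CO and OV ones.

```python
# Toy model of Eqs. 7-10 (hypothetical numbers, for illustration only).
# One closed (i), one open (t) and one vacant (a) orbital; energies in eV.
eps = {"i": -6.0, "t": -4.0, "a": -2.5}
K = {"i": 0.2, "t": 6.0, "a": 0.2}   # (pt|pt) integrals; a compact t makes (tt|tt) large
cx = 1.0                              # effective prefactor of Eq. 10 (hypothetical)

def spin_split(p):
    """F_pp(beta) - F_pp(alpha), the toy version of Eq. 10."""
    return cx * K[p]

# Eq. 7: ROKS orbital-energy differences
roks = {"CO (i->t)": eps["t"] - eps["i"],
        "OV (t->a)": eps["a"] - eps["t"],
        "CV (i->a)": eps["a"] - eps["i"]}

# Eq. 9: correction of the Eq. 8 diagonal relative to Eq. 7; the excited electron
# carries beta spin for CO (only the beta spin-orbital of t is empty) and alpha spin for OV/CV.
corr = {"CO (i->t)": 0.5 * (spin_split("t") - spin_split("i")),
        "OV (t->a)": 0.5 * (spin_split("t") - spin_split("a")),
        "CV (i->a)": 0.5 * (spin_split("i") - spin_split("a"))}

for label in roks:
    print(label, roks[label], roks[label] + corr[label])
# CO and OV are pushed up by ~cx*(tt|tt)/2, CV is nearly unchanged,
# so the CV (tripdoublet) excitation ends up lowest in this toy model.
```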
The underestimation of CO and OV excitation energies by ROKS orbital energy differences opens up the possibility of engineering a system to break the \(\omega_{ia}>\max(\omega_{it},\omega_{ta})\) constraint inherent in the ROKS orbital energy difference model, and make the lowest doublet excited state a tripdoublet. Possible approaches include:
1. Increase the difference Eq. 9 for the CO and OV states, while keeping it small for the lowest CV state, so that all CO and OV states are pushed above the lowest CV state. This is most easily done by making the open-shell orbital \(t\) very compact, which naturally leads to a larger \(F_{tt\beta}-F_{tt\alpha}\) (due to a larger \((tt|tt)\)) but a smaller \(F_{pp\beta}-F_{pp\alpha},p\in\{i,a\}\) (due to a small absolute overlap between the \(p\) and \(t\) orbitals).
2. Reduce the orbital energy gap between the highest doubly occupied orbital and the lowest unoccupied orbital, which also helps to reduce the excitation energy of the lowest CV state. However, an overly small orbital energy gap will favor the IC of the tripdoublet to the ground state, which may quench the fluorescence of the tripdoublet. As already mentioned in Section III.3, the IC rate of CuP is already large enough to make CuP only barely fluorescent (quantum yield \(\sim 10^{-5}\)) in the gas phase, and a viscous solvent seems to be required to suppress the IC contribution and make the fluorescence stronger.
Now, it becomes evident that CuP fits the above design principles very well. The unpaired electron in the ground state of CuP is on the Cu 3d\({}_{x^{2}-y^{2}}\) orbital (Figure 3), which is spatially localized. Moreover, the Cu 3d\({}_{x^{2}-y^{2}}\) orbital occupies a different part of the molecule than the ligand \(\pi\) and \(\pi^{*}\) orbitals, which results in a small absolute overlap between the orbitals and helps to reduce the effect of Eq. 9 on the CV excitation energies. To quantitatively assess the effect of Eq. 9 on the CO and OV excitation energies, we note that the X-TDDFT leading term Eq. 8 is nothing but the UKS orbital energy difference, if the shape differences of the UKS and ROKS orbitals are neglected. Therefore, we have plotted the UKS orbital energies of CuP in Figure 6 as well. Intriguingly, the \(\alpha\) Cu 3d\({}_{x^{2}-y^{2}}\) orbital now lies below the porphyrin \(\pi\)(a\({}_{1u}\)) and \(\pi\)(a\({}_{2u}\)) orbitals, while the \(\beta\) Cu 3d\({}_{x^{2}-y^{2}}\) orbital lies above the porphyrin \(\pi^{*}\)(e\({}_{g}\)) orbitals. Therefore, the differences of UKS orbital energies predict that the lowest excited states of CuP are the CV states obtained from exciting an electron from \(\pi\)(a\({}_{1u}\)) and \(\pi\)(a\({}_{2u}\)) to \(\pi^{*}\)(e\({}_{g}\)). This is not only consistent with our U-TD-PBE0 excitation energies, but also the X-TD-PBE0 and SDSPT2 results (save for the \({}^{2}\)S\({}_{2}\) state at all the three levels of theory, as well as the \({}^{2}\)S\({}_{1}\) state at the TDDFT level, which are higher than the CO-type CT states computed at the respective levels of theory), despite that the latter methods are spin-adapted. Note also that although the \((tt|tt)\) integral leads to a huge splitting between the \(\alpha\) and \(\beta\) Cu 3d\({}_{x^{2}-y^{2}}\) orbitals, the splitting is only barely enough for the UKS orbital energy differences to predict a triploublet first excited state: if the \(\beta\) Cu 3d\({}_{x^{2}-y^{2}}\) orbital were just 0.2 eV lower, one would predict that the CO-type CT excitation \(\pi\)(a\({}_{2u}\))\(\rightarrow\)Cu 3d\({}_{x^{2}-y^{2}}\) is lower than the lowest triploublet \(\pi\)(a\({}_{2u}\))\(\rightarrow\)\(\pi^{*}\)(e\({}_{g}\)). This can be compared with the 0.65 eV gap between the SDSPT2 \({}^{2}\)T\({}_{1}\) and \({}^{2}\)CT\({}_{1}\) states, computed at the ground state structure of CuP (Table 1). Alternatively, one may say that the HOMO-LUMO gap of the porphyrin ligand is barely narrow enough to fit within the energy window between the \(\alpha\) and \(\beta\) Cu
3d\({}_{x^{2}-y^{2}}\) orbitals, which clearly illustrates the importance of using a narrow-gap ligand for designing systems with a tripdoublet first excited state.
To conclude this section, we briefly note that making the first doublet excited state a tripdoublet state does not guarantee the realization of tripdoublet fluorescence. Two remaining potential obstacles are (1) the IC of the tripdoublet state to the ground state and (2) the ISC of the tripdoublet to the lowest quartet state (which is almost always lower than the lowest tripdoublet state owing to Hund's rule). Both can be inhibited by making the molecule rigid, which is indeed satisfied by the porphyrin ligand in CuP. Alternatively, if the ISC from the first quartet state to the ground state is slow (as is the case of CuP, thanks to the spatial symmetry selection rules), and the gap between the first doublet and the first quartet is comparable to the thermal energy \(kT\) at the current temperature, then the quartet state can undergo RISC to regenerate the tripdoublet state, which can then fluoresce. This is well-known as the thermally activated delayed fluorescence (TADF) mechanism[68; 69; 70], although existing TADF molecules typically fluoresce from singlets and use a triplet "reservoir state" to achieve delayed fluorescence. In order for the TADF mechanism to outcompete the phosphorescence from the first quartet state, both the phosphorescence rate and the doublet-quartet gap have to be small. While the low phosphorescence rate of CuP can be explained by the fact that copper is a relatively light element, the small \({}^{2}\)T\({}_{1}\)-\({}^{4}\)T\({}_{1}\) gap of CuP can be attributed to the distributions of the frontier orbitals of CuP. Recall that the X-TDDFT gap between a tripdoublet excitation Eq. 3 and the associated quartet excitation Eq. 4 is exactly given by the X-RPA gap[7], which is equal to \(\frac{3}{2}\left((it|it)+(ta|ta)\right)\). However, both of the two integrals are small for the \({}^{2}\)T\({}_{1}\) and \({}^{4}\)T\({}_{1}\) states of CuP, since the orbitals \(i\) and \(a\) reside on the ligand while \(t\) is localized near the metal atom (Figure 3). Such a clean spatial separation of the metal and ligand CMOs (despite the close proximity of the metal and the ligand) can further be attributed to the fact that the Cu 3d\({}_{x^{2}-y^{2}}\) orbital has a different irrep than those of the ligand \(\pi\) and \(\pi^{\star}\) orbitals, preventing the delocalization of the open-shell orbital to the \(\pi\) system of the porphyrin ligand; while the Cu 3d\({}_{x^{2}-y^{2}}\) orbital can still delocalize through the \(\sigma\) bonds of the ligand, the delocalization is of limited extent due to the rather local electronic structures of typical \(\sigma\) bonds (Figure 3). Incidentally, the only other class of tripdoublet-fluorescing metalloporphyrins that we are aware of, i.e. vanadium(IV) oxo porphyrin complexes[62; 71], are characterized by a single unpaired electron in the 3d\({}_{xy}\) orbital, whose mixing with the ligand \(\pi\) and \(\pi^{\star}\) orbitals is also hindered by
symmetry mismatches. Whether this can be extended to a general strategy of designing molecules that fluoresce from tripdoublet states (or more generally, molecules that possess small doublet-quartet gaps) will be explored in the future. Finally, we briefly note that the design of doublet molecules with TADF and/or phosphorescence is also an interesting subject and deserves attention in its own right.
## IV Conclusion
Fluorescence of open-shell molecules from tripdoublet states is a rare and underexplored phenomenon, for which traditional excited state methods such as U-TDDFT are unreliable due to severe spin contamination. In this work, we employed the high-precision method SDSPT2 to obtain accurate excitation energies of the CuP molecule, which suggests that the bright states obtained by light absorption relax to the lowest doublet state, \({}^{2}\)T\({}_{1}\), via a cascade of ultrafast IC processes, in agreement with experiments. Contrary to predictions from ROKS orbital energy differences, \({}^{2}\)T\({}_{1}\) is a tripdoublet state composed of a triplet ligand state antiferromagnetically coupled with the unpaired electron of Cu(II). Using the SDSPT2 results as a benchmark, we found that the X-TDDFT method provides a more accurate description of the \({}^{2}\)T\({}_{1}\) state (which exhibits considerable spin contamination) compared to U-TDDFT, while for the CO excitations, U-TDDFT and X-TDDFT show similar performance.
In addition to vertical absorption calculations and structural analyses, we conducted a detailed analysis of the relaxation rate constants of the excited states of CuP. Our results suggest that, in the gas phase and at low temperature (83 K), CuP emits fluorescence from the lowest tripdoublet state \({}^{2}\)T\({}_{1}\) with a very small quantum yield (\(\sim 10^{-5}\)), and the contribution of phosphorescence is negligible. These results complement the experimental results in the solution phase and in a solid matrix, which gave a lower, but still greater-than-unity, fluorescence-to-phosphorescence ratio and a much higher luminescence quantum yield. Furthermore, we confirm the presence of an equilibrium between the first doublet state \({}^{2}\)T\({}_{1}\) and the first quartet state \({}^{4}\)T\({}_{1}\), the latter of which functions as a reservoir of the \({}^{2}\)T\({}_{1}\) state, although the steady state concentration ratio of these two states deviates noticeably from their equilibrium constant. CuP therefore represents an interesting example of a TADF
molecule that emits fluorescence through a doublet-doublet transition, instead of the much more common singlet-singlet pathway. Notably, U-TDA predicts a doublet-quartet gap of the wrong sign, due to the spin contamination of the doublet state as well as the breaking of the spin multiplet degeneracy of the quartet state. Although the error is small (\(<0.05\) eV), it translates to a large error in the luminescence lifetime and, even more, in the contribution of phosphorescence to the total luminescence. This again highlights the importance of using spin-adapted approaches in the study of open-shell systems, even when the excitation energy errors of unrestricted methods are small.
Based on the computational results, we proposed a few possible approaches that can be used to design new doublet molecules that fluoresce from tripdoublets: (1) keep the open-shell orbital of the molecule spatially compact, to open up a gap between the \(\alpha\) and \(\beta\) UKS orbital energies of the open-shell orbital; (2) make the gap between the highest doubly occupied orbital and the lowest vacant orbital small enough so that both orbitals fit into the gap between the \(\alpha\) and \(\beta\) open-shell orbitals, but not overly small as to encourage IC of the lowest tripdoublet state to the ground state; (3) make the molecule rigid to minimize unwanted non-radiative relaxation processes; (4) avoid introducing heavy elements in order to suppress unwanted ISC and phosphorescence processes; and (5) localize the open-shell orbital and the frontier \(\pi/\pi^{*}\) orbitals onto different molecular fragments, and (if possible) make them belong to different irreps, to minimize the doublet-quartet gap. We hope that the present work will facilitate the discovery of novel molecules that fluoresce from tripdoublet states. Moreover, we expect that the success of the X-TDDFT and SDSPT2 methods will encourage the use of these two methods in the excited state studies of other systems.
## Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## Author Contributions
WL and ZW conceived the topic, supervised the work and critically revised the manuscript. XW and ZW performed computational investigations and wrote the first draft. ZW and CW
provided scientific advice and validated the data. All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication.
## Funding
This work was supported by the National Natural Science Foundation of China (Grant Nos. 21833001, 21973054, 22101155), Mountain Tai Climbing Program of Shandong Province, and Key-Area Research and Development Program of Guangdong Province (Grant No. 2020B0101350001). ZW gratefully acknowledges generous financial support by the Max Planck society.
## Acknowledgments
The authors acknowledge the computational software provided by the Institute of Scientific Computing Software in Shandong University and Qingdao BDF Software Technology Co., Ltd.
## Data Availability Statement
The original contributions presented in the study are included in the article/Supplementary Materials. Further inquiries can be directed to the corresponding authors. |
2305.07027 | EfficientViT: Memory Efficient Vision Transformer with Cascaded Group
Attention | Vision transformers have shown great success due to their high model
capabilities. However, their remarkable performance is accompanied by heavy
computation costs, which makes them unsuitable for real-time applications. In
this paper, we propose a family of high-speed vision transformers named
EfficientViT. We find that the speed of existing transformer models is commonly
bounded by memory inefficient operations, especially the tensor reshaping and
element-wise functions in MHSA. Therefore, we design a new building block with
a sandwich layout, i.e., using a single memory-bound MHSA between efficient FFN
layers, which improves memory efficiency while enhancing channel communication.
Moreover, we discover that the attention maps share high similarities across
heads, leading to computational redundancy. To address this, we present a
cascaded group attention module feeding attention heads with different splits
of the full feature, which not only saves computation cost but also improves
attention diversity. Comprehensive experiments demonstrate EfficientViT
outperforms existing efficient models, striking a good trade-off between speed
and accuracy. For instance, our EfficientViT-M5 surpasses MobileNetV3-Large by
1.9% in accuracy, while getting 40.4% and 45.2% higher throughput on Nvidia
V100 GPU and Intel Xeon CPU, respectively. Compared to the recent efficient
model MobileViT-XXS, EfficientViT-M2 achieves 1.8% superior accuracy, while
running 5.8x/3.7x faster on the GPU/CPU, and 7.4x faster when converted to ONNX
format. Code and models are available at
https://github.com/microsoft/Cream/tree/main/EfficientViT. | Xinyu Liu, Houwen Peng, Ningxin Zheng, Yuqing Yang, Han Hu, Yixuan Yuan | 2023-05-11T17:59:41Z | http://arxiv.org/abs/2305.07027v1 | # EfficientViT: Memory Efficient Vision Transformer with
###### Abstract
Vision transformers have shown great success due to their high model capabilities. However, their remarkable performance is accompanied by heavy computation costs, which makes them unsuitable for real-time applications. In this paper, we propose a family of high-speed vision transformers named EfficientViT. We find that the speed of existing transformer models is commonly bounded by memory inefficient operations, especially the tensor reshaping and element-wise functions in MHSA. Therefore, we design a new building block with a sandwich layout, i.e., using a single memory-bound MHSA between efficient FFN layers, which improves memory efficiency while enhancing channel communication. Moreover, we discover that the attention maps share high similarities across heads, leading to computational redundancy. To address this, we present a cascaded group attention module feeding attention heads with different splits of the full feature, which not only saves computation cost but also improves attention diversity. Comprehensive experiments demonstrate EfficientViT outperforms existing efficient models, striking a good trade-off between speed and accuracy. For instance, our EfficientViT-M5 surpasses MobileNetV3-Large by 1.9% in accuracy, while getting 40.4% and 45.2% higher throughput on Nvidia V100 GPU and Intel Xeon CPU, respectively. Compared to the recent efficient model MobileViT-XXS, EfficientViT-M2 achieves 1.8% superior accuracy, while running 5.8x/3.7x faster on the GPU/CPU, and 7.4x faster when converted to ONNX format. Code and models are available at here.
## 1 Introduction
Vision Transformers (ViTs) have taken the computer vision domain by storm due to their high model capabilities and superior performance [18, 44, 69]. However, the constantly improved accuracy comes at the cost of increasing model sizes and computation overhead. For example, SwinV2 [43] uses 3.0B parameters, while V-MoE [62] takes 14.7B parameters, to achieve state-of-the-art performance on ImageNet [17]. Such large model sizes and the accompanying heavy computational costs make these models unsuitable for applications with real-time requirements [86, 40, 78].
There are several recent works designing light and efficient vision transformer models [9, 19, 29, 49, 50, 56, 79, 81]. Unfortunately, most of these methods aim to reduce model parameters or Flops, which are indirect metrics for speed and do not reflect the actual inference throughput of models. For example, MobileViT-XS [50] using 700M Flops runs much slower than DeiT-T [69] with 1,220M Flops on an Nvidia V100 GPU. Although these methods have achieved good performance with fewer Flops or parameters, many of them do not show significant wall-clock speedup against standard isomorphic or hierarchical transformers, _e.g._, DeiT [69] and Swin [44], and have not gained wide adoption.
To address this issue, in this paper, we explore how to go faster with vision transformers, seeking to find principles for designing efficient transformer architectures. Based on the prevailing vision transformers DeiT [69] and Swin [44], we systematically analyze three main factors that affect model inference speed, including memory access, computation redundancy, and parameter usage. In particular, we find that the speed of transformer models is commonly memory-bound. In other words, memory accessing delay prohibits the full utilization of the computing power in GPU/CPUs [21, 32, 72], leading to a critically negative impact on the runtime speed of transformers [15, 31]. The
Figure 1: Speed and accuracy comparisons between EfficientViT (Ours) and other efficient CNN and ViT models tested on an Nvidia V100 GPU with ImageNet-1K dataset [17].
most memory-inefficient operations are the frequent tensor reshaping and element-wise functions in multi-head self-attention (MHSA). We observe that through an appropriate adjustment of the ratio between MHSA and FFN (feed-forward network) layers, the memory access time can be reduced significantly without compromising the performance. Moreover, we find that some attention heads tend to learn similar linear projections, resulting in redundancy in attention maps. The analysis shows that explicitly decomposing the computation of each head by feeding them with diverse features can mitigate this issue while improving computation efficiency. In addition, the parameter allocation in different modules is often overlooked by existing lightweight models, as they mainly follow the configurations in standard transformer models [44, 69]. To improve parameter efficiency, we use structured pruning [45] to identify the most important network components, and summarize empirical guidance of parameter reallocation for model acceleration.
Based upon the analysis and findings, we propose a new family of memory efficient transformer models named EfficientViT. Specifically, we design a new block with a sandwich layout to build up the model. The sandwich layout block applies a single memory-bound MHSA layer between FFN layers. It reduces the time cost caused by memory-bound operations in MHSA, and applies more FFN layers to allow communication between different channels, which is more memory efficient. Then, we propose a new cascaded group attention (CGA) module to improve computation efficiency. The core idea is to enhance the diversity of the features fed into the attention heads. In contrast to prior self-attention using the same feature for all heads, CGA feeds each head with different input splits and cascades the output features across heads. This module not only reduces the computation redundancy in multi-head attention, but also elevates model capacity by increasing network depth. Last but not least, we redistribute parameters through expanding the channel width of critical network components such as value projections, while shrinking the ones with lower importance like hidden dimensions in FFNs. This reallocation finally promotes model parameter efficiency.
Experiments demonstrate that our models achieve clear improvements over existing efficient CNN and ViT models in terms of both speed and accuracy, as shown in Fig. 1. For instance, our EfficientViT-M5 gets 77.1% top-1 accuracy on ImageNet with throughput of 10,621 images/s on an Nvidia V100 GPU and 56.8 images/s on an Intel Xeon E5-2690 v4 CPU @ 2.60GHz, outperforming MobileNetV3-Large [26] by 1.9% in accuracy, 40.4% in GPU inference speed, and 45.2% in CPU speed. Moreover, EfficientViT-M2 gets 70.8% accuracy, surpassing MobileViT-XXS [50] by 1.8%, while running 5.8\(\times\)/3.7\(\times\) faster on the GPU/CPU, and 7.4\(\times\) faster when converted to ONNX [3] format. When deployed on the mobile chipset, _i.e_., Apple A13 Bionic chip in iPhone 11, EfficientViT-M2 model runs 2.3\(\times\) faster than MobileViT-XXS [50] using the CoreML [1].
In summary, the contributions of this work are two-fold:
* We present a systematic analysis on the factors that affect the inference speed of vision transformers, deriving a set of guidelines for efficient model design.
* We design a new family of vision transformer models, which strike a good trade-off between efficiency and accuracy. The models also demonstrate good transfer ability on a variety of downstream tasks.
## 2 Going Faster with Vision Transformers
In this section, we explore how to improve the efficiency of vision transformers from three perspectives: memory access, computation redundancy, and parameter usage. We seek to identify the underlying speed bottlenecks through empirical studies, and summarize useful design guidelines.
### Memory Efficiency
Memory access overhead is a critical factor affecting model speed [15, 28, 31, 65]. Many operators in transformer [71], such as frequent reshaping, element-wise addition, and normalization are memory inefficient, requiring time-consuming access across different memory units, as shown in Fig. 2. Although there are some methods proposed to address this issue by simplifying the computation of standard softmax self-attention, _e.g_., sparse attention [75, 34, 57, 61] and low-rank approximation [11, 51, 74], they often come at the cost of accuracy degradation and limited acceleration.
In this work, we turn to save memory access cost by reducing memory-inefficient layers. Recent studies reveal that memory-inefficient operations are mainly located in MHSA rather than FFN layers [31, 33]. However, most existing ViTs [18, 44, 69] use an equivalent number of these two layers, which may not achieve the optimal efficiency. We thereby explore the optimal allocation of MHSA and FFN layers in small models with fast inference. Specifically, we scale down Swin-T [44] and DeiT-T [69] to several small subnetworks with 1.25\(\times\) and 1.5\(\times\) higher inference throughput, and compare the performance of subnetworks with different proportions of MHSA layers. As shown in Fig. 3, subnetworks with 20%-40% MHSA layers tend to get better accuracy. Such ratios are much smaller than the
Figure 2: Runtime profiling on two standard vision transformers Swin-T and DeiT-T. Red text denotes memory-bound operations, _i.e_., the time taken by the operation is mainly determined by memory accesses, while time spent in computation is much smaller.
typical ViTs that adopt 50% MHSA layers. Furthermore, we measure the time consumption on memory-bound operations to compare memory access efficiency, including reshaping, element-wise addition, copying, and normalization. Memory-bound operations are reduced to 44.26% of the total runtime in Swin-T-1.25\(\times\), which has 20% MHSA layers. The observation also generalizes to DeiT and smaller models with 1.5\(\times\) speed-up. It is demonstrated that _reducing the MHSA layer utilization ratio appropriately can enhance memory efficiency while improving model performance._
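A runtime breakdown of the kind shown in Fig. 2 can be approximated with the built-in PyTorch profiler, whose operator table exposes the reshape, element-wise and normalization kernels separately from the compute-bound matrix multiplications. The snippet below is a minimal sketch (CPU only, random weights); the chosen baseline model and batch size are our own illustrative choices, not the exact setup behind Fig. 2.

```python
import torch
import timm
from torch.profiler import profile, ProfilerActivity

model = timm.create_model("deit_tiny_patch16_224").eval()  # stand-in baseline
x = torch.randn(16, 3, 224, 224)

with profile(activities=[ProfilerActivity.CPU]) as prof, torch.no_grad():
    model(x)

# Memory-bound ops (aten::reshape, aten::add, aten::layer_norm, ...) appear
# alongside the matmuls in this table, so their share of runtime can be summed.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=15))
```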
### Computation Efficiency
MHSA embeds the input sequence into multiple subspaces (heads) and computes attention maps separately, which has been proven effective in improving performance [18, 69, 71]. However, attention maps are computationally expensive, and studies have shown that a number of them are not of vital importance [52, 73]. To save computation cost, we explore how to reduce redundant attention in small ViT models. We train width downscaled Swin-T [44] and DeiT-T [69] models with 1.25\(\times\) inference speed-up, and measure the maximum cosine similarity between each head and the remaining heads within each block. From Fig. 4, we observe that there exist high similarities between attention heads, especially in the last blocks. The phenomenon suggests that many heads learn similar projections of the same full feature and incur computation redundancy. To explicitly encourage the heads to learn different patterns, we apply an intuitive solution by feeding each head with only a split of the full feature, which is similar to the idea of group convolution in [10, 87]. We train the variants of downscaled models with the modified MHSA, and also compute the attention similarities in Fig. 4. It is shown that _using different channel-wise splits of the feature in different heads, instead of using the same full feature for all heads as MHSA, could effectively mitigate attention computation redundancy._
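The similarity measurement itself is straightforward once the per-head attention maps of a block have been extracted (e.g. via forward hooks); the extraction step is assumed in the sketch below.

```python
import torch
import torch.nn.functional as F

def mean_max_head_similarity(attn):
    """attn: attention maps of one block, shape (batch, heads, tokens, tokens).
    For each head, take its maximum cosine similarity with the other heads of
    the same block, then average over heads and batch elements."""
    b, h, n, _ = attn.shape
    flat = F.normalize(attn.reshape(b, h, -1), dim=-1)   # unit vector per head
    sim = flat @ flat.transpose(1, 2)                    # (b, h, h) cosine matrix
    sim.diagonal(dim1=1, dim2=2).fill_(-1.0)             # exclude self-similarity
    return sim.max(dim=-1).values.mean().item()

# Example with random maps; real maps come from hooks on the MHSA modules.
print(mean_max_head_similarity(torch.rand(2, 3, 49, 49).softmax(dim=-1)))
```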
### Parameter Efficiency
Typical ViTs mainly inherit the design strategies from NLP transformer [71], _e.g._, using an equivalent width for \(Q\),\(K\),\(V\) projections, increasing heads over stages, and setting the expansion ratio to 4 in FFN. For lightweight models, the configurations of these components need to be carefully re-designed [7, 8, 39]. Inspired by [82, 45], we adopt Taylor structured pruning [53] to automatically find the important components in Swin-T and DeiT-T, and explore the underlying principles of parameter allocation. The pruning method removes unimportant channels under a certain resource constraint and keeps the most critical ones to best preserve the accuracy. It uses the multiplication of gradient and weight as channel importance, which approximates the loss fluctuation when removing channels [38].
The ratio of the remaining output channels to the input channels is plotted in Fig. 5, and the original ratios in the unpruned model are also given for reference. It is observed that: 1) The first two stages preserve more dimensions, while the last stage keeps much fewer; 2) The \(Q\),\(K\) and FFN dimensions are largely trimmed, whereas the dimension of \(V\) is almost preserved and diminishes only at the last few blocks. These phenomena show that _1) the typical channel configuration, which doubles the channels after each stage [44] or uses equivalent channels for all blocks [69], may produce substantial redundancy in the last few blocks; 2) the redundancy in \(Q\),\(K\) is much larger than in \(V\) when they have the same dimensions. \(V\) prefers relatively large channels, close to the input embedding dimension._
## 3 Efficient Vision Transformer
Based upon the above analysis, in this section, we propose a new hierarchical model with fast inference named EfficientViT. The architecture overview is shown in Fig. 6.
Figure 4: The average maximum cosine similarity of each head in different blocks. **Left:** downscaled Swin-T models. **Right:** downscaled DeiT-T models. Blue lines denote Swin-T-1.25\(\times\)/DeiT-T-1.25\(\times\) model, while darkblue lines denote the variants that feed each head with only a split of the full feature.
Figure 5: The ratio of the channels to the input embeddings before and after pruning Swin-T. Baseline accuracy: 79.1%; pruned accuracy: 76.5%. Results for DeiT-T are given in the supplementary.
Figure 3: The accuracy of downscaled baseline models with different MHSA layer proportions, where the dots on each line represent subnetworks with similar throughput. **Left:** Swin-T as the baseline. **Right:** DeiT-T as the baseline. The 1.25\(\times\)/1.5\(\times\) denote accelerating the baseline models by 1.25/1.5 times, respectively.
### EfficientViT Building Blocks
We propose a new efficient building block for vision transformer, as shown in Fig. 6 (b). It is composed of a memory-efficient sandwich layout, a cascaded group attention module, and a parameter reallocation strategy, which focus on improving model efficiency in terms of memory, computation, and parameter, respectively.
_Sandwich Layout._ To build up a memory-efficient block, we propose a sandwich layout that employs fewer memory-bound self-attention layers and more memory-efficient FFN layers for channel communication. Specifically, it applies a single self-attention layer \(\Phi_{i}^{\mathrm{A}}\) for spatial mixing, which is sandwiched between FFN layers \(\Phi_{i}^{\mathrm{F}}\). The computation can be formulated as:
\[X_{i+1}=\prod^{\mathcal{N}}\Phi_{i}^{\mathrm{F}}(\Phi_{i}^{\mathrm{A}}(\prod^{ \mathcal{N}}\Phi_{i}^{\mathrm{F}}(X_{i}))), \tag{1}\]
where \(X_{i}\) is the full input feature for the \(i\)-th block. The block transforms \(X_{i}\) into \(X_{i+1}\) with \(\mathcal{N}\) FFNs before and after the single self-attention layer. This design reduces the memory time consumption caused by self-attention layers in the model, and applies more FFN layers to allow communication between different feature channels efficiently. We also apply an extra token interaction layer before each FFN using a depthwise convolution (DWConv) [27]. It introduces inductive bias of the local structural information to enhance model capability [14].
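A minimal PyTorch rendering of the sandwich layout may help fix ideas. The residual connections, the 1\(\times\)1-convolution form of the FFN and the omission of BatchNorm are our simplifications of the design, not the released implementation; `attn` stands for any spatial-mixing module (for instance the cascaded group attention sketched below).

```python
import torch.nn as nn

class FFN(nn.Module):
    """Point-wise MLP; expansion ratio 2 following the reallocation in Sec. 2.3."""
    def __init__(self, dim, ratio=2):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(dim, dim * ratio, 1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(dim * ratio, dim, 1))
    def forward(self, x):
        return self.net(x)

class SandwichBlock(nn.Module):
    """N x (DWConv token interaction + FFN), one attention layer, then N more pairs."""
    def __init__(self, dim, attn, n_ffn=1):
        super().__init__()
        def stage():
            return nn.ModuleList([
                nn.ModuleList([nn.Conv2d(dim, dim, 3, padding=1, groups=dim),  # DWConv
                               FFN(dim)])
                for _ in range(n_ffn)])
        self.pre, self.post, self.attn = stage(), stage(), attn

    def run_stage(self, x, stage):
        for dwconv, ffn in stage:
            x = x + dwconv(x)      # local structural information
            x = x + ffn(x)         # channel communication
        return x

    def forward(self, x):          # x: (B, C, H, W)
        x = self.run_stage(x, self.pre)
        x = x + self.attn(x)       # single memory-bound spatial-mixing layer
        return self.run_stage(x, self.post)

# e.g. SandwichBlock(dim=128, attn=nn.Identity(), n_ffn=1)
```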
_Cascaded Group Attention._ Attention head redundancy is a severe issue in MHSA, which causes computation inefficiency. Inspired by group convolutions in efficient CNNs [10, 37, 64, 87], we propose a new attention module named cascaded group attention (CGA) for vision transformers. It feeds each head with different splits of the full features, thus explicitly decomposing the attention computation across heads. Formally, this attention can be formulated as:
\[\begin{split}\widetilde{X}_{ij}=Attn(X_{ij}W_{ij}^{\mathrm{Q}},X_ {ij}W_{ij}^{\mathrm{K}},X_{ij}W_{ij}^{\mathrm{V}}),\\ \widetilde{X}_{i+1}=Concat[\widetilde{X}_{ij}]_{j=1:h}W_{i}^{ \mathrm{P}},\end{split} \tag{2}\]
where the \(j\)-th head computes the self-attention over \(X_{ij}\), which is the \(j\)-th split of the input feature \(X_{i}\), _i.e._, \(X_{i}=[X_{i1},X_{i2},\dots,X_{ih}]\) and \(1\leq j\leq h\). \(h\) is the total number of heads, \(W_{ij}^{\mathrm{Q}}\), \(W_{ij}^{\mathrm{K}}\), and \(W_{ij}^{\mathrm{V}}\) are projection layers mapping the input feature split into different subspaces, and \(W_{i}^{\mathrm{P}}\) is a linear layer that projects the concatenated output features back to the dimension consistent with the input.
Although using feature splits instead of the full features for each head is more efficient and saves computation overhead, we continue to improve its capacity, by encouraging the \(Q\), \(K\), \(V\) layers to learn projections on features with richer information. We compute the attention map of each head in a cascaded manner, as illustrated in Fig. 6 (c), which adds the output of each head to the subsequent head to refine the feature representations progressively:
\[X_{ij}^{{}^{\prime}}=X_{ij}+\widetilde{X}_{i(j-1)},\quad 1<j\leq h, \tag{3}\]
where \(X_{ij}^{{}^{\prime}}\) is the addition of the \(j\)-th input split \(X_{ij}\) and the \((j-1)\)-th head output \(\widetilde{X}_{i(j-1)}\) calculated by Eq. (2). It replaces \(X_{ij}\) to serve as the new input feature for the \(j\)-th head when calculating the self-attention. Besides, another token
Figure 6: Overview of EfficientViT. (a) Architecture of EfficientViT; (b) Sandwich Layout block; (c) Cascaded Group Attention.
interaction layer is applied after the \(Q\) projection, which enables the self-attention to jointly capture local and global relations and further enhances the feature representation.
Such a cascaded design enjoys two advantages. First, feeding each head with different feature splits could improve the diversity of attention maps, as validated in Sec. 2.2. Similar to group convolutions [10, 87], the cascaded group attention could save the Flops and parameters by \(h\times\), since the input and output channels in the \(QKV\) layers are reduced by \(h\times\). Second, cascading the attention heads allows for an increase of network depth, thus further elevating the model capacity without introducing any extra parameters. It only incurs minor latency overhead since the attention map computation in each head uses smaller \(QK\) channel dimensions.
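The cascade in Eqs. (2)-(3) can be sketched directly in PyTorch. The sketch below omits the extra token interaction layer after the \(Q\) projection and all normalization, and the key dimension of 16 is only an illustrative choice; it is meant to show the channel splitting and the head-to-head cascade rather than to reproduce the released implementation.

```python
import torch
import torch.nn as nn

class CascadedGroupAttention(nn.Module):
    """Each head attends over its own channel split of the input; the output of
    head j-1 is added to the split fed to head j (Eq. 3)."""
    def __init__(self, dim, num_heads=4, key_dim=16):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads, self.key_dim = num_heads, key_dim
        self.split_dim = dim // num_heads
        self.scale = key_dim ** -0.5
        self.qkv = nn.ModuleList([
            nn.Conv2d(self.split_dim, 2 * key_dim + self.split_dim, 1)
            for _ in range(num_heads)])
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):                            # x: (B, C, H, W)
        B, C, H, W = x.shape
        splits = x.chunk(self.num_heads, dim=1)
        outs = []
        for j in range(self.num_heads):
            feat = splits[j] if j == 0 else splits[j] + outs[-1]   # cascade (Eq. 3)
            q, k, v = self.qkv[j](feat).split(
                [self.key_dim, self.key_dim, self.split_dim], dim=1)
            q, k, v = (t.flatten(2) for t in (q, k, v))            # (B, d, N)
            attn = (q.transpose(1, 2) @ k) * self.scale            # (B, N, N)
            attn = attn.softmax(dim=-1)
            outs.append((v @ attn.transpose(1, 2)).reshape(B, self.split_dim, H, W))
        return self.proj(torch.cat(outs, dim=1))

# e.g. CascadedGroupAttention(dim=128)(torch.randn(1, 128, 14, 14)).shape
```

Note how the per-head \(Q\)/\(K\) width (`key_dim`) is kept small while \(V\) keeps the full split width, in line with the parameter reallocation described next.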
_Parameter Reallocation._ To improve parameter efficiency, we reallocate the parameters in the network by expanding the channel width of critical modules while shrinking the unimportant ones. Specifically, based on the Taylor importance analysis in Sec. 2.3, we set small channel dimensions for \(Q\) and \(K\) projections in each head for all stages. For the \(V\) projection, we allow it to have the same dimension as the input embedding. The expansion ratio in FFN is also reduced from 4 to 2 due to its parameter redundancy. With the proposed reallocation strategy, the important modules have larger number of channels to learn representations in a high dimensional space, which prevent the loss of feature information. Meanwhile, the redundant parameters in unimportant modules are removed to speed up inference and enhance the model efficiency.
### EfficientViT Network Architectures
The overall architecture of our EfficientViT is presented in Fig. 6 (a). Concretely, we introduce overlapping patch embedding [20, 80] to embed 16\(\times\)16 patches into tokens with \(C_{1}\) dimension, which enhances the model capacity in low-level visual representation learning. The architecture contains three stages. Each stage stacks the proposed EfficientViT building blocks and the number of tokens is reduced by 4\(\times\) at each subsampling layer (2\(\times\) subsampling of the resolution). To achieve efficient subsampling, we propose an EfficientViT subsample block which also has the sandwich layout, except that the self-attention layer is replaced by an inverted residual block to reduce the information loss during subsampling [63, 26]. It is worth noting that we adopt BatchNorm (BN) [30] instead of LayerNorm (LN) [2] throughout the model, as BN can be folded into the preceding convolution or linear layers, which is a runtime advantage over LN. We also use ReLU [54] as the activation function, as the commonly used GELU [25] or HardSwish [26] are much slower, and sometimes not well-supported by certain inference deployment platforms [1, 3].
We build our model family with six different width and depth scales, and set a different number of heads for each stage. We use fewer blocks in early stages than in late stages, similar to MobileNetV3 [26] and LeViT [20], since the processing of early stages with larger resolutions is more time consuming. We increase the width over stages with a small factor (\(\leq\) 2) to alleviate redundancy in later stages, as analyzed in Sec. 2.3. The architecture details of our model family are presented in Tab. 1. \(C_{i}\), \(L_{i}\), and \(H_{i}\) refer to the width, depth, and number of heads in the \(i\)-th stage.
## 4 Experiments
### Implementation Details
We conduct image classification experiments on ImageNet-1K [17]. The models are built with PyTorch 1.11.0 [59] and Timm 0.5.4 [77], and trained from scratch for 300 epochs on 8 Nvidia V100 GPUs using AdamW [46] optimizer and cosine learning rate scheduler. We set the total batchsize as 2,048. The input images are resized and randomly cropped into 224\(\times\)224. The initial learning rate is 1\(\times\)10\({}^{-3}\) with weight decay of 2.5\(\times\)10\({}^{-2}\). We use the same data augmentation as [69], including Mixup [85], auto-augmentation [13], and random erasing [88]. In addition, we provide throughput evaluation on different hardware. For GPU, we measure the throughput on an Nvidia V100, with the maximum power-of-two batchsize that fits in memory following [20, 69]. For CPU and ONNX, we measure the runtime on an Intel Xeon E5-2690 v4 @ 2.60 GHz processor, with batchsize 16 and run the model in a single thread following [20]. We also test the transferability of EfficientViT on downstream tasks. For the experiments on downstream image classification, we finetune the models for 300 epochs following [86], using AdamW [46] with batchsize 256, learning rate 1\(\times\)10\({}^{-3}\) and weight-decay 1\(\times\)10\({}^{-8}\). We use RetinaNet [41] for object detection on COCO [42], and train the models for 12 epochs (1\(\times\) schedule) with the same settings as [44] on mmdetection [6]. For instance segmentation, please refer to the supplementary.
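For readers who want to reproduce the recipe, the optimizer, schedule and augmentation described above translate into a few lines with PyTorch and Timm. The sketch below is indicative only: the stand-in model, the step count and the augmentation hyper-parameters (taken from the DeiT recipe [69]) are assumptions where the text does not spell them out.

```python
import torch
import torch.nn as nn
from timm.data import create_transform, Mixup

model = nn.Conv2d(3, 8, 3)        # stand-in for an EfficientViT variant
steps_per_epoch = 625             # ~1.28M ImageNet images / batch size 2048

train_tf = create_transform(input_size=224, is_training=True,
                            auto_augment="rand-m9-mstd0.5-inc1", re_prob=0.25)
mixup = Mixup(mixup_alpha=0.8, cutmix_alpha=1.0, num_classes=1000)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=2.5e-2)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=300 * steps_per_epoch)
```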
### Results on ImageNet
We compare EfficientViT with prevailing efficient CNN and ViT models on ImageNet [17], and report the results in Tab. 2 and Fig. 1. The results show that, in most cases, our EfficientViT achieves the best accuracy and speed trade-off across different evaluation settings.
_Comparisons with efficient CNNs._ We first compare EfficientViT with vanilla CNN models, such as MobileNets
\begin{table}
\begin{tabular}{l|l l l} \hline \hline Model & \(\{C_{1},C_{2},C_{3}\}\) & \(\{L_{1},L_{2},L_{3}\}\) & \(\{H_{1},H_{2},H_{3}\}\) \\ \hline EfficientViT-M0 & \{64, 128, 192\} & \{1, 2, 3\} & \(\{4,4,4\}\) \\ EfficientViT-M1 & \{128, 144, 192\} & \{1, 2, 3\} & \(\{2,\,3,\,3\}\) \\ EfficientViT-M2 & \{128, 192, 224\} & \{1, 2, 3\} & \(\{4,\,3,\,2\}\) \\ EfficientViT-M3 & \{128, 240, 320\} & \{1, 2, 3\} & \(\{4,\,3,\,4\}\) \\ EfficientViT-M4 & \{128, 256, 384\} & \{1, 2, 3\} & \(\{4,\,4,\,4\}\) \\ EfficientViT-M5 & \{192, 288, 384\} & \{1, 3, 4\} & \(\{3,\,3,\,4\}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Architecture details of EfficientViT model variants.
[26, 63] and EfficientNet [67]. Specifically, compared to MobileNetV2 1.0\(\times\)[63], EfficientViT-M3 obtains 1.4% better top-1 accuracy, while running at 2.5\(\times\) and 3.0\(\times\) faster speed on V100 GPU and Intel CPU, respectively. Compared to the state-of-the-art MobileNetV3-Large [26], EfficientViT-M5 achieves 1.9% higher accuracy yet runs much faster, _e.g._, 40.5% faster on the V100 GPU and 45.2% faster on the Intel CPU, but is 11.5% slower as ONNX models. This may be because reshaping, which is inevitable in computing self-attention, is slower in the ONNX implementation. Moreover, EfficientViT-M5 achieves comparable accuracy with the searched model EfficientNet-B0 [67], while running 2.3\(\times\)/1.9\(\times\) faster on the V100 GPU/Intel CPU, and 2.1\(\times\) faster as ONNX models. Although our model uses more parameters, it reduces memory-inefficient operations that affect the inference speed and achieves higher throughput.
_Comparisons with efficient ViTs._ We also compare our models with recent efficient vision transformers [5, 50, 51, 56, 5] in Tab. 2. In particular, when getting similar performance on ImageNet-1K [17], our EfficientViT-M4 runs 4.4\(\times\) and 3.0\(\times\) faster than the recent EdgeViT-XXS [56] on the tested CPU and GPU devices. Even converted to ONNX runtime format, our model still gets 3.7\(\times\) higher speed. Compared to the state-of-the-art MobileViTV2-0.5 [51], our EfficientViT-M2 achieves slightly better performance with higher throughput, _e.g._, 3.4\(\times\) and 3.5\(\times\) higher throughput tested on the GPU and CPU devices, respectively. Furthermore, we compare with tiny variants of state-of-the-art large ViTs in Tab. 3. PoolFormer-12S [83] has comparable accuracy with EfficientViT-M5 yet runs 3.0\(\times\) slower on the V100 GPU. Compared to Swin-T [44], EfficientViT-M5 is 4.1% inferior in accuracy yet is 12.3\(\times\) faster on the Intel CPU, demonstrating the efficiency of the proposed design. In addition, we present the speed evaluation and comparison on mobile chipsets in the supplementary material.
_Finetune with higher resolutions._ Recent works on ViTs have demonstrated that finetuning with higher resolutions can further improve the capacity of the models. We also finetune our largest model EfficientViT-M5 to higher resolutions. EfficientViT-M5\(\uparrow\)384 reaches 79.8% top-1 accuracy with throughput of 3,986 images/s on the V100 GPU, and EfficientViT-M5\(\uparrow\)512 further improves the top-1 accuracy to 80.8%, demonstrating the efficiency on processing images with larger resolutions and the good model capacity.
\begin{table}
\begin{tabular}{l|c c|c c c|c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Top-1} & \multirow{2}{*}{Top-5} & \multicolumn{3}{c|}{Throughput (images/s)} & \multirow{2}{*}{Flops} & \multirow{2}{*}{Params} & \multirow{2}{*}{Input} & \multirow{2}{*}{Epochs} \\ \cline{3-3} \cline{6-9} & & (\%) & & GPU & CPU & & ONNX & (M) & (M) & \\ \hline
**EfficientViT-M0** & **63.2** & 85.4 & 27644 & 228.4 & 340.1 & 79 & 2.3 & 224 & 300 \\ \hline MobileNetV3-Small [26] & 67.4 & - & 19738 & 156.5 & 231.7 & 57 & 2.5 & 224 & 600 \\
**EfficientViT-M1** & **68.4** & 88.7 & 20093 & 126.9 & 215.9 & 167 & 3.0 & 224 & 300 \\ Mobile-Former-52M [9] & 68.7 & - & 3141 & 32.8 & 21.5 & 52 & 3.5 & 224 & 450 \\ MobileViT-XXS [50] & 69.0 & - & 4456 & 29.4 & 41.7 & 410 & 1.3 & 256 & 300 \\ ShuffleNetV2 1.0\(\times\)[48] & 69.4 & 88.9 & 13301 & 106.7 & 177.0 & 146 & 2.3 & 224 & 300 \\ MobileViTV2-0.5 [51] & 70.2 & - & 5142 & 34.4 & 44.9 & 466 & 1.4 & 256 & 300 \\
**EfficientViT-M2** & **70.8** & 90.2 & 18218 & 121.2 & 158.7 & 201 & 4.2 & 224 & 300 \\ \hline MobileOne-S0 [70] & 71.4 & - & 11320 & 67.4 & 128.6 & 274 & 2.1 & 224 & 300 \\ MobileNetV2 1.0\(\times\)[63] & 72.0 & 91.0 & 6534 & 32.5 & 80.4 & 300 & 3.4 & 224 & 300 \\
**EfficientViT-M3** & **73.4** & 91.4 & 16644 & 96.4 & 120.8 & 263 & 6.9 & 224 & 300 \\ GhostNet 1.0\(\times\)[23] & 73.9 & 91.4 & 7382 & 57.3 & 77.0 & 141 & 5.2 & 224 & 300 \\ NASNet-A-Mobile [89] & 74.1 & - & 2623 & 19.8 & 25.5 & 564 & 5.3 & 224 & 300 \\
**EfficientViT-M4** & **74.3** & 91.8 & 15914 & 88.5 & 108.6 & 299 & 8.8 & 224 & 300 \\ \hline EdgeViT-XXS [56] & 74.4 & - & 3638 & 28.2 & 29.6 & 556 & 4.1 & 224 & 300 \\ MobileViT-XS [50] & 74.7 & - & 3344 & 11.1 & 20.5 & 986 & 2.3 & 256 & 300 \\ ShuffleNetV2 2.0\(\times\)[48] & 74.9 & 92.4 & 6962 & 37.9 & 52.3 & 591 & 7.4 & 224 & 300 \\ MobileNetV3-Large [26] & 75.2 & - & 7560 & 39.1 & 70.5 & 217 & 5.4 & 224 & 600 \\ MobileViTV2-0.75 [51] & 75.6 & - & 3350 & 16.0 & 22.7 & 1030 & 2.9 & 256 & 300 \\ MobileOne-S1 [70] & 75.9 & - & 6663 & 30.7 & 51.1 & 825 & 4.8 & 224 & 300 \\ GLiT-Tiny [5] & 76.4 & - & 3516 & 17.5 & 15.7 & 1333 & 7.3 & 224 & 300 \\ EfficientNet-B0 [67] & 77.1 & 93.3 & 4532 & 30.2 & 29.5 & 390 & 5.3 & 224 & 350 \\
**EfficientViT-M5** & **77.1** & 93.4 & 10621 & 56.8 & 62.5 & 522 & 12.4 & 224 & 300 \\ \hline
**EfficientViT-M5\(\uparrow\)384** & **79.8** & 95.0 & 3986 & 15.8 & 22.6 & 1486 & 12.4 & 384 & 330 \\
**EfficientViT-M5\(\uparrow\)512** & **80.8** & 95.5 & 2313 & 8.3 & 10.5 & 2670 & 12.4 & 512 & 360 \\ \hline \hline \end{tabular}
\end{table}
Table 2: EfficientViT image classification performance on ImageNet-1K [17] with comparisons to state-of-the-art efficient CNN and ViT models trained without extra data. Throughput is tested on Nvidia V100 for GPU and Intel Xeon E5-2690 v4 @ 2.60 GHz processor for CPU and ONNX, where larger throughput means faster inference speed. \(\uparrow\): finetune with higher resolution.
### Transfer Learning Results
To further evaluate the transfer ability, we apply EfficientViT on various downstream tasks.
_Downstream Image Classification._ We transfer EfficientViT to downstream image classification datasets to test its generalization ability: 1) CIFAR-10 and CIFAR-100 [36]; 2) fine-grained classification: Flowers [55], Stanford Cars [35], and Oxford-IIIT Pets [58]. We report the results in Tab. 4. Compared to existing efficient models [18, 26, 63, 89], our EfficientViT-M5 achieves comparable or slightly better accuracy across all datasets with much higher throughput. An exception lies in Cars, where our model is slightly inferior in accuracy. This may be because the subtle differences between classes lie more in local details, which are more readily captured by convolutions.
_Object Detection._ We compare EfficientViT-M4 with efficient models [63, 68, 66, 12, 22, 26] on the COCO [42] object detection task, and present the results in Tab. 5. Specifically, EfficientViT-M4 surpasses MobileNetV2 [63] by 4.4% AP with comparable Flops. Compared to the searched method SPOS [22], our EfficientViT-M4 uses 18.1% fewer Flops while achieving 2.0% higher AP, demonstrating its capacity and generalization ability in different vision tasks.
### Ablation Study
In this section, we ablate important design elements in the proposed EfficientViT on ImageNet-1K [17]. All models are trained for 100 epochs to magnify the differences and reduce training time [20]. Tab. 6 reports the results.
_Impact of the sandwich layout block._ We first present an ablation study to verify the efficiency of the proposed sandwich layout design, by replacing the sandwich layout block with the original Swin block [44]. The depth is adjusted to {2, 2, 3} to guarantee similar throughput with EfficientViT-M4 for a fair comparison. The top-1 accuracy degrades by 3.0% at a similar speed, verifying that applying more FFNs instead of memory-bound MHSA is more effective for small models. Furthermore, to analyze the impact of the number of FFNs \(\mathcal{N}\) before and after self-attention, we change the number from 1 to 2 and 3. The number of blocks is reduced accordingly to maintain similar throughput. As presented in Tab. 6 (#3 and #4), further increasing the number of FFNs is not effective due to the lack of long-range spatial relation and \(\mathcal{N}\)=1 achieves the best efficiency.
_Impact of the cascaded group attention._ We have proposed CGA to improve the computation efficiency of MHSA. As shown in Tab. 6 (#5 and #6), replacing CGA with MHSA decreases the accuracy by 1.1% and ONNX speed by 5.9%, suggesting that addressing head redundancy improves the model efficiency. For the model without the cascade operation, its performance is comparable with MHSA but worse than CGA, demonstrating the efficacy of enhancing the feature representations of each head.
_Impact of the parameter reallocation._ Our EfficientViT-M4 yields 1.4%/1.5% higher top-1 accuracy, 4.9%/3.8% higher GPU throughput than the models without \(QKV\) channel dimension reallocation or FFN ratio reduction, respectively, indicating the effectiveness of parameter reallocation (#1 _vs._ #7, #8). Moreover, we study the choices of \(QK\) dimension in each head and the ratio of \(V\) dimension to the input embedding in Fig. 7. It is shown that the performance is improved gradually as \(QK\) dimension increases
\begin{table}
\begin{tabular}{l l|c|c c} \hline \hline \multirow{2}{*}{\#} & \multirow{2}{*}{Ablation} & \multicolumn{2}{c|}{Throughput (imgs/s)} \\ \cline{3-5} & & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{GPU} & \multicolumn{1}{c}{ONNX} \\ \hline
1 & EfficientViT-M4 & 71.3 & 15914 & 108.6 \\ \hline
2 & Sandwich \(\rightarrow\) Swin [44] & 68.3 & 15804 & 114.5 \\
3 & \(\mathcal{N}\) = 1 \(\rightarrow\) 2 & 70.2 & 14977 & 112.3 \\
4 & \(\mathcal{N}\) = 1 \(\rightarrow\) 3 & 65.7 & 15856 & 139.7 \\ \hline
5 & CGA \(\rightarrow\) MHSA [18] & 70.2 & 16243 & 102.2 \\
6 & Cascade \(\rightarrow\) None & 69.8 & 16411 & 111.0 \\ \hline
7 & QKV allocation \(\rightarrow\) None & 69.9 & 15132 & 103.1 \\
8 & FFN ratio \(2\to 4\) & 69.8 & 15310 & 112.4 \\ \hline
9 & DWConv \(\rightarrow\) None & 69.9 & 16325 & 110.4 \\
10 & BN \(\rightarrow\) LN [2] & 70.4 & 15643 & 103.6 \\
11 & ReLU \(\rightarrow\) HSwish [26] & 72.2 & 15887 & 87.5 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation for EfficientViT-M4 on ImageNet-1K [17] dataset. Top-1 accuracy, GPU and ONNX throughput are reported.
\begin{table}
\begin{tabular}{l|c|c|c c|c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{Top-1} & \multicolumn{2}{c|}{Throughput (img/s)} & \multirow{2}{*}{Flops} & \multirow{2}{*}{Params} \\ & & (\%) & & & & (G) & (M) \\ \hline PVTV2-Bo [76] & 70.5 & 3507 & 12.7 & 18.5 & 0.6 & 1.4 \\ T2-ViT-7 [84] & 71.7 & 1156 & 22.5 & 16.1 & 1.1 & 4.3 \\ DeiT-7 [69] & 72.2 & 4631 & 26.0 & 25.1 & 1.3 & 5.9 \\ PoolFormer-12S [83] & 77.2 & 3534 & 10.4 & 14.6 & 1.9 & 12.0 \\ EffFormer-L1 [40] & 79.2 & 4465 & 12.9 & 21.2 & 1.3 & 12.3 \\ Swin-T [44] & **81.2** & 1393 & 4.6 & 6.4 & 4.5 & 29.0 \\ \hline
**EfficientViT-M5** & 77.1 & **10621** & **56.8** & **62.5** & 0.5 & 12.4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison with the tiny variants of state-of-the-art large-scale ViTs on ImageNet-1K [17].
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{RetinaNet 1\(\times\)} & \multicolumn{2}{c}{Flops} \\ \cline{2-7} & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP\({}_{x}\) & AP\({}_{m}\) & AP\({}_{l}\) & (M) \\ \hline MobileNetV2 [63] & 28.3 & 46.7 & 29.3 & 14.8 & 30.7 & 38.1 & 300 & 3.4 \\ MobileNetV3 [26] & 29.9 & 49.3 & 30.8 & 14.9 & 33.3 & 41.1 & 217 & 5.4 \\ SPOS [22] & 30.7 & 49.8 & 32.2 & 15.4 & 33.9 & 41.6 & 365 & 4.3 \\ NHSANet-A2 [63] & 30.5 & 50.2 & 32.0 & 16.6 & 34.1 & 41.1 & 340 & 4.8 \\ FairNAS-C [12] & 31.2 & 50.8 & 32.7 & 16.3 & 34.4 & 42.3 & 325 & 5.6 \\ MixNet-M [68] & 31.3 & 51.7 & 32.4 & 17.0 & 35.0 & 41.9 & 360 & 5.0 \\
**EfficientViT-M4** & **32.7** & **52.2** & **34.1** & **17.6** & **35.3** & **46.0** & 299 & 8.8 \\ \hline \hline \end{tabular}
\end{table}
Table 5: EfficientViT object detection performance on COCO val2017 [42] with comparisons to other efficient models.
from 4 to 16, while further increasing it gives inferior performance. Besides, the performance improves from 70.3% to 71.3% when increasing the ratio between \(V\) dimension and input embedding from 0.4 to 1.0. When further enlarging the ratio to 1.2, it only gets 0.1% improvements. Therefore, setting the channels of \(V\) close to the input embedding achieves the best parameter efficiency, which meets our analysis in Sec. 2.3 and design strategy.
_Impact of other components._ We ablate the impact of using DWConv for token interaction, the normalization layer, and the activation function, as presented in Tab. 6 (#9, #10, and #11). With DWConv, the accuracy improves by 1.4% with a minor latency overhead, demonstrating the effectiveness of introducing local structural information. Replacing BN with LN decreases accuracy by 0.9% and GPU speed by 2.9%. Using HardSwish instead of ReLU improves accuracy by 0.9% but leads to a large drop of 20.0% ONNX speed. The activation functions are element-wise operations that occupy a considerable amount of processing time on GPU/CPU [15, 48, 72], thus utilizing ReLU instead of more complicated activation functions is of better efficiency.
_Results of 1,000 training epochs and distillation._ Tab. 7 shows the results with 1,000 training epochs and knowledge distillation using RegNetY-16GF [60] as the teacher model following [20] on ImageNet-1K [17] and ImageNet-ReaL [4]. Compared to LeViT-128S [20], EfficientViT-M4 surpasses it by 0.5% on ImageNet-1K and 1.0% on ImageNet-ReaL, respectively. For the inference speed, our model has 34.2% higher throughput on ONNX and also shows superiority on other settings. The results demonstrate that the strong capability and generalization ability of EfficientViT can be further explored with longer training schedules.
## 5 Related Work
_Efficient CNNs._ With the demand of deploying CNNs on resource-constrained scenarios, efficient CNNs have been intensively studied in literature [23, 24, 26, 67, 68, 63, 87]. Xception [10] proposes an architecture built with depth-wise separable convolutions. MobileNetV2 [63] builds an inverted residual structure which expands the input to a higher dimension. MobileNetV3 [26] and EfficientNet [67] resort to neural architecture search techniques to design compact models. To boost the actual speed on hardware, ShuffleNetV2 [48] introduces channel split and shuffle operations to improve the information communication among channel groups. However, the spatial locality of convolutional kernels hampers CNN models from capturing long-range dependencies, thus limiting their model capacity.
_Efficient ViTs._ ViT and its variants [18, 44, 69, 76] have achieved success on various vision tasks. Despite the superior performance, most of them are inferior to typical CNNs in inference speed. Some efficient transformers have been proposed recently and they fall into two camps: 1) efficient self-attention; and 2) efficient architecture design. Efficient self-attention methods reduce the cost of softmax attention via sparse attention [57, 56, 34, 75] or low-rank approximation [11, 51, 74]. However, they suffer from performance degradation with negligible or moderate inference acceleration over softmax attention [71]. Another line of work combines ViTs with lightweight CNNs to build efficient architectures [9, 47, 50, 81, 49]. LVT [81] proposes enhanced attention mechanisms with dilated convolution to improve the model performance and efficiency. Mobile-Former [9] designs a parallel CNN-transformer block to encode both local features and global interaction. However, most of them target minimizing Flops and parameters [16], which may correlate poorly with actual inference latency [70], and they remain inferior to efficient CNNs in speed. Different from them, we explore models with fast inference by directly optimizing their throughput on different hardware and deployment settings, and design a family of hierarchical models with a good trade-off between speed and accuracy.
## 6 Conclusion
In this paper, we have presented a systematic analysis on the factors that affect the inference speed of vision transformers, and proposed a new family of fast vision transformers with memory-efficient operations and cascaded group attention, named EfficientViT. Extensive experiments have demonstrated the efficacy and high speed of EfficientViT, and also show its superiority on various downstream benchmarks.
_Limitations._ One limitation of EfficientViT is that, despite its high inference speed, the model size is slightly larger compared to state-of-the-art efficient CNN [26] due to the extra FFNs in the introduced sandwich layout. Besides, our models are designed manually based on the derived guidelines on building efficient vision transformers. In future work, we are interested in reducing the model size and incorporating automatic search techniques to further enhance the model capacity and efficiency.
**Acknowledgement.** Prof. Yuan was partially supported by Hong Kong Research Grants Council (RGC) General Research Fund 112111221, and Innovation and Technology Commission-Innovation and Technology Fund ITS/100/20.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c} \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{ImageNet (\%)} & \multicolumn{3}{c|}{Throughput (imgs/s)} & \multicolumn{3}{c}{Flops Params} \\ \cline{2-9} & Top-1 & Top-\(\dagger^{1}\) & Real\(\dagger\) & GPU & CPU & ONNX & (M) & (M) \\ \hline LeViT-128S [20] & 73.6 & 76.6 & 82.6 & 14457 & 82.3 & 80.9 & 305 & **7.8** \\
**EfficientViT-M4** & **74.3** & **77.1** & **83.6** & **15914** & **88.5** & **108.6** & **299** & 8.8 \\ \hline \end{tabular}
\end{table}
Table 7: Performance comparison on ImageNet-1K [17] and ImageNet-ReaL [4]. Results with \(\dagger\) are trained with 1,000 epochs and knowledge distillation following LeViT [20].
Figure 7: Ablation on the \(QK\) dimension of each head and the ratio of \(V\) dimension to the input embedding. |
2304.10114 | Edge general position sets in Fibonacci and Lucas cubes | A set of edges $X\subseteq E(G)$ of a graph $G$ is an edge general position
set if no three edges from $X$ lie on a common shortest path in $G$. The
cardinality of a largest edge general position set of $G$ is the edge general
position number of $G$. In this paper edge general position sets are
investigated in partial cubes. In particular it is proved that the union of two
largest $\Theta$-classes of a Fibonacci cube or a Lucas cube is a maximal edge
general position set. | Sandi Klavžar, Elif Tan | 2023-04-20T06:40:57Z | http://arxiv.org/abs/2304.10114v1 | # Edge general position sets in Fibonacci and Lucas cubes
###### Abstract
A set of edges \(X\subseteq E(G)\) of a graph \(G\) is an edge general position set if no three edges from \(X\) lie on a common shortest path in \(G\). The cardinality of a largest edge general position set of \(G\) is the edge general position number of \(G\). In this paper edge general position sets are investigated in partial cubes. In particular it is proved that the union of two largest \(\Theta\)-classes of a Fibonacci cube or a Lucas cube is a maximal edge general position set.
\({}^{1}\) Faculty of Mathematics and Physics, University of Ljubljana, Slovenia
\({}^{2}\) Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia
\({}^{3}\) Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia
\({}^{4}\) Department of Mathematics, Ankara University, Ankara, Turkey
**Keywords**: general position set; edge general position set; partial cube; Fibonacci cube; Lucas cube
**Mathematics Subject Classification (2020)**: 05C12, 05C70
## 1 Introduction
A set of vertices \(S\subseteq V(G)\) of a graph \(G=(V(G),E(G))\) is a _general position set_ if no three vertices from \(S\) lie on a common shortest path of \(G\). Similarly, a set of edges \(X\subseteq E(G)\) of \(G\) is an _edge general position set_ if no three edges from \(X\) lie on a common shortest path. The cardinality of a largest general position set (resp. edge general position set) of \(G\) is the _general position number_ (resp. _edge general position number_) and denoted by \(\mathrm{gp}(G)\) (resp. \(\mathrm{gp}_{\mathrm{e}}(G)\)).
General position sets in graphs have already received a lot of attention. They were introduced in [16, 28] and we refer to [1, 12, 21, 27, 31] for a selection of
further developments. On the other hand, the edge version of this concept has been introduced only recently in [17]. In this paper we continue this line of the research.
Determining the general position number of hypercubes turns out to be a very difficult problem, cf. [15]. On the other hand, a closed formula for the edge general position number of hypercubes has been determined in [17]. Combining the facts that the edge general position number is doable on hypercubes and that hypercubes form the cornerstone of the class of partial cubes, we focus in this paper on the edge general position number of two important and interesting families of partial cubes, Fibonacci cubes and Lucas cubes. The first of these two classes of graphs was introduced in [8] as a model for interconnection networks. In due course, these graphs have found numerous applications elsewhere and are also extremely interesting in their own right. Lucas cubes, introduced in [20], form a class of graphs which naturally symmetrises the Fibonacci cubes and also have many interesting properties. The state of research up to 2013 on these classes of graphs (and some additional related ones) is summarised in the survey paper [11]; the following list of papers is a selection from subsequent research [4, 5, 6, 9, 18, 19, 22, 23, 24, 25, 29].
The rest of this paper is organized as follows. In the next section we define the concepts discussed in this paper, introduce the required notation, and recall a known result. In Section 3 we discuss partial cubes and the interdependence of their edge general position sets and \(\Theta\)-classes. In Section 4 we prove that the union of two largest \(\Theta\)-classes of a Fibonacci cube or a Lucas cube is always a maximal edge general position set. We conjecture that for Fibonacci cubes these sets are also maximum general position sets and show that this is not the case for Lucas cubes.
## 2 Preliminaries
Unless stated otherwise, graphs considered in this paper are connected. The path of order \(n\) is denoted by \(P_{n}\). The _Cartesian product_\(G\,\square\,H\) of graphs \(G\) and \(H\) has vertices \(V(G)\times V(H)\) and edges \((g,h)(g^{\prime},h^{\prime})\), where either \(g=g^{\prime}\) and \(hh^{\prime}\in E(H)\), or \(h=h^{\prime}\) and \(gg^{\prime}\in E(G)\). The \(r\)_-dimensional hypercube_\(Q_{r}\), \(r\geq 1\), is a graph with \(V(Q_{r})=\{0,1\}^{r}\), and there is an edge between two vertices if and only if they differ in exactly one coordinate. That is, if \(x=(x_{1},\ldots,x_{r})\) and \(y=(y_{1},\ldots,y_{r})\) are vertices of \(Q_{r}\), then \(xy\in E(Q_{r})\) if and only if there exists \(j\in[r]=\{1,\ldots,r\}\) such that \(x_{j}\neq y_{j}\) and \(x_{i}=y_{i}\) for every \(i\neq j\). \(Q_{r}\), \(r\geq 2\), can also be described as the Cartesian product \(Q_{r-1}\,\square\,P_{2}\).
The _distance_\(d_{G}(u,v)\) between vertices \(u\) and \(v\) of a graph \(G=(V(G),E(G))\) is the number of edges on a shortest \(u,v\)-path. A subgraph \(H\) of a graph \(G\) is _isometric_ if \(d_{H}(x,y)=d_{G}(x,y)\) holds for every pair of vertices \(x,y\) of \(H\). We also say that \(H\) is _isometrically embedded_ into \(G\). Isometric subgraphs of hypercubes are known as _partial cubes_.
A _Fibonacci string_ of length \(n\geq 1\) is a binary string that contains no consecutive 1s. The _Fibonacci cube_\(\Gamma_{n}\), \(n\geq 1\), is the graph whose vertices are all Fibonacci strings of length \(n\), two vertices being adjacent if they differ in a single coordinate. \(\Gamma_{n}\) can be equivalently defined as an induced subgraph of \(Q_{n}\) obtained from \(Q_{n}\) by removing all the vertices that contain at least one pair of consecutive 1s. Further, the _Lucas cube_\(\Lambda_{n}\), \(n\geq 1\), is obtained from \(\Gamma_{n}\) by removing the vertices that start and end with 1. See Fig. 1 for \(\Gamma_{5}\) and \(\Lambda_{5}\) and note that the latter is obtained from the former by removing the vertices 10001 and 10101.
It is well-known that the order of \(\Gamma_{n}\) is \(F_{n+2}\), where \(F_{n}\) are the _Fibonacci numbers_ defined by the recurrence \(F_{n+2}=F_{n+1}+F_{n}\), \(n\geq 0\), with the initial terms \(F_{0}=0\) and \(F_{1}=1\). Also, the order of \(\Lambda_{n}\) is \(L_{n}\), where \(L_{n}\) are the _Lucas numbers_ defined by the same recurrence relation with the initial terms \(L_{0}=2\) and \(L_{1}=1\).
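For concreteness, both families are easy to generate by brute force for small \(n\). The following Python sketch (an illustration added here, not part of the paper's arguments) builds \(\Gamma_{n}\) and \(\Lambda_{n}\) directly from the definitions and confirms the orders \(F_{n+2}\) and \(L_{n}\) for \(n=5\).

```python
from itertools import product

def fibonacci_cube(n):
    """Gamma_n: Fibonacci strings of length n, with edges between strings
    that differ in exactly one coordinate."""
    V = ["".join(s) for s in product("01", repeat=n) if "11" not in "".join(s)]
    E = [(u, v) for i, u in enumerate(V) for v in V[i + 1:]
         if sum(a != b for a, b in zip(u, v)) == 1]
    return V, E

def lucas_cube(n):
    """Lambda_n: remove from Gamma_n the vertices that start and end with 1."""
    V, E = fibonacci_cube(n)
    keep = {v for v in V if not (v[0] == "1" and v[-1] == "1")}
    return sorted(keep), [(u, v) for u, v in E if u in keep and v in keep]

# |V(Gamma_5)| = F_7 = 13 and |V(Lambda_5)| = L_5 = 11, as in Fig. 1.
print(len(fibonacci_cube(5)[0]), len(lucas_cube(5)[0]))
```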
To complete the preliminaries, we recall the following inequality on Fibonacci numbers.
**Lemma 2.1**: [2, Corollary] _If \(n\) is a positive integer and \(0\leq i\leq\lfloor n/2\rfloor\), then \(F_{n}\geq F_{i}F_{n-i+1}\)._
## 3 On edge general position sets in partial cubes
In this section we recall several results on the edge general position sets in partial cubes from [17] and derive some new results. This will motivate us to consider edge general position sets in Fibonacci cubes and in Lucas cubes in the next section.
Let \(G\) be a graph. Then we say that edges \(xy\) and \(uv\) of \(G\) are in the _Djokovic-Winkler relation_\(\Theta\) if \(d_{G}(x,u)+d_{G}(y,v)\neq d_{G}(x,v)+d_{G}(y,u)\)[3, 30]. A connected
graph \(G\) is a partial cube if and only if \(G\) is bipartite and \(\Theta\) is transitive [30]. It follows that \(\Theta\) partitions the edge set of a partial cube into \(\Theta\)-_classes_. Moreover, if \(G\) is a partial cube isometrically embedded into \(Q_{n}\) such that for each \(i\in[n]\) there exists an edge \(xy\in E(G)\subseteq E(Q_{n})\) with \(x_{i}\neq y_{i}\), then \(G\) contains exactly \(n\) \(\Theta\)-classes. We will denote them by \(\Theta_{1}(G),\ldots,\Theta_{n}(G)\), where \(\Theta_{i}(G)\), \(i\in[n]\), consists of the edges of \(G\) whose endpoints differ in coordinate \(i\).
We first recall the following result.
**Lemma 3.1**: [17, Lemma 3.1] _If \(G\) is a partial cube embedded into \(Q_{n}\), then \(\Theta_{i}(G)\cup\Theta_{j}(G)\) is an edge general position set of \(G\)._
Using Lemma 3.1 it was proved in [17] that \(\operatorname{gp_{e}}(Q_{r})=2^{r}\). It was also proved that \(\operatorname{gp_{e}}(P_{r}\operatorname{\square}P_{r})=4r-8\) for \(r\geq 4\), see [17, Theorem 4.1]. Note, however, that \(|\Theta_{i}(P_{r}\operatorname{\square}P_{r})|=r\) for each \(i\in[2r-2]\). Hence Lemma 3.1 only yields \(\operatorname{gp_{e}}(P_{r}\operatorname{\square}P_{r})\geq 2r\), which is arbitrarily far from the optimal value. Moreover, if \(r\geq 5\), then by [17, Theorem 4.2] a largest edge general position set of \(P_{r}\operatorname{\square}P_{r}\) is unique and is not a union of some \(\Theta\)-classes. On the other hand, we can have large edge general position sets which are the union of many \(\Theta\)-classes as the following result asserts, where by an _end block_ of a graph \(G\) we mean a block of \(G\) which contains exactly one cut vertex.
To prove the next proposition, we recall the following auxiliary result.
**Lemma 3.2**: [14, Lemma 6.4] _Let \(H\) be an isometric subgraph of \(G\) and let \(e\) and \(f\) be edges from different blocks of \(H\). Then \(e\) is not in relation \(\Theta\) with \(f\) in \(G\)._
**Proposition 3.3**: _Let \(B_{1},\ldots,B_{k}\) be the end blocks of a partial cube \(G\) and for \(i\in[k]\) let \(\Theta^{i}(G)\) be an arbitrary \(\Theta\)-class of \(G\) with an edge in \(B_{i}\). Then \(\bigcup_{i\in[k]}\Theta^{i}(G)\) is an edge general position set of \(G\)._
**Proof.** By Lemma 3.2, \(\Theta^{i}(G)\subseteq E(B_{i})\) and thus \(\Theta^{i}(G)\) also forms a \(\Theta\)-class of \(B_{i}\). Consider now an arbitrary shortest path \(P\) of \(G\) and suppose it contains an edge \(e_{i}\) of some \(\Theta^{i}(G)\). Then by the above, \(e_{i}\) is the only edge of \(P\) from \(\Theta^{i}(G)\). If \(P\) contains an edge \(e_{j}\) from some other \(\Theta^{j}(G)\), then also \(e_{j}\) is the only edge of \(P\) from \(\Theta^{j}(G)\). Moreover, in this case, \(E(P)\cap\bigcup_{i\in[k]}\Theta^{i}(G)=\{e_{i},e_{j}\}\) because all the edges of \(P\) which do not lie in \(B_{i}\cup B_{j}\) are from blocks which are not end blocks. \(\square\)
Note that Proposition 3.3 implies that the set of pendant edges of a tree \(T\) (the edges incident to its leaves) forms an edge general position set. For another example of an edge general position set which is the union of many \(\Theta\)-classes see Fig. 2.
To finish this section consider the partial cube \(G\) from Fig. 3. In the left figure, the union of its two largest \(\Theta\)-classes forms an edge general position set of cardinality \(8\). In the middle figure, the union of four \(\Theta\)-classes also forms an edge general position set of \(G\) of cardinality \(8\). Finally, since we can cover the edges of \(G\) by four shortest paths as shown in the right figure, we have \(\operatorname{gp_{e}}(G)\leq 8\) which means that both indicated sets are largest edge general position sets.
## 4 On edge general position sets in Fibonacci and Lucas cubes
Fibonacci cubes and Lucas cubes are partial cubes [10]. Thus all the results and comments of the previous section can be applied to them. The cardinality of the \(\Theta\)-classes of Fibonacci cubes was independently determined in [13, 26], and the cardinality of the \(\Theta\)-classes of Lucas cubes in [13]. These results read as follows.
**Proposition 4.1**: _(i) If \(n\geq 1\) and \(i\in[n]\), then \(|\Theta_{i}(\Gamma_{n})|=F_{i}F_{n-i+1}\)._
_(ii) If \(n\geq 1\) and \(i\in[n]\), then \(|\Theta_{i}(\Lambda_{n})|=F_{n-1}\)._
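Proposition 4.1 can be checked for small \(n\) by brute force. The sketch below is a quick computational check, not part of the original argument; it reuses `gamma_vertices`, `lambda_vertices`, and `edges` from the sketch in Section 2 and compares the computed \(\Theta\)-class sizes with the closed-form values.

```python
def fib(k):                               # F_0 = 0, F_1 = 1, F_2 = 1, ...
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def theta_sizes(vertices):
    """Sizes of the Theta-classes: edges grouped by the (0-indexed) coordinate
    in which their endpoints differ."""
    n = len(vertices[0])
    sizes = [0] * n
    for v, w in edges(vertices):
        i = next(j for j in range(n) if v[j] != w[j])
        sizes[i] += 1
    return sizes

n = 6
print(theta_sizes(gamma_vertices(n)))                      # |Theta_i(Gamma_n)|
print([fib(i) * fib(n - i + 1) for i in range(1, n + 1)])  # F_i * F_{n-i+1}
print(theta_sizes(lambda_vertices(n)))                     # |Theta_i(Lambda_n)|
print([fib(n - 1)] * n)                                    # n copies of F_{n-1}
```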
To find large edge general position sets in \(\Gamma_{n}\) we can apply Lemma 3.1. For this sake we first answer the question for which \(i\) and \(j\) the value \(|\Theta_{i}(\Gamma_{n})\cup\Theta_{j}(\Gamma_{n})|\) is maximum.
**Proposition 4.2**: _If \(n\geq 2\), then_
\[2F_{n}=\max\{|\Theta_{i}(\Gamma_{n})|+|\Theta_{j}(\Gamma_{n})|:\ i,j\in[n],i \neq j\}.\]
Figure 3: Two largest edge general position sets of a partial cube and a cover of it by shortest paths
Figure 2: An edge general position set
**Proof.** Set \(M=\max\{|\Theta_{i}(\Gamma_{n})|+|\Theta_{j}(\Gamma_{n})|:\ i,j\in[n],i\neq j\}\). Using Proposition 4.1 and Lemma 2.1 we can then estimate as follows:
\[M =\max\{|\Theta_{i}(\Gamma_{n})|+|\Theta_{j}(\Gamma_{n})|:\ i,j\in[n ],i\neq j\}\] \[=\max\{F_{i}F_{n-i+1}+F_{j}F_{n-j+1}:\ i,j\in[n],i\neq j\}\] \[\leq\max\{F_{n}+F_{n}:\ i,j\in[n],i\neq j\}\] \[=2F_{n}\,.\]
On the other hand, \(|\Theta_{1}(\Gamma_{n})|+|\Theta_{n}(\Gamma_{n})|=F_{n}+F_{n}\), hence we can conclude that \(M=2F_{n}\). \(\Box\)
**Theorem 4.3**: _If \(n\geq 2\), then \(\Theta_{1}(\Gamma_{n})\cup\Theta_{n}(\Gamma_{n})\) is a maximal edge general position set of \(\Gamma_{n}\). Moreover, \(\mbox{\rm gp}_{\rm e}(\Gamma_{n})\geq 2F_{n}\)._
**Proof.** To prove the first assertion, we will use the fact that a shortest path can contain at most one edge from \(\Theta_{1}(\Gamma_{n})\) and at most one edge from \(\Theta_{n}(\Gamma_{n})\), cf. [7, Lemma 11.1]. Hence we only need to prove that no edge can be added to \(\Theta_{1}(\Gamma_{n})\cup\Theta_{n}(\Gamma_{n})\) in order to keep the edge general position property.
The statement of the theorem clearly holds for \(\Gamma_{2}\), and can be easily verified for \(\Gamma_{3}\) and \(\Gamma_{4}\). In the rest we may thus assume that \(n\geq 5\). Consider an arbitrary edge \(e=uv\in\Theta_{i}(\Gamma_{n})\), where \(2\leq i\leq n-1\). We may without loss of generality assume that \(u_{i}=0\) and \(v_{i}=1\). We need to show that \(e\) lies on some shortest path that contains one edge from \(\Theta_{1}(\Gamma_{n})\) and one edge from \(\Theta_{n}(\Gamma_{n})\). We distinguish the following two cases.
**Case 1**: \(i\in\{2,n-1\}\).
Suppose first that \(i=2\). In this case \(u=000\ldots\) and \(v=010\ldots\) If \(u_{n}=v_{n}=1\), then the following path
\[y =1\,0\,0\,\ldots\,0\,0\] \[x =0\,0\,0\,\ldots\,0\,0\] \[u =0\,0\,0\,\ldots\,0\,1\] \[v =0\,1\,0\,\ldots\,0\,1\]
is a shortest \(y,v\)-path in \(\Gamma_{n}\) that contains \(yx\in\Theta_{1}(\Gamma_{n})\) and \(xu\in\Theta_{n}(\Gamma_{n})\). If \(u_{n}=v_{n}=0\), then we consider two subcases. In the first one, \(u=000\ldots 00\) and \(v=010\ldots 00\). Then the following shortest path
\[x =1\,0\,0\,\ldots\,0\,0\] \[u =0\,0\,0\,\ldots\,0\,0\] \[v =0\,1\,0\,\ldots\,0\,0\] \[y =0\,1\,0\,\ldots\,0\,1\]
contains \(xu\in\Theta_{1}(\Gamma_{n})\) and \(vy\in\Theta_{n}(\Gamma_{n})\). In the second subcase we consider \(u=000\ldots 10\) and \(v=010\ldots 10\); since these are Fibonacci strings, the coordinate preceding the final \(1\) must be \(0\), so that \(u=000\ldots 010\) and \(v=010\ldots 010\). Then the path
\[x =1\,0\,0\,\ldots\,0\,1\,0\] \[u =0\,0\,0\,\ldots\,0\,1\,0\] \[v =0\,1\,0\,\ldots\,0\,1\,0\] \[y =0\,1\,0\,\ldots\,0\,0\,0\] \[z =0\,1\,0\,\ldots\,0\,0\,1\]
is a shortest path in \(\Gamma_{n}\) and contains \(xu\in\Theta_{1}(\Gamma_{n})\) and \(yz\in\Theta_{n}(\Gamma_{n})\). For instance, if \(n=5\), then the path constructed is: \(x=10010\), \(u=00010\), \(v=01010\), \(y=01000\), \(z=01001\).
We have thus considered all the subcases when \(i=2\). By the symmetry of Fibonacci strings, the case \(i=n-1\) can be done analogously.
**Case 2**: \(2<i<n-1\).
In this case we have \(u=\ldots 000\ldots\) and \(v=\ldots 010\ldots\) Assume first that \(u_{1}=v_{1}=1\) and \(u_{n}=v_{n}=1\), so that \(u=10\ldots 000\ldots 01\) and \(v=10\ldots 010\ldots 01\). Then the path
\[x =0\,0\,\ldots\,0\,1\,0\ldots 0\,1\] \[v =1\,0\,\ldots\,0\,1\,0\ldots 0\,1\] \[u =1\,0\,\ldots\,0\,0\,0\ldots 0\,1\] \[y =1\,0\,\ldots\,0\,0\,0\ldots 0\,0\]
is a shortest path in \(\Gamma_{n}\) and contains edges \(xv\in\Theta_{1}(\Gamma_{n})\) and \(uy\in\Theta_{n}(\Gamma_{n})\). For instance, if \(n=5\), then the path constructed is: \(x=00101\), \(v=10101\), \(u=10001\), \(y=10000\).
If \(u\) and \(v\) both start and end with \(00\), we simply change the first and the last bit to construct a required shortest path. Assume next that \(u=01\ldots 000\ldots 10\) and \(v=01\ldots 010\ldots 10\). Because \(v\) starts and ends with \(0\), and contains at least three \(1\)s, in this case we have \(n\geq 7\). Then the path
\[x =1\,0\,\ldots\,0\,1\,0\ldots 1\,0\] \[x^{\prime} =0\,0\,\ldots\,0\,1\,0\ldots 1\,0\] \[v =0\,1\,\ldots\,0\,1\,0\ldots 1\,0\] \[u =0\,1\,\ldots\,0\,0\,0\ldots 1\,0\] \[y =0\,1\,\ldots\,0\,0\,0\ldots 0\,0\] \[y^{\prime} =0\,1\,\ldots\,0\,0\ldots 0\,1\]
is a shortest path in \(\Gamma_{n}\) which contains \(xx^{\prime}\in\Theta_{1}(\Gamma_{n})\) and \(yy^{\prime}\in\Theta_{n}(\Gamma_{n})\). The final cases when \(u\) and \(v\) begin with \(0\) and end with \(1\) (or the other way around) are done by combining the above paths.
By Proposition 4.1(i), \(|\Theta_{1}(\Gamma_{n})|=|\Theta_{n}(\Gamma_{n})|=F_{n}\), hence \(\mbox{gp}_{\rm e}(\Gamma_{n})\geq|\Theta_{1}(\Gamma_{n})|+|\Theta_{n}(\Gamma_ {n})|=2F_{n}\). \(\Box\)
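The case \(n=5\) of Theorem 4.3 can also be verified exhaustively. The sketch below is a computational check (again reusing `gamma_vertices` and `edges` from the sketch in Section 2, with names of our own choosing): it enumerates all shortest paths of \(\Gamma_{5}\), confirms that no shortest path contains three edges of \(\Theta_{1}(\Gamma_{5})\cup\Theta_{5}(\Gamma_{5})\), and confirms that adding any further edge destroys this property.

```python
from collections import deque

V = gamma_vertices(5)
E = edges(V)
adj = {v: [] for v in V}
for u, w in E:
    adj[u].append(w)
    adj[w].append(u)

def dist_from(s):                      # BFS distances from s
    d = {s: 0}
    q = deque([s])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in d:
                d[y] = d[x] + 1
                q.append(y)
    return d

def shortest_paths(s, t, d):
    """All shortest s,t-paths, each as a list of edges (edges as frozensets)."""
    if s == t:
        return [[]]
    out = []
    for y in adj[t]:
        if d[y] == d[t] - 1:
            for p in shortest_paths(s, y, d):
                out.append(p + [frozenset((y, t))])
    return out

all_paths = [p for s in V for t in V for p in shortest_paths(s, t, dist_from(s))]

def in_edge_general_position(S):
    S = {frozenset(e) for e in S}
    return all(sum(e in S for e in p) <= 2 for p in all_paths)

S = [e for e in E if e[0][0] != e[1][0] or e[0][4] != e[1][4]]   # Theta_1 union Theta_5
rest = [e for e in E if frozenset(e) not in {frozenset(x) for x in S}]
print(len(S), in_edge_general_position(S))                        # 10 True
print(all(not in_edge_general_position(S + [f]) for f in rest))   # True (maximality)
```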
With a lot of effort, we can further prove that for any \(i\) and \(j\), the set \(\Theta_{i}(\Gamma_{n})\cup\Theta_{j}(\Gamma_{n})\) is a maximal edge general position set. However, since \(\Theta_{1}(\Gamma_{n})\cup\Theta_{n}(\Gamma_{n})\) is a largest such set, we omit the long case analysis here.
Based on Theorem 4.3 we wonder whether \(\Theta_{1}(\Gamma_{n})\cup\Theta_{n}(\Gamma_{n})\) is not only a maximal edge general position set of \(\Gamma_{n}\) but also a maximum edge general position set. While we have no answer in general, we next show that this is true up to dimension \(n\leq 5\). For \(n\leq 4\) this can be easily checked, and for \(n=5\) we have the following.
**Proposition 4.4**: \(\mbox{gp}_{\rm e}(\Gamma_{5})=10\)_._
**Proof.** By Theorem 4.3 we have \(\mbox{gp}_{\rm e}(\Gamma_{5})\geq 10\). Consider now the four paths in \(\Gamma_{5}\) as indicated in Fig. 4 and note that each of them is a shortest path. The only edges not contained in one of these four paths are \(e\), \(e^{\prime}\), and \(e^{\prime\prime}\), see Fig. 4 again.
Let \(X\) be an arbitrary edge general position set of \(\Gamma_{5}\). Since each of the four paths from Fig. 4 is a shortest path, each of them can contain at most two edges from \(X\) and thus \(|X|\leq 4\cdot 2+3=11\). Supposing that \(|X|=11\), we must have \(e,e^{\prime},e^{\prime\prime}\in X\). However, by inspection we can now infer that if \(e,e^{\prime},e^{\prime\prime}\in X\), then there are only \(5\) additional edges that could possibly lie in \(X\), hence if \(e,e^{\prime},e^{\prime\prime}\in X\), then actually \(|X|\leq 8\) holds. \(\square\)
Based on Theorem 4.3 and Proposition 4.4 we pose:
Figure 4: Four shortest paths in \(\Gamma_{5}\)
**Conjecture 4.5**: _If \(n\geq 2\), then \(\mathrm{gp_{e}}(\Gamma_{n})=2F_{n}\)._
For the Lucas cubes we have the following result parallel to Theorem 4.3.
**Theorem 4.6**: _If \(n\geq 4\), then \(\Theta_{1}(\Lambda_{n})\cup\Theta_{n}(\Lambda_{n})\) is a maximal edge general position set of \(\Lambda_{n}\). Moreover, \(\mathrm{gp_{e}}(\Lambda_{n})\geq 2F_{n-1}\)._
**Proof.** We proceed in parallel with the proof of Theorem 4.3. More precisely, we need to show that no edge can be added to \(\Theta_{1}(\Lambda_{n})\cup\Theta_{n}(\Lambda_{n})\) in order to keep the edge general position property. All the paths constructed in the proof of Theorem 4.3 contain no vertex which would start and end with \(1\), hence the same paths are suitable also for the present proof. The only exception appears to be the path constructed in the first subcase of Case 2. However, this subcase is not relevant in the present proof because its vertices \(u\) and \(v\) are not vertices of \(\Lambda_{n}\), and hence we need not consider them here. Finally, by Proposition 4.1(ii), \(|\Theta_{1}(\Lambda_{n})|=|\Theta_{n}(\Lambda_{n})|=F_{n-1}\), hence \(\mathrm{gp_{e}}(\Lambda_{n})\geq|\Theta_{1}(\Lambda_{n})|+|\Theta_{n}(\Lambda_{n})|=2F_{n-1}\). \(\square\)
By Theorem 4.6, \(\mathrm{gp_{e}}(\Lambda_{5})\geq 6\). The following result then comes as a surprise.
**Proposition 4.7**: \(\mathrm{gp_{e}}(\Lambda_{5})=7\)_._
**Proof.** Let \(X\) be an arbitrary edge general position set of \(\Lambda_{5}\) and let \(u=00000\). Let \(F=\{e_{1},e_{2},e_{3},e_{4},e_{5}\}\) be the set of the edges incident to \(u\), where \(e_{1}=\{u,10000\}\), \(e_{2}=\{u,01000\}\), \(e_{3}=\{u,00100\}\), \(e_{4}=\{u,00010\}\), and \(e_{5}=\{u,00001\}\). See Fig. 5.
Since \(F\) is an edge general position set, we consider the following cases.
**Case 1**: \(|X\cap F|=5\).
In this case no edge from \(E(\Lambda_{5})\backslash F\) can be added to \(X\), so \(|X|=5\).
Figure 5: \(\Lambda_{5}\) and its largest edge general position set
**Case 2**: \(|X\cap F|=4\).
We may without loss of generality assume that \(X\cap F=\{e_{1},e_{2},e_{3},e_{5}\}\). Then only the edges \(\{01010,00010\}\) and \(\{10010,00010\}\) can be added to \(X\), hence in this case we have \(|X|\leq 6\).
**Case 3**: \(|X\cap F|=3\).
By the symmetry of \(\Lambda_{5}\) it suffices to distinguish the following two subcases.
(i) \(X\) contains three consecutive edges, say \(X\cap F=\{e_{1},e_{3},e_{5}\}\). Then at most two edges of \(E(\Lambda_{5})\backslash F\) can be added to \(X\).
(ii) \(X\) does not contain three consecutive edges, say \(X\cap F=\{e_{1},e_{2},e_{3}\}\). Then four edges of \(E(\Lambda_{5})\backslash F\) can be added to \(X\) as shown in the right-hand side of Fig. 5. So in this case \(|X|=7\).
**Case 4**: \(|X\cap F|=2\).
In this case, no matter whether the edges from \(X\cap F\) are consecutive or not, we can easily check that at most four edges of \(E(\Lambda_{5})\backslash F\) can be added to \(X\).
**Case 5**: \(|X\cap F|=1\).
We may assume that \(X\cap F=\{e_{1}\}\). Then considering the three subcases based on whether \(\{10000,10010\}\) and \(\{10000,10100\}\) lie in \(X\), we get that in every case \(|X|\leq 6\).
**Case 6:**\(|X\cap F|=0\).
In this case we observe that at most every second edge from the outer 10-cycle can lie in \(X\), hence \(|X|\leq 5\). \(\Box\)
Note that the proof of Proposition 4.7 also implies that a largest edge general position set of \(\Lambda_{5}\) is unique up to symmetry.
## Acknowledgements
This work has been supported by TUBITAK and the Slovenian Research Agency under grant numbers 122N184 and BI-TR/22-24-20, respectively. Sandi Klavzar also acknowledges the financial support from the Slovenian Research Agency (research core funding P1-0297 and projects J1-2452 and N1-0285).
## Declaration of interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Data availability
Our manuscript has no associated data.
|
2305.16894 | Robustness of Multi-Source MT to Transcription Errors | Automatic speech translation is sensitive to speech recognition errors, but
in a multilingual scenario, the same content may be available in various
languages via simultaneous interpreting, dubbing or subtitling. In this paper,
we hypothesize that leveraging multiple sources will improve translation
quality if the sources complement one another in terms of correct information
they contain. To this end, we first show that on a 10-hour ESIC corpus, the ASR
errors in the original English speech and its simultaneous interpreting into
German and Czech are mutually independent. We then use two sources, English and
German, in a multi-source setting for translation into Czech to establish its
robustness to ASR errors. Furthermore, we observe this robustness when
translating both noisy sources together in a simultaneous translation setting.
Our results show that multi-source neural machine translation has the potential
to be useful in a real-time simultaneous translation setting, thereby
motivating further investigation in this area. | Dominik Macháček, Peter Polák, Ondřej Bojar, Raj Dabre | 2023-05-26T12:54:16Z | http://arxiv.org/abs/2305.16894v1 | # Robustness of Multi-Source MT to Transcription Errors
###### Abstract
Automatic speech translation is sensitive to speech recognition errors, but in a multilingual scenario, the same content may be available in various languages via simultaneous interpreting, dubbing or subtitling. In this paper, we hypothesize that leveraging multiple sources will improve translation quality if the sources complement one another in terms of correct information they contain. To this end, we first show that on a 10-hour ESIC corpus, the ASR errors in the original English speech and its simultaneous interpreting into German and Czech are mutually independent. We then use two sources, English and German, in a multi-source setting for translation into Czech to establish its robustness to ASR errors. Furthermore, we observe this robustness when translating both noisy sources together in a simultaneous translation setting. Our results show that multi-source neural machine translation has the potential to be useful in a real-time simultaneous translation setting, thereby motivating further investigation in this area.
National Institute of Information and Communications Technology, Kyoto, Japan\({}^{2}\)
{machacek,polak,bojar}@ufal.mff.cuni.cz, [email protected]
## 1 Introduction
Speech translation (ST) suffers from automatic speech recognition (ASR) errors, especially in challenging conditions such as non-native language speakers, background noise, named entities and specialized vocabulary usage Machacek et al. (2019); Gaido et al. (2021, 2022); Anastasopoulos et al. (2022). ASR errors negatively impact translation quality, via the compounding of speech recognition and translation errors Ruiz et al. (2017); Sperber and Paulik (2020), thereby limiting the application of automatic speech translation in realistic settings. Fortunately, there are multilingual settings where a source is simultaneously or consecutively interpreted into multiple languages. Many documents are also dubbed or subtitled in offline mode. Such multilingual versions are produced either by human interpreters in real time, or offline in the form of dubbing or subtitling. In a situation where the same sentence is available in multiple languages, multi-source machine translation (MT) significantly improves translation quality, especially when the two sources used separately do not yield high quality translations Dabre et al. (2018); Zoph and Knight (2016); Nishimura et al. (2018).
Although not yet clearly verified, multi-source MT could be useful in settings where the sources complement each other. In other words, challenges in translation posed by using each source should be independent of one another. Given that ASR is noisy and that multiple sources can help overcome limitations of individual sources, this paper asks the following question: "Can multi-source MT be leveraged for speech translation in a multilingual setting where an original source transcription and its simultaneously interpreted transcription are available?" We decompose this question into three parts where we hypothesize that (a) ASR errors in the original and interpreted transcripts are independent, which makes them complementary, (b) multi-source MT is robust to transcription errors present in individual sources, and (c) the robustness of multi-source MT continues to hold in a simultaneous translation setting. We address each question in Sections 3, 4, and 5, respectively.
To prove our hypotheses, firstly, we verify on the Europarl Simultaneous Interpreting Corpus (ESIC, Machacek et al., 2021) that original speech ASR and interpreted speech ASR are indeed complementary in terms of errors. Secondly, we simulate transcription errors in a full sentence multi-source MT setting for English and German to Czech translation. We clearly show that when both sources are noisy, using them together leads to significant improvements when compared to using them individually, in contrast to drops in quality when at least one of the sources is clean. For example, on ESIC test set with 15% WER noise in English source and 10% WER noise in German source,
multi-sourcing scores 0.9 BLEU points higher than using the English source alone. Finally, we use both sources in a simultaneous translation setting and show that multi-source MT continues to be robust to transcription errors.
Our findings show that multi-source MT has strong potential in a simultaneous translation setting where multiple sources are available via ASR or interpreted ASR. We note that our current analysis is limited to the case where the multiple sources are aligned and hence available at the same time. This is a setting of, e.g., dubbed and subtitled videos where we would want to consider additional target languages. In simultaneous settings where one source is available with a delay, the synchronization of the sources would be a considerable problem, which we leave for future research.
## 2 Related Work
This paper mainly focuses on ASR errors, multilingual multi-source translation and simultaneous translation.
**ASR errors** often propagate to MT in cascaded ST systems and in a real time translation setting where ASR systems are used. It is a major issue that affects translation quality. Martucci et al. (2021) propose a method to tune the MT on the training data with artificial noise that mimics ASR errors, via a unigram "lexical noise model" learned on automatic-gold transcript pairs. Other authors propose similar methods for training (Sperber et al., 2017; Di Gangi et al., 2019; Xue et al., 2020; Serai et al., 2022). However, in this work, rather than mimicking ASR noise during training, we complement a noisy source with another whose ASR errors are provably independent of the first. Specifically, we use the lexical noising of Martucci et al. (2021) to simulate noise in multiple source languages and show the robustness of MT models, especially in a multi-source setting.
**Multilingual MT** (Dabre et al., 2020) has been shown to improve translation quality in a variety of situations. In particular, multi-source machine translation has high potential for improving translation quality, but has been relatively underexplored. In the context of multi-source text-to-text translation, Zoph and Knight (2016) and Dabre et al. (2018) showed that leveraging the same sentence in different languages improves translation quality as the two sources are expected to complement hard-to-translate phenomena in the other source. Although this approach requires multi-parallel sentence-aligned data, the missing sources can be obtained by MT (Nishimura et al., 2018) or via simultaneous interpretation. Rather than training a multi-source model, Firat et al. (2016) propose "late averaging", which needs multilingual models trained on pairwise bilingual data, which we also focus on when evaluating multi-source models. Late averaging is akin to ensembling via logits averaging, but with source sentences in different languages. These works do not consider transcription errors which are ubiquitous in speech translation, an aspect this paper focuses on.
**Multi-sourcing** in simultaneous translation settings has not been extensively explored. Dabre et al. (2021) have explored simultaneous multi-pivot translation where a source is translated into a target language via multiple pivot languages, where the pivot languages are translated using multi-source translation. Unlike them, we consider only one pivot language which is interpreted from the source and then use it together with the source to show that the translation quality into the target language improves. Additionally, they do not consider the effect of transcription noise on translation, which we do. Simultaneous translation approaches such as wait-\(k\) (Ma et al., 2019) and Local Agreement (LA-\(n\), Polak et al., 2022) are commonly used, and we use the latter for our experiment.
An alternative multi-sourcing approach is to select and use only one source. Machacek et al. (2021) provide an analysis of using either the original, or its simultaneously interpreted equivalent as a source for simultaneous ST. Interpreting is delayed, but shorter and simpler than translationese. Interpreters also segment their speech to sentences differently than the original speakers, so it is not easy to align segments. In any case, selecting sources will involve additional effort and thus we consider using multiple sources together to be a more effective approach. In this regard, multiple language sources, both as text and speech streams, could be used in ASR (Paulik et al., 2005; **?**) as well as in pre-neural MT and ST (Och and Ney, 2001; Paulik and Waibel, 2008; Khadivi and Ney, 2008). Miranda et al. (2013) use them for punctuation restoration. Kocmi et al. (2021) provide a broad analysis of benefits of multilingual MT.
## 3 Parallel Source-Interpreted ASRs are Independent
We assume a multi-source setting with the original speech and its simultaneously interpreted equivalent as the two sources will improve robustness to ASR errors if the errors in the two source streams complement each other. This is not obvious because, on the one hand, the ASRs work independently where they are deployed for different languages, trained on different data, and the processing is fully independent. On the other hand, the content of the speeches is identical. Interpreters' speech pacing also depends on the original speaker, and it may influence the quality of both ASRs the same way. Therefore, in this section, we analyze the dependency of ASR errors in the source and interpreter, on 10-hour ESIC corpus Machacek et al. (2021) to prove that the ASR errors are indeed independent.
**Methodology.** First, we ran ASR on the speech of the English original speakers and of their interpreters into Czech and German. For English, we used the low-latency neural ASR by Nguyen et al. (2021). For German, we used an older hybrid HMM-DNN model trained using the Janus Recognition Toolkit, which features a single-pass decoder (Cho et al., 2013). For Czech, we used a Kaldi (Povey et al., 2011) HMM-DNN model trained on Czech Parliament data (Kratochvil et al., 2020). Table 1 summarizes the transcription quality on ESIC, showing that the quality is low, but to the best of our knowledge it is the best available for this domain.
We then re-used the word alignments of the gold transcripts between the original and the interpretations, as described in Machacek et al. (2021). 38% of tokens were aligned between English and the Czech interpretation, and 40% between English and German; see Table 2. These relatively low proportions may be caused by the characteristics of the language pair (e.g. compound words in German vs. multi-word expressions in English), by features of interpreting (non-verbatim translation, shortening), and by errors in automatic alignment. We only analyzed the aligned tokens further. Since many tokens remain in each of the two 5-hour subsets of the corpus, we consider the further analysis valid.
Finally, we aligned gold and automatic transcripts using Levenshtein edit distance.1 We classified each token in the ASR transcript as transcribed correctly or not, both for source and interpretations.
Footnote 1: [https://pypi.org/project/edlib/](https://pypi.org/project/edlib/)
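The classification step can be illustrated with a small word-level sketch. The paper's alignment uses the edlib library; the function below is only a simplified, self-contained stand-in (a plain dynamic-programming Levenshtein alignment over tokens, with names of our own choosing) that marks each gold token as correctly transcribed or not, which is the kind of label entering the contingency analysis. The exact procedure used in the experiments may differ.

```python
def align_tokens(gold, asr):
    """Word-level Levenshtein alignment; returns, for each gold token,
    whether it is matched by an identical ASR token (True) or not (False)."""
    n, m = len(gold), len(asr)
    d = [[0] * (m + 1) for _ in range(n + 1)]          # edit-distance table
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if gold[i - 1] == asr[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,              # gold token deleted
                          d[i][j - 1] + 1,              # ASR token inserted
                          d[i - 1][j - 1] + cost)       # match / substitution
    correct = [False] * n
    i, j = n, m
    while i > 0 and j > 0:                              # backtrace one optimal path
        if gold[i - 1] == asr[j - 1] and d[i][j] == d[i - 1][j - 1]:
            correct[i - 1] = True
            i, j = i - 1, j - 1
        elif d[i][j] == d[i - 1][j - 1] + 1:
            i, j = i - 1, j - 1
        elif d[i][j] == d[i - 1][j] + 1:
            i -= 1
        else:
            j -= 1
    return correct

print(align_tokens("the cat sat on the mat".split(),
                   "the cat sad on mat".split()))
# [True, True, False, True, False, True]
```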
**Results.** We made a contingency table (Table 3) and ran a \(\chi^{2}\) test (Pearson, 1900) of statistical independence. The results show that the **parallel source and interpretation ASRs make errors independently** of each other with \(p<0.01\), for both pairs, English-Czech and English-German, and for both dev and test subsets.
We manually assessed the severity of the ASR errors and realized that most errors are only in spelling and fluency, and not in adequacy. We therefore conclude that our finding of independence of parallel ASRs may be valid only for ASRs of comparable quality to ours.
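The test itself is a one-liner with SciPy. The sketch below runs Pearson's \(\chi^{2}\) computation on the dev-set English-Czech counts copied from the contingency table; SciPy is an added dependency for illustration, not something prescribed by the experimental setup.

```python
from scipy.stats import chi2_contingency

# Dev-set counts for English original vs. Czech interpretation:
#                  Cs correct   Cs incorrect
# En correct          13815          1497
# En incorrect         1228           422
table = [[13815, 1497],
         [1228, 422]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
print("expected counts under independence:", expected.round(1).tolist())
```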
## 4 Multi-Source Speech Translation
Having established that ASR errors are independent, we now analyze whether multi-source neural machine translation (NMT) is robust to noisy sources. We focus on NMT for individual sentences, with gold sentence alignment of the sources and reference. It is a less realistic use-case than translating long speech documents without any sentence segmentation and alignment of the sources, but proving the robustness of multi-sourcing in this
\begin{table}
\begin{tabular}{l|c|c|c}
**subset** & **Cs interp.** & **De interp.** & **En original** \\ \hline
**dev** & 14.84 & 25.14 & 13.63 \\
**test** & 14.04 & 23.79 & 14.71 \\ \end{tabular}
\end{table}
Table 1: Transcription WER on ESIC. There are 191 and 179 documents in dev and test subsets. The scores are weighted by number of words in gold transcripts.
\begin{table}
\begin{tabular}{ll|cc|cc}
 & & \multicolumn{2}{c|}{**Cs int.**} & \multicolumn{2}{c}{**De int.**} \\
 & **En orig.** & **corr.** & **incorr.** & **corr.** & **incorr.** \\ \hline
**dev** & **corr.** & 13815 & 1497 & 7192 & 1561 \\
 & **incorr.** & 1228 & 422 & 633 & 307 \\ \hline
**test** & **corr.** & 14204 & 1655 & 7895 & 1638 \\
 & **incorr.** & 1344 & 420 & 692 & 336 \\ \end{tabular}
\end{table}
Table 2: Numbers of aligned tokens in gold transcripts between the original source (English [En]) and its interpretations (German [De] and Czech [Cs]), broken down by whether the corresponding ASR tokens are transcribed correctly (corr.) or incorrectly (incorr.).
setting paves the way for its application in long speech document translation.
**Data.** For training, we use data from OPUS (Tiedemann and Nygaard, 2004), aiming at a multi-way model with English and German on the source side and Czech as the target. We download all the data from OPUS, remove all sentences from IWSLT, WMT, ESIC and other test sets, filter them by language identification, and then process them with dual cross-entropy scoring (Junczys-Dowmunt, 2018) using the bilingual NMT models from Tiedemann and Thottingal (2020). We select the top 30 million sentences for each language pair as training data, to prevent overfitting to either pair. This is also near the threshold that Chen et al. (2021) showed to be optimal.
For NMT validation and evaluation, we use the "revised transcript and translations" from ESIC Machacek et al. (2021). These are the texts that were originally uttered in the European Parliament, transcribed, revised and normalized for reading and publication on the website, and then translated. They are analogous, but not identical, to the gold transcripts of the original and interpretations that we used in Section 3. In addition to the version published by Machacek et al. (2021), we properly align the sentences in all the three languages. Two documents were removed because they missed German translation. The corpus is of comparable size to a usual MT test set. See size statistics in Table 4.
For a contrastive evaluation, we use Newstest11 (Callison-Burch et al., 2011). It contains 3003 sentences in 5 languages: English, German, Czech, French and Spanish, the same amount in each. Newstest11 has references that were translated directly, not through an intermediate language. We also use three additional Czech references of Newstest11 that were translated from German (Bojar et al., 2012).
**Multi-Sourcing.** We convert Marian models to PyTorch to be used with the Hugging Face Transformers (Wolf et al., 2020) library, in which we implement late and early averaging. For both single- and multi-sourcing, we use greedy decoding because beam search support is not implemented with multi-source.
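As an illustration of late averaging, the sketch below implements greedy decoding that averages the next-token logits of two Marian checkpoints, each conditioned on its own source. It is only a minimal sketch of the idea, not the exact implementation used in the experiments: the checkpoint paths are placeholders, a shared tokenizer is assumed (consistent with the joint German/English source vocabulary described in the training details below), and handling of the language identification tokens is omitted.

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

# Hypothetical checkpoint paths; any two Marian checkpoints with a shared
# target vocabulary would do for this sketch.
ckpt_a, ckpt_b = "path/to/checkpoint-a", "path/to/checkpoint-b"
tok = MarianTokenizer.from_pretrained(ckpt_a)
model_a = MarianMTModel.from_pretrained(ckpt_a).eval()
model_b = MarianMTModel.from_pretrained(ckpt_b).eval()

@torch.no_grad()
def late_average_greedy(src_en, src_de, max_len=128):
    """Greedy decoding that averages next-token logits of two single-source runs."""
    enc_en = tok(src_en, return_tensors="pt")
    enc_de = tok(src_de, return_tensors="pt")
    tgt = torch.tensor([[model_a.config.decoder_start_token_id]])
    for _ in range(max_len):
        logits_en = model_a(**enc_en, decoder_input_ids=tgt).logits[:, -1, :]
        logits_de = model_b(**enc_de, decoder_input_ids=tgt).logits[:, -1, :]
        next_id = ((logits_en + logits_de) / 2).argmax(dim=-1, keepdim=True)
        tgt = torch.cat([tgt, next_id], dim=-1)
        if next_id.item() == tok.eos_token_id:
            break
    return tok.decode(tgt[0], skip_special_tokens=True)
```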
**Training details.** We train a multi-way NMT model using Marian (Junczys-Dowmunt et al., 2018) with English and German as sources, with language identification tokens, and Czech as the target. We use two separate SentencePiece (Kudo and Richardson, 2018) vocabularies, both of size 16 000. The source vocabulary is joint for German and English, and the target vocabulary is only for Czech. The model is a Transformer Base (6 layers, 512 embedding size, 8 self-attention heads, 2048 filter size) trained on 8 Quadro P5000 GPUs with 16 GB memory for 17 days, until convergence.
**Checkpoint selection.** We validate all checkpoints (every 1000 training steps, i.e. every 15 minutes) on two single sources (English and German) and two multi-sourcing options: early averaging, and late averaging of a single checkpoint with two sources. Furthermore, after the training ended, we selected the top 10 checkpoints that reached the highest BLEU scores for English and German single-source on the ESIC dev set. We evaluated all pairs of the top-performing checkpoints in the late-averaging multi-sourcing setup. The top-performing model from all validation and grid-search options was selected as the final model. It is late averaging with a pair of distinct checkpoints. We also use these two checkpoints for the single-source evaluation.
**Evaluation Metrics.** We estimate translation quality by BLEU (Papineni et al., 2002) and chrF2 (Popovic, 2016), calculated by sacreBLEU2 (Post, 2018). We also report the current state-of-the-art metric COMET3 (Rei et al., 2020), which achieves the highest correlation with direct assessment, a kind of human judgement (Mathur et al., 2020). However, COMET requires one source on the input and is not suitable for multi-source. Therefore, we report it twice (En/De COMET) with the two single sources. Note that En COMET scores assume English as source and Czech as target. Since ESIC is tri-parallel, even if the translation is obtained using German, or English and German multi-source, we only use the English source as the input to the COMET model. De COMET scores are computed similarly.
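For reference, BLEU and chrF2 can be computed with the sacrebleu package as in the short sketch below; the Czech strings are toy examples rather than data from ESIC, and the exact scoring configuration of the experiments may include further options.

```python
import sacrebleu

hyps = ["Toto je náš překlad .", "Druhá věta ."]
refs = [["Toto je referenční překlad .", "Druhá věta ."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hyps, refs)
chrf = sacrebleu.corpus_chrf(hyps, refs)   # default settings correspond to chrF2
print(bleu.score, chrf.score)
```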
\begin{table}
\begin{tabular}{l|c|c|c|c|c}
 & sent. & doc. & En words & De w. & Cs w. \\ \hline
dev & 2002 & 179 & 44866 & 43323 & 38347 \\
test & 1963 & 189 & 44273 & 42491 & 37695 \\ \end{tabular}
\end{table}
Table 4: Size statistics of tri-parallel sentence-aligned “revised translations” of ESIC Machacek et al. (2021). English is original, German and Czech are translations.
**Results with clean inputs.** Table 5 shows the results of multi-sourcing with clean inputs, without any speech recognition noise. One would be tempted to conclude that the translation from English is of a higher quality than the translation from German (e.g. 33 vs. 26 BLEU on ESIC dev set), but such a claim is risky. The metrics measure the match of the candidate translation with the reference sentence (and, in case of COMET, also with the source), and it is conceivable that the English served as the source for the human reference translation. The Czech reference thus may very well exhibit more traits of the English source than of the German source. While the chrF2 scores agree with BLEU, COMET scores seem to indicate that multi-sourcing is as good as, if not better than, using a single source. Since COMET is known to correlate with human judgements better than BLEU (Mathur et al., 2020), our results show that multi-sourcing is indeed a viable solution.
To further shed light on the impact of the source used for creating references, we evaluated the models with Newstest11 and computed the scores with three additional references that were translated only from German. The German single source achieves much higher BLEU than the English source (32.23 vs 16.62 BLEU), with multi-sourcing in between (22.47 BLEU). Similar trends are observed in chrF2 and COMET scores. This is the opposite of ESIC scores, where the reference was obtained from English. It shows that the traits of the source language such as word order, structure of clauses and terms are remarkable in automatic metrics when the reference is constructed from that source, but these effects may be negligible in human evaluation. Appendix C contains more details.
Finally, we consider a "balanced" scenario where an equal number of references comes from both sources and this shows similar scores for both single sources (23.40 vs 22.85 BLEU) with multi-sourcing outperforming them by 0.6 and 1.1 BLEU. We therefore conclude that our multi-source model should be well-prepared for content originating in any of the source languages, but the automatic evaluation metrics may not always capture this. Moving forward, we only use BLEU for simplicity.
### Modeling Transcription Noise
Although multi-sourcing English and German is not very beneficial when both sources are clean, we hypothesize that it could show benefits with noisy sources. Averaging two noisy sources can lead to cancelling the noise. Since ESIC contains tri-parallel sentence-aligned translations as texts and not speech, and since we want to evaluate different levels of ASR noise, and we do not have many ASRs, we generate the ASR errors artificially.
**Custom WER noise model.** We adopt the lexical noise model by Martucci et al. (2021) and modify it to create outputs with an arbitrary WER. The lexical noise model modifies the source by applying an insertion, deletion, substitution, or copy operation to each word, with probabilities \(p_{I}\), \(p_{D}\), and \(p_{S}\) for the first three operations (copy otherwise). The probabilities are learned from ASR and gold transcript pairs. It thus may learn to shuffle homonyms such as "eight" and "ate".
In the original lexical noise model by Martucci et al. (2021), the target WER is bound to the performance of the given ASR system on which it is trained, and can not be changed. WER is defined as
\begin{table}
\begin{tabular}{c|c|c c c} \multicolumn{2}{c|}{**Set**} & \multicolumn{2}{c}{**Model**} \\ ref. translation: & \multicolumn{2}{c}{} & \multicolumn{2}{c}{En} & \multicolumn{2}{c}{De+En} \\ \hline \hline \multirow{3}{*}{**ESIC dev**} & BLEU & \({}^{\star}\)**33.31** & 26.13 & \({}^{\star}\)31.90 \\ & chrF2 & \({}^{\star}\)**60.17** & 54.00 & \({}^{\star}\)58.59 \\ & En\(\rightarrow\)Cs & En COMET & \({}^{\star}\)**0.920** & 0.860 & \({}^{\star}\)0.919 \\ & De COMET & \({}^{\star}\)1.007 & 0.994 & \({}^{\star}\)**1.022** \\ \hline \multirow{3}{*}{**ESIC test**} & BLEU & \({}^{\star}\)**33.63** & 27.99 & \({}^{\star}\)32.57 \\ & chrF2 & \({}^{\star}\)**59.58** & 54.75 & \({}^{\star}\)58.63 \\ & En\(\rightarrow\)Cs & En COMET & \({}^{\star}\)0.906 & 0.871 & \({}^{\star}\)**0.912** \\ & De COMET & 0.994 & \({}^{\star}\)1.006 & **1.018** \\ \hline \hline \multirow{3}{*}{**news11** 3\(\times\)\{De\(\rightarrow\)Cs\} & BLEU & 16.62 & **23.23** & 24.47 \\ & chrF2 & \({}^{\star}\)44.84 & **58.81** & 49.72 \\ & chrF2 & \({}^{\star}\)1.018 & \({}^{\star}\)0.38 & \({}^{\star}\)0.27 \\ & En COMET & 0.528 & **0.823** & 0.652 \\ & De COMET & 0.600 & **0.967** & 0.757 \\ & \({}^{\star}\)0.002 & \({}^{\star}\)0.004 & \({}^{\star}\)0.002 \\ \hline \multirow{3}{*}{**news11** \(\{\{\)De,En,Fr,Es\(\}\rightarrow\)Cs\(\}\) & BLEU & \({}^{\star}\)23.40 & 22.85 & **23.96** \\ & chrF2 & \({}^{\star}\)**51.00** & 50.27 & \({}^{\star}\)50.83 \\ \cline{1-1} & En COMET & 0.627 & \({}^{\star}\)**0.674** & \({}^{\star}\)0.659 \\ \cline{1-1} & De COMET & 0.700 & **0.832** & 0.766 \\ \end{tabular}
\end{table}
Table 5: Evaluation scores with clean inputs (no ASR noise), machine-translated into Czech with single-sourcing English (En) or German (De), or multi-sourcing (De+En), on ESIC and Newstest11 (news11). Newstest is evaluated on a balanced reference that has origin in 5 languages ({De,En,Fr,Es}\(\rightarrow\)Cs translations and Cs original; 600 sentences each), and 3-times with additional references that were translated from German (“3\(\times\){De\(\rightarrow\)Cs}”). We report avg\(\pm\)stddev for them. “En COMET” and “De COMET” are run with English and German source, respectively. Maximum scores are in bold. The symbol \({}^{\star}\) means that there is statistically significant difference \((p<0.05)\) from all the lower scores in the same row, \({}^{\times}\) means no significance (\(t\)-test for COMET, paired bootstrap resampling for BLEU and chrF2).
the number of incorrect words in the ASR transcript divided by the number of words in the gold transcript. The errors are either insertions, deletions, or substitutions. In the lexical noise model, insertion is applied independently of the other operations. Therefore, we can decompose WER into the sum of the insertion rate and the rate of deletions or substitutions.
In the lexical noise model, the insertion rate equals the expected number of insertions per gold word. Since the probability of not inserting is \(1-p_{I}\), the expected number of insertions before "no insertion" succeeds is \(\frac{p_{I}}{1-p_{I}}\). It is the mean of a geometric distribution with \(p=1-p_{I}\).
The rate of deletions and substitutions is \(p_{D}+(1-p_{D})p_{S}\), where \(p_{D}\) is the probability of deletion. The words that were not deleted can be substituted, which accounts for a fraction \((1-p_{D})p_{S}\) of all words. In summary, the WER of the original model is
\[\text{WER}=\frac{p_{I}}{1-p_{I}}+p_{D}+(1-p_{D})p_{S}, \tag{1}\]
To get a custom target WER, we rescale the learned probabilities by a constant \(c\):
\[\text{WER}_{\text{desired}}=\frac{cp_{I}}{1-cp_{I}}+cp_{D}+(1-cp_{D})cp_{S}. \tag{2}\]
We simplify the equation above to
\[\text{WER}_{\text{desired}}\approx cp_{I}+cp_{D}+(1-cp_{D})cp_{S}. \tag{3}\]
It leads to a quadratic function where \(c\) can be found easily. Since we work with probabilities, we select the smallest non-negative root as the solution. We release our implementation online.4
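A minimal sketch of this rescaling step is given below: it solves the quadratic implied by Eq. (3) for \(c\) and returns the rescaled probabilities. The probability values in the example call are invented for illustration, and the released implementation may differ in details.

```python
import math

def rescale_noise_probs(p_ins, p_del, p_sub, target_wer):
    """Find c with c*p_ins + c*p_del + (1 - c*p_del)*c*p_sub = target_wer (Eq. 3)
    and return the rescaled probabilities (assumes the target WER is attainable)."""
    # Rearranged: (p_del * p_sub) c^2 - (p_ins + p_del + p_sub) c + target_wer = 0
    a = p_del * p_sub
    b = -(p_ins + p_del + p_sub)
    k = target_wer
    if a == 0:                                   # degenerate (linear) case
        c = -k / b
    else:
        disc = b * b - 4 * a * k
        roots = [(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]
        c = min(r for r in roots if r >= 0)      # smallest non-negative root
    return c * p_ins, c * p_del, c * p_sub

# e.g. probabilities learned from ASR/gold pairs, rescaled to a 15% target WER
print(rescale_noise_probs(0.02, 0.05, 0.08, 0.15))
```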
**Training the noise model.** For training the noise model, we utilize VoxPopuli (Wang et al., 2021) to retrieve around 100,000 audio and gold transcript sentences in English and 60,000 in German. They are from the same domain as ESIC; both corpora come from the European Parliament. We processed the audio with NVidia NeMo CTC ASRs5 (Kuchaiev et al., 2019; Gulati et al., 2020). Then we trained the rules of the lexical noise model and applied them to the source data. Since the result is deterministic given the random seed of the lexical noise model, we perform multi-sourcing using three different seeds and report average BLEU scores with standard deviation.
Footnote 5: stlt_de_quartznet15x5 and stlt_en_conformer_ctc_large from [https://catalog.ngc.nvidia.com/models](https://catalog.ngc.nvidia.com/models)
**Results with transcription noise.** Table 6 summarizes the BLEU scores of two-source MT with different levels of transcription noise in each of the sources on two sets: ESIC dev with the reference translated from English, and Newstest11 with the balanced reference. Appendix A contains the corresponding chrF2 scores. Table 7 shows the results on the ESIC test set for the settings where multi-source models achieved the highest improvement due to noisy inputs.
In Table 6, on both sets, we observe that the less noisy single source achieves higher BLEU than the other single source. When the difference in noise levels between the sources is small (close to diagonal in the table), then multi-sourcing reaches slightly higher BLEU than single sources. In case of balanced Newstest11, this area matches the diagonal. In case of ESIC with English original source and reference translated from English, the area of multi-source outperforming single-source is shifted. This tendency is reflected in the test set results in Table 7 as well. Only when the German source is less noisy than the English one, it does improve BLEU in multi-sourcing. We explain it by the discrepancy of source languages for MT and reference that affect BLEU the same way as in offline mode in Section 4. On Newstest11, with the references translated from German, we expect the reverse.
We also observe expected behavior that the more noise, the lower BLEU in all setups. Compare e.g. 33.3 BLEU with zero noise and 12.1 with 40% WER in both sources. With very large noise, it is possible that neither option would be usable. In ESIC dev, e.g. when English WER is 20%, we observe large span, between 5 and 25% WER in German, where multi-sourcing outperforms single source at least by several hundreths of BLEU. This span in Newstest11 is much more narrow, only 20 to 25% WER in German. We hypothesize that it may be caused by the domain difference. The lexical noise model is trained on Europarl. In news domain, there may be fewer words for substitution, so the noise consists more of deletions and insertions, and it might be more harmful for MT in combination of two sources. However, multi-sourcing appears to be robust to ASR errors regardless of whether we have one or both sources as original.
## 5 Simultaneous Multi-Source
In the previous section, we experimented with offline translation with artificial ASR noise and showed that multi-source models are indeed robust to noise. However, one important use case of speech translation is in a real time setting where simultaneous MT is used. We therefore evaluate the robustness of multi-source models in a simultaneous setting.
### Simultaneous Machine Translation
Simultaneous MT is a task that simulates one subtask of a technology that translates long-form monologue speech in real-time, or with the lowest possible latency. There exist two main approaches to simultaneous MT: streaming and re-translating (Niehues et al., 2018; Arivazhagan et al., 2020). Re-translating systems generate preliminary translation hypotheses that can be updated. Both approaches have complementary benefits and drawbacks. In this paper, we focus on streaming.
We assume that simultaneous MT continuously receives an input text segmented to sentences, one token at a time, as produced by the speaker and upstream tasks. After reading each input token, the system can either produce one or more target tokens, or decide to read the next input token, e.g. to have more context for translation. The goal of simultaneous MT is to translate the input with high quality and low latency. Quality is measured on full sentences as in standard text-to-text MT, e.g. by BLEU. The standard latency measure of simultaneous MT is Average Lagging (AL, Ma et al., 2019). It is an average number of tokens behind an "optimal" policy that generates the target proportionally with reading the source.
Simultaneous MT can be created from standard text-to-text NMT by applying any simultaneous de
coding algorithm. However, it is recommendable first to adapt NMT, so it is inclined to translate consecutive sentence prefixes with the same target prefix. We use Local Agreement (LA-\(n\)) as a decoding algorithm. It achieved good performance by the best performing system (Polak et al., 2022) in the most recent IWSLT competition (Anastasopoulos et al., 2022). Local Agreement (LA-\(n\)) means that \(n\) consecutive updates must agree on a target prefix to commit and write. The last committed prefix is then forced as a prefix to decoding the next units. Agreement size \(n\) is a parameter that controls the latency.
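The commit policy itself is easy to state in code. The sketch below (our own minimal illustration, not the system's actual implementation) keeps the hypotheses produced after each source update and commits the longest common prefix on which the last \(n\) of them agree; forcing the committed prefix during decoding and all model-specific details are left out.

```python
def longest_common_prefix(seqs):
    prefix = []
    for tokens in zip(*seqs):
        if all(t == tokens[0] for t in tokens):
            prefix.append(tokens[0])
        else:
            break
    return prefix

class LocalAgreement:
    """LA-n policy: commit the longest target prefix on which the last n
    hypotheses agree; the committed prefix is then forced on later decoding."""
    def __init__(self, n):
        self.n = n
        self.history = []       # hypotheses produced after each source update
        self.committed = []     # tokens already committed (shown to the user)

    def step(self, hypothesis):
        """`hypothesis`: translation of the current source prefix, decoded
        with `self.committed` forced as the target prefix."""
        self.history.append(list(hypothesis))
        if len(self.history) < self.n:
            return []           # not enough updates to compare yet
        agreed = longest_common_prefix(self.history[-self.n:])
        new_tokens = agreed[len(self.committed):]
        self.committed = agreed
        return new_tokens

la = LocalAgreement(n=2)
for hyp in [["To"], ["To", "je"], ["To", "je", "ta"], ["To", "je", "ten", "problém"]]:
    print(la.step(hyp))
# prints: [], ['To'], ['je'], []
```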
### Creating Simultaneous MT Systems
In Section 4, we used multi-way models trained on full sentences, but in a simultaneous setting, these models will make mistakes when translating partial sentences using the LA-\(n\) approach. Therefore, our multi-way models should first be adapted for partial sentence translation. To this end, we used the multi-way English and German to Czech MT model as a base for simultaneous MT. We fine-tuned the last trained model checkpoint for stable translation on a 1:1 mix of incomplete sentence prefixes and full sentences, as in Niehues et al. (2018). For each source-target pair of the training data, we five times selected a prefix of 1 to 90% of the source and target characters, rounded to full words. Then, we ran training for 1 day on 1 GPU. We validated the BLEU score on the ESIC dev set and Normalized Erasure (NE, Arivazhagan et al., 2020) on all prefixes of the first 65 sentences (around 1500 words) of the ESIC dev set. We ran fine-tuning with multi-way data for English and German as source languages, and for bilingual English-Czech and German-Czech MT.
We stopped training after one day when there were no improvements in stability or quality. Then, we selected one checkpoint for English and one for German that reached acceptable quality and stability values. See Appendix B for details.
### Multi-Sourcing in Simultaneous MT
We use late averaging of the two selected checkpoints for multi-sourcing in simultaneous MT. The only aspects of multi-sourcing in simultaneous mode that differ from single-source or non-simultaneous mode are synchronization of the sources and how to count Average Lagging.
**Synchronization.** In a realistic use-case, it is necessary to synchronize the original speech and the simultaneous interpreting. However, we leave this for further work, as our goal is to inspect the limits of multi-sourcing. Therefore, we simulate a case where the sources are optimally synchronized, aligned and parallel at the sentence level.
In multi-source mode, we sort all sentence pre
Figure 1: Single-sourcing vs multi-sourcing with different level of artificial ASR noise of the sources (% WER) in simultaneous mode on ESIC dev set. The results are depicted as quality (BLEU) and latency (AL) trade-off of the candidate systems. The plots highlighted by gray background show noise levels where multi-sourcing (En+De, blue line) outperforms both single sources in BLEU at least for AL\(>\)5.5.
fixes by proportion of the character length to the sentence length. Each "Read" operation of the multi-source system then receives two prefixes in two languages. One of them is updated by one new token. Every such update is counted to local agreement size. We note that there are other strategies, e.g. count only English source updates to LA-\(n\), but in this paper we have other goal than searching for the best strategy.
**AL in multi-source.** In the multi-source setup, we only count Read operations of the English source towards the AL calculations that we report, and not those of the German source, because the sources are simultaneous. Counting only German tokens instead differs negligibly, by approximately 0.1 tokens.
### Simultaneous Multi-Source with Artificial Noise
We want to compare the multi-sourcing model to single-sourcing under the artificial ASR noise model of Section 4.1. We evaluate each system on the latency levels given by local agreement sizes 2, 5, 10 and 15. Since each evaluation takes approximately 5 hours on 2000 sentences, we report only one run, and not an average and deviation over multiple randomly noised inputs.
The results on ESIC dev set are in Figure 1. We can observe the same trends as in the offline case. The single source that is noised less achieves higher BLEU. Multi-sourcing outperforms both single sources when both noise levels are similar and when the English one is lower, e.g. in the case with 10% WER in German and 20% WER in English. We explain it again by the fact that the Czech reference is translated from English, and not German.
Furthermore, on both ESIC and Newstest11 (Figure 1) we observe that multi-sourcing performs worse in the low latency modes, i.e. in AL\(<\)5 that roughly corresponds to LA\(<\)5. We assume that the proportional synchronization of the two sources is often inaccurate and may confuse late averaging. In higher latency modes, the synchronization noise at the end of input may be lowered by local agreement. Having validated the multi-source NMT is robust to ASR errors in both full sentence and simultaneous settings, we have paved the way for harder settings where multilingual interpretations of the original source available with different amounts of delay can be used for translation.
## 6 Conclusion
We have investigated the robustness of multi-source NMT to transcription errors in order to motivate its use in settings where ASRs for the original speech and its simultaneously interpreted equivalent are available. To this end, we first analyzed the 10-hour ESIC corpus and documented that the ASR errors in the two sources are indeed independent, indicating their complementary nature. We then simulated transcription noise for English and German when translating into Czech in single and multi-source NMT settings and observed that using multiple noisy sources is significantly better than individual noisy sources. We then repeated experiments in a simultaneous translation setting and showed that multi-source translation continues to be robust to noise. This robustness of multi-source NMT to noise motivates future research into simultaneous multi-source speech translation, where one source is available with a delay. We will also consider training models with simulated ASR errors to further increase their robustness, especially in multi-source settings.
## 7 Limitations
Although we have shown the robustness of multi-source NMT to transcription errors in a full-sentence and simultaneous settings, our work has the following limitations:
* Our work does not address the case where the additional source, typically interpreted, is available after a delay. A delayed source may reduce the gains seen by multi-sourcing.
* We have only focused on the Local Agreement (LA-\(n\)) approach for simultaneous translation and exploration of other simultaneous approaches such as wait-\(k\) remains.
* Human evaluation of translations is pending.
* Evaluation on other language pairs is pending.
### Acknowledgements
The research was partially supported by the grants 19-26934X (NEUREM3) of the Czech Science Foundation, "Grant Schemes at CU" (reg. no. CZ.02.2.69/0.0/0.0/19_073/0016935), 398120 of the Grant Agency of Charles University, and SVV project number 260 698. Part of the work was done during an internship at NICT. |
2301.04967 | Shadows of Kerr-Vaidya-like black holes | In this work, we study the shadow boundary curves of rotating time-dependent
black hole solutions which have well-defined Kerr and Vaidya limits. These
solutions are constructed by applying the Newman-Janis algorithm to a
spherically symmetric seed metric conformal to the Vaidya solution with a mass
function that is linear in Eddington-Finkelstein coordinates. Equipped with a
conformal Killing vector field, this class of solution exhibits separability of
null geodesics, thus allowing one to develop an analytic formula for the
boundary curve of its shadow. We find a simple power law describing the
dependence of the mean radius and asymmetry factor of the shadow on the
accretion rate. Applicability of our model to recent Event Horizon Telescope
observations of M87${}^*$ and Sgr A${}^*$ is also discussed. | H. S. Tan | 2023-01-12T12:21:28Z | http://arxiv.org/abs/2301.04967v4 | # Shadows of Kerr-Vaidya-like black holes
###### Abstract
In this work, we study the shadow boundary curves of rotating time-dependent black hole solutions which have well-defined Kerr and Vaidya limits. These solutions are constructed by applying the Newman-Janis algorithm to a spherically symmetric seed metric conformal to the Vaidya solution with a mass function that is linear in Eddington-Finkelstein coordinates. Equipped with a conformal Killing vector field, this class of solution exhibits separability of null geodesics, thus allowing one to develop an analytic formula for the boundary curve of its shadow. We find a simple power law describing the dependence of the mean radius and asymmetry factor of the shadow on the accretion rate. Applicability of our model to recent Event Horizon Telescope observations of M87\({}^{*}\) and Sgr A\({}^{*}\) is also discussed.
###### Contents
* 1 Introduction
* 2 A family of rotating Vaidya-like black hole solutions
* 2.1 Conformal factors, coordinate charts and the Newman-Janis algorithm
* 2.2 Horizons in the solution parameter space
* 3 Null geodesics, photon spheres and shadow formulas
* 3.1 On null geodesics
* 3.2 Shadow formulas from photon region
* 4 Portraits of the shadow
* 4.1 Scaling laws for variation of \(\overline{R}\) and \(\mathcal{A}\) with \(\mu\)
* 4.2 On the shadows of M87\({}^{*}\) and Sagittarius A\({}^{*}\) as observed by EHT
* 5 Discussion
* A Global geometry and matching spacetimes via junction conditions
* B On the reference frame of the shadow observer
* B.1 In the limit of \(a=0\)
* B.2 Observers in the \(\{v,w,\theta,\phi\}\) chart and aberration formulas
## 1 Introduction
Recent Event Horizon Telescope (EHT) observations of horizon-scale shadow images of M87\({}^{*}\)[1] and Sgr A\({}^{*}\)[2] have furnished not only a direct visual evidence of black holes, but have also led to many new constraints on various potential deviations from General Relativity. The boundary curve of the black hole shadow emerges from light rays that spiral asymptotically from the photon region demarcating the borderline between light rays that will eventually be captured by the black hole and those that escape to infinity [3].1 The geometry of this boundary curve depends on the background metric which could thus be probed by EHT observations [7, 8, 9, 10, 11, 12]. For example, the shadow geometry of Sgr\({}^{*}\) has been used to exclude the central object being a Reissner-Nordstrom-type naked singularity or a traversable Misner-Thorne wormhole [8].
Footnote 1: This curve is termed as the ‘critical curve’ by Gralla et al. in [4] and ‘apparent boundary’ by Bardeen in [5]. See for example the review of [6] for an extensive discussion of basic ideas and history.
Surrounding the black hole shadow is an emission ring whose structure is sensitive to a rich set of astrophysical phenomena, such as radiative transfer, that characterize the matter-energy accretion process. Typically, general relativistic magnetohydrodynamic (GRMHD) simulations are used to model the accretion flow processes [7, 8, 13, 14]. For the EHT experiments, they have revealed the emission ring properties to be consistent with a number of such accretion flow models upon the background of a Kerr black hole [7, 8]. The spacetime metric in these simulations is assumed to be purely Kerr spacetime throughout, with the energy-momentum tensor capturing the magnetic field and average plasma properties [13, 15]. In [16, 17, 18], it was noted that the shadow size and shape are hardly influenced by the accretion details, and thus serve as a pristine signature of spacetime geometry. An implicit assumption is that the backreaction of the GRMHD energy-momentum tensor on the metric has a negligible influence on the shadow and could thus be ignored in deriving its geometry.
In this paper, we study the shadow boundary curves of a class of rotating time-dependent black hole solutions of which metric is a deformation of the Kerr solution described by a small dimensionless parameter \(\mu\). In the limit of vanishing spin, our spacetime reduces to a well-known model of spherically accreting black hole - the Vaidya spacetime with a mass function \(\mu v\), with \(v\) being an ingoing Eddington-Finkelstein coordinate and \(\mu\) being the mass accretion rate constant in natural units. The latter solution was studied most recently in [19] where the authors derived and examined its shadow characteristics analytically. Most crucially, an analytic treatment of the shadow was possible by virtue of the existence of a Carter constant leading to separability of its null geodesic equations. This is related to a conformal Killing symmetry associated with the linear mass function, and hence its choice, for it enables the authors of [19] to derive explicit formulas for the radius of the photon sphere and the shadow angular diameter. One main motivation of our work here is to seek a rotating generalization of the analytic treatment in [19]. This would serve as a simple model of a backreacted Kerr-like geometry that is accreting mass, and for which an analytic derivation of its shadow geometry is possible. For readers familiar with exact solutions in GR, a natural candidate would be the Kerr-Vaidya solution [20] which can be obtained by replacing the constant Kerr mass with a variable mass function in the original Kerr line element expressed in Eddington-Finkelstein coordinates. Unfortunately, as we'll elaborate later, this solution does not offer any additional Carter constant that could lead to its null geodesic equations being separable.
We construct our solutions by applying the Newman-Janis algorithm [21] to a spherically symmetric seed metric conformal to the Vaidya solution with the mass function that is linear in Eddington-Finkelstein coordinates. Fortunately, this solution-generating technique turns out to preserve the conformal Killing vector field in the original Vaidya metric, leading to separability of null geodesics, and ultimately allows us to develop an analytic formula for the boundary curve of its shadow. The solution space is parametrized by \(\{a,\mu,M_{s}\}\) where \(\{a,M_{s}\}\) are the spin and mass parameters of Kerr spacetime in the vanishing \(\mu\) limit. Like the Kerr solution, there are regions in the moduli space which do not pertain to black holes. Motivated by phenomenological interests, we focus on the regime of parameters where our solution has event horizons like those of Kerr, with the conformal Killing horizon at a large distance away from the shadow observer and the outer horizon. Thus, our solution serves as a simple model of an accreting Kerr-like geometry not globally but for a finite spatial domain defined by the interior of the conformal Killing horizon. Generically, the shadow geometry is sensitive to the choice of coordinates. We work in a chart which reduces to the Kerr spacetime in Boyer-Lindquist coordinates in the limit \(\mu=0\), and the Vaidya spacetime in Eddington-Finkelstein-like coordinates in the limit \(a=0\). Correspondingly, we verified that our shadow formulas reduce consistently to those of Kerr [22] and Vaidya [19] under these limits.
As reviewed in for example [6], analytic derivations in cases that allow them complement numerical studies of shadow geometry in general. For example, for Schwarzschild spacetime, the angular diameter of its shadow is \(\sim 3\sqrt{3}M_{s}/R_{o}\) for a distant observer located at the radial coordinate \(R_{o}\)[23]. This numerical value has turned out to be very useful as a guide in the analysis of shadow size and shape in EHT's recent observations [1, 2]. For our shadow analysis here, we find a simple power law describing the dependence of the mean radius and asymmetry factor of the shadow on
the accretion rate. The latter describes the departure of the shadow from circularity and has been constrained in M87\({}^{*}\) studies by EHT team [1]. When applied to the parameters of M87\({}^{*}\) and Sgr A\({}^{*}\), our analysis of shadow geometry appears to indicate that the effect of \(\mu\) is very small, and thus provides support for the assumption of using the pure Kerr metric throughout in GRMHD simulations. Our results, in addition, yield an empirical formula that parametrizes the variation of mean radius and asymmetry factor with accretion rate explicitly, and can thus be used to anticipate when backreaction of accretion on the metric may be significant.
Our paper is organized as follows. In Section 2, we present the construction of a class of Kerr-Vaidya-like solutions and elaborate on some basic aspects of its geometry and moduli space, followed by a derivation of some analytical formulas for shadow geometry in Section 3. In Section 4, we present several visual plots of the shadow and examine how the mean radius and asymmetry factor of the shadows vary with various parameters. We also include a brief discussion on recent EHT observations of M87\({}^{*}\) and Sgr A\({}^{*}\) in relation to our model geometry. Finally, we end with some concluding remarks in Section 5. Appendix A presents an extension of our solution obtained by matching the spacetime at some cutoff distance to Kerr-like solutions that are asymptotically flat via Darmois-Israel junction conditions. In Appendix B, for completeness, we develop an aberration formula for observers in another reference frame which, in the zero spin limit, reduces to another class of observers discussed previously in [19] for Vaidya spacetime.
## 2 A family of rotating Vaidya-like black hole solutions
We begin with the Vaidya metric in the coordinatesii
Footnote ii: The unusual choice of the symbol \(w\) to denote radial distance for this line element is solely due to shortage of conventions for the many different radial coordinates that we’ll use throughout this paper.
\[ds^{2} = -\left(1-\frac{2m(v)}{w}\right)dv^{2}+2dvdw+w^{2}\left(d\theta^{ 2}+\sin^{2}\theta d\phi^{2}\right), \tag{1}\]
with the domains \(v\in(0,\infty),\,w\in(0,\infty),\theta\in(0,\pi),\phi\in(0,2\pi)\). We note that \(m(v)\) is a mass function that can be used to model a time-dependent black hole of which exterior is described by (1). The solution (1) solves the field equations in ordinary GR with the energy momentum tensor \(T^{\mu\nu}=m(v)K^{\mu}K^{\nu},K^{\nu}\partial_{\nu}=\partial_{w}\) which is typically interpreted as that of a null dust moving in the direction of decreasing \(w\), with the black hole accreting (radiating) mass if \(m^{\prime}(v)\) is positive (negative).
### Conformal factors, coordinate charts and the Newman-Janis algorithm
In this work, we restrict ourselves to the case where \(m(v)=\mu v\), where \(\mu\) is a positive constant. In this case, the geometry admits a conformal Killing vector field. To see this, we define \(\partial/\partial T\) as the conformal Killing vector and make a coordinate transformation as follows.
\[v=r_{0}e^{T/r_{0}},\qquad w=re^{T/r_{0}}, \tag{2}\]
where \(r_{0}\) is a positive constant with dimension of length. This brings (1) to
\[ds^{2}=e^{2T/r_{0}}\left(-\left(1-\frac{2\mu r_{0}}{r}-\frac{2r}{r_{0}} \right)dT^{2}+2dTdr+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2})\right), \tag{3}\]
with \(T\in(-\infty,\infty),r\in(0,\infty)\). In this form, the metric is conformal to a manifestly static spacetime which can be taken to generate a rotating solution via Newman-Janis algorithm. But first, we seek a temporal coordinate such that constant time slices are 3-dimensional spatial manifolds. Defining
\[T=t+\Upsilon(r),\qquad\Upsilon(r)=\int^{r}d\tilde{R}\ \left(1-\frac{2\mu r_{o}}{ \tilde{R}}-\frac{2\tilde{R}}{r_{0}}\right)^{-1}, \tag{4}\]
the line element then reads
\[ds^{2}=e^{\frac{2(t+\Upsilon(r))}{r_{0}}}\left(-\left(1-\frac{2\mathcal{M}(r)}{r}\right)dt^{2}+\left(1-\frac{2\mathcal{M}(r)}{r}\right)^{-1}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2})\right)\equiv\Omega^{2}(t,r)ds^{2}_{static}, \tag{5}\]
where \(t\in(-\infty,\infty)\) and
\[\mathcal{M}(r)\equiv\mu r_{0}+\frac{r^{2}}{r_{0}}.\]
If we restrict ourselves to the spacetime patch where the conformal Killing vector field \(\frac{\partial}{\partial t}\) is timelike, then letting \(1-2\mathcal{M}/r>0\) leads to the domain
\[r\in(R_{h},R_{c}),\qquad R_{h,c}=\frac{r_{0}}{4}\left(1\mp\sqrt{1-16\mu}\right),\qquad\mu<1/16,\]
where \(R_{h}\) is the black hole horizon and \(R_{c}\) denotes the conformal Killing horizon. The Schwarzschild limit can be obtained as a double scaling limit as follows.
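As a quick numerical illustration of the above (not taken from the paper; the values of \(\mu\) and \(r_{0}\) below are arbitrary), the following sketch evaluates \(R_{h}\) and \(R_{c}\) and verifies that \(1-2\mathcal{M}(r)/r>0\) holds strictly between them.

```python
# Illustrative check of the horizon radii of the conformally static Vaidya
# chart (a = 0); mu and r0 are arbitrary example values, not from the paper.
import numpy as np

mu, r0 = 0.01, 1.0                                # needs mu < 1/16
M = lambda r: mu * r0 + r**2 / r0                 # mass function M(r)

R_h = 0.25 * r0 * (1.0 - np.sqrt(1.0 - 16.0 * mu))
R_c = 0.25 * r0 * (1.0 + np.sqrt(1.0 - 16.0 * mu))
print(f"R_h = {R_h:.6f}, R_c = {R_c:.6f}")

# 1 - 2 M(r)/r should be positive strictly between the two radii
r = np.linspace(1.01 * R_h, 0.99 * R_c, 5)
print(1.0 - 2.0 * M(r) / r > 0.0)                 # expect all True
```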
\[\mu\to 0,\ \ r_{0}\rightarrow\infty,\ \ \mu r_{o}=M_{s}, \tag{6}\]
where \(M_{s}\) is a finite mass parameter equivalent to the ADM mass of the limiting Schwarzschild black hole. We now apply the Newman-Janis algorithmiii to the metric \(ds^{2}_{static}\) which leads to a metric endowed with angular momentum
Footnote iii: To be precise, this algorithm carries with it the assumption of asymptotic flatness in the generated metric which doesn’t hold for our solution though.
\[ds^{2}_{static}\to ds^{2}_{rotating} = -\left(1-\frac{2\mathcal{M}(r)r}{\Sigma}\right)dt^{2}-\frac{4 \mathcal{M}(r)ar\sin^{2}\theta}{\Sigma}d\phi dt \tag{7}\] \[\qquad+\left(r^{2}+a^{2}+\frac{2\mathcal{M}(r)a^{2}r\sin^{2} \theta}{\Sigma}\right)\sin^{2}\theta d\phi^{2}+\frac{\Sigma}{\Delta}dr^{2}+ \Sigma d\theta^{2},\]
where \(a\) is the spin parameter and
\[\Sigma=r^{2}+a^{2}\cos^{2}\theta,\qquad\Delta=r^{2}-2\mathcal{M}(r)r+a^{2}.\]
We will also modify the exponential argument of the conformal factor \(\Omega(t,r)\) in (5) as follows
\[t+\Upsilon(r)\to t+\Upsilon_{a}(r),\ \ \Upsilon_{a}(r)\equiv\int^{r}dr\ \frac{r^{2}+a^{2}}{r^{2}-2\mathcal{M}(r)r+a^{2}}. \tag{8}\]
The full metric then reads
\[ds^{2}=e^{\frac{2(t+\Upsilon_{a}(r))}{r_{0}}}ds^{2}_{rotating}, \tag{9}\]
with \(ds^{2}_{rotating},\Upsilon_{a}(r)\) being defined in (7) and (8) respectively. In the scaling limit of (6), the line element (9) reduces to Kerr spacetime in Boyer-Lindquist coordinates.
Now, like the Kerr solution where \({\cal M}\) is instead just a constant, the metric (9) has singularities at the roots of \(\Delta=0\). To extend the spacetime beyond these singularities, we can perform a coordinate transformation
\[\tilde{T}=t+\Upsilon_{a}(r),\qquad\tilde{\phi}=-\phi-a\int^{r}dr\ \frac{1}{r^{2}-2{ \cal M}(r)r+a^{2}} \tag{10}\]
which leads to
\[ds^{2} = e^{\frac{2\mu}{M_{s}}\tilde{T}}\Bigg{[}-\left(1-\frac{2{\cal M}( r)r}{r^{2}+a^{2}\cos^{2}\theta}\right)(d\tilde{T}+a\sin^{2}\theta d\tilde{ \phi})^{2}+2(d\tilde{T}+a\sin^{2}\theta d\tilde{\phi})(dr+a\sin^{2}\theta d \tilde{\phi}) \tag{11}\] \[\qquad\qquad+(r^{2}+a^{2}\cos^{2}\theta)d\Omega^{2}\Bigg{]},\]
In the \(a=0\) limit, we recover Vaidya spacetime in the conformally static coordinates of (3), whereas the \(\mu=0\) limit (as in (6)) takes the metric to that of Kerr in ingoing Eddington-Finkelstein coordinates. We note that in (11), replacing \({\cal M}(r)\rightarrow\mu\tilde{T}\) and removing the conformal factor \(e^{\frac{2\mu}{M_{s}}\tilde{T}}\) yields the Kerr-Vaidya solution [20] which evidently isn't equipped with the conformal Killing symmetry. This leads to non-separability of null geodesics which would not allow us to solve for the shadow boundary curve analytically.
Our main interest in this class of time-dependent solutions lies in its property of being locally deformable to the Kerr geometry in Boyer-Lindquist coordinates in the \(\mu\to 0,\mu r_{0}\to M_{s}\) limit, and, in the limit of \(a=0\), to the Vaidya solution in a chart where it's conformally static. _This gives us a model of local geometry that approximates both spacetimes in a coordinate system suitable for deriving the analytical form of the black hole shadow._ For this specific purpose, we work in the \(\{t,r,\theta,\phi\}\) chart (line element in (9)), where the spacetime is conformal to a Kerr-like solution in Boyer-Lindquist coordinates. In Section 2.2, we explore the parameter space \(\{a,\mu,M_{s}\}\) in greater detail.
### Horizons in the solution parameter space
In (9), setting \(g^{rr}=0\) yields the following cubic equation in \(r\).
\[-\frac{2\mu}{M_{s}}r^{3}+r^{2}-2M_{s}r+a^{2}=0. \tag{12}\]
In certain regimes of the parameter space of \(\{\mu,a\}\), one could find event horizons. Consider the root space of the cubic equation (12) of which discriminant reads (henceforth, we define \(a\to a/M_{s}\) to be a dimensionless parameter)
\[\mathfrak{D}=4\left(1-a^{2}-16\mu+18a^{2}\mu-27a^{4}\mu^{2}\right).\]
The sign of \(\mathfrak{D}\) determines the number of real roots of \(g^{rr}=0\). Treating \(\mathfrak{D}\) as a quadratic in \(\mu\), we can derive the curves along which \(\mathfrak{D}=0\), which read
\[\mu_{\pm}=\frac{-8+9a^{2}\pm\left(4-3a^{2}\right)^{3/2}}{27a^{4}}. \tag{13}\]
They enclose the region which pertains to three distinct roots -- two event horizons (outer and inner) and the conformal Killing horizon, and they intersect at the point (see Fig. 1 )
\[(a_{e},\mu_{e})=\left(\frac{2}{\sqrt{3}},\frac{1}{12}\right), \tag{14}\]
which represents a generalized extremal limit. At any constant \(\mu\in(0,\frac{1}{12})\), the upper bound on \(a\) is given by taking \(\mu=\mu_{-}(a)\) along which the inner and outer event horizons coincide. Along \(\mu=\mu_{+}(a)\), the conformal Killing horizon coincides with the outer event horizon.
As depicted in Fig. 1, the region enclosed by the axes and the two curves has non-degenerate outer, inner event horizons and conformal Killing horizon. The small \(\mu\ll 1\) region of this enclosed segment is of closer phenomenological interest to us. Let us consider a generic point in this region. Ordering the roots of (12) as \(R_{i}<R_{e}<R_{a}\), we find that up to first few orders in \(\mu,a\) :
\[R_{e} = M_{s}\left(\left(1+\sqrt{1-a^{2}}\right)+\frac{4+4\sqrt{1-a^{2}}-5a^{2}-3a^{2}\sqrt{1-a^{2}}+a^{4}}{2(1-a^{2})}\mu+\mathcal{O}(\mu^{2})\right) \tag{15}\] \[= M_{s}\left[(2+8\mu)-(2\mu+1/2)a^{2}+\ldots\right]\] \[R_{i} = M_{s}\left(\left(1-\sqrt{1-a^{2}}\right)+\frac{4-4\sqrt{1-a^{2}}-5a^{2}+3a^{2}\sqrt{1-a^{2}}+a^{4}}{2(1-a^{2})}\mu+\mathcal{O}(\mu^{2})\right)\] (16) \[= M_{s}\left(\frac{a^{2}}{2}+\frac{a^{4}}{8}-\frac{\mu a^{6}}{16}+\frac{a^{6}}{16}+\ldots\right)\] \[R_{a} = \frac{M_{s}}{2\mu}-2M_{s}-8\mu M_{s}+2a^{2}\mu M_{s}+\ldots \tag{17}\]
The radii \(R_{e},R_{i}\) are those of the outer and inner event horizons smoothly connected to their corresponding expressions in the ordinary Kerr solution, whereas \(R_{a}\) is the radius of the conformal Killing horizon associated with the conformal Killing vector \(\partial_{t}\) (this symmetry is preserved by the Newman-Janis algorithm we implemented). For a generic point within our domain of interest (region enclosed by curves and axes in Fig. 1 ), our shadow observer is a timelike observer located at some distance \(R_{e}<r<R_{a}\) away from the outer event or conformal Killing horizon.
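To make the structure of the root space concrete, a short numerical sketch is given below; the values of \(\mu\) and \(a\) are illustrative choices inside the three-root region and are not taken from the paper. It solves the cubic (12) directly and compares the largest root with the leading terms of the expansion for \(R_{a}\).

```python
# Illustrative root-finding for g^{rr} = 0, eq. (12), in units M_s = 1:
#   -2*mu*r^3 + r^2 - 2*r + a^2 = 0
import numpy as np

mu, a = 1.0e-3, 0.5                               # example point with three horizons
roots = np.sort(np.roots([-2.0 * mu, 1.0, -2.0, a**2]).real)
R_i, R_e, R_a = roots                             # inner, outer, conformal Killing
print(f"R_i = {R_i:.6f}, R_e = {R_e:.6f}, R_a = {R_a:.4f}")

# Kerr horizons recovered as mu -> 0, and the leading behaviour of R_a
print("Kerr limits:", 1.0 - np.sqrt(1.0 - a**2), 1.0 + np.sqrt(1.0 - a**2))
print("R_a estimate:", 1.0 / (2.0 * mu) - 2.0 - 8.0 * mu)

# The sign of the discriminant separates one real root from three
D = 4.0 * (1.0 - a**2 - 16.0 * mu + 18.0 * a**2 * mu - 27.0 * a**4 * mu**2)
print("three real roots:", D > 0.0)
```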
Figure 1: Graph depicting the parameter space \((a,\mu)\) of our family of solutions. The curve \(\mu_{+}\) begins at \((0,\frac{1}{16})\) and contains solutions where the outer event horizon and conformal Killing horizon are degenerate. The curve \(\mu_{-}\) contains all the extremal solutions with degenerate outer and inner event horizons and with a finite conformal Killing horizon radius. Both curves merge at the extremal point \((\frac{2}{\sqrt{3}},\frac{1}{12})\) at which there is only an apparent horizon at \(r=2M_{s}\). Our family of solutions can be seen as parametric deformations of the Vaidya solution (vertical axis) and the Kerr solution (horizontal axis).
## 3 Null geodesics, photon spheres and shadow formulas
### On null geodesics
Metrics of the form \(ds^{2}_{rotating}\) in (9) admit null geodesics which are separable. This condition was shown for the Kerr solution in for example [6, 22], and for other well-motivated forms of mass function \({\cal M}(r)\) in [24]. Such a property is preserved within its conformal class, and in particular by our line element \(ds^{2}\) in (9). To appreciate this, we first recall that for a pair of metrics which are conformally related, say \(g_{\mu\nu}=\Omega^{2}(x)\tilde{g}_{\mu\nu}\), their geodesic equations are related by
\[\frac{d^{2}x^{\alpha}}{d\eta^{2}}+\Gamma^{\alpha}_{\beta\nu}\frac{dx^{\beta}} {d\eta}\frac{dx^{\nu}}{d\eta}=0=\frac{d^{2}x^{\alpha}}{d\eta^{2}}+\tilde{ \Gamma}^{\alpha}_{\beta\nu}\frac{dx^{\beta}}{d\eta}\frac{dx^{\nu}}{d\eta}-2 \Omega^{-1}\frac{d\Omega}{d\eta}\frac{dx^{\alpha}}{d\eta}, \tag{18}\]
where \(\tilde{\Gamma}^{\alpha}_{\beta\nu}\) are the Christoffel symbols associated with the metric \(\tilde{g}\). The last term of (18) can be absorbed into a redefinition of the affine parameter, so that the geodesic equation can be rewritten as
\[\frac{d^{2}x^{\alpha}}{d\lambda^{2}}+\tilde{\Gamma}^{\alpha}_{\beta\nu}\frac{ dx^{\beta}}{d\lambda}\frac{dx^{\nu}}{d\lambda}=0,\qquad\frac{d\lambda}{d\eta}= \Omega^{2}(x^{\alpha}(\eta)). \tag{19}\]
Thus, a suitable redefinition of the affine parameter leads to identical forms of the geodesic equations for the pair of conformally related metrics. In particular, we expect separability of null geodesic equations just like the Kerr solution or more generally the Kerr-like solutions studied in [24].
Our solution has a conformal Killing vector field \(K\sim\partial_{t}\) which naturally gives a quantity conserved along its null geodesics. In parallel with the notion of energy in the static case, we call this conserved quantity \(\tilde{E}=K^{\mu}P_{\mu}\). Also, the metric components are independent of \(\phi\), and we call \(L=p_{\phi}\) the associated conserved quantity. These symmetries motivate the use of the Hamilton-Jacobi formalism for geodesics where one first defines an auxiliary action \(S(\kappa,x^{\mu})\) obeying
\[\frac{\partial S}{\partial\kappa}+\frac{1}{2}g_{\mu\nu}p^{\mu}p^{\nu}=0,\qquad p ^{\mu}=g^{\mu\nu}p_{\nu}=g^{\mu\nu}\frac{\partial S}{\partial x^{\nu}}. \tag{20}\]
By virtue of the nature of conserved quantities, we adopt the following ansatz for the action \(S\):
\[S=-\tilde{E}t+L\phi+S_{r}(r)+S_{\theta}(\theta)+\frac{1}{2}m_{0}^{2}\kappa,\qquad m_{0}^{2}=-p_{\alpha}p^{\alpha}. \tag{21}\]
By construction, the 4-momenta \(p^{\mu}=\frac{dx^{\mu}}{d\eta}=g^{\mu\nu}\partial_{\nu}S(x)\) satisfy the geodesic equation, with the rest mass \(m_{0}=0\) for null geodesics. We find that the Hamilton-Jacobi equation (20) leads to
\[-\Delta(r)(\partial_{r}S_{r})^{2}+[(r^{2}+a^{2})E-aL]^{2}/\Delta(r)=(\partial_ {\theta}S_{\theta})^{2}+(L-aE\sin^{2}\theta)^{2}/\sin^{2}\theta={\cal K}, \tag{22}\]
where the constant \({\cal K}\) indicates separability. For deriving the shadow formulas, we need the explicit expressions for the 4-momenta which we find to simplify as
\[\frac{dt}{d\eta} = \frac{1}{\Sigma(r)}e^{-\frac{2(t+\Upsilon_{a}(r))}{r_{0}}}\left(-a(aE\sin^{2}\theta-L)+\frac{(r^{2}+a^{2})P(r)}{\Delta(r)}\right), \tag{23}\] \[\frac{dr}{d\eta} = \pm\frac{1}{\Sigma(r)}e^{-\frac{2(t+\Upsilon_{a}(r))}{r_{0}}}\sqrt{{\cal R}(r)},\] (24) \[\frac{d\theta}{d\eta} = \pm\frac{1}{\Sigma(r)}e^{-\frac{2(t+\Upsilon_{a}(r))}{r_{0}}}\sqrt{\Xi(\theta)},\] (25) \[\frac{d\phi}{d\eta} = \frac{1}{\Sigma(r)}e^{-\frac{2(t+\Upsilon_{a}(r))}{r_{0}}}\left(-\left(aE-\frac{L}{\sin^{2}\theta}\right)+aP(r)/\Delta(r)\right), \tag{26}\]
where
\[P(r)\equiv E(r^{2}+a^{2})-aL,\ {\cal R}(r)\equiv P(r)^{2}-{\cal K}\Delta(r),\ \ \Xi( \theta)\equiv{\cal Q}+\cos^{2}\theta\left(a^{2}E^{2}-L^{2}/\sin^{2}\theta\right),\]
with \({\cal Q}\) being conventionally called the Carter constant defined by
\[{\cal Q}\equiv{\cal K}-(L-aE)^{2}. \tag{27}\]
At this point, we note that apart from the specific form of our mass function \({\cal M}(r)=M_{s}+r^{2}/r_{0}\) implicitly contained in \(\Delta(r)=r^{2}-2{\cal M}(r)r+a^{2}\), the 4-momenta expressions in (23)-(26) are identical to those found in [24] up to the conformal factor \(e^{-\frac{2(t+\Upsilon_{a}(r))}{r_{0}}}\). This is consistent with the general relations governing conformally related metrics as described in eqns (18) and (19).
### Shadow formulas from photon region
For our analysis of the shadow, we define the constants of motion
\[\eta\equiv\frac{{\cal Q}}{E^{2}},\qquad\xi\equiv\frac{L}{E}. \tag{28}\]
Any null geodesics with \(r=R_{p}\) for some constant \(R_{p}\) leads to the condition
\[{\cal R}(R_{p})=P(R_{p})^{2}-{\cal K}\Delta(R_{p})=0.\]
Further, for a (generically unstable) spherical orbit we also have \(d^{2}r/d\eta^{2}=0\), which yields
\[{\cal R}^{\prime}(R_{p})=0.\]
After some algebra, we find that these couple of equations lead to the constants of motion
\[a\frac{L}{E}/M_{s}^{2} = \frac{4(1+\mu R_{p}^{2})R_{p}^{2}-(3\mu R_{p}^{2}+R_{p}+1)(R_{p}^ {2}+a^{2})}{-3\mu R_{p}^{2}-1+R_{p}}, \tag{29}\] \[\frac{{\cal K}}{E^{2}}/M_{s}^{2} = \frac{((R_{p}^{2}+a^{2})-aL/E)^{2}}{R_{p}^{2}-2(1+\mu R_{p}^{2}) R_{p}+a^{2}}. \tag{30}\]
These constants of motion also characterize non-spherical geodesics which asymptotically approach those confined to spheres defined by \({\cal R}(R_{p})={\cal R}^{\prime}(R_{p})=0\). In eqns. (29) and (30), and henceforth, for notational simplicity, we define \(R_{p},a\) in units of \(M_{s}\). Now consider an observer at the position \((R_{o},\theta_{inc})\) and described by an orthonormal tetrad as follows.
\[e_{0}=e^{-\frac{\mu T}{M_{s}}}\frac{(r^{2}+a^{2})\partial_{t}+a \partial_{\phi}}{\sqrt{\Sigma\Delta}},\ \ e_{1}=e^{-\frac{\mu T}{M_{s}}}\sqrt{\frac{1}{\Sigma}} \partial_{\theta},\] \[e_{2}=-e^{-\frac{\mu T}{M_{s}}}\frac{\partial_{\phi}+a\sin^{2} \theta\partial_{t}}{\sqrt{\Sigma}\sin\theta},\ e_{3}=-e^{-\frac{\mu T}{M_{s}}} \sqrt{\frac{\Delta}{\Sigma}}\partial_{r}. \tag{31}\]
As explained in for example [6, 22], this choice leads to \(e_{0}\) being the 4-velocity of the shadow observer with \(e_{0}\pm e_{3}\) being tangential to the principal null congruences. (The various expressions in (31) differ from those in [6, 22] by the conformal factor which we need to take into account for an orthonormalized set of basis vectors.) The tangent vector of each light ray reaching the observer reads
\[\frac{d}{d\eta}=p^{\mu}\partial_{\mu}=\dot{r}\partial_{r}+\dot{\theta}\partial _{\theta}+\dot{\phi}\partial_{\phi}+\dot{t}\partial_{t}=\alpha(-e_{0}+\sin \theta\cos\phi e_{1}+\sin\theta\sin\phi e_{2}+\cos\theta e_{3}), \tag{32}\]
where \(\alpha=g_{\mu\nu}p^{\mu}e_{0}^{\nu}\). Equating the coefficients after evaluating both sides of (32) at \((R_{o},\theta_{inc})\) yields
\[\sin\Phi=\frac{L_{E}(R_{p})-a\sin^{2}\theta_{inc}}{\sqrt{{\cal K}_{E}(R_{p})}\sin\theta_{inc}},\ \sin\Theta=\frac{\sqrt{\Delta(R_{o}){\cal K}_{E}(R_{p})}}{R_{o}^{2}-aL_{E}(R_{p})+a^{2}}, \tag{33}\]
where
\[L_{E}\equiv\frac{L}{M_{s}^{2}E},\qquad{\cal K}_{E}\equiv\frac{{\cal K}}{M_{s}^{ 2}E^{2}},\]
and, having derived their final expressions, we have switched to the notation \(\Phi,\Theta\) for the celestial coordinates for clarity. In particular, we note that \(\theta_{inc}\) measures the angle between the black hole's spin axis as defined by \(\theta=0\) (the zero locus of \(g_{\phi\phi}\)) and the observer, with \(e_{3}\) being parallel to the line of sight connecting the observer to the origin of the Boyer-Lindquist-like chart in (9).
The photon region consists of spherical orbits with radii bounded in the domain
\[R_{p}\in(R_{p,min},R_{p,max}), \tag{34}\]
where \(R_{p,min},R_{p,max}\) are defined by setting \(\sin\Phi=+1,-1\) respectively, i.e.
\[aL_{E}(R_{p,min}) = a^{2}\sin^{2}\theta_{inc}+a\sqrt{K_{E}}(R_{p,min})\sin(\theta_{ inc}), \tag{35}\] \[aL_{E}(R_{p,max}) = -a^{2}\sin^{2}\theta_{inc}-a\sqrt{K_{E}}(R_{p,max})\sin(\theta_{ inc}). \tag{36}\]
In the vanishing \(a\) limit, the width of the photon region \((R_{p,min},R_{p,max})\) goes to zero, and collapses to a single value of \(R_{p}\) which is the photon sphere radius of Vaidya spacetime. In this limit, from (35),
\[\lim_{a\to 0}aL_{E}\to R_{p}^{2}\frac{3+\mu R_{p}^{2}-R_{p}}{R_{p}-1-3\mu R _{p}^{2}}=0\Rightarrow R_{p}=\frac{1}{2\mu}\left(1-\sqrt{1-12\mu}\right),\]
where we have restricted the root to be smaller than the conformal Killing horizon radius. This is indeed the expression obtained in [19]. Further taking the \(\mu=0\) limit yields \(R_{p}=3\), which is the radius of the Schwarzschild photon sphere. In the \(a=0\) limit, our expression for \(\sin\Theta\) in (33) reduces to eqn. (41) of [19] which is the sine of the angular radius of the Vaidya solution's shadow.
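The limiting photon-sphere radius is easy to check numerically; the snippet below (a sketch with arbitrary sample values of \(\mu\), not from the paper) confirms that \(R_{p}\to 3\) as \(\mu\to 0\).

```python
# Illustrative check that R_p = (1 - sqrt(1 - 12 mu)) / (2 mu), the a -> 0
# photon-sphere radius (in units of M_s), tends to the Schwarzschild value 3.
import numpy as np

for mu in [1e-2, 1e-4, 1e-6]:
    R_p = (1.0 - np.sqrt(1.0 - 12.0 * mu)) / (2.0 * mu)
    print(f"mu = {mu:.0e}:  R_p = {R_p:.6f}  (mu*R_p^2 - R_p + 3 = {mu*R_p**2 - R_p + 3:.1e})")
```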
## 4 Portraits of the shadow
In this Section, we discuss geometrical properties of the shadow in more detail. We first recall that the coordinate system describing the shadow observer has a conformal Killing horizon \(R_{a}\) obtained as the largest root of \(g^{rr}=0\) in the line element (9), which we reproduce explicitly below for convenience.
\[ds^{2} = e^{\frac{2(t+{\cal T}_{a}(r))}{r_{0}}}\Bigg{[}-\left(1-\frac{2{ \cal M}(r)r}{\Sigma}\right)dt^{2}-\frac{4{\cal M}(r)ar\sin^{2}\theta}{\Sigma}d \phi dt \tag{37}\] \[\qquad+\left(r^{2}+a^{2}+\frac{2{\cal M}(r)a^{2}r\sin^{2}\theta} {\Sigma}\right)\sin^{2}\theta d\phi^{2}+\frac{\Sigma}{\Delta}dr^{2}+\Sigma d \theta^{2}\Bigg{]},\]
where \({\cal M}(r)=M_{s}+\frac{r^{2}}{r_{0}}\) and \(\Upsilon_{a}(r)\equiv\int^{r}dr\ \frac{r^{2}+a^{2}}{r^{2}-2{\cal M}(r)r+a^{2}}\). For our shadow observer located at some radial distance \(R_{o}\), we would also like \(g_{tt}<0\) for all values of \(\theta\), leading to the condition
\[R_{h}<R_{o}<R_{c}<R_{a},\qquad R_{h,c}=\frac{r_{0}}{4}\left(1\mp\sqrt{1-16\mu} \right), \tag{38}\]
where \(R_{h},R_{c}\) are the event and Killing horizons of the Vaidya solution in the \(a=0\) limit. This also implies that for some fixed \(R_{o},\) we have an upper bound on
\[\mu<\mu_{b}\equiv M_{s}\frac{R_{o}-2M_{s}}{2R_{o}^{2}}, \tag{39}\]
since we wish to have \(R_{o}<R_{c}\). For the visual representation of the shadow, we follow the convention used by Johannsen and Psaltis in [16]. This is essentially the orthonormal tetrad we used in deriving the shadow formula, with the \(x,y\) coordinates being
\[x = R_{o}\sin\left(\Theta(R_{p},R_{o})\right)\sin\left(\Phi(R_{p}, \theta_{inc})\right), \tag{40}\] \[y = \pm R_{o}\sin\left(\Theta(R_{p},R_{o})\right)\cos(\Phi(R_{p}, \theta_{inc})). \tag{41}\]
These coordinates parametrize the observer's plane upon which the shadow is projected.iv From (33), noting that \(L_{E}\) and \({\cal K}_{E}\) are odd and even in the spin parameter \(a\) respectively, one can straightforwardly identify the discrete symmetry
Footnote iv: Another set of projection coordinates used for plotting the black hole shadow is the \((\alpha,\beta)\) parameters of Bardeen which would not be entirely suitable in our case since our solution is not asymptotically flat. See for example the review of [6] which discusses the relations between Bardeen’s impact parameters and others such as stereographic coordinates, etc.
\[a\rightarrow-a,\,\,\,\theta_{inc}\rightarrow-\theta_{inc},\]
which implies in particular that \(a<0\) shadows can be obtained from their \(a>0\) counterparts by a reflection in the \(y\)-axis (for all our shadow plots, we take \(a>0\)).
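To make the construction of the boundary curve explicit, a minimal numerical sketch is given below. It is not the code used for the figures of this paper; the parameter values are arbitrary illustrative choices, and the scan over candidate photon radii is only a crude way of locating the photon region of (34)-(36).

```python
# Sketch of the shadow boundary: constants of motion of eqs. (29)-(30) fed into
# the celestial angles of eq. (33) and the projection of eqs. (40)-(41).
# Lengths are in units of M_s; mu, a, theta_inc, R_o are illustrative values.
import numpy as np

mu, a, theta_inc, R_o = 1.0e-3, 0.9, np.radians(60.0), 50.0

def Delta(r):                      # Delta(r) with M(r) = 1 + mu r^2 (M_s = 1)
    return r**2 - 2.0 * (1.0 + mu * r**2) * r + a**2

def aL_E(Rp):                      # eq. (29)
    num = 4.0 * (1.0 + mu * Rp**2) * Rp**2 - (3.0 * mu * Rp**2 + Rp + 1.0) * (Rp**2 + a**2)
    return num / (Rp - 1.0 - 3.0 * mu * Rp**2)

def K_E(Rp):                       # eq. (30)
    return ((Rp**2 + a**2) - aL_E(Rp))**2 / Delta(Rp)

xs, ys = [], []
for Rp in np.linspace(1.0, 5.0, 4001):            # scan candidate photon radii
    K = K_E(Rp)
    if K <= 0.0:
        continue
    sinPhi = (aL_E(Rp) / a - a * np.sin(theta_inc)**2) / (np.sqrt(K) * np.sin(theta_inc))
    sinTheta = np.sqrt(Delta(R_o) * K) / (R_o**2 + a**2 - aL_E(Rp))
    if abs(sinPhi) <= 1.0 and 0.0 < sinTheta <= 1.0:      # inside the photon region
        x = R_o * sinTheta * sinPhi                       # eq. (40)
        y = R_o * sinTheta * np.cos(np.arcsin(sinPhi))    # eq. (41), upper half
        xs += [x, x]; ys += [y, -y]                       # both signs of eq. (41)
print(f"{len(xs)} boundary points, x in [{min(xs):.3f}, {max(xs):.3f}]")
```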
In quantifying the shape of the shadow, we follow the work of Johannsen and Psaltis in [16] who introduced the asymmetry parameter \({\cal A}\) to describe departure from circularity.
\[{\cal A}=2\sqrt{\frac{\int_{0}^{2\pi}d\alpha\,\,\,(R-\overline{R})^{2}}{\int_ {0}^{2\pi}d\alpha}},\,\,\tan\alpha=\frac{y}{x},\,\,\,R\equiv\sqrt{(x-D)^{2}+y^ {2}},\,\,\,D\equiv\frac{|x_{max}+x_{min}|}{2}, \tag{42}\]
and \(\overline{R}=\int_{0}^{2\pi}d\alpha\,\,R/2\pi\) is the averaged radius projected upon the observer's plane. These quantities were mentioned in the EHT paper [1] for M87\({}^{*}\), and we checked that our shadow geometries for the background Kerr solution (with \(\mu=0\)) yield features comparable to those obtained previously in the work of Johannsen and Psaltis [16]. In Figure 2, we picked a few values of \(a,\theta_{inc}\) for a black hole located at \(R_{o}/M_{s}\sim 5.4\times 10^{10}\) (estimated for M87\({}^{*}\)) to demonstrate how the shadow curve changes with \(\mu\).
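For completeness, the sketch below illustrates how the descriptors of eq. (42) can be evaluated on a sampled boundary curve; a synthetic off-centre ellipse is used purely as stand-in data, so none of the numbers are meant to correspond to an actual shadow.

```python
# Evaluating D, R(alpha), the mean radius and the asymmetry factor of eq. (42)
# on a sampled closed curve; the ellipse below is synthetic stand-in data.
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
x = 0.3 + 5.2 * np.cos(t)                      # arbitrary units
y = 5.0 * np.sin(t)

D = abs(x.max() + x.min()) / 2.0               # horizontal offset in eq. (42)
alpha = np.arctan2(y, x - D)                   # polar angle about (D, 0)
R = np.hypot(x - D, y)

order = np.argsort(alpha)                      # integrate over alpha numerically
alpha, R = alpha[order], R[order]
dalpha = np.diff(np.concatenate([alpha, [alpha[0] + 2.0 * np.pi]]))
R_mean = np.sum(R * dalpha) / (2.0 * np.pi)
A = 2.0 * np.sqrt(np.sum((R - R_mean)**2 * dalpha) / (2.0 * np.pi))
print(f"mean radius = {R_mean:.4f}, asymmetry factor = {A:.4f}, A/R = {A/R_mean:.4f}")
```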
### Scaling laws for variation of \(\overline{R}\) and \({\cal A}\) with \(\mu\)
The parameter space for the shadow geometry is spanned by \(\{a,\theta_{inc},R_{o},\mu\}.\) Computing the shadow's mean radius and asymmetry factor for a range of parameters, we find a simple empirical scaling law that describes the variation of \(\overline{R},{\cal A}\) with the accretion rate parameter \(\mu\) and other parameters as follows.
\[\overline{R}=\overline{R}_{o}\left(a,\theta_{inc},R_{o}\right)\sqrt{1-\frac{\mu}{\mu_{b}(R_{o})}},\,\,\,{\cal A}=\overline{\cal A}_{o}\left(a,\theta_{inc},R_{o}\right)\sqrt{1-\frac{\mu}{\mu_{b}(R_{o})}}, \tag{43}\]
where \(\mu_{b}(R_{o})\) is the upper bound (39) on the accretion rate allowed by our model for some fixed observer distance \(R_{o}\), obtained by setting the conformal Killing horizon to be the observer distance.
The dependence on \(\mu\) appears as a separate factor independent of the other shadow parameters, with the functions \(\overline{R}_{o}\left(a,\theta_{inc},R_{o}\right),\overline{\mathcal{A}}_{o} \left(a,\theta_{inc},R_{o}\right)\) describing the radius and asymmetry factor at \(\mu=0\). The form of (43) implies that for \(\mu\ll 1,R_{0}\gg M_{s}\), the fractional decrease in the mean radius and the asymmetry factor that is induced by a non-zero \(\mu\) scales approximately as
\[\frac{\delta\overline{R}}{\overline{R}}=\frac{\delta\mathcal{A}}{\mathcal{A} }\approx-\mu\frac{R_{o}}{M_{s}}. \tag{44}\]
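The approximation (44) follows from expanding the square root in (43) for \(\mu\ll\mu_{b}\); the small numerical check below (using the M87\({}^{*}\)-like observer distance quoted later in the text, with \(M_{s}=1\)) compares the exact and approximate fractional shifts.

```python
# Comparison of the exact fractional shift sqrt(1 - mu/mu_b) - 1 from eq. (43)
# with the approximation -mu * R_o / M_s of eq. (44), in units M_s = 1.
import numpy as np

R_o = 5.4e10                                   # observer distance (M87*-like)
mu_b = (R_o - 2.0) / (2.0 * R_o**2)            # eq. (39)
for mu in [1.0e-16, 1.0e-15, 8.5e-15]:
    exact = np.sqrt(1.0 - mu / mu_b) - 1.0
    approx = -mu * R_o
    print(f"mu = {mu:.1e}: exact = {exact:.3e}, approx = {approx:.3e}")
```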
In Figure 3, we plot \(\overline{R}_{o}\) and \(\overline{\mathcal{A}}_{o}\) as functions of \(\theta_{inc}\) for a few values of spin at a fixed \(R_{o}\). These plots are expectedly similar to the corresponding ones presented in [16] and [17]. At higher spin values, the asymmetry factor and mean radius exhibit a greater range of values over the \(\theta_{inc}\) domain. At any \(\theta_{inc}\), increasing \(a\) increases \(\overline{\mathcal{A}}_{o}\) but decreases \(\overline{R}_{o}\). The form of (43) also implies that the asymmetry factor expressed in units of the mean radius is independent of \(\mu\), with
\[\frac{\mathcal{A}}{\overline{R}}=\frac{\overline{\mathcal{A}}_{o}}{\overline {R}_{o}}\left(a,\theta_{inc},R_{o}\right). \tag{45}\]
In Figures 4, 5 and 6, we plot the empirical fitting curves (described by (43) ) that depict how \(\overline{R},\mathcal{A}\) vary with the accretion parameter \(\mu\) and each of the parameters \(\{a,\theta_{inc},R_{o}\}\) separately.
Figure 3: Graphs depicting how \(\overline{R},\mathcal{A}\) vary with angle \(\theta_{inc}\), at \(\mu=0\). We plot the functions \(\overline{R}_{o}\left(a,\theta_{inc},R_{o}\right),\overline{\mathcal{A}}_{o} \left(a,\theta_{inc},R_{o}\right)\) at five values of \(a\) at \(R_{o}/M_{s}=5.4\times 10^{10}\) and \(\theta_{inc}=17^{\circ}\).
### On the shadows of M87\({}^{*}\) and Sagittarius A\({}^{*}\) as observed by EHT
The M87\({}^{*}\) black hole was found to be about 16.8 Mpc away, with a mass \(M_{s}\simeq 6.5\times 10^{9}M_{\odot}\)[1]. The ensemble of accretion models used by the EHT team [1, 7] involved mass rates that ranged from about \(2\times 10^{-7}\) to \(4\times 10^{-4}\) times the Eddington rate \(\dot{M}_{\rm Edd}\). In their work, \(\dot{M}_{\rm Edd}\sim 137M_{\odot}/{\rm yr}\), and we find that this translates into
\[\mu_{M87}=\dot{M}\times\frac{GM_{\odot}}{{\rm yr}\times c^{3}}\sim\dot{M}\times 1.56\times 10^{-13}\in\left(4\times 10^{-18},9\times 10^{-15}\right),\]
where \(\dot{M}\) is mass rate in units of \(M_{\odot}/{\rm yr}\). This is smaller than the upper bound \(\mu_{b}\sim 9.2\times 10^{-12}\) for \(R_{o}\sim 16.8\) Mpc. Equivalently, for this range of \(\mu\), the conformal Killing horizon size falls within
\[R_{c}/M_{s}\sim\left(5.9\times 10^{13},1.16\times 10^{17}\right),\]
which lies beyond \(R_{o}/M_{s}\sim 5.4\times 10^{10}\). Thus, the observed distance to M87\({}^{*}\) and estimates of mass accretion are well within the domains of validity of our simple model geometry. Now it was estimated in [25] that the angle of inclination is around \(17^{\circ}\). Corresponding to this value, in Figure 7, we plot the variation of the shape parameters \(\overline{R},{\cal A}\) and their ratio with the spin parameter \(a\). The range of values of \(\overline{R}\) translates into the shadow angular diameter \(\sim(36.9\mu{\rm as},39.6\mu{\rm as})\) which is comparable to the measured emission ring diameter in the EHT experiment [1]. The maximum \({\cal A}/\overline{R}\) ratio is about 0.01 which is within limits of the upper bound of 10% indicated in [1]. The highest \(\mu\sim 8.5\times 10^{-15}\) in the ensemble of models considered in [1] translates only to a fractional shift of 0.05 % in \(\overline{R},{\cal A}\).
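The unit conversion behind these estimates is summarized in the short sketch below; the SI constants and the bracketing accretion rates are the ones quoted in the text, and the \(a=0\) expression is used for the conformal Killing horizon as a simple estimate.

```python
# Conversion of an accretion rate in M_sun/yr into the dimensionless mu, and
# the resulting bound mu_b and conformal Killing horizon estimate (a = 0).
G, c, M_sun, yr = 6.674e-11, 2.998e8, 1.989e30, 3.156e7     # SI values

per_Msun_per_yr = G * M_sun / (c**3 * yr)      # ~ 1.56e-13 per (M_sun/yr)
R_o_over_Ms = 5.4e10                           # M87* observer distance in M_s
mu_b = (R_o_over_Ms - 2.0) / (2.0 * R_o_over_Ms**2)          # eq. (39)

for Mdot in [2e-7 * 137.0, 4e-4 * 137.0]:      # bracketing EHT accretion models
    mu = Mdot * per_Msun_per_yr
    R_c_over_Ms = (1.0 + (1.0 - 16.0 * mu)**0.5) / (4.0 * mu)
    print(f"Mdot = {Mdot:.2e} Msun/yr -> mu = {mu:.2e} (mu_b = {mu_b:.1e}), "
          f"R_c/M_s ~ {R_c_over_Ms:.2e}")
```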
Sgr A\({}^{*}\) has been observed to be located near the dynamical center of our galaxy at a distance \(R_{o}\sim 8\) kpc, with a dense concentration of mass \(M_{s}\sim 4\times 10^{6}M_{\odot}\). In contrast to M87\({}^{*}\), where its prominent jet provides robust constraints on source orientation with respect to the line of sight, fixing it to be \(\sim 17^{\circ}\), there is unfortunately no such constraint on Sgr A\({}^{*}\)[2]. However, GRMHD models appeared to have favored \(\theta_{inc}<50^{\circ}\), with accretion rates of order-of-magnitude \(10^{-9}-10^{-8}M_{\odot}\,{\rm yr}^{-1}\). These models were equipped with spin parameter values of \(a=0.5,0.94\)[2]. As mentioned in [2], in the earlier works of Quataert [26] and Baganoff [27], the captured accretion rate was estimated to be \(10^{-6}-10^{-5}M_{\odot}\,{\rm yr}^{-1}\) from Chandra observations of thermal bremsstrahlung emission at the vicinity of the gas capture radius. Most recently, in [14], a promising model was identified in which \(\theta_{inc}\leq 30^{\circ}\), and accretion models of \(\dot{M}\sim 5.2-9.5\times 10^{-9}M_{\odot}/{\rm yr}\) were examined.
Even for the accretion rate \(\dot{M}=10^{-5}M_{\odot}\,\mathrm{yr}^{-1}\) this translates into merely
\[\mu_{sgr}\sim 1.6\times 10^{-18}.\]
Like the case of M87\({}^{*}\), this turns out to be smaller than the upper bound \(\mu_{b}\sim 1.2\times 10^{-11}\) for \(R_{o}\sim 8\) kpc. Equivalently, taking this value of \(\mu_{sgr}\), the conformal Killing horizon size is
\[R_{c}/M_{s}\sim 3.2\times 10^{17},\]
which lies beyond \(R_{o}/M_{s}\sim 4.2\times 10^{10}\). Thus again, both observer distance and (estimated) mass accretion rates are well within the domains of validity of our simple model geometry.
In Figure 8, we plot the variation of the shape parameters \(\overline{R},\mathcal{A}\) and their ratio with the spin parameter \(a\), at a few representative values of \(\theta_{inc}\). Over the domain of \(\theta_{inc}\in(10^{\circ},50^{\circ})\), the range of shadow angular diameters is \(\sim(47.6\mu\mathrm{as},51.2\mu\mathrm{as})\) which is comparable to the shadow diameter estimate \(48.7\pm 7.0\mu\mathrm{as}\) in the EHT experiment [2]. The asymmetry factor-to-mean radius ratio \(\mathcal{A}/\overline{R}\) increases with \(\theta_{inc}\), and can be as high as \(\sim 10\%\) for \(\theta_{inc}=50^{\circ}\). It would be interesting to study this geometrical signature for the EHT's Sgr A\({}^{*}\) shadow image. In [8], the EHT team mentioned in passing that the sparse interferometric coverage of 2017 observations led to significant uncertainties in circularity measurements which were thus not quantified yet, but future EHT observations with additional telescopes may place constraints on the circularity. Finally, let us mention that the effect of \(\mu\) on \(\overline{R},\mathcal{A}\) is even smaller for Sgr A\({}^{*}\), inducing only a fractional change of \(10^{-6}\) in these quantities.
Figure 8: Graphs showing how the mean radius \(\overline{R}\), asymmetry factor \(\mathcal{A}\) and their ratio vary with \(a\), with \(R_{o}/M_{s}=4.2\times 10^{10}\) (pertaining to EHT observation of Sgr A\({}^{*}\)), \(\theta_{inc}=10^{\circ},30^{\circ},50^{\circ}\).
## 5 Discussion
We have presented a study of the shadow geometry for a class of spacetime metric that is Kerr-Vaidya-like in nature. The family of time-dependent black hole solutions we constructed in this work has well-defined Kerr and Vaidya limits. Agnostic to the source and the underlying theory, we had conceived of its form starting from the Vaidya solution with a mass function that is linear in Eddington-Finkelstein coordinates since this particular class of solutions furnishes a model for accretion and is equipped with a conformal Killing isometry that leads to separability of null geodesics. After expressing it as being conformal to a solution that is Schwarzschild-like with a radial coordinate-dependent mass function, the Newman-Janis algorithm was applied to obtain a Kerr-like solution that reduces to the Kerr solution in the limit of vanishing accretion parameter \(\mu\). In real-life applications, the dimensionless accretion rate \(\mu\) is expected to be very small, for instance, for the recent M87\({}^{*}\) and Sgr A\({}^{*}\) observations, the highest model estimates of \(\mu\) are of the order \(10^{-15},10^{-18}\) respectively. Thus, our model geometry can be considered as a small \(\mu\)-deformation of the Kerr solution that preserves separability of null geodesics, or from a different perspective, a rotating generalization of the Vaidya solution. For a finite spatial domain, it could act as a simple model of a Kerr-like geometry that takes into account the backreaction of accretion. The existence of a conformal Killing vector field allows us to solve for the shadow geometry straightforwardly yet also brings with it the subtlety of a horizon that should be located beyond the shadow observer. Equivalently, at any fixed observer distance, there exists an upper bound to \(\mu\) (eqn.(39)) for applicability of our model.
Footnote ∥: In [28], it was found that increasing the flux of infalling gas into a Schwarzschild black hole (and hence the accretion rate) by increasing an axion-plasmon coupling parameter decreases the size of the shadow which qualitatively agrees with the variation of \(\overline{R}\) with \(\mu\) of our model. It would be interesting to study if the results in [28] could be cast in a similar form as (43) or (46).
In our study of the variation of the mean radius \(\overline{R}\) and asymmetry factor \({\cal A}\) with regards to various parameters -- \(\{\mu,a,\theta_{inc},R_{o}\}\), we found a simple empirical scaling law of the form in (43). In particular, this implies that at small \(\mu\ll 1\) and large observer distance \(R_{o}\gg M_{s}\), the fractional change induced by turning on the accretion rate parameter \(\mu\) reads simply as
\[\frac{\delta\overline{R}}{\overline{R}}=\frac{\delta{\cal A}}{{\cal A}}\approx -\mu\frac{R_{o}}{M_{s}}. \tag{46}\]
To our knowledge, we have not encountered any previous descriptive relations between shadow geometrical features and accretion rate, black hole and observer parameters of a form similar to (43) or (46)v. GRMHD simulations (e.g. [13, 15, 29]) typically assume the validity of the background metric being purely Kerr in some suitable coordinate system, with the complicated astrophysics of accretion contained within the choice of the energy-momentum tensor. Even for GRMHD simulations involving spacetime metrics motivated by beyond-GR theories (e.g. [8]), the accretion parameter is rarely involved in describing the background spacetime. In [16, 17] and most recently in [18], it was found that the accretion details do not appear to influence the shadow geometry which is sensitive only to the background metric. The assumption here is that backreactions of accretion on the metric are insignificant. Indeed, in applying our model to EHT observations of M87\({}^{*}\) and Sgr A\({}^{*}\), we found that the most generous estimates of \(\mu\) in [1] and [2] yield fractional changes of \(\overline{R}\) and \({\cal A}\) of the order of \(10^{-4}\) and \(10^{-6}\) respectively, consistent with such an assumption. In addition, our model yields an explicit relation in (43) that describes how, at least in principle, the shadow geometry changes with accretion rate. From (46), it may seem that for situations where the (fractional) experimental uncertainties \(\delta\overline{R}_{e}/\overline{R}_{e},\delta{\cal A}_{e}/{\cal A}_{e}\) are of the order \(\mu R_{o}/M_{s}\), then metric
backreactions due to accretion may be important in analyzing geometrical details of the black hole shadow.
A limitation of our model geometry is that it is only asymptotically Ricci-flat and not Minkowskian. As a model of an effective Kerr-like geometry with accretion backreaction, it is thus valid only for a finite spatial domain. Imposing a stricter condition for \(g_{tt}<0\) in (37), in our exposition of the black hole shadow analysis, we have taken the observer location \(R_{o}\) to lie within the interior of the sphere defined by the conformal Killing horizon of the limiting Vaidya spacetime, i.e. \(R<R_{c}\) in the coordinate system of (37). (For M87\({}^{*}\) and Sgr A\({}^{*}\), \(R_{c}/R_{o}\sim 10^{3},10^{7}\) respectively.) It would be interesting to seek a refinement of our model geometry in one that allows for separability of null geodesics while being asymptotically flat. Towards this ideal goal, in Appendix A, we sketched a globally well-defined solution obtained by matching the spacetime at some cutoff distance to asymptotically flat Kerr-like solutions using Darmois-Israel junction conditions, leaving more realistic constructions for future work.
## Acknowledgments
I am grateful to Jan de Boer, Chong-Sun Chu, Ori Ganor, Petr Horava, Daniel Robbins and Neal Snyderman for sharing with me their insights on various aspects of gravitational physics and their moral support over the years.
## Appendix A Global geometry and matching spacetimes via junction conditions
For the Vaidya spacetime, the conformally static chart in (3) has a coordinate singularity at the conformal Killing horizon. It can be continued beyond that via (2). Similarly, from (11), we can perform the coordinate transformation (2)
\[v=r_{0}e^{\tilde{T}/r_{0}},\qquad w=re^{\tilde{T}/r_{0}}, \tag{47}\]
after which the line element takes the form
\[ds^{2} = -F\left(\frac{w}{v}\right)\left(dv+\frac{av\sin^{2}\theta}{r_{0}} d\tilde{\phi}\right)^{2}+2\left(dv+\frac{av\sin^{2}\theta}{r_{0}}d\tilde{ \phi}\right)\left(dw+\frac{av\sin^{2}\theta}{r_{0}}d\tilde{\phi}\right) \tag{48}\] \[-\frac{2aw\sin^{2}\theta}{r_{0}}dvd\tilde{\phi}+\left(w^{2}+ \frac{v^{2}a^{2}\cos^{2}\theta}{r_{0}^{2}}\right)d\Omega^{2},\]
where
\[F\left(\frac{w}{v}\right)=-1+\frac{2\left(\mu+(\frac{w}{v})^{2}\right)\frac{ w}{v}}{\left(\frac{w}{v}\right)^{2}+\frac{a^{2}\cos^{2}\theta}{r_{0}^{2}}}- \frac{2w}{v}.\]
The Vaidya metric in the chart (1) is obtained in the \(a=0\) limit, whereas the double scaling limit of (6) brings it to Kerr in Eddington-Finkelstein chart. The metric in the chart \(\{v,w,\theta,\tilde{\phi}\}\) is convenient for examining the asymptotic infinity of the spacetime. Taking the limit of infinite \(w\) yields the following asymptotic form
\[\lim_{w\rightarrow\infty}ds^{2}\sim-dv^{2}+2dvdw+w^{2}d\Omega^{2}+\frac{2a \mu}{M_{s}}\sin^{2}\theta\left(wdvd\tilde{\phi}+vdwd\tilde{\phi}\right). \tag{49}\]
At radial infinity, while the spacetime is Ricci-flat, there are still non-zero Ricci and Einstein tensor components which read \(R_{\theta\theta}=-G_{\theta\theta}=-\frac{a^{2}}{r_{0}^{2}}\sin^{2}\theta,\ R_{ \tilde{\phi}\tilde{\phi}}=3G_{\tilde{\phi}\tilde{\phi}}=-3\frac{a^{2}}{r_{0}^{2 }}\sin^{4}\theta\), with other components being zero. In the following, we briefly discuss how one could impose suitable initial and final boundary conditions to the mass-accreting geometry by suitably matching our solution to Kerr solutions representing the start or end-point of the mass accretion process, yielding a more realistic global geometry via the use of Darmois-Israel junction conditions.
For the Vaidya spacetime, we can cut-and-paste the spacetime geometry to that of Schwarzschild spacetime at some advanced \(v\) where the accretion ends, and to an initial geometry such as Minkowski or another Schwarzschild solution at some earlier \(v\). For example, in (1), instead of a function monotonically increasing in \(v\), we can refine the mass function to be
\[m(v)=\begin{cases}0,&v=0\\ \mu v,&0<v<v_{f}\\ \mu v_{f},&v\geq v_{f},\end{cases} \tag{50}\]
and formally, we obtain a global geometry constructed by matching Vaidya to Minkowski at \(v=0\) and to a Schwarzschild solution with ADM mass \(\mu v_{f}\) at \(v=v_{f}\), yielding a more realistic model.
We adopt a similar construction to extend our geometry suitably to past and future infinities. Let \(\mathcal{G}_{i},\mathcal{G}_{f}\) be the initial and final geometries to which we match our Kerr-Vaidya-like solution at times \(\tilde{T}=\{\tilde{T}_{i},\tilde{T}_{f}\}\) respectively, working in the chart of \(\{\tilde{T},r,\theta,\tilde{\phi}\}\) (11). We denote the matched induced metric on \(\mathcal{G}_{i},\mathcal{G}_{f}\) by \(\Sigma_{i},\Sigma_{f}\), and that of our Kerr-Vaidya-like solution by \(\mathcal{G}\) evaluated on \(\{\tilde{T}_{i},\tilde{T}_{f}\}\). The first Darmois-Israel junction condition is the continuity of the induced metric at these two junctions.
\[\Sigma_{i}=\mathcal{G}|_{\tilde{T}=\tilde{T}_{i}}\ \ \Sigma_{f}=\mathcal{G}|_{ \tilde{T}=\tilde{T}_{f}}. \tag{51}\]
For simplicity, we consider gluing geometries whose induced metric at the matching surface can be easily expressed in the coordinate systems we discussed earlier. For asymptotic flatness in \(\mathcal{G},\mathcal{G}_{i},\mathcal{G}_{f}\), consider modifying the line element in (11) as follows. Defining the bracketed expression in (11) to be \(ds_{K}^{2}\), we now replace it with
\[ds_{K}^{2}=\begin{cases}-\left(1-\frac{2\mathcal{M}(r)r}{r^{2}+a^{2}\cos^{2} \theta}\right)(d\tilde{T}+a\sin^{2}\theta d\tilde{\phi})^{2}+2(d\tilde{T}+a \sin^{2}\theta d\tilde{\phi})(dr+a\sin^{2}\theta d\tilde{\phi})\\ \hskip 113.811024pt+(r^{2}+a^{2}\cos^{2}\theta)d\Omega^{2},\ \ r<R_{m}\\ -\left(1-\frac{2M_{K}r}{r^{2}+a^{2}\cos^{2}\theta}\right)(d\tilde{T}+a\sin^{2} \theta d\tilde{\phi})^{2}+2(d\tilde{T}+a\sin^{2}\theta d\tilde{\phi})(dr+a\sin ^{2}\theta d\tilde{\phi})\\ \hskip 113.811024pt+(r^{2}+a^{2}\cos^{2}\theta)d\Omega^{2},\ \ r>R_{m}\end{cases} \tag{52}\]
where \(r=R_{m}\) is a matching surface beyond which \(ds_{K}^{2}\) is the Kerr line element with mass \(M_{K}\),
\[M_{K}=M_{s}\left(1+\mu\frac{R_{m}^{2}}{M_{s}^{2}}\right).\]
We set the cutoff radius \(R_{m}\) to be at some radial distance beyond the observer of the shadow.vi Such a spacetime is parametrized not only by \(\{a,\mu,M_{s}\}\) but also by the cutoff parameter \(R_{m}\). The full metric still takes the form
Footnote vi: One natural choice would be to take \(R_{m}=R_{c}\), the conformal Killing horizon of the limiting Vaidya spacetime.
\[\mathcal{G}(a,\mu,M_{K},R_{m}):ds^{2}=e^{\frac{2\mu}{M_{s}}\tilde{T}}ds_{K}^{ 2}(a,\mu,M_{K},R_{m}), \tag{53}\]
but with \(ds^{2}_{K}\) defined as in (52). Next, at \(\tilde{T}=\{\tilde{T}_{i},\tilde{T}_{f}\}\), we cut-and-paste \(\mathcal{G}\) to \(\mathcal{G}_{i}\), \(\mathcal{G}_{f}\) respectively which are defined by
\[\mathcal{G}_{i}:ds^{2}=ds^{2}_{K}(a^{i},\mu,M^{i}_{K},R^{i}_{m}),\qquad\mathcal{ G}_{f}:ds^{2}=ds^{2}_{K}(a^{f},\mu,M^{f}_{K},R^{f}_{m}). \tag{54}\]
The initial and final matching surfaces are spacelike surfaces, and the various parameters are related as
\[e^{\frac{\tilde{T}_{i}}{r_{0}}}R_{m}=R^{i}_{m},\ \ e^{\frac{\tilde{T}_{i}}{r_{0}}} M_{K}=M^{i}_{K},e^{\frac{\tilde{T}_{i}}{r_{0}}}a=a^{i}, \tag{55}\] \[e^{\frac{\tilde{T}_{f}}{r_{0}}}R_{m}=R^{f}_{m},\ \ e^{\frac{\tilde{T}_{f}}{r_{0}}} M_{K}=M^{f}_{K},e^{\frac{\tilde{T}_{f}}{r_{0}}}a=a^{f}. \tag{56}\]
The initial geometry can be taken to be arbitrarily close to Minkowski spacetime with \(\tilde{T}_{i}\rightarrow-\infty\) with \((R_{m},M_{K},a)\) being some finite set of parameters. The eventual geometry from the mass accretion process of duration \(\Delta\tilde{T}=\tilde{T}_{f}-\tilde{T}_{i}\) is that of the Kerr solution (for \(R>R^{f}_{m}\)) with mass and spin parameters being
\[M^{f}_{K}=e^{\mu\frac{\Delta\tilde{T}}{M_{s}}}M^{i}_{K},\ \ a^{f}=e^{\mu \frac{\Delta\tilde{T}}{M_{s}}}a^{i}. \tag{57}\]
From (57), one can easily see that our model geometry manifestly represents an accretion process where the fractional increase in _both_ the mass and spin parameters occurs at a rate \(\mu\).
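As a small worked example of (57) (with purely illustrative numbers), the growth factor over an accretion episode is simply the exponential of \(\mu\Delta\tilde{T}/M_{s}\):

```python
# Worked example of eq. (57): both the mass and the spin parameter grow by
# exp(mu * Delta_T / M_s) over an accretion episode; numbers are illustrative.
import math

mu, Delta_T_over_Ms = 1.0e-3, 200.0
growth = math.exp(mu * Delta_T_over_Ms)
M_i, a_i = 1.0, 0.5
print(f"growth = {growth:.4f}: M_f = {growth * M_i:.4f}, a_f = {growth * a_i:.4f}")
```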
## Appendix B On the reference frame of the shadow observer
### In the limit of \(a=0\)
In the limit of \(a=0\), the tetrad basis (31) defining the reference frame of our shadow observer reduces to the following.
\[e_{0}=\frac{e^{-\mu\tilde{T}/M_{s}}}{\sqrt{1-\frac{2\mathcal{M}}{r}}}\partial _{t},\ \ e_{1}=\frac{e^{-\mu\tilde{T}/M_{s}}}{r}\partial_{\theta},\ \ e_{2}=-\frac{e^{-\mu\tilde{T}/M_{s}}}{r\sin \theta}\partial_{\phi},\ \ e_{3}=-e^{-\mu\tilde{T}/M_{s}}\sqrt{\left(1-\frac{2 \mathcal{M}}{r}\right)}\partial_{r}. \tag{58}\]
In the chart \(\{\tilde{T},r,\theta,\tilde{\phi}\}\), we replace \(\partial_{t}\rightarrow\partial_{\tilde{T}},\ \partial_{r}\rightarrow\frac{\partial\tilde{T}}{\partial r}\frac{\partial}{\partial\tilde{T}}+\frac{\partial}{\partial r}\), which leads to
\[e_{0}=\frac{e^{-\mu\tilde{T}/M_{s}}}{\sqrt{f}}\partial_{\tilde{T}},\ \ e_{1}=\frac{e^{-\mu\tilde{T}/M_{s}}}{r} \partial_{\theta},\ \ e_{2}=-\frac{e^{-\mu\tilde{T}/M_{s}}}{r\sin \theta}\partial_{\phi},\ \ e_{3}=-\frac{e^{-\mu\tilde{T}/M_{s}}}{\sqrt{f}}\left(\partial_{\tilde{T}}+ f\partial_{r}\right), \tag{59}\]
where \(f=1-\frac{2\mathcal{M}(r)}{r}\). This is the tetrad basis used in [19] for Vaidya spacetime's shadow calculation, relevant for an observer with 4-velocity \(e_{0}\), at constant \(\theta,\phi,r\).
### Observers in the \(\{v,w,\theta,\phi\}\) chart and aberration formulas
The coordinate system \(\{v,w,\theta,\phi\}\) avoids the horizons as coordinate singularities. In [19], the shadow observed by an observer with 4-velocity \(\sim\frac{\partial}{\partial v}\) was derived using an aberration formula being applied to the shadow angle formula.
In general, for a pair of reference frames \((S,S^{\prime})\), aberration formulas relating the coordinates of their celestial spheres can be derived from the expression (32) after we express the 4-velocity \(e^{\prime}_{0}\) of the \(S^{\prime}\) reference frame as a linear combination of the original tetrad basis components, writing
\[e^{\prime}_{0}=\frac{e_{0}+V^{k}e_{k}}{\sqrt{1-V^{2}}}, \tag{60}\]
where \(\vec{V}\) is the relative 3-velocity of the \(S^{\prime}\) observer. Consider an observer whose \(e^{\prime}_{0}\) is a linear combination of \(e_{0}\) and another basis vector \(e_{3}\). Its tetrad basis components read
\[e^{\prime}_{0}=\frac{1}{\sqrt{1-V^{2}}}(e_{0}+Ve_{3}),\,\,\,e^{\prime}_{3}=\frac {1}{\sqrt{1-V^{2}}}(e_{3}+Ve_{0}),\,\,\,e^{\prime}_{1,2}=e_{1,2}. \tag{61}\]
Taking the inner product between \(e^{\prime}_{0},e^{\prime}_{3}\) and the tangent vector expression in (32) in both unprimed and primed coordinates, we obtain the aberration formulas
\[\cos\theta^{\prime}=\frac{V+\cos\theta}{1+V\cos\theta},\qquad\phi^{\prime}=\phi,\qquad V=-\frac{e^{\prime}_{0}\cdot e_{3}}{e^{\prime}_{0}\cdot e_{0}}. \tag{62}\]
In [19], the observer with \(e^{\prime}_{0}\sim\frac{\partial}{\partial v}\) was also considered. After a coordinate transformation from \(\{\tilde{T},r,\theta,\tilde{\phi}\}\) used for the tetrad basis in (59), one can show that
\[e^{\prime}_{0}\sim\frac{\partial}{\partial v}=\frac{1}{\sqrt{1-\frac{2M_{s}}{ r}}}e^{-\frac{T}{r_{0}}}\left(\partial_{\tilde{T}}-\frac{r}{r_{0}}\partial_{r} \right)=\frac{1}{\sqrt{1-V^{2}}}(e_{0}+Ve_{3}),\,\,V=\frac{r^{2}}{r^{2}+rr_{0} \left(1-\frac{2\mathcal{M}(r)}{r}\right)} \tag{63}\]
where \(e_{0},e_{1},e_{2},e_{3}\) are as defined in (59). In [19], the same expression for the relative velocity \(V\) was obtained with an aberration relation \(\tan^{2}\frac{\theta^{\prime}}{2}=\frac{1-V}{1+V}\tan^{2}\frac{\theta}{2}\) that we verified to be identical to (62).
Let us now consider an appropriate \(S^{\prime}\) observer for our Kerr-Vaidya-like geometry. We note that for the \(S\) observer, its 4-velocity \(e_{0}\) is a linear combination of \(\partial_{\tilde{T}}\) and \(\partial_{\tilde{\phi}}\), or in the Boyer-Lindquist-like chart, a linear combination of \(\partial_{t}\) and \(\partial_{\phi}\). The angular component is such that \(e_{0}\pm e_{3}\) are tangential to the principal null congruences of the metric. Relating \(v\) and \(t\), we choose the following \(S^{\prime}\) observer with
\[e^{\prime}_{0}\sim\frac{\partial}{\partial v}+\frac{r_{0}a}{v(r^ {2}+a^{2})}\frac{\partial}{\partial\tilde{\phi}} = \frac{r_{0}}{v}\left(1+\frac{r(r^{2}+a^{2})}{r_{0}(r^{2}-2 \mathcal{M}r+a^{2})}\right)\partial_{t}-\frac{r}{v}\partial_{r}-\frac{r_{0}a}{ v(r^{2}-2\mathcal{M}r+a^{2})}\partial_{\phi} \tag{64}\] \[+\frac{r_{0}a}{v(r^{2}+a^{2})}\frac{\partial}{\partial\tilde{\phi }}.\]
This choice of \(e^{\prime}_{0}\) is also uniquely the one that allows us to write
\[e^{\prime}_{0}\sim e_{0}+we_{3}, \tag{65}\]
for some relative 3-velocity \(w\). Thus the aberration formulas in (62) apply similarly. Recall that in (31), the relevant basis tetrad components read
\[e_{0}=e^{-\mu\tilde{T}/M_{s}}\frac{(r^{2}+a^{2})\partial_{t}+a\partial_{\phi} }{\sqrt{\Sigma\Delta}},\,\,e_{3}=-e^{-\mu\tilde{T}/M_{s}}\sqrt{\frac{\Delta}{ \Sigma}}\partial_{r}, \tag{66}\]
which allows us to read off the 3-velocity as
\[w=\frac{r^{2}+a^{2}}{r^{2}+a^{2}+\frac{r_{0}}{r}(r^{2}-2\mathcal{M}r+a^{2})}. \tag{67}\]
In the limit of vanishing \(a\), we recover the 3-velocity \(V\) for the observer in Vaidya spacetime with \(e_{0}\sim\frac{\partial}{\partial\tilde{T}}\) as derived in [19]. In the limit \(\mu\to 0\), up to leading order, we have
\[w\approx\mu\left(\frac{r^{2}+a^{2}}{\frac{M_{s}}{r}(r^{2}-2M_{s}r+a^{2})} \right)+\mathcal{O}(\mu^{2}). \tag{68}\]
Thus, this reference frame may be relevant for theoretical situations where the observer's velocity is proportional to the strength of the accretion rate, although arguably not so for realistic EHT observations where the accretion is hardly expected to backreact on the metric significantly to affect the 3-velocity of the shadow observer in such a manner. |
2308.07965 | A Kitaev-type spin liquid on a quasicrystal | We develop an exactly solvable model with Kitaev-type interactions and study
its phase diagram on the dual lattice of the quasicrystalline Ammann-Beenker
lattice. Our construction is based on the $\Gamma$-matrix generalization of the
Kitaev model and utilizes the cut-and-project correspondence between the
four-dimensional simple cubic lattice and the Ammann-Beenker lattice to
designate four types of bonds. We obtain a rich phase diagram with gapped
(chiral and abelian) and gapless spin liquid phases via Monte Carlo simulations
and variational analysis. We show that the ground state can be further tuned by
the inclusion of an onsite term that selects 21 different vison configurations
while maintaining the integrability of the model. Our results highlight the
rich physics at the intersection of quasicrystals and quantum magnetism. | M. A. Keskiner, O. Erten, M. Ö. Oktel | 2023-08-15T18:01:28Z | http://arxiv.org/abs/2308.07965v1 | # A Kitaev-type spin liquid on a quasicrystal
###### Abstract
We develop an exactly solvable model with Kitaev-type interactions and study its phase diagram on the dual lattice of the quasicrystalline Ammann-Beenker lattice. Our construction is based on the \(\Gamma\)-matrix generalization of the Kitaev model and utilizes the cut-and-project correspondence between the four-dimensional simple cubic lattice and the Ammann-Beenker lattice to designate four types of bonds. We obtain a rich phase diagram with gapped (chiral and abelian) and gapless spin liquid phases via Monte Carlo simulations and variational analysis. We show that the ground state can be further tuned by the inclusion of an onsite term that selects 21 different vison configurations while maintaining the integrability of the model. Our results highlight the rich physics at the intersection of quasicrystals and quantum magnetism.
## I Introduction
Quantum spin liquids (QSLs) are the disordered phases of magnetic systems and exhibit exceptional properties such as fractionalized excitations and long-range entanglement due to their underlying topological order[1; 2; 3; 4; 5; 6; 7; 8]. The Kitaev model on the honeycomb lattice is a foundational model as it is the first exactly solvable spin model that showcases a spin liquid ground state with both gapless and gapped phases, featuring abelian and non-abelian anyonic excitations[9]. Recently, there has been a tour de force effort in discovering new materials with strong Kitaev interactions such as iridates and \(\alpha-\)RuCl\({}_{3}\)[10; 11; 12]. Kitaev interactions may be strong in other van der Waals (vdW) materials like CrI\({}_{3}\)[13].
Most models of quantum magnetism, particularly models exhibiting a QSL ground state, have been explored in the context of periodic systems. While QSLs lack long-range magnetic order, they are in general defined on models with perfect translational symmetry. For instance, recent works have explored the consequences of translational symmetry breaking for the QSL state, either by considering defects in a periodic lattice[14; 15; 16; 17] or by defining randomly generated amorphous lattices[18; 19]. Solids do not have to be either periodic structures or random glasses. A third possibility is quasiperiodicity.
Quasicrystals are unique classes of materials that exhibit a regular atomic structure with non-repeating patterns, forming a contrast between the disordered arrangement of glasses and the fully periodic arrangement of crystals. They can be characterized by the presence of long-range order in terms of their translational and orientational symmetries, but the patterns do not repeat at regular intervals[20; 21; 22]. Due to this unique arrangement of the sites, they exhibit rare features such as strictly-localized states [23; 24], or electronic states which are neither localized nor extended [25; 26]. Quasicrystals can also exhibit different symmetries, including five- and eight-fold rotations, forbidden in conventional crystal structures. These forbidden symmetries lead to unprecedented topological states that are not allowed in crystalline materials[27; 28; 29].
The understanding of the elementary excitation spectrum of quasicrystals is quite limited compared to periodic solids. Experimentally, most quasicrystals are found to be poor conductors [30]. However, non-Fermi liquid behavior[31] and superconductivity [32] have also been observed. Recent experiments with high-quality samples have demonstrated that ferromagnetic long-range order is possible in a quasicrystal [33]. The experimental realization of quasicrystalline systems goes beyond the synthesis of alloys. Vapor deposition on metallic surfaces[34], as well as atomic force microscopy assembly of molecules [35] have created large two-dimensional quasicrystals. Synthetic quasicrystals for light [36], and ultracold atoms [37] promise to probe bosonic models in quasicrystals.
Figure 1: (a) Ammann-Beenker lattice (ABL) and the coloring scheme consisting of four type of bonds obtained via projecting from the hyperspace. (b) The dual lattice of ABL (dABL) and the corresponding coloring scheme. The pink and blue dots show the position of the lattice sites in the ABL and dABL respectively.
In a more recent development, quasicrystals can also be formed in moire superlattices of vdW materials[38]. Experimental possibilities for testing the excitation spectrum of quasicrystals are rapidly increasing.
Classical models of magnetism, such as the Ising model, have been studied in some detail for quasicrystals [39; 40]. The non-trivial structural properties of quasicrystals force a reconsideration of the basic models, such as dimer coverings on the simplest two-dimensional lattices [41; 42]. However, quantum magnetism research in quasicrystals is at its early stages. Numerical work on small systems [43] and two-dimensional lattices [44; 45; 46] indicate elements of frustration and self-similar magnetic ground states. Recently, Ref. [47] showed that exact dimer wave functions can be constructed in certain quasicrystals for generalized Heisenberg Hamiltonians.
Here, we investigate the interplay between quasiperiodicity and spin liquid order by constructing an exactly solvable model with Kitaev-type interactions on a quasicrystal. The Kitaev model and its generalizations are integrable due to the presence of conserved quantities at each mesh of the lattice. The conservation of such mesh loops requires two structural properties. First, the lattice must have the same coordination number \(z\) on each site. Second, the link lattice itself must be \(z-\)partite, i.e., the links of the lattice can be labeled by one of the \(z\) colors so that all links starting from any site have different colors. The two most common quasicrystal models, the Penrose [48] (PL) and Ammann-Beenker [49; 50] (ABL) lattices, have varying coordination numbers for their vertices. However, both of these models have quadrilateral tiles and their dual lattices have a constant coordination number \(z=4\). The tight-binding models on the dual lattices are also referred to as the center tight-binding models [51]. Although the condition of uniform coordination is easily satisfied by considering the dual lattices, the second condition for the \(z\)-partite coloring of the lattice is non-trivial. For the dual of the Ammann-Beenker lattice (dABL), we find a z-partite coloring rule by lifting the ABL back into the four-dimensional space. We show that the four-dimensional cubic lattice with nearest neighbor links can be colored in a way so that each square has sides of 4 different colors. This rule allows us to define the Kitaev-type Hamiltonian on the dABL. It is worth remarking that the same method cannot be extended to a five-dimensional simple cubic lattice, and we have not been able to define a similar model on the PL.
Our coloring scheme for the ABL obtained by projection and corresponding dABL are shown in Fig. 1(a), and Fig. 1(b). In this model, an exact solution of the spin model can be achieved via a Majorana fermion representation of the \(\Gamma\) matrices. dABL consists of plaquettes with both even- and odd-numbered sides. The odd-numbered plaquettes break time reversal symmetry[9] and allow for chiral spin liquid phases that are classified by a Chern number, \(\nu\). We obtain the ground state phase diagram via Monte Carlo simulations and variational analysis. We find that the ground state follows a flux configuration that is an extension of the Lieb's theorem [52], even though the theorem was not proved for non-bipartite lattices. Depending on the coupling constants \((J_{\mu},\ \mu=1,2,3,4)\), we find that the ground state can be gapped with \(\nu=2\) or \(\nu=0\). These phases are separated by a gapless phase, as shown in Fig. 4. Upon inclusion of an onsite term that commutes with the flux operators, the ground state vison configuration changes as shown in Fig. 6. We find that up to 21 different flux configurations can be stabilized as a function of the onsite field strength.
The paper is organized as follows. Section II briefly overviews the Ammann-Beenker lattice and its periodic approximants. Section III focuses on constructing our model with Kitaev-type interactions, including our coloring scheme. In section IV, we present our results and discuss the phase diagram. We conclude with a discussion and a summary of our results.
## II Ammann-Beenker lattice
We use one of the most well-known quasicrystals, the Ammann-Beenker Lattice (ABL), to construct our model. All the points forming the ABL can be written as
\[\vec{R}_{ABL}=k_{0}\hat{e}_{0}+k_{1}\hat{e}_{1}+k_{2}\hat{e}_{2}+k_{3}\hat{e} _{3}, \tag{1}\]
where \(k_{j}\) are integers and the star-vectors are
\[\hat{e}_{0}=\hat{x},\,\hat{e}_{1}=\frac{1}{\sqrt{2}}\left(\hat{x}+\hat{y} \right),\,\hat{e}_{2}=\hat{y},\,\hat{e}_{3}=\frac{1}{\sqrt{2}}\left(-\hat{x}+ \hat{y}\right). \tag{2}\]
All the bonds in the ABL are parallel to one of the above star vectors, making the meshes of ABL either squares or \(\pi/4\) rhombi.
If the integers \(k_{j}\) in Eq. 1 are allowed to vary independently, the set of lattice points will uniformly fill the plane and create points that are infinitesimally close to each other. The definition of the quasicrystal lattice can be seen as a way of constraining the set \(k_{0},k_{1},k_{2},k_{3}\). Although alternative methods exist for defining this constraint, we use the cut-and-project method which relates the ABL to the four-dimensional cubic lattice. Each quadruple \(k_{0},k_{1},k_{2},k_{3}\) defines a unit tesseract. The ABL is formed by using the quadruples \(k_{j}\) whose unit tesseracts intersect a specific two-plane. The two-plane is chosen so that the projection of unit vectors in four dimensions onto this plane defines the star vectors given above [50].
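As a rough numerical illustration of Eqs. (1)-(2), the following Python sketch (assuming only numpy) maps an integer quadruple \(k_{0},\dots,k_{3}\) to its physical-plane position via the star vectors; the cut-and-project acceptance test that decides which quadruples actually belong to the tiling is deliberately omitted here.

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
# Star vectors of Eq. (2); every bond of the ABL is parallel to one of them.
E_PAR = np.array([[1.0, 0.0],   # e0 = x
                  [s,   s  ],   # e1 = (x + y)/sqrt(2)
                  [0.0, 1.0],   # e2 = y
                  [-s,  s  ]])  # e3 = (-x + y)/sqrt(2)

def abl_position(k):
    """Physical-plane position of Eq. (1) for an integer quadruple k = (k0, k1, k2, k3).

    Which quadruples actually belong to the tiling is decided by the
    cut-and-project acceptance condition (intersection of the unit
    tesseract with the chosen two-plane); that selection step is omitted here.
    """
    k = np.asarray(k, dtype=float)
    return k @ E_PAR

# Example: a step along e0, and a combination of e0 and e1.
print(abl_position([1, 0, 0, 0]))   # -> [1. 0.]
print(abl_position([1, 1, 0, 0]))   # -> [1.7071..., 0.7071...]
```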
The first condition necessary for an exactly solvable Kitaev model is to obtain a lattice with a uniform coordination number. ABL has vertices with coordination numbers varying from 3 to 8, hence does not satisfy this condition. However, as each mesh of ABL is a quadrilateral, its dual lattice (dABL) is uniformly coordinated. dABL is obtained by placing vertices to the center of each mesh and forming a link between two sites if their
meshes share an edge. So each dABL site is expressed as \(\vec{R}_{dABL}=\vec{R}_{ABL}\pm\frac{\hat{e}_{m}}{2}\pm\frac{\hat{e}_{n}}{2}\), where the edges of the mesh including this vertex are parallel to \(\hat{e}_{m}\) and \(\hat{e}_{n}\). Tight-binding models have been studied on dABL and other dual lattices, where such models are referred to as center models [51].
The second condition we need to define a Kitaev-type model is to make sure that the uniformly coordinated dABL is 4-partite. Each of the links in the dABL has to be assigned an index from 1 to 4, with the condition that all 4 links meeting at every lattice site have a different index. The same condition can be expressed in terms of the direct ABL as coloring all the links so that each mesh has 4 edges of different colors. By considering the four-dimensional simple cubic lattice, we establish a coloring rule that satisfies this condition. As the cut-and-project construction shows, every mesh in the ABL corresponds to a mesh in a two-plane of the four-dimensional cubic lattice. The details of this coloring scheme for the four-dimensional lattice which satisfies the coloring condition are presented in Appendix A. The resulting coloring for the ABL and dABL is shown in Fig. 1. Our coloring rule cannot be trivially generalized to higher dimensions, hence to other quasicrystals such as the PL.
While our model is defined for the infinite quasicrystal, numerical calculations require a finite region. We use two kinds of regions for our calculations. First, we use a finite region with open boundary conditions. We choose different local configurations for the finite region to make sure that our results are not specific to a particular patch of the quasicrystal. We use a periodic approximant for the ABL [53] as the second method. Periodic approximants are obtained by choosing a cut plane with a rational slope in the cut-and-project construction. The approximants are indexed by an integer \(s\) which increases the number of sites within the unit cell and creates the infinite ABL in the limit \(s\to\infty\). For \(s=1,2,3\) the periodic unit cell has \(N=7,41,239\) sites respectively. The unit cell for the ABL approximants is a square, and we double the unit cell in both directions for compatibility with our coloring rules. In Fig. 2 we display the enlarged unit cell for the \(s=2\) approximant with the dABL coloring.
## III Microscopic model
An essential requirement for solving the Kitaev model exactly is the anticommutation relations of the Pauli matrices, \(\{\sigma_{i},\sigma_{j}\}=2\delta_{ij}\). Since there are only three Pauli matrices, this approach is applicable only to lattices with a coordination number of \(z=3\), such as the honeycomb, hyperhoneycomb, and hyperoctagon lattices. However, an extension of Kitaev's method is possible using \(\Gamma\) matrices that follow the Clifford algebra \(\{\Gamma_{i},\Gamma_{j}\}=2\delta_{ij}\)[54; 55]. For instance, in a four-dimensional representation of the Clifford algebra, there exist five \(\Gamma^{\mu}\) operators, along with ten \(\Gamma^{\mu\nu}=\frac{i}{2}[\Gamma^{\mu},\Gamma^{\nu}]\) operators and an identity matrix, which form the basis of the local Hilbert space.
Figure 3: Ground state flux configuration for \(J_{5}=0\) for a s=3 approximant. The 0-flux, \(\pi\)-flux, \(\frac{\pi}{2}\)-flux, and \(-\frac{\pi}{2}\)-flux are represented by yellow, blue, red, and green colors, respectively. The ground state breaks time-reversal symmetry due to odd-numbered plaquettes.
Figure 2: Four unit cells of the approximant, represented by s=2 for dABL. The black dotted lines show the original ABL and the purple dashed lines show a single unit cell of the s=2 approximant.
Consequently, Kitaev's construction can be expanded to lattices with a coordination number of up to \(z=5\)[54, 55]. dABL has coordination number \(z=4\), represented by four different colors for each type of bond. Therefore, we use this representation and consider the Hamiltonian,
\[H_{K}=-\sum_{\langle ij\rangle_{\mu}}J_{\mu}(\Gamma_{i}^{\mu}\Gamma_{j}^{\mu}+ \Gamma_{i}^{\mu 5}\Gamma_{j}^{\mu 5})+J_{5}\sum_{j}\Gamma_{j}^{5} \tag{3}\]
where \(\mu\) is the type of the bond, \(\mu=\{1,2,3,4\}\). Since the \(\Gamma^{5}\) operator is not used as a bond operator, it can be included as an onsite term. Similar models have been considered for other \(z=4\) lattices such as the square lattice[55, 56, 57]. Note that four-dimensional \(\Gamma\) matrices can be represented by two sets of Pauli matrices [58, 59, 60] with Kugel-Khomskii interactions[61] or \(J=3/2\) operators[55]. For the latter, \(\Gamma^{\mu}\) operators correspond to quadrupolar operators. For each plaquette, we define \(W_{p}=\prod_{(j,k)\in\mathcal{P}}\Gamma_{j}^{\mu}\Gamma_{k}^{\mu}\) where the product is taken in the counter-clockwise direction. These plaquette operators commute with the Hamiltonian, \([H,W_{p}]=0\), and with each other, \([W_{p},W_{p^{\prime}}]=0\). Therefore, this leads to infinitely many conserved quantities for the model. An exact solution can be obtained by a Majorana representation of the \(\Gamma\) matrices [54],
\[\Gamma_{j}^{\mu}=ib_{j}^{\mu}c_{j},\ \ \ \ \Gamma_{j}^{\mu\nu}=ib_{j}^{\mu}b_{j}^ {\nu} \tag{4}\]
where we introduce a total of 6 Majorana fermions per site. Relabeling \(b_{j}^{5}\) as \(b_{j}^{5}\to d_{j}\), the Hamiltonian can be reexpressed as
\[H=\sum_{\langle ij\rangle_{\mu}}J_{\mu}iu_{ij}^{\mu}(c_{i}c_{j}+d_{i}d_{j})+J_ {5}\sum_{i}id_{i}c_{i} \tag{5}\]
where \(u_{ij}^{\mu}=ib_{i}^{\mu}b_{j}^{\mu}\). The \(u_{ij}^{\mu}\) also commute with the Hamiltonian, \([H,u_{ij}^{\mu}]=0\), and are thus constants of motion. This representation is redundant, and the physical states need to be eigenstates of \(D_{i}=ib_{i}^{1}b_{i}^{2}b_{i}^{3}b_{i}^{4}c_{i}d_{i}\) with eigenvalues \(+1\). These constraints can be implemented by a projection operator \(P=\prod_{i}(1+D_{i})/2\). A \(\mathbb{Z}_{2}\) gauge transformation at site \(i\) involves flipping the signs of the Majorana fermions and bond operators, \(\{c_{i},d_{i}\}\rightarrow\{-c_{i},-d_{i}\}\); \(u_{ij}^{\mu}\rightarrow-u_{ij}^{\mu}\). The plaquette operators can be expressed in terms of the bond operators as \(W_{p}=(-i)^{n}\prod_{(j,k)\in p}u_{jk}^{\mu}\) where \(n\) is the number of links on the boundary of the plaquette \(p\) and the product is taken in the counter-clockwise direction. The eigenvalues of \(W_{p}\) for even (odd) plaquettes can take values \(\pm 1\) (\(\pm i\)). Therefore, the solution of the model involves two flavors of free Majorana fermions hopping in the background of static \(\mathbb{Z}_{2}\) fluxes. Since there is no Lieb's theorem for quasicrystals, we obtain the ground state phase diagram via Monte Carlo simulations for approximants of varying sizes, s=1, 2, and 3. We use the Metropolis algorithm for the flux degrees of freedom, and for a given vison configuration, we perform exact diagonalization to obtain the corresponding energy. We perform 5000 Monte Carlo steps for a given temperature and reduce the temperature down to \(10^{-3}J\). Motivated by the Monte Carlo results, we also construct various variational states and compare their energies. For instance, in the case of finite \(J_{5}\) calculations, we construct 32 vison configurations, of which 21 are stabilized as a function of \(J_{5}/J\).
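The "exact diagonalization for a given vison configuration" step mentioned above can be illustrated schematically. The Python sketch below computes the free-Majorana ground-state energy of a single flavour for a fixed gauge configuration \(u_{ij}=\pm 1\); the bond list is a placeholder toy geometry rather than the actual dABL data, and at \(J_{5}=0\) the second flavour simply doubles this energy.

```python
import numpy as np

def majorana_ground_energy(n_sites, bonds, u, couplings):
    """Ground-state energy of H = sum_bonds i J_mu u_ij c_i c_j for one
    Majorana flavour, given a fixed Z2 gauge configuration u_ij = +/-1.

    bonds     : list of (i, j, mu) with i, j site indices and mu the bond type (0..3)
    u         : sequence of +/-1, one entry per bond (same order as `bonds`)
    couplings : J_mu for mu = 0..3
    """
    A = np.zeros((n_sites, n_sites))
    for b, (i, j, mu) in enumerate(bonds):
        A[i, j] += couplings[mu] * u[b]
        A[j, i] -= couplings[mu] * u[b]
    # iA is Hermitian and its eigenvalues come in +/- pairs; in this
    # normalisation the ground-state energy is minus the sum of the
    # positive eigenvalues (zero modes contribute nothing).
    eig = np.linalg.eigvalsh(1j * A)
    return -eig[eig > 0].sum()

# Toy usage on a single square plaquette carrying all four bond types
# (placeholder geometry, not the dABL bond list used in the paper).
bonds = [(0, 1, 0), (1, 2, 1), (2, 3, 2), (3, 0, 3)]
J = [1.0, 1.0, 1.0, 1.0]
for u in ([+1, +1, +1, +1], [-1, +1, +1, +1]):   # two gauge sectors
    print(u, majorana_ground_energy(4, bonds, u, J))
```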
## IV Results and discussion
We first discuss the phase diagram in the absence of the onsite anisotropy term (\(J_{5}=0\)). Even though Lieb's theorem[52] does not apply to quasicrystals, we observe that the ground state follows a simple rule that is an extension of Lieb's theorem, and the fluxes through the plaquettes, \(W_{p}\), are given by
\[\phi_{p}=-(\pm i)^{n} \tag{6}\]
which recovers the well-known result that honeycomb (square, octagon) plaquettes exhibit 0 (\(\pi\)) fluxes whereas the triangle, heptagon (pentagon) plaquettes have \(\pm\pi/2\) (\(\mp\pi/2\)) fluxes in the ground state as shown in Fig. 3. We will refer to this vison configuration as 'L' for the remainder of the article.
Figure 4: Ground state phase diagram for \(J_{1}+J_{2}+J_{3}+J_{4}=1\) and \(J_{5}=0\). (a), (b), (c), (d) panels have \(J_{4}\)=0, 0.25, 0.5, and 0.75 respectively. The gap value is normalized by the bandwidth which varies depending on the parameters. The phase diagram contains two gapped phases with \(\nu=2\) and \(\nu=0\) which are separated by a gapless region.
Albeit there is no proof for the extension of Lieb's theorem to graphs that have plaquettes with a varying number of edge sites, \(n\), similar behaviors have also been observed in amorphous and polycrystalline Kitaev models[18; 19] and in Kitaev models in closed geometries (i.e., tetrahedra)[62]. Eq. 6 implies that the ground state flux configuration spontaneously breaks time-reversal symmetry due to the odd-numbered plaquettes (n=3, 5, 7). This phenomenon was originally pointed out by Kitaev[9] and was first discovered in a model by Yao and Kivelson[55]. A time-reversal operation on a ground state flips the sign of the fluxes on the odd-numbered plaquettes and generates a new ground state.
We find that the L vison configuration remains the ground state even with anisotropic couplings, \(J_{\mu}\), as long as \(J_{5}=0\). However, similar to Kitaev's original model, the Majorana band structure depends on the values of the \(J_{\mu}\). To investigate this, we study the phase diagram for \(J_{1}+J_{2}+J_{3}+J_{4}=1\), which forms a tetrahedron phase space. For the sake of a simpler and clearer presentation, we take four cuts on the tetrahedron as shown in Fig. 4. We find two gapped phases: a chiral spin liquid with \(\nu=2\) and an abelian spin liquid with \(\nu=0\). The chiral edge modes for the \(\nu=2\) phase originate from the two flavors of Majorana fermions both contributing a single chiral edge mode. For \(J_{5}=0\), the two Majorana fermions are completely decoupled and have an identical spectrum. We identify the Chern number by counting the chiral edge modes on a slab geometry. An example of the chiral edge modes is shown in Fig. 5. The gapped phases are connected by a gap-closing transition and in certain cases a finite region of the gapless phase (see Fig. 4(a)). We note that our phase diagram shows similarities to the amorphous Kitaev model[18; 19] which has \(\nu=0\) and \(\nu=1\) phases.
Next, we discuss the effects of the \(J_{5}\) term which is an onsite term that couples to \(\Gamma^{5}\). Since \(\Gamma^{\mu}\) operators can be expressed as \(J=3/2\) quadrupole operators, the \(J_{5}\) term does not break time-reversal symmetry but is akin to an anisotropy term. Previous studies[63; 57; 64] show that similar onsite terms can change the ground state vison configuration. In terms of the Majorana fermion description, the \(J_{5}\) term couples the two flavors, \(c\) and \(d\) (see Eq. 5). We consider uniform bond coupling constants, \(J_{1}=J_{2}=J_{3}=J_{4}=J\), and study the \(J_{5}/J\) phase diagram. Our Monte Carlo simulations suggest that new vison configurations can be stabilized. However, these configurations all obey a simple rule that plaquettes with the same \(n\) have the same flux configuration. Since dABL has \(n=\{3,4,5,6,7,8\}\) plaquettes, there are \(2^{6}\) vison configuration possibilities. As each configuration has a time-reversal partner, there are a total of 32 distinct possible vison configurations that follow this rule. Since the energy differences between the vison configurations can be quite small, obtaining the true ground state from Monte Carlo can be challenging due to phase separation.
Figure 5: (a) Edge states without the onsite term for a point in the phase space characterized by the Chern number 2. (b) Edge states for the same point with the onsite term, which splits the edge states but preserves \(\nu=2\).
Therefore, we construct 32 variational configurations and compare their energies. In Fig. 6, we show that 21 of these states can be realized as a function of \(J_{5}/J_{1}\). The vison configurations of these phases are shown in Table 1. Note that their time-reversal partners are also ground states. For \(J_{5}/J_{1}>3.5\) all vison configurations become degenerate. This phase is unstable to confinement and has been observed in similar models with onsite anisotropy[57]. For \(J_{1}=J_{2}=J_{3}=J_{4}\) and \(J_{5}=0\), the ground state has \(\nu=\pm 2\). We find that the \(J_{5}\) term hybridizes the edge states but does not change \(\nu\). Remarkably, all 21 states in Table 1 have the same Chern number \(\nu=\pm 2\) even though they have different vison configurations.
## V Conclusion
We developed an exactly solvable spin liquid model on a quasicrystal lattice. The structural properties required for Kitaev-type integrability are a uniform coordination number \(z\) throughout the lattice and the bond lattice being \(z\)-partite. The dual lattice (center model) of the commonly used ABL is uniformly coordinated with 4 bonds at each site. We showed that a coloring rule applied to the four-dimensional simple cubic lattice generates the dABL with the required 4-partite property. With these structural properties, we used the \(\Gamma\) matrix generalization of the Kitaev interactions to define our Hamiltonian. A Majorana fermion representation of the interactions reduces the Hamiltonian to a tight-binding model on the quasicrystal that is coupled to a static \(Z_{2}\) gauge field. We used both finite-size quasicrystals with open boundary conditions and periodic approximants for our numerical calculations.
We obtained the ground state phase diagram via Monte Carlo simulations and variational analysis and showed that it follows a flux configuration that is an extension of Lieb's theorem, _albeit_ the theorem does not directly apply here. As a function of coupling strengths, we find that the ground state can be gapped with \(\nu=\pm 2\) or \(\nu=0\). These phases are separated by a gapless phase. Subsequently, including an onsite term that preserves the integrability of the model, we found that 21 different vison configurations can be stabilized for isotropic exchange constants. Notably, all of these phases also share the same Chern number, \(\nu=2\). Interesting future directions include extending our formalism to the emerging field of moire quasicrystals.
## VI Acknowledgements
We thank Yuan-Ming Lu and Johannes Knolle for fruitful discussions. OE acknowledges support from
Figure 6: Phase diagram as a function of the onsite anisotropy, \(J_{5}/J\) for \(J_{1}=J_{2}=J_{3}=J_{4}=J\). The y-axis represents the gap, \(\Delta\). We obtain 21 different vison configurations shown in Table 1.
Figure 7: A tesseract colored with four different colors so that no square mesh has two sides of the same color. |
2310.13582 | Comprehensive Analysis of the Open Cluster Collinder 74 | In this study, we have used the {\it Gaia} Third Data Release (Gaia DR3) to
investigate an intermediate-age open cluster Collinder 74. Taking into account
the stars with membership probabilities over 0.5 and inside the limiting radius
of the cluster, we identified 102 most likely cluster members. The mean
proper-motion components of Collinder 74 are estimated as ($\mu_{\alpha}\cos
\delta, \mu_{\delta})=(0.960 \pm 0.005, -1.526 \pm 0.004$) mas yr$^{-1}$. We
detected previously confirmed four blue straggler stars which show flat radial
distribution. Colour excess, distance, and age of the cluster were estimated
simultaneously by fitting {\sc PARSEC} isochrones to the observational data on
{\it Gaia} based colour magnitude diagram. These values were derived as
$E(G_{\rm BP}-G_{\rm RP})=0.425\pm 0.046$ mag, $d=2831 \pm118$ pc and $t=1800
\pm 200$ Myr, respectively. The mass function slope was estimated as $\Gamma =
1.34 \pm 0.21$ within the mass range $0.65\leq M/ M_{\odot}\leq 1.58$ which is
well matched with that of Salpeter. Stellar mass distribution indicated that
the massive and most likely stars are concentrated around the cluster center.
The total mass of the cluster was found to be 365 $M/M_{\odot}$ for the stars
with probabilities $P>0$. Galactic orbit integration shows that the Collinder
74 follows a boxy pattern outside the solar circle and is a member of the
thin-disc component of the Galaxy. | T. Yontan, R. Canbay | 2023-10-20T15:23:50Z | http://arxiv.org/abs/2310.13582v1 | # Comprehensive Analysis of the Open Cluster Collinder 74
###### Abstract
In this study, we have used the _Gaia_ Third Data Release (Gaia DR3) to investigate an intermediate-age open cluster Collinder 74. Taking into account the stars with membership probabilities over 0.5 and inside the limiting radius of the cluster, we identified 102 most likely cluster members. The mean proper-motion components of Collinder 74 are estimated as \((\mu_{\alpha}\cos\delta,\mu_{\delta})=(0.960\pm 0.005,-1.526\pm 0.004)\) mas yr\({}^{-1}\). We detected previously confirmed four blue straggler stars which show flat radial distribution. Colour excess, distance, and age of the cluster were estimated simultaneously by fitting PARSEC isochrones to the observational data on _Gaia_ based colour magnitude diagram. These values were derived as \(E(G_{\rm BP}-G_{\rm RP})=0.425\pm 0.046\) mag, \(d=2831\pm 118\) pc and \(t=1800\pm 200\) Myr, respectively. The mass function slope was estimated as \(\Gamma=1.34\pm 0.21\) within the mass range \(0.65\leq M/M_{\odot}\leq 1.58\) which is well matched with that of Salpeter. Stellar mass distribution indicated that the massive and most likely stars are concentrated around the cluster center. The total mass of the cluster was found to be \(365~{}M/M_{\odot}\) for the stars with probabilities \(P>0\). Galactic orbit integration shows that the Collinder 74 follows a boxy pattern outside the solar circle and is a member of the thin-disc component of the Galaxy.
Galaxy: open clusters and associations; individual: Collinder 74, stars: Hertzsprung-Russell (HR) diagram, Galaxy: stellar kinematics
## 1 Introduction
Open star clusters (OCs), also known as Galactic clusters, are loosely bound groups of stars that emerged from the same molecular cloud, sharing a common origin and age. As relatively young and dynamically active systems, OCs typically contain hundreds to thousands of stars that share similar chemical composition, age, and distance. The formation origin of OCs plays a crucial role in our understanding of stellar formation and evolution and makes them ideal laboratories for studying stellar properties, such as temperature, luminosity, and mass (Lada & Lada, 2003; Kim et al., 2017). Also, member stars of OCs have similar movement directions in the sky, a property that makes proper-motion components useful tools for separating physical members from field-star contamination (Sariya et al., 2021). This method provides a reliable sample of member stars for the estimation of astrophysical parameters for OCs (Bisht et al., 2020; Sariya et al., 2021).
In this study, we estimated the structural, kinematic, and astrophysical properties of the open cluster Collinder 74 (Coll 74). The cluster is positioned at \(\alpha=05^{\rm h}48^{\rm m}40^{\rm s}.8,\delta=+07^{\circ}22^{\prime}26^{\prime \prime}.4\) (J2000) corresponding to Galactic coordinates \(l=199^{\circ}.019,b=-10^{\circ}.379\) according to Cantat-Gaudin et al. (2020). Coll 74 is a centrally concentrated old open cluster located in the third Galactic quadrant toward the Galactic anti-center region. Ann et al. (1999) provided _UBVI_ CCD photometry and suggested that the age of the cluster is \(1300\pm 200\) Myr and it is located at \(2511\pm 245\) pc from the Sun. Tadross (2001) performed _UBV_ CCD observations and derived the colour excess, distance and age of the cluster as \(E(B-V)=0.38\) mag, \(d=2254\) pc and \(t=1600\) Myr,
respectively. Dias et al. (2006) presented the kinematics of the cluster using UCAC2 Catalogue positions and proper motions. They derived proper-motion components as \((\mu_{\alpha}\cos\delta,\mu_{\delta})=(-0.49,-3.49)\) mas yr\({}^{-1}\). Carraro and Costa (2007) investigated CCD photometry in the \(V\) and \(I\)-bands and obtained colour excess, distance and age of the cluster as \(E(B-V)=0.28\pm 0.04\) mag, \(d=1500\) pc and \(t=3000\) Myr, respectively. From the analyses of \(BVI\) photometry Hasegawa et al. (2008) concluded that the Coll 74 is 3680 pc distant from the Sun and 1400 Myr old cluster. Dias et al. (2014) derived the mean proper-motion components by taking into account the U.S. Naval Observatory CCD Astrograph Catalogue (UCAC4; Zacharias et al. 2013) as \((\mu_{\alpha}\cos\delta,\mu_{\delta})=(1.92\pm 0.55,-3.00\pm 0.10)\) mas yr\({}^{-1}\). Loktin and Popova (2017) redetermined main parameters using published photometric measurements provided by 2MASS catalogue for 959 clusters including Coll 74. They estimated colour excess, distance and age as \(E(B-V)=0.418\) mag, \(d=3293\) pc and \(t=1400\) Myr, respectively. Also, from RAVE catalogue Loktin and Popova (2017) estimated proper-motion components in equatorial coordinate system as \((\mu_{\alpha}\cos\delta,\mu_{\delta})=(0.294\pm 0.181,-1.091\pm 0.172)\) mas yr\({}^{-1}\), respectively.
With the first data release of _Gaia_ (Gaia Collaboration et al., 2016), many researchers investigated the astrometric and kinematic properties of Coll 74. According to literature studies performed with _Gaia_ data, values of the mean radial velocity of the cluster differ from \(15.94\pm 17.57\) km s\({}^{-1}\) (Zhong et al., 2020) to \(20.20\pm 0.80\) km s\({}^{-1}\) (Dias et al., 2021), and the distance to the Sun ranges between 1500 pc (Carraro and Costa, 2007) and 3680 pc (Hasegawa et al., 2008). Also, the age of the cluster varies from 1230 Myr (Kharchenko et al., 2013) to 3000 Myr (Carraro and Costa, 2007). The literature results are listed in Table 1 for detailed comparison. The main purpose of this study is to determine the structural, astrophysical, and kinematic properties of the Coll 74 open cluster.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \(E(B-V)\) & \(\mu_{V}\) & \(d\) & [Fe/H] & \(t\) & \((\mu_{\alpha}\cos\delta)\) & \((\mu_{\delta})\) & \(V_{\gamma}\) & Ref \\ (mag) & (mag) & (pc) & (dex) & (Myr) & (mas yr\({}^{-1}\)) & (mas yr\({}^{-1}\)) & (km s\({}^{-1}\)) & \\ \hline
0.38\(\pm\)0.04 & 13.08\(\pm\)0.25 & 2511\(\pm\)245 & 0.07 & 1300\(\pm\)200 & – & – & – & (01) \\
0.38 & 13.00 & 2254 & 0.054 & 1600 & – & – & – & (02) \\
0.38 & 13.21 & 2549 & – & 1300 & – & – & – & (03) \\ – & – & – & – & – & -0.49 & -3.49 & – & (04) \\
0.28\(\pm\)0.04 & 11.75\(\pm\)0.10 & 1500 & – & 3000 & – & – & – & (05) \\
0.36 & 13.95 & 3680 & -0.38 & 1400 & – & – & – & (06) \\
0.604 & 13.98 & 2637 & – & 1230 & 0.91 & -3.86 & – & (07) \\ – & – & – & – & – & 1.92\(\pm\)0.55 & -3.00\(\pm\)0.10 & – & (08) \\ – & – & 2510 & 0.050 & 1288 & – & – & – & (09) \\
0.418 & 12.588 & 3293 & – & 1400 & 0.294\(\pm\)0.181 & -1.091\(\pm\)0.172 & – & (10) \\ – & – & – & – & – & 0.77\(\pm\)2.15 & 0.47\(\pm\)3.54 & – & (11) \\ – & – & 2453\({}^{+379}_{-884}\) & – & – & 1.011\(\pm\)0.016 & -1.512\(\pm\)0.016 & – & (12) \\ – & – & 2453\({}^{+379}_{-884}\) & – & – & 1.011\(\pm\)0.016 & -1.512\(\pm\)0.016 & 20.18\(\pm\)0.39 & (13) \\ – & – & 2747\(\pm\)332 & – & 2190\(\pm\)131 & 0.981\(\pm\)0.200 & -1.497\(\pm\)0.199 & – & (14) \\ – & – & 2453\({}^{+379}_{-884}\) & -0.05\(\pm\)0.03 & – & 1.011\(\pm\)0.121 & -1.512\(\pm\)0.122 & 15.94\(\pm\)17.57 & (15) \\
0.274 & 11.99 & 2498\(\pm\)494 & – & 1900 & 1.011\(\pm\)0.121 & -1.512\(\pm\)0.122 & – & (16) \\
0.391\(\pm\)0.076 & – & 2153\(\pm\)144 & -0.083\(\pm\)0.084 & 2760 & 0.995\(\pm\)0.170 & -1.528\(\pm\)0.175 & 20.20\(\pm\)0.80 & (17) \\ – & – & 2356 & – & 2100 & 1.011\(\pm\)0.121 & -1.512\(\pm\)0.122 & 20.18\(\pm\)0.39 & (18) \\
0.511\(\pm\)0.074 & – & 2466\(\pm\)22 & – & 627\(\pm\)348 & 0.964\(\pm\)0.007 & -1.546\(\pm\)0.007 & 20.93\(\pm\)4.10 & (19) \\
0.301\(\pm\)0.033 & 13.052\(\pm\)0.088 & 2831\(\pm\)118 & -0.052\(\pm\)0.034 & 1800\(\pm\)200 & 0.960\(\pm\)0.005 & -1.526\(\pm\)0.004 & 20.55\(\pm\)0.41 & (20) \\ \hline \end{tabular} (01) Ann et al. (1999), (02) Tadross (2001), (03) Lata et al. (2002), (04) Dias et al. (2006), (05) Carraro and Costa (2007), (06) Hasegawa et al. (2008), (07) Kharchenko et al. (2013), (08) Dias et al. (2014), (09) Miraskov et al. (2016), (10) Loktin and Popova (2017), (11) Dias et al. (2018), (12) Cantat-Gaudin et al. (2018), (13) Soubiran et al. (2018), (14) Liu and Pang (2019), (15) Zhong et al. (2020), (16) Cantat-Gaudin et al. (2020), (17) Dias et al. (2021), (18) Tarricq et al. (2021), (19) Hunt and Reffert (2023), (20) This study
\end{table}
Table 1: Basic parameters of the Collinder 74 open cluster collected from the literature.
## 2 Data
The astrometric, photometric, and spectroscopic data for the Coll 74 open cluster were taken from _Gaia_'s third data release (_Gaia_ DR3, Gaia Collaboration et al., 2023). To do this, we used the central equatorial coordinates of Cantat-Gaudin et al. (2020) (\(\alpha,\delta\)) = (05\({}^{\rm h}\)48\({}^{\rm m}\)40\({}^{\rm s}\).8, +07\({}^{\circ}\)22\({}^{\prime}\)26\({}^{\prime\prime}\).4) and gathered the detected stars in the direction of the cluster within a 35\({}^{\prime}\)-radius field. Hence, we reached 73,326 stars within the applied radius. The finding chart of Coll 74 (\(35^{\prime}\times 35^{\prime}\)) is shown in Figure 1. The main cluster catalogue contains each star's position (\(\alpha,\delta\)), photometric magnitude and colour index (\(G\), \(G_{\rm BP}-G_{\rm RP}\)), trigonometric parallax (\(\varpi\)), proper-motion components (\(\mu_{\alpha}\cos\delta\), \(\mu_{\delta}\)), radial velocity (\(V_{\gamma}\)) and their errors within the \(8<G\leq 22\) mag range.
To achieve reliable structural and astrophysical parameters for Coll 74, we determined the faint limiting magnitude of the data. For this, we calculated the number of stars that correspond to the \(G\) magnitude intervals. The histogram of the number of stars versus \(G\) magnitude is shown in Figure 2, where the number of stars rises towards fainter \(G\) magnitudes and declines after a certain limit. This limit is \(G=20.5\) mag for Coll 74, and in the following analyses, we used only the stars brighter than \(G=20.5\) mag. We calculated the mean photometric errors of the stars for \(G\) magnitude intervals. The mean errors for \(G\) and \(G_{\rm BP}-G_{\rm RP}\) colour indices reach up to 0.011 and 0.228 mag at the \(G=20.5\) limiting magnitude, respectively. The photometric errors for \(G\) magnitudes and \(G_{\rm BP}-G_{\rm RP}\) colour indices versus \(G\) magnitude intervals are shown in Figure 3.
## 3 Results
### Spatial structure of Collinder 74
To interpret the stellar distribution within the cluster, we constructed the radial density profile (RDP) considering the adopted central equatorial coordinates presented by Cantat-Gaudin et al. (2020). We divided the 35\({}^{\prime}\) cluster region into many concentric rings surrounding the cluster center and calculated stellar densities (\(\rho(r)\)) from the stars within \(G\leq 20.5\) mag. The stellar density in the \(i^{\rm th}\) ring was calculated as \(\rho_{i}=N_{i}/A_{i}\), where \(N_{i}\) and \(A_{i}\) indicate the number of stars and the area of the related ring, respectively. To visualise the RDP, we
Figure 1: Finding chart of the Coll 74 for 35\({}^{\prime}\times 35^{\prime}\) region. Up and left directions represent North and East, respectively.
Figure 3: Distribution of mean photometric errors obtained for \(G\) apparent magnitude (a) and \(G_{\rm BP}-G_{\rm RP}\) colour index versus \(G\) magnitude intervals.
Figure 2: Distribution of the stars in the direction of Coll 74 for \(G\) magnitude intervals. The photometric completeness limit is indicated by a red dashed line.
plotted stellar density distribution versus distance from cluster center and fitted the empirical King (1962) model identified by the following equation:
\[\rho(r)=f_{\rm bg}+\frac{f_{0}}{1+(r/r_{\rm c})^{2}} \tag{1}\]
Here, \(r\) is the radial distance from the cluster center. The \(f_{\rm bg}\), \(f_{0}\), \(r_{\rm c}\) are the background stellar density, the central stellar density and the core radius, respectively. We used the \(\chi^{2}\) minimisation technique for RDP analyses and estimated \(f_{\rm bg}\), \(f_{0}\) and \(r_{\rm c}\). In Figure 4 we show the best-fit result of the RDP, which is indicated by the black solid line. It can be seen from the figure that the stellar density is higher near the cluster center, flattens toward the outer region of the cluster, and at a point merges with the field star density. This point is defined as the limiting radius (\(r_{\rm lim}\)) and was visually adopted as \(10\arcmin\). The best-fit solution of the RDP analyses yielded the structural parameters \(f_{0}=8.42\pm 0.35\) stars arcmin\({}^{-2}\), \(f_{\rm bg}=5.45\pm 0.16\) stars arcmin\({}^{-2}\) and \(r_{\rm c}=1.38\pm 0.12\) arcmin for Coll 74.
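A minimal sketch of the \(\chi^{2}\) fit of Eq. (1) is given below, assuming Python with scipy; the density arrays are illustrative placeholders, not the measured profile of Coll 74.

```python
import numpy as np
from scipy.optimize import curve_fit

def king_profile(r, f_bg, f_0, r_c):
    """Empirical King (1962) radial density profile of Eq. (1)."""
    return f_bg + f_0 / (1.0 + (r / r_c) ** 2)

# Mean radius of each ring (arcmin), stellar density (stars/arcmin^2) and its
# Poisson uncertainty -- placeholder values for illustration only.
r_bins = np.array([0.5, 1.5, 2.5, 4.0, 6.0, 8.0, 12.0, 20.0, 30.0])
rho = np.array([12.9, 9.8, 7.6, 6.4, 5.9, 5.7, 5.5, 5.5, 5.4])
rho_err = np.full_like(rho, 0.3)

popt, pcov = curve_fit(king_profile, r_bins, rho, sigma=rho_err,
                       p0=[5.0, 5.0, 1.0], absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
print("f_bg, f_0, r_c =", popt, "+/-", perr)
```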
### Membership Analyses of Collinder 74
Working with a sample of stars that are part of the cluster itself is crucial to accurately characterise its properties, such as its age, mass, luminosity function, and dynamics. Field star contamination may contribute to noise and bias during the analyses and can cause inaccurate results. Therefore, field star separation is necessary to understand the astrophysical, astrometric, and kinematic properties of the studied cluster. Owing to their common formation, cluster members are gravitationally bound to the cluster. Therefore, they exhibit similar vectorial movements across the sky relative to the background field stars. Through the analysis of proper-motion components, stars that share the motion of the cluster can be identified as cluster members.
Figure 4: The RDP of King (1962) for Coll 74. Stellar density errors were determined from Poisson statistics \(1/\sqrt{N}\), where \(N\) is the number of stars. The fitted black curve and horizontal grey shaded area show the best-fitted RDP and background stellar density, respectively. Also, red-shaded area indicates the \(1\sigma\) uncertainty of the fit.
Hence, proper-motion components are useful tools to separate cluster members and calculate their membership probabilities (Sariya et al., 2021; Bisht et al., 2020). The precise astrometric data from _Gaia_ DR3 (Gaia Collaboration et al., 2023) play a crucial role in membership analyses.
We used the method of Unsupervised Photometric Membership Assignment in Stellar Clusters (UPMASK, Krone-Martins and Moitinho, 2014) considering _Gaia_ DR3 astrometric data to calculate membership probabilities of stars in the region of Coll 74. UPMASK uses a clustering algorithm to group stars that have similar positions, proper-motion components, trigonometric parallaxes and are close to each other in space. The algorithm then assigns membership probabilities to each star based on its likelihood of belonging to a particular cluster. We utilised the method in five-dimensional astrometric space considering the astrometric measurements (\(\alpha\), \(\delta\), \(\mu_{\alpha}\cos\delta\), \(\mu_{\delta}\), \(\varpi\)) of each star, as well as their uncertainties. To determine stars' membership probabilities (\(P\)) we ran the program with 100 iterations. As a result, taking into account the stars within both the estimated limiting radius (\(r_{\rm lim}\)=10') and the completeness limit \(G=20.5\) mag, for the open cluster Coll 74 we obtained 102 most likely member stars with membership probabilities of \(P\geq 0.5\). These stars are used in the estimation of astrometric and astrophysical parameters of Coll 74.
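The following Python sketch (assuming numpy and scikit-learn) illustrates the idea behind UPMASK rather than the published implementation: the five-dimensional astrometric data are repeatedly resampled within their errors, partitioned with k-means, and each star accumulates a membership "vote" whenever it falls in a compact group. The compactness test used here is a crude stand-in for UPMASK's kernel-density veto against a random field.

```python
import numpy as np
from sklearn.cluster import KMeans

def upmask_like_membership(X, Xerr, n_iter=100, n_clusters=25, rng=None):
    """Schematic membership-frequency estimate for a star sample.

    X    : (n_stars, 5) array of (ra, dec, pmra, pmdec, parallax)
    Xerr : (n_stars, 5) array of the corresponding uncertainties
    """
    rng = np.random.default_rng(rng)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)      # standardise features
    Es = Xerr / X.std(axis=0)
    votes = np.zeros(len(X))
    for _ in range(n_iter):
        Xi = Xs + rng.normal(size=Xs.shape) * Es    # resample within errors
        labels = KMeans(n_clusters=n_clusters, n_init=4).fit_predict(Xi)
        for lab in range(n_clusters):
            members = labels == lab
            # crude proxy for "more concentrated than the field": small spread
            if members.sum() >= 5 and Xi[members].std() < 0.5 * Xi.std():
                votes[members] += 1
    return votes / n_iter   # membership probability P for each star
```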
To visualise the clustering of the most likely member stars, we plotted the vector-point diagram (VPD) using the proper-motion components of the stars, shown in Figure 5. It is evident from the VPD that Coll 74 is embedded in the field stars. Even so, the cluster structure can be distinguished by investigation of the probability values of the stars. We estimated mean proper-motion values from the stars with membership probabilities greater than 0.5 and found the values as \((\mu_{\alpha}\cos\delta,\mu_{\delta})=(0.960\pm 0.005,-1.526\pm 0.004)\) mas yr\({}^{-1}\). The trigonometric parallax histogram of the most likely member stars is shown in Figure 6. By fitting the Gaussian function to the histogram, we estimated the mean trigonometric parallax as \(\varpi=0.363\pm 0.043\) mas for Coll 74 and the corresponding distance (via the linear equation \(d({\rm pc})=1000/\varpi\)) as \(d_{\varpi}=2755\pm 326\) pc. This value of distance is close to the values estimated in the _Gaia_ era, as listed in Table 1.
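A short sketch of the parallax-based distance estimate is given below (Python with scipy; the parallax sample is a synthetic stand-in drawn around the reported values, used only to show the fitting step).

```python
import numpy as np
from scipy.optimize import curve_fit

gauss = lambda x, a, mu, sig: a * np.exp(-0.5 * ((x - mu) / sig) ** 2)

# plx: parallaxes (mas) of the P >= 0.5 members -- placeholder sample here.
plx = np.random.default_rng(1).normal(0.363, 0.043, 102)
counts, edges = np.histogram(plx, bins=15)
centres = 0.5 * (edges[1:] + edges[:-1])
(a, mu, sig), _ = curve_fit(gauss, centres, counts, p0=[counts.max(), 0.36, 0.05])
print(f"mean parallax = {mu:.3f} +/- {abs(sig):.3f} mas, d = {1000 / mu:.0f} pc")
```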
Figure 5: VPD of Coll 74. The colour scale on the right panel indicates the membership probabilities of the stars for the cluster. The enlarged panel inset shows the cluster’s denser region in VPD. The intersection of the dashed blue lines represents the mean proper-motion value for Coll 74.
### The Blue Straggler Stars of Collinder 74
Blue straggler stars (BSSs) in open clusters defy the natural aging process by appearing younger and bluer than their surrounding companions. While most stars in open clusters follow common evolutionary processes, BSSs defy the laws of stellar evolution in the cluster by reversing this trend. Interactions between stars in binary systems or stellar collisions within the dense cluster environment are among the leading formation mechanisms for BSSs (Sandage, 1953; Zinn & Dahn, 1976; Hills & Day, 1976).
We identified four BSSs in Coll 74. These stars are confirmed as cluster members with membership probabilities \(P\geq 0.9\). The positions of BSSs are shown in Figure 7. Rain et al. (2021) defined five BSSs by using _Gaia_ DR2 (Gaia Collaboration et al., 2018) photometric and astrometric data. Because our membership analyses are based on _Gaia_ DR3 data and we took into account only the stars within the limiting radius (\(r_{\rm lim}\leq 10^{\prime}\)), one of the Rain et al. (2021) BSSs was not considered. Jadhav & Subramaniam (2021), using _Gaia_ DR2 data, identified BSSs in 1246 open clusters. They found four BSS members in Coll 74, which are the same stars as presented in this study.
Ferraro et al. (2012), considering the radial distribution of BSSs in clusters, defined three classes for BSSs. Three of the BSSs in Coll 74 are located at radial distances of 0.42, 0.88, and 0.98 arcmin, whereas one star is located at 6.25 arcmin. According to their radial distribution and the criterion of Ferraro et al. (2012), we can conclude that the BSSs of Coll 74 show a flat distribution and the cluster belongs to family I. The formation mechanisms of blue stragglers in family I clusters are thought to be dominated by stellar collisions and mass transfer in close binary systems.
### Astrophysical Parameters of Collinder 74
To derive the age, distance modulus and colour excess of Coll 74, we fitted theoretical PARSEC isochrones of Bressan et al. (2012) to the observed CMD constructed from the most likely cluster members.
Figure 6: Histogram of star count of the most likely members (\(P\geq 0.5\)) in trigonometric parallaxes. The red dashed line indicates the fitted Gaussian function.
The age, distance modulus and colour excess of the cluster were estimated simultaneously, while the metallicity of Coll 74 was taken from Zhong et al. (2020) as [Fe/H]=\(-0.052\pm 0.034\) dex. We fitted PARSEC models by taking into account the morphology of the cluster and reached the best fit. For the selection of the isochrones and estimation of astrophysical parameters, we transformed the assumed metallicity ([Fe/H]=\(-0.052\pm 0.034\) dex) to the mass fraction \(z\). To do this, we applied the equations of Bovy1 that are available for PARSEC models (Bressan et al., 2012). The equations are given as follows:
Footnote 1: [https://github.com/jobwy/isodavis/blob/master/isodavis/Isochrones.py](https://github.com/jobwy/isodavis/blob/master/isodavis/Isochrones.py)
\[z_{\rm x}=10^{{\rm[Fe/H]}+\log\left(\frac{z_{\odot}}{1-0.248-2.78\times z_{\odot}}\right)} \tag{2}\]
and
\[z=\frac{(z_{\rm x}-0.2485\times z_{\rm x})}{(2.78\times z_{\rm x}+1)}. \tag{3}\]
where \(z_{\rm x}\) is an intermediate value and the solar metallicity \(z_{\odot}\) was adopted as 0.0152 (Bressan et al., 2012). The calculated mass fraction is \(z=0.0136\) for Coll 74.
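A compact implementation of Eqs. (2)-(3) is sketched below (Python; the constants follow the equations above, with the adopted \(z_{\odot}=0.0152\), and the result reproduces the quoted \(z=0.0136\) for [Fe/H]=\(-0.052\) dex).

```python
import numpy as np

def feh_to_z(feh, z_sun=0.0152):
    """Convert [Fe/H] to the mass fraction z via Eqs. (2)-(3)."""
    zx = 10.0 ** (feh + np.log10(z_sun / (1.0 - 0.248 - 2.78 * z_sun)))
    return (zx - 0.2485 * zx) / (2.78 * zx + 1.0)

print(round(feh_to_z(-0.052), 4))   # -> 0.0136, the value adopted for Coll 74
```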
We plotted \(G\times(G_{\rm BP}-G_{\rm RP})\) and superimposed isochrones, scaled to the \(z=0.0136\), of different ages (log \(t\)=9.20, 9.25 and 9.30 yr) by visual inspection to the most likely cluster main-sequence, turn-off and giant members with probabilities over \(P\geq 0.5\) as shown in Figure 7. The best fit supports the isochrone with log \(t\)=9.25 yr to the cluster morphology, this isochrone corresponding to \(t=1800\pm 200\) Myr. The estimated age is comparable with the values of Tadross (2001) and Cantat-Gaudin et al. (2020). Also, good isochrone fitting result supplies the distance modulus and colour excess of the Coll 74 to be \(\mu_{\rm G}=13.052\pm 0.088\) mag, corresponding to isochrone distance \(d_{\rm iso}=2831\pm 118\) pc and \(E(G_{\rm BP}-G_{\rm RP})=0.425\pm 0.046\) mag, respectively. We used the relations of Carraro et al. (2017) to estimate the errors of distance modulus and isochrone distance. Our derived isochrone distance is compatible with most of the studies presented by different researchers (see Table 1) as well as the trigonometric parallax distance \(d_{\varpi}=2755\pm 326\) pc estimated in this study. For a more accurate comparison with literature studies, we converted this value to the _UBV_-based colour excess \(E(B-V)\). We utilised the equation \(E(G_{\rm BP}-G_{\rm RP})=1.41\times E(B-V)\) given by Sun et al. (2021) and determined the value as \(E(B-V)=0.301\pm 0.033\) mag. This result is close to the values given by Tadross (2001), Lata et al. (2002), Carraro & Costa (2007) and Hasegawa et al. (2008) within the errors (see Table 1).
We also estimated heliocentric Galactic coordinates \((X,Y,Z)_{\odot}\) of Coll 74. Here, \(X\) is the distance from the Galactic center in the Galactic plane (\(l=0^{\rm o}\), \(b=0^{\rm o}\)), \(Y\) is the distance in the direction of Galactic rotation (\(l=90^{\rm o}\), \(b=0^{\rm o}\)) and \(Z\) is the vertical distance from Galactic plane to the North Galactic Pole (\(l=0^{\rm o}\), \(b=90^{\rm o}\)). Galactocentric coordinates provide a convenient way to describe the positions of celestial objects relative to the Galactic center, Sun, and Galactic plane. By considering isochrone distance, Galactic longitude, and latitude of the cluster, we derived these distances as \((X,Y,Z)_{\odot}=(-2633,-907,-510)\) pc.
## 4 Galactic orbit study of the Collinder 74
The Galactic orbits of open clusters are important for understanding how these celestial objects dynamically evolve within the Milky Way (Tasdemir & Yontan, 2023).
Figure 7: Colour-magnitude diagram for the studied cluster Coll 74. Different colour and colourbar scales show the membership probabilities of stars with \(P\geq 0.5\). Stars with probabilities \(P<0.5\) are demonstrated with filled grey circles. BSSs of the cluster are shown in a blue dashed-lined box. The best solution of fitted isochrones and their errors are inferred as the blue and green lines, respectively. The age of the blue-lined isochrone matches 1800 Myr for the cluster.
We derived the orbits and orbital parameters of Coll 74 with the help of the python-based Galactic dynamics library galpy2 of Bovy (2015). This library implements the MWPotential2014 model, which is a commonly used potential model in Galactic dynamics. The MWPotential2014 model is based on a combination of different components that represent the various structures within the Milky Way, including the bulge, disc, and halo. These components are parameterised to approximate the observed properties of the Galaxy: the bulge is modelled as a power-law density profile as described in Bovy (2015), the disc is modelled with a specified scale length and scale height as defined by Miyamoto and Nagai (1975), and the halo is modelled as a spherical distribution with a specified density profile as defined by Navarro et al. (1996). We adopted the Sun's Galactocentric distance and orbital velocity as \(R_{\rm gc}=8\) kpc and \(V_{\rm rot}=220\) km s\({}^{-1}\), respectively (Bovy, 2015; Bovy and Tremaine, 2012), and the Sun's distance from the Galactic plane as \(25\pm 5\) pc (Juric et al., 2008).
Footnote 2: See also [https://galpy.readthedocs.io/en/v1_5.0/](https://galpy.readthedocs.io/en/v1_5.0/)
The mean radial velocity (\(V_{\gamma}\)) of the cluster was calculated from available _Gaia_ DR3 radial velocity measurements of the stars. We considered the most likely member stars with probabilities \(P\geq 0.5\), of which there are 16. We used the equations of Soubiran et al. (2018) which are based on the weighted average of the radial velocities of the stars. Hence, the mean radial velocity of Coll 74 was determined as \(V_{\gamma}=20.55\pm 0.41\) km s\({}^{-1}\), which is in good agreement with the mean radial velocity findings presented by Soubiran et al. (2018), Dias et al. (2021), Tarricq et al. (2021) and Hunt and Reffert (2023). To estimate orbital parameters, we used equatorial coordinates (\(\alpha=05^{\rm h}48^{\rm m}40^{\rm s}.8\), \(\delta=+07^{\circ}22^{\prime}26^{\prime\prime}.4\)) taken from Cantat-Gaudin et al. (2020), mean proper-motion components (\(\mu_{\alpha}\cos\delta=0.960\pm 0.005\), \(\mu_{\delta}=-1.526\pm 0.004\) mas yr\({}^{-1}\)), isochrone distance (\(d_{\rm iso}=2831\pm 118\) pc) and the radial velocity (\(V_{\gamma}=20.55\pm 0.41\) km s\({}^{-1}\)) calculated in the study (see also Table 2) for Coll 74 as input parameters.
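As an illustration of the weighted-average step just described, a minimal inverse-variance weighted mean can be written as follows (Python; the velocity values are placeholders and the exact weighting of Soubiran et al. (2018) may differ in detail).

```python
import numpy as np

def weighted_mean_rv(v, v_err):
    """Inverse-variance weighted mean radial velocity and its uncertainty
    (sketch only; not necessarily identical to the published prescription)."""
    w = 1.0 / v_err ** 2
    mean = np.sum(w * v) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    return mean, err

# Placeholder values standing in for the 16 members with Gaia DR3 radial velocities.
v = np.random.default_rng(0).normal(20.5, 1.5, 16)
e = np.full(16, 1.5)
print(weighted_mean_rv(v, e))
```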
The orbit integration was carried out forward in time with an integration step of 1 Myr up to 2.5 Gyr to estimate the possible current position of Coll 74. The resultant orbit is shown in Figure 8a. The figure shows the path followed by the cluster in the \(Z\times R_{\rm gc}\) plane, which represents the side view of the orbit. Here, \(Z\) and \(R_{\rm gc}\) are the distance from the Galactic plane and the Galactic center, respectively. Also, the orbit analyses were carried out for the past epoch across a time equal to the cluster's age, \(t=1800\pm 200\) Myr. Figure 8b shows the cluster's distance variation in time on the \(R_{\rm gc}\times t\) plane. The figure also represents the influence of errors in the input parameters on the orbit of Coll 74. The orbit analyses showed that Coll 74 was formed outside the solar vicinity with a birth radius of \(R_{\rm gc}=10.97\pm 0.32\) kpc.
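A minimal galpy sketch of this setup is given below (Python; it uses galpy's default solar-motion model and the Sun's default height above the plane, so the resulting orbital parameters will differ slightly from the values adopted in this study).

```python
import numpy as np
from astropy import units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# Observed inputs for Coll 74 (this study): ra, dec (deg), distance (kpc),
# pmra = mu_alpha cos(delta), pmdec (mas/yr), radial velocity (km/s).
o = Orbit([87.17, 7.374, 2.831, 0.960, -1.526, 20.55],
          radec=True, ro=8.0 * u.kpc, vo=220.0 * u.km / u.s)

ts = np.linspace(0.0, 2.5, 2501) * u.Gyr      # 1 Myr steps, forward in time
o.integrate(ts, MWPotential2014)

print("R_apo  =", o.rap())       # apogalactic distance
print("R_peri =", o.rperi())     # perigalactic distance
print("e      =", o.e())         # eccentricity
print("z_max  =", o.zmax())      # max distance from the Galactic plane
```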
From the orbit integration we derived the following parameters for Coll 74: apogalactic (\(R_{\rm a}=10987\pm 112\) pc) and perigalactic (\(R_{\rm p}=9337\pm 20\) pc) distances, eccentricity (\(e=0.081\pm 0.004\)), maximum vertical distance from Galactic plane (\(Z_{\rm max}=506\pm 22\) pc), space velocity components (\(U,V,W=-11.43\pm 0.79\), \(-29.50\pm 1.08\), \(-2.56\pm 0.05\) km s\({}^{-1}\)), and orbital period (\(P_{\rm orb}=291\pm 2\) Myr). The Local Standard of Rest (LSR) correction was applied to the \((U,V,W)\) components of the Coll 74. To do this, we considered the space velocity component values \((U,V,W)_{\odot}=(8.83\pm 0.24\), \(14.19\pm 0.34\), \(6.57\pm 0.21)\) km s\({}^{-1}\) of Coskunoglu et al. (2011). Hence, LSR corrected space velocity components were found to be \((U,V,W)_{\rm LSR}=(-2.60\pm 0.25\), \(-15.31\pm 1.13\), \(4.01\pm 0.22)\) km s\({}^{-1}\). Total space velocity was estimated as \(S_{\rm LSR}=16.04\pm 1.18\) km s\({}^{-1}\), which is compatible with the velocity value given for thin-disc objects (Leggett, 1992). We interpreted from the perigalactic and apogalactic distances that Coll 74 is completely outside the solar circle (Figure 8a). The
cluster reaches a maximum distance above the Galactic plane at \(Z_{\rm max}=506\pm 22\) pc, which shows that Coll 74 belongs to the thin-disc component of the Milky Way (Bilir et al. 2006a,b, 2008).
## 5 Luminosity and Mass Functions
The luminosity function (LF) of an open cluster represents the number of stars at different brightnesses within the cluster. The LF and mass function (MF) of open clusters are related because the luminosity of a star is generally correlated with its mass. This correlation is also known as the mass-luminosity relation and allows the LF to be transformed into the MF (Bisht et al. 2019).
For LF analyses of Coll 74, first, we selected the main-sequence stars with membership probabilities \(P>0\) and located inside the limiting radius obtained in the study (\(r_{\rm lim}^{\rm obs}=10^{\prime}\)). Hence, we reached 324 stars within the \(15\leq G\leq 20.5\) magnitude interval. Then, considering the distance modulus relation (\(M_{\rm G}=G-5\times\log d+5-A_{\rm G}\)) with the apparent magnitude (\(G\)), isochrone distance (\(d_{\rm iso}\)) and \(G\)-band absorption (\(A_{\rm G}\)) estimated in the study, we transformed the apparent \(G\) magnitudes into absolute \(M_{\rm G}\) magnitudes. The LF histogram of the cluster, constructed with an interval of 1 mag, is shown in Figure 9. This figure shows that the number of main-sequence stars rises steadily up to \(M_{\rm G}=6\) mag; after this limit the counts drop gradually.
For MF estimation, we used PARSEC isochrones (Bressan et al. 2012) that were scaled to the derived age and the adopted mass fraction (\(z\)) of the cluster. From this isochrone, we produced a high-degree polynomial equation between the \(G\)-band absolute magnitudes and masses. By applying this equation to the selected main-sequence stars (\(P>0\)), we transformed their absolute magnitudes \(M_{\rm G}\) into masses. Hence, the masses of the 324 stars lie within the range \(0.65\leq M/M_{\odot}\leq 1.58\). The MF slope was derived from the following
Figure 8: The Galactic orbits and birth radii of Coll 74 in the \(Z\times R_{\rm gc}\) (a) and \(R_{\rm gc}\times t\) (b) planes. The filled yellow circles and triangles show the present-day and birth positions, respectively. The red arrow is the motion vector of Collinder 74. The green and pink dotted lines show the orbit when errors in input parameters are considered, while the green and pink filled triangles represent the birth locations of the open cluster based on the lower and upper error estimates.
equation:
\[\log(\rm dN/dM)=-(1+\Gamma)\times\log M+C\,. \tag{4}\]
In the equation \(dN\) is the number of stars per unit mass \(dM\), \(M\) is the central mass, \(C\) denotes the constant for the equation, and \(\Gamma\) represents the slope of the MF. The estimated MF slope for Coll 74 is \(\Gamma=1.34\pm 0.21\), which is in good agreement with the value of Salpeter (1955). The resulting MF is shown in Figure 10.
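A short sketch of the slope determination of Eq. (4) is given below (Python; the mass sample is synthetic, drawn from a Salpeter-like law over the quoted mass range, so the fit should return \(\Gamma\approx 1.35\) up to binning noise).

```python
import numpy as np

# Placeholder masses (M_sun): 324 stars drawn from dN/dM ~ M^-2.35
# between 0.65 and 1.58, standing in for the measured masses.
rng = np.random.default_rng(2)
u = rng.random(324)
masses = (0.65 ** -1.35 - u * (0.65 ** -1.35 - 1.58 ** -1.35)) ** (-1 / 1.35)

counts, edges = np.histogram(masses, bins=8, range=(0.65, 1.58))
centres = 0.5 * (edges[1:] + edges[:-1])
dM = np.diff(edges)
good = counts > 0
# Linear fit of Eq. (4): log(dN/dM) = -(1 + Gamma) log M + C
slope, C = np.polyfit(np.log10(centres[good]), np.log10(counts[good] / dM[good]), 1)
gamma = -slope - 1.0
print(f"Gamma = {gamma:.2f}")
```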
The masses of stars in Coll 74 were derived as a function of stars' membership probabilities. The number of stars with probabilities \(P>0\) and \(P\geq 0.5\) was determined as 324 and 102, respectively. Hence, the total mass of the cluster for these probabilities was found to be \(365M_{\odot}\) and \(132M_{\odot}\), respectively. We interpreted that the total mass of the cluster estimated from the stars with probabilities \(P\geq 0.5\) corresponds to about 36% of the total mass for the stars with all probabilities. To investigate the mass distribution of the stars in Coll 74, 324 stars with \(P>0\) were plotted according to their equatorial coordinates and membership probabilities as shown in Figure 11. It can be interpreted from the figure that the stars with probabilities over 0.8 and the massive ones are mostly concentrated in the central region of the cluster, whereas low-mass stars with probabilities under
Figure 10: Derived mass function for Coll 74. The blue line represents the MF, whereas the green lines indicate the \(\pm 1\sigma\) standard deviations. The black dashed line represented Salpeter (1955)’s slope.
Figure 9: The luminosity function of Coll 74.
This indicates that Coll 74 is a mass-segregated open cluster.
## 6 Conclusion
We performed a detailed _Gaia_ DR3 data-based study of the open cluster Collinder 74. The number of member stars with probabilities over 0.5 is 102. Considering these stars, we calculated the structural and fundamental astrophysical parameters, investigated the luminosity and mass functions, and estimated the orbit of the cluster. All parameters obtained in the study are listed in Table 2. The main results of the study are summarized as follows:
1. From the RDP analyses, we determined the limiting radius by visual inspection as \(r_{\rm lim}^{\rm obs}=10^{{}^{\prime}}\).
2. Considering results of photometric completeness limit, membership probability analyses, and limiting radius, we identified 102 most likely members with probabilities \(P\geq 0.5\) for Coll 74. These stars were used in the cluster analyses.
3. The mean proper-motion components were obtained as \((\mu_{\alpha}\cos\delta,\mu_{\delta})=(0.960\pm 0.005,-1.526\pm 0.004)\) mas yr\({}^{-1}\).
4. Four most probable BSS members were identified within the limiting radius of the cluster. We concluded that Coll 74 belongs to Family I according to the radial distribution of its BSSs.
Figure 11: Mass distribution of the stars in the Coll 74. The radius sizes of the stars indicate the masses, and the different colours show the membership probabilities of the stars. The intersection of the red dashed lines indicates the central position of the cluster in the equatorial coordinate system.
\begin{table}
\begin{tabular}{l r} \hline \hline Parameter & Value \\ \hline \((\alpha,\ \delta)_{\rm J2000}\) (Sexagesimal) & 05:48:40.8, +07:22:26.4 \\ \((l,\ b)_{\rm J2000}\) (Decimal) & \(199.0189,-10.3791\) \\ \(f_{0}\) (stars arcmin\({}^{-2}\)) & \(8.42\pm 0.35\) \\ \(f_{\rm bg}\) (stars arcmin\({}^{-2}\)) & \(5.45\pm 0.16\) \\ \(r_{\rm c}\) (arcmin) & \(1.38\pm 0.12\) \\ \(r_{\rm lim}\) (arcmin) & \(10\) \\ \(r\) (pc) & \(8.24\) \\ Cluster members (\(P\geq 0.5\)) & \(102\) \\ \(\mu_{\alpha}\cos\delta\) (mas yr\({}^{-1}\)) & \(0.960\pm 0.005\) \\ \(\mu_{\delta}\) (mas yr\({}^{-1}\)) & \(-1.526\pm 0.004\) \\ \(\varpi\) (mas) & \(0.363\pm 0.043\) \\ \(d_{\varpi}\) (pc) & \(2755\pm 326\) \\ \(E(B-V)\) (mag) & \(0.301\pm 0.033\) \\ \(E(G_{\rm BP}-G_{\rm RP})\) (mag) & \(0.425\pm 0.046\) \\ \(A_{\rm G}\) (mag) & \(0.792\pm 0.086\) \\ \([{\rm Fe}/{\rm H}]\) (dex)\({}^{*}\) & \(-0.052\pm 0.034\) \\ Age (Myr) & \(1800\pm 200\) \\ Distance modulus (mag) & \(13.052\pm 0.088\) \\ Isochrone distance (pc) & \(2831\pm 118\) \\ \((X,Y,Z)_{\odot}\) (pc) & (\(-2633,-907,-510\)) \\ \(R_{\rm gc}\) (kpc) & \(10.67\) \\ MF slope & \(1.34\pm 0.21\) \\ Total mass (\(M/M_{\odot}\)) (\(P>0\)) & \(365\) \\ \(V_{\gamma}\) (km s\({}^{-1}\)) & \(20.55\pm 0.41\) \\ \(U_{\rm LSR}\) (km s\({}^{-1}\)) & \(-2.60\pm 0.25\) \\ \(V_{\rm LSR}\) (km s\({}^{-1}\)) & \(-15.31\pm 1.13\) \\ \(W_{\rm LSR}\) (km s\({}^{-1}\)) & \(4.01\pm 0.22\) \\ \(S_{\rm LSR}\) (km s\({}^{-1}\)) & \(16.04\pm 1.18\) \\ \(R_{\rm a}\) (pc) & \(10987\pm 112\) \\ \(R_{\rm p}\) (pc) & \(9337\pm 20\) \\ \(z_{\rm max}\) (pc) & \(506\pm 22\) \\ \(e\) & \(0.081\pm 0.004\) \\ \(P_{\rm orb}\) (Myr) & \(291\pm 2\) \\ Birthplace (kpc) & \(10.97\pm 0.32\) \\ \hline \end{tabular}
\end{table}
Table 2: Fundamental parameters of Coll 74.
5. The metallicity value for the cluster was adopted as \(\left[\mathrm{Fe/H}\right]=-0.052\pm 0.034\) dex which is presented by Zhong et al. (2020). We transformed this value into the mass fraction \(z=0.0136\) and kept it as a constant parameter for the age and distance modulus estimation.
6. By fitting the PARSEC isochrone (Bressan et al., 2012) to the \(G\) versus \(\left(G_{\mathrm{BP}}-G_{\mathrm{RP}}\right)\) colour-magnitude diagram, we estimated the colour excess of Coll 74 as \(E\left(G_{\mathrm{BP}}-G_{\mathrm{RP}}\right)=0.425\pm 0.046\) mag, which corresponds to a colour excess in the \(UBV\) system of \(E\left(B-V\right)=0.301\pm 0.033\) mag. This value was obtained using the relation \(E\left(G_{\mathrm{BP}}-G_{\mathrm{RP}}\right)=1.41\times E\left(B-V\right)\) given by Sun et al. (2021).
7. The isochrone-fitting distance of Coll 74 was determined as \(d_{\mathrm{iso}}=2831\pm 118\) pc. This value is supported by the distance \(d_{\varpi}=2755\pm 326\) pc derived from the mean trigonometric parallax.
8. PARSEC isochrone of Bressan et al. (2012) provides the age of the cluster to be \(t=1800\pm 200\) Myr.
9. The LF and MF were investigated from the main-sequence stars with probabilities \(P>0\). The MF slope was found as \(\Gamma=1.34\pm 0.21\) which is in good agreement with the value of Salpeter (1955).
10. The orbit integration was performed with the MWPotential2014 model. We concluded that Coll 74 orbits in a boxy pattern outside the solar circle and that the cluster is a member of the thin-disc component of the Milky Way. Moreover, the birth radius (\(10.97\pm 0.32\) kpc) shows that the forming region of the cluster is outside the solar circle.
## Acknowledgements
This study has been supported in part by the Scientific and Technological Research Council (TUBITAK) 122F109. This research has made use of the WEBDA database, operated at the Department of Theoretical Physics and Astrophysics of the Masaryk University. We also made use of NASA's Astrophysics Data System as well as the VizieR and Simbad databases at CDS, Strasbourg, France and data from the European Space Agency (ESA) mission _Gaia3_, processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC)4. Funding for DPAC has been provided by national institutions, in particular, the institutions participating in the _Gaia_ Multilateral Agreement.
Footnote 3: [https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)
Footnote 4: [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)
|
2305.17552 | Online Nonstochastic Model-Free Reinforcement Learning | We investigate robust model-free reinforcement learning algorithms designed
for environments that may be dynamic or even adversarial. Traditional
state-based policies often struggle to accommodate the challenges imposed by
the presence of unmodeled disturbances in such settings. Moreover, optimizing
linear state-based policies pose an obstacle for efficient optimization,
leading to nonconvex objectives, even in benign environments like linear
dynamical systems.
Drawing inspiration from recent advancements in model-based control, we
introduce a novel class of policies centered on disturbance signals. We define
several categories of these signals, which we term pseudo-disturbances, and
develop corresponding policy classes based on them. We provide efficient and
practical algorithms for optimizing these policies.
Next, we examine the task of online adaptation of reinforcement learning
agents in the face of adversarial disturbances. Our methods seamlessly
integrate with any black-box model-free approach, yielding provable regret
guarantees when dealing with linear dynamics. These regret guarantees
unconditionally improve the best-known results for bandit linear control in
having no dependence on the state-space dimension. We evaluate our method over
various standard RL benchmarks and demonstrate improved robustness. | Udaya Ghai, Arushi Gupta, Wenhan Xia, Karan Singh, Elad Hazan | 2023-05-27T19:02:55Z | http://arxiv.org/abs/2305.17552v2 | # Online Nonstochastic
###### Abstract
In this work, we explore robust model-free reinforcement learning algorithms for environments that may be dynamic or even adversarial. Conventional state-based policies fail to accommodate the challenges imposed by the presence of unmodeled disturbances in such settings. Additionally, optimizing linear state-based policies poses an obstacle for efficient optimization, leading to nonconvex objectives even in benign environments like linear dynamical systems.
Drawing inspiration from recent advancements in model-based control, we introduce a novel class of policies centered on disturbance signals. We define several categories of these signals, referred to as pseudo-disturbances, and corresponding policy classes based on them. We provide efficient and practical algorithms for optimizing these policies.
Next, we examine the task of online adaptation of reinforcement learning agents to adversarial disturbances. Our methods can be integrated with any black-box model-free approach, resulting in provable regret guarantees if the underlying dynamics is linear. We evaluate our method over different standard RL benchmarks and demonstrate improved robustness.
## 1 Introduction
Model-free reinforcement learning in time-varying dynamical systems is a statistically and computationally challenging problem. By contrast, model based control of even unknown and changing linear dynamical systems has enjoyed recent successes. In particular, new techniques from online learning have been applied to these linear dynamical systems (LDS) within the framework of online nonstochastic control. A comprehensive survey can be found in Hazan and Singh (2022). The key innovation in the aforementioned framework is the introduction of a new policy class called Disturbance-Action Control (DAC), which achieves a high degree of representational capacity without compromising computational efficiency. Moreover, efficient gradient-based algorithms can be employed to obtain provable regret bounds for this approach, even in the presence of adversarial noise. Crucially, these methods rely on the notion of disturbance, defined to capture unmodeled deviations between the observed and nominal dynamics, and its availability to the learner.
This paper explores the potential of applying these disturbance-based techniques, which have proven effective in model-based control, to model-free reinforcement learning. However, it is not immediately clear how these methods can be adapted to model-free RL, as the disturbances in model-free RL are unknown to the learner.
We therefore develop the following approach to this challenge: instead of relying on a known disturbance, we create a new family of signals, which we call "Pseudo-Disturbances", and define policies that use "Pseudo-Disturbance" features to produce actions. The advantage of this approach is that it has the potential to produce more robust policies. Again inspired by model-based methods, we aim to augment existing reinforcement learning agents with a "robustness module" that serves two purposes. Firstly, it can filter out adversarial noise from the environment and improve agent performance in noisy settings. Secondly, in cases where the environment is benign and simple, such as linear dynamical systems, the augmented module will achieve a provably optimal solution. We also empirically verify the performance of our method on OpenAI Gym environments.
### Our Contributions
In this work, we make the following algorithmic and methodological contributions:
* In contrast to state-based policies commonly used in RL, Section 3 defines the notion of a **disturbance-based policy**. These policies augment traditional RL approaches that rely strictly on state feedback.
* We develop **three distinct and novel methods** (Sections 3.1, 3.2, 3.3) to estimate the Pseudo-Disturbance in the model-free RL setting.
* We develop a **new algorithm**, MF-GPC (Algorithm 1), which adapts existing RL methods to take advantage of our Pseudo-Disturbance framework.
* We **empirically verify** the performance of our method on OpenAI Gym environments in Section 5. We find our method applied on top of a DDPG baseline performs significantly better than the baseline in some cases.
* We prove that the proposed algorithm achieves **sublinear regret** for the setting of linear dynamics in Theorem 4. These regret bounds improve upon the best-known results for bandit linear control in terms of their dependence on state space dimension (Appendix E).
### Pseudo-Disturbance based RL
A fundamental primitive of the non-stochastic control framework is the _disturbance_. In our RL setting, the system evolves according to the following equation
\[\mathbf{x}_{t+1}=f(\mathbf{x}_{t},\mathbf{u}_{t})+\mathbf{w}_{t}\;,\]
where \(\mathbf{x}_{t}\) is the state, \(\mathbf{u}_{t}\) is control signal, and \(\mathbf{w}_{t}\) is a bounded, potentially adversarially chosen, disturbance. Using knowledge of the dynamics, \(f\), non-stochastic control algorithms first compute \(\mathbf{w}_{t}\), and then compute actions via DAC, as follows
\[\mathbf{u}_{t}=\pi_{\text{base}}(\mathbf{x}_{t})+\sum_{i=1}^{h}M_{i}^{t} \mathbf{w}_{t-i}\;.\]
Here \(\pi_{\text{base}}\) is a baseline linear controller, and \(M^{t}\) are matrices, learned via gradient descent or similar algorithms. For linear systems, the DAC law is a convex relaxation of linear policies, which allows us to prove regret bounds against a powerful policy class using tools from online convex optimization.
To generalize this approach, without a model or knowledge of the dynamics function \(f\), both defining and obtaining this disturbance in order to implement DAC or similar policies becomes unclear. To address this, we introduce the concept of a _Pseudo-Disturbance_ (PD) and provide three distinct variants, each representing a novel signal in reinforcement learning. These signals have various advantages and disadvantages depending on the available environment:
1. The first notion is based on the gradient of the temporal-difference error. It assumes the availability of a value function oracle that can be evaluated or estimated online or offline using any known methodology.
2. The second notion also assumes the availability of a black-box value function oracle/generator. We assign artificial costs over the states and generate multiple auxiliary value functions to create a "value vector." The Pseudo-Disturbance is defined as the difference between the value vector at consecutive states. This signal's advantage is that it does not require any zero-order optimization mechanism for estimating the value function's gradient.
3. The third notion assumes the availability of an environment simulator. The Pseudo-Disturbance is defined as the difference between the true state and the simulated state for a specific action.
For all these Pseudo-Disturbance variants, we demonstrate how to efficiently compute them (under the appropriate assumption of either a value function oracle or simulator). We provide a reduction from any RL algorithm to a PD-based robust counterpart that converts an RL algorithm into one that is also robust to adversarial noise. Specifically, in the special case of linear dynamical systems our algorithm has provable regret bounds. The formal description of our algorithm, as well as a theorem statement, are given in Section 4. For more general dynamical systems, the learning problem is provably intractable. Nonetheless, we demonstrate the efficacy of these methods empirically.
### Related Work
Model-free reinforcement learning. Reinforcement learning (Sutton and Barto, 2018) approaches are classified as model-free or model-based (Janner et al., 2019; Ha and Schmidhuber, 2018; Osband and Van Roy, 2014), depending on whether they attempt to explicitly learn the underlying transition dynamics the agent is subject to. While the latter is often more sample efficient (Wang et al., 2019), model-free approaches scale better in that their performance does not prematurely saturate and keeps improving with the number of episodes (Duan et al., 2016). In this paper, we focus on adaptation to unknown, arbitrary disturbances for model-free reinforcement learning algorithms, which can be viewed as a tractable restriction of the challenging adversarial MDP setting (Abbasi Yadkori et al., 2013). Model-free approaches may further be divided into policy-based (Schulman et al., 2015, 2017), value-based (Mnih et al., 2013), and actor-critic approaches (Barth-Maron et al., 2018; Lillicrap et al., 2016); the latter use a learnt value function to reduce the variance of policy optimization.
Robust and Adaptive reinforcement learning. Motivated by the minimax performance criterion in robust control (Zhang et al., 2021), Morimoto and Doya (2005) introduced a minimax variant of Q-learning to enhance the robustness of policies learnt from off-policy samples. This was later extended to more tractable formulations and structured uncertainty sets in Tessler et al. (2019); Mankowitz et al. (2019); Pinto et al. (2017); Zhang et al. (2021); Tamar et al. (2013), including the introduction of model-based variants (Janner et al., 2019). Another approach to enhancing robustness is Domain Randomization (Tobin et al., 2017; Akkaya et al., 2019; Chen et al., 2021), wherein a model is trained in a variety of randomized environments in a simulator, and the resulting policy becomes robust enough to be applied in the real world. Similarly, adversarial training (Mandlekar et al., 2017; Vinitsky et al., 2020; Agarwal et al., 2021) has been shown to improve performance in out-of-distribution scenarios. In contrast to the previously mentioned approaches, our proposed approach only adapts the policy to observed disturbances at test time, and does not require a modification of the training procedure. This notably means that the computational cost and sample requirement of the approach match those of vanilla RL in training, and it has the benefit of leveraging recent advances in mean-reward RL, which is arguably better understood and more studied. Adaptation of RL agents to new and changing environments has been similarly tackled through the lens of Meta Learning and related approaches (Wang et al., 2016; Nagabandi et al., 2018; Pritzel et al., 2017; Agarwal et al., 2021).
Online nonstochastic control. The presence of arbitrary disturbances during policy execution has long been studied in the fields of robust optimization and control (Zhou and Doyle, 1998). In contrast to the minimax objectives considered in robust control, online nonstochastic control algorithms (see Hazan and Singh (2022) for a survey) are designed to minimize regret against a benchmark policy class, and thus compete with the best policy from the said class determined post hoc. When the benchmark policy class is sufficiently expressive, this approach has the benefit of robustness against adversarially chosen disturbances (i.e. non-Gaussian and potentially adaptively chosen (Ghai et al., 2021)), while distinctly
not sacrificing performance in the typical or average case. The first nonstochastic control algorithm with sublinear regret guarantees was proposed in Agarwal et al. (2019) for linear dynamical systems. It was subsequently extended to partially observed systems (Simchowitz et al., 2020), unknown systems (Hazan et al., 2020), multi-agent systems (Ghai et al., 2022) and the time-varying case (Minasyan et al., 2021). The regret bound was improved to a logarithmic rate in Simchowitz (2020) for strongly convex losses. Chen et al. (2021) extended this approach to non-linearly parameterized policy classes, like deep neural networks. Bandit versions of the nonstochastic control setting have also been studied (Gradu et al., 2020; Cassel and Koren, 2020; Sun et al., 2023) and are particularly relevant to the RL setting, which only has access to scalar rewards.
### Paper Outline
After some basic definitions and preliminaries in Section 2, we describe the new Pseudo-Disturbance signals and how to create them in a model-free reinforcement learning environment in Section 3. In Section 4 we give a unified meta-algorithm that exploits these signals and applies them as an augmentation to any given RL agent. In Section 5 we evaluate our methods empirically.
## 2 Setting and Preliminaries
Consider an agent adaptively choosing actions in a dynamical system with adversarial cost functions. We use notation from the control literature: \(\mathbf{x}_{t}\in\mathbb{R}^{d_{x}}\) is a vector representation of the state1 at time \(t\), \(\mathbf{u}_{t}\in\mathbb{R}^{d_{u}}\) is the corresponding action. Formally, the evolution of the state will follow the equations
Footnote 1: Although we consider continuous state and action spaces in this section and the remainder of the main paper, we handle discrete spaces in Appendix C.
\[\mathbf{x}_{t+1}=f(\mathbf{x}_{t},\mathbf{u}_{t})+\mathbf{w}_{t},\]
where \(\mathbf{w}_{t}\) is an arbitrary (even adversarial) disturbance the system is subject to at time \(t\). Following this evolution, the agent suffers a cost of \(c_{t}(\mathbf{x}_{t},\mathbf{u}_{t})\).
In this work, we adapt model-free reinforcement learning algorithms to this more challenging case. The (easier) typical setting for model-free methods assumes, in contrast, that the disturbance \(\mathbf{w}_{t}\) is sampled _iid_ from a distribution \(\mathcal{D}\), and that the cost function \(c(\mathbf{x},\mathbf{u})\) is fixed and known. Central to the study of model-free methods are the notions of the state- and state-action value functions, defined as the discounted sum of future costs acquired by starting at any state (or state-action pair) and thereafter following the policy \(\pi\). For any policy \(\pi\), we denote the state- and state-action value functions, which are mappings from a state or state/action pair to the real numbers, as
\[Q_{\pi}(\mathbf{x},\mathbf{u})=\mathbb{E}\left[\left.\sum_{t=0}^{\infty}\gamma ^{t}c(\mathbf{x}_{t}^{\pi},\mathbf{u}_{t}^{\pi})\right|\mathbf{x}_{0}^{\pi}= \mathbf{x},\mathbf{u}_{0}^{\pi}=\mathbf{u}\right]\,V_{\pi}(\mathbf{x})=\mathbb{E} \left[\left.\sum_{t=0}^{\infty}\gamma^{t}c(\mathbf{x}_{t}^{\pi},\mathbf{u}_{t}^ {\pi})\right|\mathbf{x}_{0}^{\pi}=\mathbf{x}\right]\,\]
where expectations are taken over random transitions in the environment and in the policy.
A special case we consider henceforth is that of linear dynamical systems. In these special instances the state evolves according to a linear transformation parameterized by matrices \(A,B\), i.e.
\[\mathbf{x}_{t+1}=A\mathbf{x}_{t}+B\mathbf{u}_{t}+\mathbf{w}_{t}.\]
## 3 Pseudo-Disturbance Signals and Policies
In this section we describe the three different Pseudo-Disturbance (PD) signals we can record in a general reinforcement learning problem. As discussed, the motivation for this signal comes from the framework of online nonstochastic control. We consider dynamical systems with an additive misspecification or noise structure,
\[\mathbf{x}_{t+1}=f(\mathbf{x}_{t},\mathbf{u}_{t})+\mathbf{w}_{t},\]
where the perturbation \(\mathbf{w}_{t}\) does not depend on the state. Using perturbations rather than state allows us to avoid recursive structure that makes the optimization landscape challenging and nonconvex. As
discussed, we introduce Pseudo-Disturbance signals \(\hat{\mathbf{w}}_{t}\in\mathbb{R}^{d_{w}}\) in lieu of the true disturbances. We note that the PD dimensionality \(d_{w}\) need not be the same as that of the true disturbance, \(d_{x}\).
An important class of policies that we consider henceforth is linear in the Pseudo-Disturbance, i.e.
\[\Pi_{\text{DAC}}=\left\{\left.\pi(\mathbf{x}_{1:t})=\pi_{\text{base}}( \mathbf{x}_{t})+\sum_{i=1}^{h}M_{i}\hat{\mathbf{w}}_{t-i}\right|M_{i}\in \mathbb{R}^{d_{u}\times d_{w}}\right\}.\]
Here \(\Pi_{\text{DAC}}\) denotes the policy class of Disturbance-Action-Control. The fact that \(\mathbf{w}_{t}\) does not depend on our actions allows for convex optimization of linear disturbance-action controllers in the setting of linear dynamical systems, see e.g. Hazan and Singh (2022).
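As a concrete illustration of this control law, the following minimal sketch assembles a DAC action from a base policy and a buffer of recent pseudo-disturbances; the function and variable names, as well as the toy dimensions in the example, are hypothetical and only meant to mirror the definition above.

```python
import numpy as np

def dac_action(x_t, w_history, pi_base, M):
    """Disturbance-action control: u_t = pi_base(x_t) + sum_i M_i w_hat_{t-i}.

    `w_history` holds the h most recent pseudo-disturbances, newest first,
    and `M` is a length-h list of (d_u x d_w) matrices."""
    u = pi_base(x_t)
    for M_i, w in zip(M, w_history):
        u = u + M_i @ w
    return u

# Toy example with a linear base policy u = -K x and h = 2:
K = np.eye(2)
M = [0.1 * np.eye(2), 0.05 * np.eye(2)]
print(dac_action(np.ones(2), [np.zeros(2), 0.2 * np.ones(2)], lambda x: -K @ x, M))
```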
We would like to capture the essence of this favorable phenomenon in the context of model-free RL, but what would replace the perturbations \(\mathbf{w}_{t}\) without a dynamics model \(f\)? That is the central question of this section, and we henceforth give three different proposals for this signal.
An important goal in constructing these signals is that **in the case of linear dynamical systems, they recover the perturbation**. This will enable us to prove regret bounds in the case where the environment is an LDS.
### Pseudo-Disturbance Class I: Value-Function Gradients
The first signal we consider is based on the gradient of the value function. The value function maps the state onto a scalar, and this information is insufficient to recreate the perturbation even if the underlying environment is a linear dynamical system. To extract a richer signal, we thus consider the gradient of the value function with respect to the action and state.
The basic goal for our pseudo-perturbation is to implement the following equation
\[\hat{\mathbf{w}}_{t}=\nabla_{\mathbf{u}}(\gamma V_{\pi}(f(\mathbf{x}_{t}, \mathbf{u})+\mathbf{w}_{t})-(Q_{\pi}(\mathbf{x}_{t},\mathbf{u})-c(\mathbf{x}_{ t},\mathbf{u})))|_{\mathbf{u}=\mathbf{u}_{t}}\;,\]
where \(f(\mathbf{x}_{t},\mathbf{u})+\mathbf{w}_{t}\) represents the counterfactual next state after playing \(\mathbf{u}\) at state \(\mathbf{x}_{t}\). Note that this signal is the gradient of a temporal-difference error. If \(\mathbf{w}_{t}\) were in fact (_iid_) stochastic, with \(V_{\pi}\), \(Q_{\pi}\) as the corresponding value functions, the TD error would be zero in expectation. Therefore, this signal on average measures the deviation introduced in \(\mathbf{x}_{t+1}\) by an arbitrary or adversarial \(\mathbf{w}_{t}\). We can also view this expression as
\[\hat{\mathbf{w}}_{t}=\nabla_{\mathbf{u}}(\gamma V_{\pi}(f(\mathbf{x}_{t}, \mathbf{u})+\mathbf{w}_{t})-\gamma V_{\pi}(f(\mathbf{x}_{t},\mathbf{u})))|_{ \mathbf{u}=\mathbf{u}_{t}}\;.\]
With linear systems \(V_{\pi}\) is quadratic, so this becomes a linear function of \(\mathbf{w}_{t}\). Computing \(\nabla_{\mathbf{u}}V_{\pi}(f(\mathbf{x}_{t},\mathbf{u})+\mathbf{w}_{t})|_{ \mathbf{u}=\mathbf{u}_{t}}\) analytically would require knowledge of the dynamics, but luckily this can be efficiently estimated online. Using a policy \(\pi\), with noised actions \(\mathbf{u}_{t}=\pi(\mathbf{x}_{t})+\mathbf{n}_{t}\), for \(\mathbf{n}_{t}\sim\mathcal{N}(0,\Sigma)\) we have the following PD estimates:
\[\boxed{\hat{\mathbf{w}}_{t}=\gamma V_{\pi}(\mathbf{x}_{t+1})\Sigma^{-1} \mathbf{n}_{t}-\nabla_{\mathbf{u}}(Q_{\pi}(\mathbf{x}_{t},\mathbf{u})-c( \mathbf{x}_{t},\mathbf{u}))|_{\mathbf{u}=\mathbf{u}_{t}}\;,} \tag{1}\]
\[\boxed{\hat{\mathbf{w}}_{t}=(c(\mathbf{x}_{t},\mathbf{u}_{t})+\gamma V_{\pi}( \mathbf{x}_{t+1})-Q_{\pi}(\mathbf{x}_{t},\mathbf{u}_{t}))\Sigma^{-1}\mathbf{n }_{t}\;.} \tag{2}\]
Intuitively, the second estimator may have lower variance as the temporal-difference error can be much smaller than the magnitude of the value function. An additional benefit is that this implementation only requires a scalar cost signal without needing access to a differentiable cost function.
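For illustration only, the sketch below shows one way the estimator in Equation (2) could be computed from quantities that are typically available in an actor-critic agent; the function name, the isotropic-noise assumption \(\Sigma=\sigma^{2}I\) and the default discount are our own choices, not prescribed by the paper.

```python
import numpy as np

def pd1_estimate(cost, V_next, Q_sa, noise, sigma, gamma=0.99):
    """Class-I pseudo-disturbance, Eq. (2):
    w_hat_t = (c(x_t,u_t) + gamma * V(x_{t+1}) - Q(x_t,u_t)) * Sigma^{-1} n_t,
    written here for isotropic exploration noise Sigma = sigma^2 I."""
    td_error = cost + gamma * V_next - Q_sa     # scalar TD error from the critic
    return td_error * noise / sigma ** 2        # Sigma^{-1} n_t for Sigma = sigma^2 I
```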
The most important property of this estimator is that, in expectation, it produces a signal that is a linear transformation of the true disturbance if the underlying setting is a linear dynamical system. This is formalized in the following lemma.
**Lemma 1**.: _Consider a time-invariant linear dynamical system with system matrices \(A,B\), along with a linear baseline policy \(\pi\) defined by the control law \(\mathbf{u}_{t}=-K_{\pi}\mathbf{x}_{t}\). In expectation, the pseudo disturbances (1) and (2) are linear transformations of the actual perturbation_
\[\mathbb{E}[\hat{\mathbf{w}}_{t}|\mathbf{x}_{t}]=T\mathbf{w}_{t},\]
_where \(T\) is a fixed linear operator that depends on the system._
### Pseudo-Disturbance Class II: Vector Value Functions
This approach derives a signal from auxiliary value functions. Concretely, instead of scalar-valued cost function \(c:\mathbb{R}^{d_{x}}\rightarrow\mathbb{R}\), consider a vector-valued cost function \(\mathbf{c}:\mathbb{R}^{d_{x}}\rightarrow\mathbb{R}^{d_{w}}\). For such vector-valued cost, we introduce vectorized value and state-action value functions as
\[V_{\pi}^{\mathbf{c}}:\mathbb{R}^{d_{x}}\rightarrow\mathbb{R}^{d_{w}}\,\ Q_{\pi}^{ \mathbf{c}}:\mathbb{R}^{d_{x}}\times\mathbb{R}^{d_{w}}\rightarrow\mathbb{R}^{d _{w}}\.\]
In particular, we have
\[Q_{\pi}^{\mathbf{c}}(\mathbf{x},\mathbf{u})=\mathbb{E}\left[\left.\sum_{t=0}^{ \infty}\gamma^{t}\mathbf{c}(\mathbf{x}_{t}^{\pi})\right|\mathbf{x}_{0}^{\pi} =\mathbf{x},\mathbf{u}_{0}^{\pi}=\mathbf{u}\right]\,V_{\pi}^{\mathbf{c}}( \mathbf{x})=\mathbb{E}\left[\left.\sum_{t=0}^{\infty}\gamma^{t}\mathbf{c}( \mathbf{x}_{t}^{\pi})\right|\mathbf{x}_{0}^{\pi}=\mathbf{x}\right]\.\]
Our PD signal is then
\[\boxed{\hat{\mathbf{w}}_{t}=\mathbf{c}(\mathbf{x}_{t})+\gamma\mathbf{V}_{\pi }^{\mathbf{c}}(\mathbf{x}_{t+1})-\mathbf{Q}_{\pi}^{\mathbf{c}}(\mathbf{x}_{t},\mathbf{u}_{t})\.} \tag{3}\]
In contrast to the first approach, for a fixed set of cost functions, this approach provides a deterministic PD-signal. This is very beneficial, as at inference time the DAC policy can be run without injecting additional noise and without requiring a high variance stochastic signal. This does come at a cost, as this method requires simultaneous off-policy evaluation for many auxiliary value functions (each corresponding to a different scalar cost) before DAC can be run via \(Q\)-function evaluations at inference, both of which can be significantly more expensive than the first approach.
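A minimal sketch of how Equation (3) could be evaluated is given below; the three arguments are assumed to be the \(d_{w}\)-dimensional outputs of the auxiliary cost and of its vector value and Q heads, and the names and default discount are illustrative rather than taken from the paper.

```python
import numpy as np

def pd2_estimate(c_vec, V_vec_next, Q_vec_sa, gamma=0.99):
    """Class-II pseudo-disturbance, Eq. (3):
    w_hat_t = c(x_t) + gamma * V^c(x_{t+1}) - Q^c(x_t, u_t)."""
    return np.asarray(c_vec) + gamma * np.asarray(V_vec_next) - np.asarray(Q_vec_sa)
```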
For the case of linear dynamical systems, if we use _linear_ costs on top of a linear base policy, this approach can recover the disturbances up to a linear transformation. It can be seen that the values corresponding to a linear cost function \(c\) are linear functions of the state, and hence the vectorized versions are also linear functions of state. We formalize this as follows:
**Lemma 2**.: _Consider a time-invariant linear dynamical system with system matrices \(A,B\), along with a linear baseline policy \(\pi\) defined by the control law \(\mathbf{u}_{t}=-K_{\pi}\mathbf{x}_{t}\). Let \(\mathbf{V}_{\pi}^{\mathbf{c}}\) and \(\mathbf{Q}_{\pi}^{\mathbf{c}}\) be value functions for \(\pi\) for i.i.d. zero-mean noise with linear costs \(\mathbf{c}(x):=Lx\). Then the PD-signal (3) is a linear transformation_
\[\hat{\mathbf{w}}_{t}=T\mathbf{w}_{t},\]
_where \(T\) is a fixed linear operator that depends on the system and baseline policy \(\pi\). In addition, if \(L\) is full rank and the closed loop dynamics are stable, then \(T\) is full rank._
### Pseudo-Disturbance Class III: Simulator Based
The last Pseudo-Disturbance signal we consider requires a simulator. We include it here since it is intuitive, particularly simple to implement, and yet comes with theoretical guarantees.
The Pseudo-Disturbance is taken to be the difference between the actual state reached in an environment, and the expected state, over the randomness in the environment. To compute the expected state, we require the simulator \(f_{\text{sim}}\) initialized at the current state. Formally,
\[\boxed{\hat{\mathbf{w}}_{t}=\mathbf{x}_{t+1}-f_{\text{sim}}(\mathbf{x}_{t}, \mathbf{u}_{t}).} \tag{4}\]
The simplicity of this PD is accompanied by a simple lemma on its characterization of the disturbance in a dynamical system, even if that system is time varying, as follows,
**Lemma 3**.: _Suppose we have a simulator \(f_{\text{sim}}\) such that \(\forall\mathbf{x},\mathbf{u},\|f_{\text{sim}}(\mathbf{x},\mathbf{u})-f( \mathbf{x},\mathbf{u})\|\ \leq\ \delta\), then Pseudo-Disturbance (4) is approximately equal to the actual perturbation \(\|\widetilde{\mathbf{w}}_{t}-\mathbf{w}_{t}\|\ \leq\ \delta\)._
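Implementing this signal is essentially a one-liner, as the sketch below shows; `simulator_step` stands for any callable returning \(f_{\text{sim}}(\mathbf{x}_{t},\mathbf{u}_{t})\), and the name is hypothetical.

```python
def pd3_estimate(x_next, x_t, u_t, simulator_step):
    """Class-III pseudo-disturbance, Eq. (4): the gap between the observed
    next state and the simulator's prediction for the same state-action pair."""
    return x_next - simulator_step(x_t, u_t)
```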
### Merits of different Pseudo-Disturbance signals
Each of the three PD signals described in this section offers something a bit different. PD3 offers the most direct disturbance signal, but comes with the requirement of a simulator. If the simulator is very accurate, this is likely the strongest signal, though this method may not be suitable with a large sim-to-real gap. PD1 and PD2 on the other hand, do not require a simulator but also have a natural trade off. PD1 is simpler and easier to add on top of an existing policy. However, it is a zeroth
order estimation, so the guarantees only hold in expectation and may have high variance. On the other hand PD2 is not a stochastic estimate, but it requires auxiliary value estimation from the base policy. This may come at the cost of additional space and computational complexity. In many cases, this can be handled using the same deep Q-network with a wider head, in which case the overhead may be modest. PD2 also has the benefit of being able to control the disturbance dimension \(d_{w}\) by varying the dimension of the auxiliary cost. This can be advantageous in high dimensional state spaces where we suspect the true dynamics are much simpler.
## 4 Meta Algorithm and Main Theorem
In this section we define a meta-algorithm for general reinforcement learning. The algorithm takes as an input an existing RL method, that may or may not have theoretical guarantees. It adds an additional layer on top, which estimates the Pseudo-Disturbances according to one of the three methods in the previous section. It then uses an online gradient method to optimize a linear policy in the past Pseudo-Disturbances. This can be viewed as a zeroth-order model-free version of the Gradient Perturbation Controller (GPC) Agarwal et al. (2019).
The algorithm is formally defined in Algorithm 1. A typical choice of the parametrization \(\pi(\cdot|M)\) is a linear function of a window of past disturbances (ie. Disturbance Action Control Agarwal et al. (2019)).
\[\pi(\mathbf{w}_{t-1:t-h}|M_{1:h})=\sum_{i=1}^{h}M_{i}\mathbf{w}_{t-i}. \tag{5}\]
```
1:Input: Memory parameter \(h\), learning rate \(\eta\), exploration noise covariance \(\Sigma\), initialization \(M_{1:h}^{1}\in\mathbb{R}^{d_{u}\times d_{w}\times h}\), initial value and \(\tilde{Q}\) functions, base RL algorithm \(\mathcal{A}\).
2:for\(t\) = 1... \(T\)do
3: Use action \(\mathbf{u}_{t}=\pi_{\text{base}}(\mathbf{x}_{t})+\pi(\hat{\mathbf{w}}_{t-1:t- h}|M^{t})+\mathbf{n}_{t}\), where \(\mathbf{n}_{t}\) is _iid_ Gaussian, i.e. \[\mathbf{n}_{t}\sim\mathcal{N}(0,\Sigma)\]
4: Observe state \(\mathbf{x}_{t+1}\), and cost \(c_{t}=c_{t}(\mathbf{x}_{t},\mathbf{u}_{t})\).
5: Compute Pseudo-Disturbance [see (2),(3), (4)] \[\hat{\mathbf{w}}_{t}=\text{PD-estimate}(\mathbf{x}_{t+1},\mathbf{x}_{t}, \mathbf{u}_{t},c_{t},\mathbf{n}_{t}).\]
6: Update policy parameters using the stochastic gradient estimate (see Section 4.1) \[M^{t+1}\gets M^{t}-\eta\ c_{t}(\mathbf{x}_{t},\mathbf{u}_{t})\Sigma^{-1} \sum_{j=0}^{h-1}\mathbf{n}_{t-i}\otimes J_{i}^{t},\] where \(\otimes\) is an outer product and \(J_{i}^{t}=\hat{\mathbf{w}}_{t-i-1:t-h-i}\) for (5), and more generally, \[J_{i}^{t}=\left.\frac{\partial\pi(\hat{\mathbf{w}}_{t-i-1:t-h-i}|M_{i})}{ \partial M}\right|_{M=M^{t}}.\]
7:endfor
8:Optionally, update the policy \(\pi_{\text{base}}\) and its \(Q,V\) functions using \(\mathcal{A}\) so that they are Bellman consistent, i.e. they satisfy the policy version of Bellman equation.
```
**Algorithm 1** MF-GPC (Model-Free Gradient Perturbation Controller)
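To make the update in line 6 concrete, here is a compact Python sketch of MF-GPC with the linear parametrization (5). The buffer sizes, isotropic exploration covariance \(\Sigma=\sigma^{2}I\) and all defaults are illustrative; the actual experiments build on a DDPG baseline rather than this standalone class.

```python
import numpy as np

class MFGPC:
    """Minimal sketch of the MF-GPC update (Algorithm 1) with the linear
    DAC parametrization of Eq. (5).  Intended call order per step:
    action() -> step environment -> compute pseudo-disturbance -> update()."""

    def __init__(self, d_u, d_w, h=5, lr=1e-3, sigma=0.2):
        self.M = np.zeros((h, d_u, d_w))                      # learned DAC matrices
        self.h, self.lr, self.sigma = h, lr, sigma
        self.w_hist = [np.zeros(d_w) for _ in range(2 * h)]   # past pseudo-disturbances, newest first
        self.n_hist = [np.zeros(d_u) for _ in range(h)]       # past exploration noises, newest first

    def action(self, u_base):
        noise = self.sigma * np.random.randn(*u_base.shape)   # n_t ~ N(0, sigma^2 I)
        self.n_hist = [noise] + self.n_hist[:-1]
        return u_base + sum(self.M[i] @ self.w_hist[i] for i in range(self.h)) + noise

    def update(self, cost, w_new):
        # Stochastic gradient of Eq. (6): cost * Sigma^{-1} sum_i n_{t-i} (outer) J_i
        grad = np.zeros_like(self.M)
        for i in range(self.h):
            J_i = np.stack(self.w_hist[i: i + self.h])         # window w_hat_{t-i-1 : t-h-i}
            grad += cost / self.sigma ** 2 * np.einsum('u,kw->kuw', self.n_hist[i], J_i)
        self.M -= self.lr * grad
        self.w_hist = [w_new] + self.w_hist[:-1]               # append w_hat_t after the update
```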
**Theorem 4** (Informal Statement (see Theorem 8)).: _If the underlying dynamics are linear with the state evolution specified as_
\[\mathbf{x}_{t+1}=A\mathbf{x}_{t}+B\mathbf{u}_{t}+\mathbf{w}_{t},\]
_with \(d_{u}\ \leq\ d_{x}\), then as long as the Pseudo-Disturbance signal \(\hat{\mathbf{w}}_{t}\) satisfies \(\hat{\mathbf{w}}_{t}=T\mathbf{w}_{t}\) for some (possibly unknown) invertible map \(T\), Algorithm 1 generates controls \(\mathbf{u}_{t}\) such that, for any sequence of bounded (even adversarial) \(\mathbf{w}_{t}\), the following holds_
\[\sum_{t}c_{t}(\mathbf{x}_{t},\mathbf{u}_{t})\ \leq\ \min_{\pi\in\Pi^{DAC}} \sum_{t}c_{t}(\mathbf{x}_{t}^{\pi},\mathbf{u}_{t}^{\pi})+\widetilde{\mathcal{ O}}(d_{u}T^{3/4}),\]
_for any sequence of convex costs \(c_{t}\), where the policy class \(DAC\) refers to all policies \(\pi\) that produce a control as a linear function of \(\mathbf{w}_{t}\). Further, if the costs \(c_{t}\) are \(L\)-smooth, the regret for Algorithm 1 admits the following improved upper bound of \(\widetilde{\mathcal{O}}(d_{u}^{4/3}T^{2/3})\)._
In particular, the above theorem implies the stated regret bounds when the Pseudo-Disturbance is estimated as described in Equations 3 (Vector Value Function-based) and 4 (Simulator-based).
### Derivation of update
In the algorithm, the key component is computing an approximate policy gradient of the cost. A complete theoretical analysis of our algorithm can be found in Appendix E, but we provide a brief sketch of the gradient calculation. Let \(J_{t}(M)\) denote the expected counterfactual cost \(c_{t}\) of following policy \(M\) with the same observed disturbances \(w_{t}\). We first note that if the dynamics are suitably stabilized (which should be done by \(\pi_{\text{base}}\)), the state and cost can be approximated as a function \(C\) of a small window of previous controls.
\[J_{t}(M)=\mathbb{E}_{\mathbf{n}_{1:t}}[c_{t}(\mathbf{x}_{t}^{M},\mathbf{u}_{t }^{M})]\approx\mathbb{E}_{\mathbf{n}_{t-h:t}}[C(\mathbf{u}_{t}(M)+\mathbf{n} _{t},\ldots,\mathbf{u}_{t-h}(M)+\mathbf{n}_{t-h})]\;,\]
where we use \(u_{t-i}(M)\) as a shorthand for \(\pi(\hat{\mathbf{w}}_{t-i-1:t-h-i}|M)\). The expression here is that of a Gaussian smoothed function, which allows us to get the following unbiased single point gradient estimate
\[\nabla_{\mathbf{u}_{t}}\mathbb{E}_{\mathbf{n}_{t-h:t}}[C(\mathbf{u}_{t}+ \mathbf{n}_{t},\ldots,\mathbf{u}_{t-h}+\mathbf{n}_{t-h})]=\mathbb{E}_{\mathbf{ n}_{t-h:t}}[\Sigma^{-1}C(\mathbf{u}_{t}+\mathbf{n}_{t},\ldots,\mathbf{u}_{t-h}+ \mathbf{n}_{t-h})\mathbf{n}_{i}]\;.\]
We use a single sample to get a stochastic gradient. Using the chain rule, which involves an outer product due to the tensor structure of \(M\), we get stochastic gradients with respect to \(M\) as follows
\[\widehat{\nabla_{M}}J_{t}(M)\approx C(\mathbf{u}_{t}(M)+\mathbf{n}_{t}, \ldots,\mathbf{u}_{t-h}(M)+\mathbf{n}_{t-h})\Sigma^{-1}\sum_{i=0}^{h-1} \mathbf{n}_{t-i}\otimes\frac{\partial\pi(\hat{\mathbf{w}}_{t-i-1:t-h-i}|M)}{ \partial M}\;.\]
Finally, we note that \(M^{t}\) is slowly moving because of gradient descent, so we can approximate
\[c_{t}(\mathbf{x}_{t},\mathbf{u}_{t})\approx C(\mathbf{u}_{t}(M^{t})+\mathbf{ n}_{t},\ldots,\mathbf{u}_{t-h}(M^{t})+\mathbf{n}_{t-h}).\]
Putting everything together, we have
\[\widehat{\nabla_{M}}J_{t}(M)\Big{|}_{M=M^{t}}\approx\;c_{t}(\mathbf{x}_{t}, \mathbf{u}_{t})\Sigma^{-1}\sum_{i=0}^{h-1}\mathbf{n}_{t-i}\otimes\left.\frac {\partial\pi(\hat{\mathbf{w}}_{t-i-1:t-h-i}|M)}{\partial M}\right|_{M=M^{t}}. \tag{6}\]
## 5 Experiments
We apply the MF-GPC Algorithm 1 to various OpenAI Gym (Brockman et al., 2016) environments. We conduct our experiments in the research-first modular framework Acme (Hoffman et al., 2020). We pick \(h=5\) and use the DDPG algorithm (Lillicrap et al., 2016) as our underlying baseline. We update the \(M\) matrices every 3 episodes instead of continuously to reduce runtime. We also apply weight decay to line 6 of Algorithm 1. Our implementation of PD1 is based on Equation 2. PD2 can be implemented with any vector of rewards. We choose the linear function \(L\) given in Lemma 2 to be the identity. Hence \(\mathbf{c}\) in Equation 3 reduces to the state \(x_{t}\) itself. We pick \(\mathbf{V}\) and \(\mathbf{Q}\) to be the first \(d_{x}\) units of the last layer of the critic network. We train for 1e7 steps as a default (this is also the default in the Acme code) and, if performance has not converged, we extend to 1.5e7 steps. Because the \(M\) matrices impact the exploration of the algorithm, we tune the exploration parameter \(\sigma\) for both DDPG and MF-GPC. For the baseline DDPG, we typically explore \(\sigma\in\{0.15,0.2,0.25\}\). More experimental details may be found in Appendix Section B.
Results for Noisy Hopper, Walker 2D, and Ant. We create noisy Hopper, Walker 2D, and Ant environments by adding a uniform random variable \(U[-0.1,0.1]\) to the state. The noise is added at every step for both the DDPG baseline and our MF-GPC. We plot the results for PD2 and PD3 in Figure 1. We find that PD2 and PD3 perform relatively well in these settings. Graphs depicting all runs for different \(\sigma\) are available in Appendix Section B. MF-GPC is not guaranteed to improve
performance in realistic RL settings. We find that generally PD1 does not perform well e.g. in Figure 2 a) and some examples where applying it yields performance similar to baseline are given in Appendix Section B. This is likely due to the high variance of the PD estimate. We find that neither our method nor the baseline is too sensitive to our hyper-parameter tuning (Figure 2 b) ), possibly because we start with the default Acme parameters which are already well tuned for the noiseless environment.
Linear Dynamical Systems. We evaluate our methods on both a low-dimensional (\(d_{x}=2,d_{u}=1\)) and a higher-dimensional (\(d_{x}=10,d_{u}=5\)) linear system with sinusoidal disturbances to demonstrate the improved dimension dependence of our method (labeled RBPC) over BPC (Gradu et al., 2020). We use the full-information GPC (Agarwal et al., 2019) and LQR as baselines, using implementations from Gradu et al. (2021). While performance is comparable to BPC on the small system, on the larger system BPC could not be tuned to learn, while RBPC improves upon the LQR baseline (see Figure 3). In both experiments, \(h=5\) and the learning rate and exploration noise are tuned.
Figure 1: Episode return for best performing MF-GPC model versus best performing baseline DDPG model for various OpenAI Gym environments and pseudo-estimation methods. Environment and pseudo-estimation method shown in title. Results averaged over 25 seeds. Shaded areas represent confidence intervals. We find that PD2 and PD3 perform well in these settings.
Figure 2: Left: Episode return for PD1 for Noisy Hopper. We find that PD1 is not effective for RL settings. Right: Hyper-parameter search for PD3 on Noisy Walker. We find that neither Meta-GPC nor the baseline DDPG algorithm is too sensitive to tuning.
## 6 Conclusion
We have described a new approach for model-free RL based on recent exciting advancements in model based online control. Instead of using state-based policies, online nonstochastic control proposes the use of disturbance-based policies. To create a disturbance signal without a model, we define three possible signals, called Pseudo-Disturbances, each with its own merits and limitations. We give a generic reinforce-based method using the PD signals with provable guarantees: if the underlying MDP is a linear dynamical system, we recover the strong guarantees of online nonstochastic control. Preliminary promising experimental results are discussed. We believe this is a first step in the exciting direction of applying tried-and-tested model-based control techniques for general reinforcement learning.
|
2304.00307 | Model reduction of Brownian oscillators: quantification of errors and
long-time behaviour | A procedure for model reduction of stochastic ordinary differential equations
with additive noise was recently introduced in [Colangeli-Duong-Muntean,
Journal of Physics A: Mathematical and Theoretical, 2022], based on the
Invariant Manifold method and on the Fluctuation-Dissipation relation. A
general question thus arises as to whether one can rigorously quantify the
error entailed by the use of the reduced dynamics in place of the original one.
In this work we provide explicit formulae and estimates of the error in terms
of the Wasserstein distance, both in the presence or in the absence of a sharp
time-scale separation between the variables to be retained or eliminated from
the description, as well as in the long-time behaviour.
Keywords: Model reduction, Wasserstein distance, error estimates, coupled
Brownian oscillators, invariant manifold, Fluctuation-Dissipation relation. | M. Colangeli, M. H. Duong, A. Muntean | 2023-04-01T13:06:47Z | http://arxiv.org/abs/2304.00307v1 | # Model reduction of Brownian oscillators: quantification of errors and long-time behaviour
###### Abstract
A procedure for model reduction of stochastic ordinary differential equations with additive noise was recently introduced in [1], based on the Invariant Manifold method and on the Fluctuation-Dissipation relation. A general question thus arises as to whether one can rigorously quantify the error entailed by the use of the reduced dynamics in place of the original one. In this work we provide explicit formulae and estimates of the error in terms of the Wasserstein distance, both in the presence or in the absence of a sharp time-scale separation between the variables to be retained or eliminated from the description, as well as in the long-time behaviour.
**Keywords:** Model reduction, Wasserstein distance, error estimates, coupled Brownian oscillators, invariant manifold, Fluctuation-Dissipation relation.
## 1 Introduction
The notion of scale separation is largely invoked in multiscale modelling and homogenization methods (including model reduction and operator splitting techniques) [1, 2], and has also found far-reaching applications in different areas of science and engineering, _e.g._ in climate dynamics [1], biochemical systems [16], chemical reaction networks [14], smoldering combustion [15], and so on. A neat illustration of this notion can be found in the preface of Haken's seminal book on Synergetics [11], where the author writes: "In large classes of systems that are originally described by _many_ variables, the behavior of a system is described and determined by only _few_ variables, the _order parameters_. They fix the behavior of the individual parts via the _slaving principle_". A physical rationale behind the slaving principle amounts to the assumption of decomposition of motions: there exists a short time-scale during which the slow variable does not change significantly, while the fast variable rapidly settles on a value determined by the slow one. The evolution of the latter, in turn, takes place on a much longer scale. A specific form of such a principle is realized through the method of adiabatic elimination of fast variables, which underlies the derivation of the Smoluchowski equation from the underdamped Langevin equation. A sharp distinction between slow and fast variables is also a prerequisite for application of the Mori-Zwanzig method [17] in the derivation of reduced equations from
higher dimensional stochastic dynamics, where the Markovian structure of the original process is preserved in the reduced description by stipulating a perfect time-scale separation. The same guiding principle underpins, in kinetic theory, the Grad moment method [14, 15], and has also been exploited in the derivation of linear hydrodynamics from the Boltzmann equation using the framework of the Invariant Manifold [16, 17]. The latter method has also been exploited in [15] to characterize the deterministic component of the contracted description in a system of two coupled (underdamped) Brownian harmonic oscillators. The structure of the noise term of the Markovian reduced dynamics, in turn, was determined via the Fluctuation-Dissipation relation. A general question, then, concerns the derivation of a quantitative estimate of the error stemming from the use of the reduced dynamics in place of the original one. A first attempt, in this direction, was proposed in [14], and it was based on the study of the equilibrium correlation functions in the reduced and in the original processes. A uniform-in-time type of convergence of the correlations evaluated in the two processes was proven to hold in the so-called overdamped limit, where the friction parameter diverges.
In this work we take a step further, and compute explicitly the Wasserstein distance between the laws of the original and reduced processes. This paves the way to explicitly quantify the error inherent to the contracted description. We focus on two classical models thoroughly studied in statistical physics and molecular dynamics, namely the underdamped Brownian harmonic oscillator and a system of two coupled overdamped Brownian harmonic oscillators. In the more traditional approach based on the slow-fast decomposition of motions, a reduced description can be achieved by passing the parameter to a certain limit, thus establishing a perfect time-scale separation, see e.g. [18, 19]. In the present work, instead, we derive the reduced dynamics in a regime characterized by a finite time-scale separation, which is controlled, in the two considered models, by either the friction parameter or the coupling parameter. We show that the reduced and original dynamics are exponentially close at any time, and they coincide if we pass the parameter to the corresponding limit. We also prove that the two dynamics have the same equilibrium measure and, furthermore, they exponentially converge to the equilibrium measure with the same rate. This notable property is a direct consequence of the proposed reduction scheme, in particular of the selection of solutions to the invariance equation obtained from the Invariant Manifold method. As a consequence of this, the spectrum of the reduced drift matrix is a subset of the spectrum of the original drift matrix. The models and precise statements of the results are presented in Section 3 and Section 4.
The work is structured as follows. In Sec. 2 we review the definition of the Wasserstein distance between two probability measures and introduce the basic notation used throughout the manuscript. In Sec. 3 we compute our error estimate based on the Wasserstein distance for a Brownian harmonic oscillator, for which the laws of the original and the contracted descriptions are analytically known. In Sec. 4 we apply our method to a slightly more involved model, constituted by a pair of coupled overdamped Brownian harmonic oscillators. Conclusions and a final outlook are finally drawn in Sec. 5.
## 2 Preliminaries
In this Section we introduce the Wasserstein distance between two probability measures and also fix the notation used throughout the manuscript.
### Wasserstein distance
In this section we recall the definition of the Wasserstein distance between two probability measures and its explicit formula when the two probability measures are Gaussian distributions. The Wasserstein metric plays a central role in many research fields such as optimal transport, partial differential equations and data science. For a detailed account of these topics, we refer the reader to Villani's monograph [19].
Let \(P_{2}(\mathbb{R}^{d})\) be the space of probability measures \(\mu\) on \(\mathbb{R}^{d}\) with finite second moment, namely
\[\int_{\mathbb{R}^{d}}|x|^{2}\mu(dx)<\infty.\]
Let \(\mu\) and \(\nu\) be two probability measures belonging to \(P_{2}(\mathbb{R}^{d})\). The \(L^{2}\)-Wasserstein distance, \(W_{2}(\mu,\nu)\), between \(\mu\) and \(\nu\) is defined via
\[W_{2}^{2}(\mu,\nu):=\inf_{\gamma\in\Gamma(\mu,\nu)}\int_{\mathbb{R}^{d}\times \mathbb{R}^{d}}|x-y|^{2}\,\gamma(dx,dy), \tag{1}\]
where \(\Gamma(\mu,\nu)\) denotes the set of all couplings between \(\mu\) and \(\nu\), i.e., the set of all probability measures on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) having \(\mu\) and \(\nu\) as the first and the second marginals respectively. More precisely,
\[\Gamma(\mu,\nu):=\{\gamma\in P(\mathbb{R}^{d}\times\mathbb{R}^{d}):\gamma(A \times\mathbb{R}^{d})=\mu(A)\text{ and }\gamma(\mathbb{R}^{d}\times A)=\nu(A)\},\]
for all Borel measurable sets \(A\subset\mathbb{R}^{d}\).
In particular, the Wasserstein distance between two Gaussian measures can be computed explicitly in terms of the means and covariance matrices [10], see also e.g., [11]
\[W_{2}(\mathcal{N}(u,U),\mathcal{N}(v,V))^{2}=|u-v|^{2}+\mathrm{tr}U+\mathrm{ tr}V-2\mathrm{tr}\sqrt{V^{\frac{1}{2}}UV^{\frac{1}{2}}}, \tag{2}\]
where \(u,v\) are the means and \(U,V\) are the covariance matrices. In a one dimensional space, the above formula reduces to
\[W_{2}(\mathcal{N}(u_{1},\sigma_{1}^{2}),\mathcal{N}(u_{2},\sigma_{2}^{2}))^{2}=(u_{1}-u_{2})^{2}+(\sigma_{1}-\sigma_{2})^{2}. \tag{3}\]
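As a quick numerical illustration of formulas (2) and (3), the sketch below evaluates the squared distance using `scipy.linalg.sqrtm`; the numerical values in the one-dimensional check are arbitrary.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_squared(u, U, v, V):
    """Squared 2-Wasserstein distance between N(u, U) and N(v, V), formula (2)."""
    cross = sqrtm(sqrtm(V) @ U @ sqrtm(V))     # sqrt(V^{1/2} U V^{1/2})
    return float(np.sum((u - v) ** 2) + np.trace(U) + np.trace(V)
                 - 2.0 * np.trace(np.real(cross)))

# One-dimensional sanity check against formula (3): (0-1)^2 + (1-2)^2 = 2
print(gaussian_w2_squared(np.array([0.0]), np.array([[1.0]]),
                          np.array([1.0]), np.array([[4.0]])))
```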
### Linear drift-diffusion equations
We recall here a well-known result concerning the explicit solution of a general linear drift-diffusion where the initial data is a Gaussian distribution. In the subsequent sections, we will apply this result to our models of (coupled) Brownian oscillators.
To set the stage, we consider the following general linear drift-diffusion equation
\[\partial_{t}\rho=-\operatorname{div}(Cx\rho)+\operatorname{div}(D\nabla\rho),\quad\rho(0)=\rho_{0}. \tag{4}\]
In the above equation, the unknown is a probability measure \(\rho=\rho(t,x)\) with \((t,x)\in(0,\infty)\times\mathbb{R}^{d}\); \(C\) and \(D\) are two constant matrices of order \(d\) representing the drift and diffusion matrices; the initial data \(\rho_{0}\) is a probability measure on \(\mathbb{R}^{d}\).
The following lemma provides the explicit formula for the solution of (4) when the initial data is a Gaussian distribution, see for instance [1].
**Lemma 2.1**.: _Suppose the initial data is a Gaussian, \(\rho_{0}\sim\mathcal{N}(\mu(0),\Sigma(0))\), then the solution to (4) is given by_
\[\rho(t,x)=\frac{1}{\sqrt{(2\pi)^{d}\det\Sigma(t)}}\exp\Big{[}-\frac{1}{2}(x- \mu(t))^{T}\Sigma^{-1}(t)(x-\mu(t))\Big{]} \tag{5}\]
_where \(\mu(t)\) and \(\Sigma(t)\) are given by_
\[\mu(t):=e^{tC}\mu(0),\quad\Sigma(t):=e^{tC}\Sigma(0)e^{tC^{T}}+2\int_{0}^{t}e ^{sC}De^{sC^{T}}\,ds. \tag{6}\]
_Under suitable conditions on \(C\) and \(D\), we have \(\mu(t)\to 0\) and \(\Sigma(t)\to\Sigma_{\infty}\) where_
\[\Sigma_{\infty}:=2\int_{0}^{\infty}e^{sC}De^{sC^{T}}\,ds.\]
_Note that \(\Sigma_{\infty}\) satisfies the so-called Lyapunov equation_
\[2D=C\Sigma_{\infty}+\Sigma_{\infty}C^{T}.\]
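For concreteness, formula (6) can be evaluated numerically as in the sketch below, where the time integral is approximated by a simple Riemann sum; the quadrature resolution and the use of `scipy.linalg.expm` are arbitrary implementation choices.

```python
import numpy as np
from scipy.linalg import expm

def gaussian_solution(C, D, mu0, Sigma0, t, n_quad=400):
    """Mean and covariance of the Gaussian solution (5)-(6) of equation (4)."""
    mu_t = expm(t * C) @ mu0
    s_grid = np.linspace(0.0, t, n_quad)
    ds = s_grid[1] - s_grid[0]
    # Crude quadrature of 2 * int_0^t exp(sC) D exp(sC^T) ds
    integral = sum(expm(s * C) @ (2 * D) @ expm(s * C.T) for s in s_grid) * ds
    Sigma_t = expm(t * C) @ Sigma0 @ expm(t * C.T) + integral
    return mu_t, Sigma_t
```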
### Exponential of a \(2\times 2\) matrix
Lemma 2.1 provides the explicit form of the unique solution to the linear drift-diffusion equation (4) when the initial data is a Gaussian. However, in general the formula (6) is analytically hard to compute since it involves exponential of matrices. The following lemma provides an explicit formula for the exponential of a \(2\times 2\) matrix, which will be used in the subsequent analysis.
**Lemma 2.2**.: _Let \(a,b,c,d\in\mathbb{R}\) be taken arbitrarily with \(a^{2}+b^{2}+c^{2}+d^{2}>0\). The following identity holds_
\[\exp\begin{pmatrix}a&b\\ c&d\end{pmatrix}=\frac{1}{\Delta}\begin{pmatrix}m_{11}&m_{12}\\ m_{21}&m_{22}\end{pmatrix}, \tag{7}\]
_where \(\Delta:=\sqrt{(a-d)^{2}+4bc}\) and_
\[m_{11} :=e^{(a+d)/2}\Big{[}\Delta\cosh\frac{1}{2}\Delta+(a-d)\sinh\frac{ 1}{2}\Delta\Big{]},\] \[m_{12} :=2be^{(a+d)/2}\sinh\frac{1}{2}\Delta,\] \[m_{21} :=2ce^{(a+d)/2}\sinh\frac{1}{2}\Delta,\] \[m_{22} :=e^{(a+d)/2}\Big{[}\Delta\cosh\frac{1}{2}\Delta+(d-a)\sinh\frac {1}{2}\Delta\Big{]}.\]
Proof.: We refer the reader to [1] for a justification of the formula (7).
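The closed form (7) is easy to check numerically against a general-purpose matrix exponential, as in the following sketch; the test matrix is arbitrary and the formula is used away from the degenerate case \(\Delta=0\).

```python
import numpy as np
from scipy.linalg import expm

def expm_2x2(a, b, c, d):
    """Closed-form exponential of a 2x2 matrix, formula (7); assumes Delta != 0."""
    delta = np.sqrt(complex((a - d) ** 2 + 4 * b * c))   # may be purely imaginary
    pref = np.exp((a + d) / 2) / delta
    ch, sh = np.cosh(delta / 2), np.sinh(delta / 2)
    M = pref * np.array([[delta * ch + (a - d) * sh, 2 * b * sh],
                         [2 * c * sh, delta * ch + (d - a) * sh]])
    return np.real(M)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
print(np.allclose(expm_2x2(*A.flatten()), expm(A)))      # sanity check against scipy
```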
## 3 Model reduction of a Brownian oscillator
To start off the discussion, we begin with the investigation of a simple model of an underdamped Brownian oscillator considered in [1], which is amenable to an explicit analytical solution. The original dynamics reads as follows:
\[dx(t)=v(t)\,dt,\qquad dv(t)=-\omega^{2}x(t)\,dt-\gamma v(t)\,dt+\sqrt{2\gamma\beta^{-1}}\,dW(t),\qquad(x(0),v(0))=(x_{0},v_{0}).\]
Exploiting the Invariant Manifold method and the Fluctuation-Dissipation relation (for a short summary of the method, see Section 4 below, where the same reduction procedure is applied to a system of coupled overdamped Brownian harmonic oscillators), the reduced dynamics attains the form:
\[d\bar{x}(t)=-\alpha\bar{x}(t)\,dt+\sqrt{2D_{r}}\,dW(t),\quad\bar{x}(0)=x_{0},\]
where
\[\alpha=\frac{\gamma-\sqrt{\gamma^{2}-4\omega^{2}}}{2},\quad D_{r}=\frac{ \alpha}{\omega^{2}\beta}.\]
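As a quick illustration of the two dynamics, the following sketch simulates both the original and the reduced process with a plain Euler-Maruyama scheme; the parameter values are arbitrary (chosen with \(\gamma^{2}>4\omega^{2}\)), and the two equations are driven by the same Brownian increments purely for illustration.

```python
import numpy as np

def simulate(gamma=10.0, omega=1.0, beta=1.0, x0=1.0, v0=0.0, T=5.0, dt=1e-3, seed=0):
    """Euler-Maruyama simulation of the underdamped oscillator and its reduced dynamics."""
    rng = np.random.default_rng(seed)
    alpha = (gamma - np.sqrt(gamma ** 2 - 4 * omega ** 2)) / 2
    D_r = alpha / (omega ** 2 * beta)
    x, v, xbar = x0, v0, x0
    for _ in range(int(T / dt)):
        dW = np.sqrt(dt) * rng.standard_normal()
        x, v = (x + v * dt,
                v - (omega ** 2 * x + gamma * v) * dt + np.sqrt(2 * gamma / beta) * dW)
        xbar = xbar - alpha * xbar * dt + np.sqrt(2 * D_r) * dW
    return x, xbar

print(simulate())
```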
The reader is referred to [1] to see the details of the calculations. The main result of this section is the following theorem.
**Theorem 3.1**.:
1. _(exact solutions of the original and the reduced dynamics)_ \(\mu_{t}\) _and_ \(\bar{\mu}_{t}\) _are Gaussian measures_ \[\mu_{t}=\mathcal{N}(m(t),\sigma(t)),\quad\bar{\mu}_{t}=\mathcal{N}(\bar{m}_{t },\bar{\sigma}(t)),\] (8)
_where_ \[m(t) =\frac{\lambda_{1}e^{-\lambda_{2}t}-\lambda_{2}e^{-\lambda_{1}t}}{ \lambda_{1}-\lambda_{2}}x_{0}+\frac{e^{-\lambda_{2}t}-e^{-\lambda_{1}t}}{\lambda _{1}-\lambda_{2}}v_{0},\] \[\sigma(t) =\frac{\gamma\beta^{-1}}{(\lambda_{1}-\lambda_{2})^{2}}\Big{[} \frac{\lambda_{1}+\lambda_{2}}{\lambda_{1}\lambda_{2}}+\frac{4}{\lambda_{1}+ \lambda_{2}}(e^{-(\lambda_{1}+\lambda_{2})t}-1)-\frac{1}{\lambda_{1}}e^{-2 \lambda_{1}t}-\frac{1}{\lambda_{2}}e^{-2\lambda_{2}t}\Big{]},\] \[\bar{m}(t) =e^{-\lambda_{2}t}\bar{x}_{0},\] \[\bar{\sigma}(t) =\frac{1}{\omega^{2}\beta}(1-e^{-2\lambda_{2}t})\] _where_ \[\lambda_{1}=\frac{\gamma+\sqrt{\gamma^{2}-4\omega^{2}}}{2},\quad\lambda_{2} =\frac{\gamma-\sqrt{\gamma^{2}-4\omega^{2}}}{2}=\frac{2\omega^{2}}{\gamma+ \sqrt{\gamma^{2}-4\omega^{2}}}.\] (9)
2. _(Exact Wasserstein distance between the laws of the original and reduced dynamics) The Wasserstein distance between_ \(\mu_{t}\) _and_ \(\bar{\mu}_{t}\) _can be computed explicitly via_ \[W_{2}^{2}(\mu_{t},\bar{\mu}_{t})=(m(t)-\bar{m}(t))^{2}+\Big{(}\sqrt{\sigma(t)}-\sqrt{\bar{\sigma}(t)}\Big{)}^{2}.\] (10)
3. _(explicit rate of convergence in the high-friction limit) It holds that_ \[W_{2}^{2}(\mu_{t},\bar{\mu}_{t})\leq\frac{4}{\gamma^{2}-4\omega^{2}}\Big{[}( \omega|x_{0}|+|v_{0}|)^{2}+\frac{4}{\beta}\Big{]}\quad\forall t>0.\] (11) _As a consequence,_ \[\lim_{\gamma\to+\infty}W_{2}^{2}(\mu_{t},\bar{\mu}_{t})=0.\] _Note that (_11_) is a much stronger statement providing an explicit rate of convergence._
4. _(Common rates of convergence to equilibrium) There exists a constant_ \(C>0\)_, which can be found explicitly, such that_ \[W_{2}(\mu_{t},\mu_{\infty}),\;W_{2}(\bar{\mu}_{t},\bar{\mu}_{\infty})\leq Ce^ {-\lambda_{2}t},\] _where_ \[\mu_{\infty}=\bar{\mu}_{\infty}=\mathcal{N}\Big{(}0,\frac{1}{\beta\omega^{2 }}\Big{)}.\] _This result shows that the original dynamics and the reduced one not only share the same equilibrium, they have the same rates of convergence to equilibrium in the Wasserstein distance._
5. _(long-time behaviour) It holds that_ \[W_{2}^{2}(\mu_{t},\bar{\mu}_{t})\leq\Big{[}\frac{\omega|x_{0}|+|v_{0}|}{\sqrt {\gamma^{2}-4\omega^{2}}}+\frac{10}{\beta(\gamma^{2}-4\omega^{2})}\Big{]}e^{- \lambda_{2}t}.\] (12) _As a consequence of this, we also have_ \[\lim_{t\to+\infty}W_{2}(\mu_{t},\bar{\mu}_{t})=0,\] _which is already obtained in the previous part. Estimate (_12_) is a stronger statement, showing that the two dynamics are exponentially close at any time_ \(t>0\)_._
6. _Suppose that the initial data_ \(x_{0}\) _is randomly distributed according to an even probability measure_ \(\rho_{0}\in L^{1}(\mathbb{R})\); _then the estimates in parts_ \((iii)\) _and_ \((iv)\) _still hold true._
_Proof._\((i)\). The law \(\rho_{t}\) of \(z(t)=\begin{pmatrix}x(t)\\ v(t)\end{pmatrix}\) satisfies the kinetic Fokker-Planck equation
\[\partial_{t}\rho_{t}=\mathscr{L}^{*}\rho_{t},\quad\rho|_{t=0}=\delta_{(x_{0},v_ {0})},\]
where \(\mathscr{L}^{*}\rho:=-v\partial_{x}\rho+\omega^{2}x\partial_{v}\rho+\gamma \big{[}\partial_{v}(v\rho)+\beta^{-1}\partial_{vv}^{2}\rho\big{]}\).
According to [Risken, Section 10.2], \(\rho_{t}\) is a bivariate Gaussian measure with mean \(M(t)\in\mathbb{R}^{2}\) and covariance matrix \(\Sigma(t)\in\mathbb{R}^{2\times 2}\). They are \(t\)-dependent objects given by
\[M(t)=\begin{pmatrix}m_{x}(t)\\ m_{v}(t)\end{pmatrix},\quad\Sigma(t)=\begin{pmatrix}\sigma_{xx}(t)&\sigma_{xv}(t)\\ \sigma_{vx}(t)&\sigma_{vv}(t)\end{pmatrix},\]
where
\[m_{x}(t) =\frac{\lambda_{1}e^{-\lambda_{2}t}-\lambda_{2}e^{-\lambda_{1}t }}{\lambda_{1}-\lambda_{2}}x_{0}+\frac{e^{-\lambda_{2}t}-e^{-\lambda_{1}t}}{ \lambda_{1}-\lambda_{2}}v_{0},\] \[m_{v}(t) =\omega^{2}\frac{e^{-\lambda_{1}t}-e^{-\lambda_{2}t}}{\lambda_{1 }-\lambda_{2}}x_{0}+\frac{\lambda_{1}e^{-\lambda_{1}t}-\lambda_{2}e^{-\lambda _{2}t}}{\lambda_{1}-\lambda_{2}}v_{0},\] \[\sigma_{xx}(t) =\frac{\gamma\beta^{-1}}{(\lambda_{1}-\lambda_{2})^{2}}\Big{[} \frac{\lambda_{1}+\lambda_{2}}{\lambda_{1}\lambda_{2}}+\frac{4}{\lambda_{1}+ \lambda_{2}}(e^{-(\lambda_{1}+\lambda_{2})t}-1)-\frac{1}{\lambda_{1}}e^{-2 \lambda_{1}t}-\frac{1}{\lambda_{2}}e^{-2\lambda_{2}t}\Big{]},\] \[\sigma_{xv}(t) =\frac{\gamma\beta^{-1}}{(\lambda_{1}-\lambda_{2})^{2}}(e^{- \lambda_{1}t}-e^{-\lambda_{2}t})^{2},\] \[\sigma_{vv}(t) =\frac{\gamma\beta^{-1}}{(\lambda_{1}-\lambda_{2})^{2}}\Big{[} \lambda_{1}+\lambda_{2}+\frac{4\lambda_{1}\lambda_{2}}{\lambda_{1}+\lambda_{2 }}(e^{-(\lambda_{1}+\lambda_{2})t}-1)-\lambda_{1}e^{-2\lambda_{1}t}-\lambda_{2} e^{-2\lambda_{2}t}\Big{]},\]
where
\[\lambda_{1}=\frac{\gamma+\sqrt{\gamma^{2}-4\omega^{2}}}{2},\quad\lambda_{2}= \frac{\gamma-\sqrt{\gamma^{2}-4\omega^{2}}}{2},\quad\text{thus}\quad\lambda_{ 1}+\lambda_{2}=\gamma,\quad\lambda_{1}\lambda_{2}=\omega^{2},\quad\lambda_{1}- \lambda_{2}=\sqrt{\gamma^{2}-4\omega^{2}}. \tag{13}\]
Note that, since in the overdamped regime \(\gamma\geq 2\omega\), we have
\[\lambda_{2}=\frac{\gamma-\sqrt{\gamma^{2}-4\omega^{2}}}{2}=\frac{4\omega^{2}} {2(\gamma+\sqrt{\gamma^{2}-4\omega^{2}})}\leq\frac{4\omega^{2}}{4\omega}=\omega.\]
Since \(z(t)\) is a bivariate Gaussian, it follows that the law of \(x(t)\), which is the first marginal of \(z(t)\), is a univariate Gaussian measure, \(\mu_{t}=\mathcal{N}(m(t),\sigma(t))\), with mean \(m(t)=m_{x}(t)\) and variance \(\sigma(t)=\sigma_{xx}(t)\), where \(m_{x}(t)\) and \(\sigma_{xx}(t)\) are defined above. Using (13) we can re-write \(m(t)\) and \(\sigma(t)\) as follows
\[m(t) =e^{-\lambda_{2}t}x_{0}+\frac{e^{-\lambda_{2}t}-e^{-\lambda_{1}t }}{\lambda_{1}-\lambda_{2}}(\lambda_{2}x_{0}+v_{0}), \tag{14}\] \[\sigma(t) =\frac{\gamma\beta^{-1}}{(\gamma^{2}-4\omega^{2})}\Big{[}\frac{ \gamma}{\omega^{2}}+\frac{4}{\gamma}(e^{-\gamma t}-1)-\frac{1}{\lambda_{1}}e^{- 2\lambda_{1}t}-\frac{1}{\lambda_{2}}e^{-2\lambda_{2}t}\Big{]}\] (15) \[=\frac{1}{\beta\omega^{2}\left[1-4(\omega/\gamma)^{2}\right]} \big{(}1-e^{-2\lambda_{2}t}\big{)}+\frac{\gamma\beta^{-1}}{(\gamma^{2}-4 \omega^{2})}\Big{[}\frac{4}{\gamma}(e^{-\gamma t}-1)-\frac{e^{-2\lambda_{1}t }-e^{-2\lambda_{2}t}}{\lambda_{1}}\Big{]}, \tag{16}\]
where in the last equality we have used the following equality
\[\frac{1}{\lambda_{1}}e^{-2\lambda_{1}t}+\frac{1}{\lambda_{2}}e^{- 2\lambda_{2}t} =\frac{(\lambda_{1}+\lambda_{2})e^{-2\lambda_{2}t}}{\lambda_{1} \lambda_{2}}+\frac{(e^{-2\lambda_{1}t}-e^{-2\lambda_{2}t})}{\lambda_{1}}\] \[=\frac{\gamma e^{-2\lambda_{2}t}}{\omega^{2}}+\frac{(e^{-2\lambda_{ 1}t}-e^{-2\lambda_{2}t})}{\lambda_{1}}\]
The reduced dynamics is an Ornstein-Uhlenbeck process, therefore its law is a Gaussian measure, \(\bar{\mu}_{t}=\mathcal{N}(\bar{m}(t),\bar{\sigma}(t))\), with mean
\[\bar{m}(t)=e^{-\alpha t}x_{0}=e^{-\lambda_{2}t}x_{0}, \tag{17}\]
and variance
\[\bar{\sigma}(t)=\frac{D_{r}}{\alpha}(1-e^{-2\alpha t})=\frac{1}{\omega^{2} \beta}(1-e^{-2\lambda_{2}t}). \tag{18}\]
\((ii)\) Using the general explicit formula for the Wasserstein distance between two univariate Gaussian measures, we obtain the Wasserstein distance between the original dynamics and the reduced dynamics, \(W_{2}^{2}(\mu_{t},\bar{\mu}_{t})\), as follows
\[W_{2}^{2}(\mu_{t},\bar{\mu}_{t})=\Big{(}m(t)-\bar{m}(t)\Big{)}^{2}+\Big{(}\sqrt{\sigma(t)}-\sqrt{\bar{\sigma}(t)}\Big{)}^{2}. \tag{19}\]
\((iii)\) We now provide estimate for \(W_{2}^{2}(\mu_{t},\bar{\mu}_{t})\) in the high-friction regime, which corresponds to a large time-scale separation, since the difference \(\lambda_{1}-\lambda_{2}=\sqrt{\gamma^{2}-4\omega^{2}}\) grows with \(\gamma\) for fixed \(\omega\). We have
\[m(t)-\bar{m}(t)=\frac{e^{-\lambda_{2}t}-e^{-\lambda_{1}t}}{\lambda_{1}- \lambda_{2}}(\lambda_{2}x_{0}+v_{0}). \tag{20}\]
Therefore, since \(|e^{-\lambda_{2}t}-e^{-\lambda_{1}t}|\leq|e^{-\lambda_{2}t}|+|e^{-\lambda_{1}t}|\leq 2\),
\[|m(t)-\bar{m}(t)|\leq\frac{2}{\sqrt{\gamma^{2}-4\omega^{2}}}\big{(}\lambda_{2 }|x_{0}|+|v_{0}|\big{)}\leq\frac{2}{\sqrt{\gamma^{2}-4\omega^{2}}}\big{(} \omega|x_{0}|+|v_{0}|\big{)}\]
Next we estimate \(|\sigma(t)-\bar{\sigma}(t)|\). Since
\[|e^{-\gamma t}-1|\leq e^{-\gamma t}+1\leq 2,\quad|e^{-2\lambda_{1}t}-e^{-2\lambda_{2}t}|\leq e^{-2\lambda_{1}t}+e^{-2\lambda_{2}t}\leq 2,\quad\lambda_{1}\geq\frac{\gamma}{2},\]
we have
\[\Big{|}\frac{4}{\gamma}(e^{-\gamma t}-1)-\frac{e^{-2\lambda_{1}t}-e^{-2 \lambda_{2}t}}{\lambda_{1}}\Big{|}\leq\frac{4}{\gamma}|e^{-\gamma t}-1|+\frac{ |e^{-2\lambda_{1}t}-e^{-2\lambda_{2}t}|}{\lambda_{1}}\leq\frac{12}{\gamma}.\]
Therefore,
\[|\sigma(t)-\bar{\sigma}(t)| =\bigg{|}\frac{1-e^{-2\lambda_{2}t}}{\beta\omega^{2}}\Big{[}\frac {1}{1-4(\omega/\gamma)^{2}}-1\Big{]}+\frac{\gamma\beta^{-1}}{(\gamma^{2}-4 \omega^{2})}\Big{[}\frac{4}{\gamma}(e^{-\gamma t}-1)-\frac{e^{-2\lambda_{1}t} -e^{-2\lambda_{2}t}}{\lambda_{1}}\Big{]}\bigg{|} \tag{21}\] \[=\bigg{|}\frac{1-e^{-2\lambda_{2}t}}{\beta}\frac{4(1/\gamma)^{2} }{1-4(\omega/\gamma)^{2}}+\frac{\gamma\beta^{-1}}{(\gamma^{2}-4\omega^{2})} \Big{[}\frac{4}{\gamma}(e^{-\gamma t}-1)-\frac{e^{-2\lambda_{1}t}-e^{-2 \lambda_{2}t}}{\lambda_{1}}\Big{]}\bigg{|}\] \[\leq\frac{4}{\beta(\gamma^{2}-4\omega^{2})}+\frac{12}{\beta( \gamma^{2}-4\omega^{2})}=\frac{16}{\beta(\gamma^{2}-4\omega^{2})}.\]
It follows that
\[W_{2}^{2}(\mu_{t},\bar{\mu}_{t}) =\big{(}m(t)-\bar{m}(t)\big{)}^{2}+\Big{(}\sqrt{\sigma(t)}-\sqrt{\bar{\sigma}(t)}\Big{)}^{2}\] \[\leq\big{(}m(t)-\bar{m}(t)\big{)}^{2}+|\sigma(t)-\bar{\sigma}(t)|\] \[\leq\frac{4}{\gamma^{2}-4\omega^{2}}(\omega|x_{0}|+|v_{0}|)^{2}+\frac{16}{\beta(\gamma^{2}-4\omega^{2})}=\frac{4}{\gamma^{2}-4\omega^{2}}\Big{[}(\omega|x_{0}|+|v_{0}|)^{2}+\frac{4}{\beta}\Big{]},\]
where to obtain the second line from the first line, we have used the inequality \((a-b)^{2}\leq|a^{2}-b^{2}|\) for \(a,b\geq 0\).
\((iv)\) We have
\[\lim_{t\to\infty}m(t)=\lim_{t\to\infty}\bar{m}(t)=0\quad\forall x_{0},v_{0};\quad \lim_{t\to\infty}\sigma(t)=\frac{1}{\beta\omega^{2}}=:\sigma_{\infty};\quad\lim _{t\to\infty}\bar{\sigma}(t)=\frac{1}{\beta\omega^{2}}=:\bar{\sigma}_{\infty}= \sigma_{\infty}.\]
Thus the original dynamics and the reduced one share the same equilibrium measure
\[\mu_{\infty}=\bar{\mu}_{\infty}=\mathcal{N}(0,\sigma_{\infty}).\]
Furthermore, we compute the rates of convergence explicitly
\[W_{2}(\mu_{t},\mu_{\infty})^{2} =(m(t)-m_{\infty})^{2}+(\sqrt{\sigma(t)}-\sqrt{\sigma_{\infty}})^{2}\] \[\leq m(t)^{2}+|\sigma(t)-\sigma_{\infty}|\] \[=\Big{(}e^{-\lambda_{2}t}x_{0}+\frac{e^{-\lambda_{2}t}-e^{-\lambda_{1}t}}{\lambda_{1}-\lambda_{2}}(\lambda_{2}x_{0}+v_{0})\Big{)}^{2}+\frac{\gamma\beta^{-1}}{(\gamma^{2}-4\omega^{2})}\Big{|}\frac{4}{\gamma}e^{-\gamma t}-\frac{1}{\lambda_{1}}e^{-2\lambda_{1}t}-\frac{1}{\lambda_{2}}e^{-2\lambda_{2}t}\Big{|}\] \[=e^{-2\lambda_{2}t}\Big{(}x_{0}+\frac{1-e^{-(\lambda_{1}-\lambda_{2})t}}{\lambda_{1}-\lambda_{2}}(\lambda_{2}x_{0}+v_{0})\Big{)}^{2}+\frac{\gamma\beta^{-1}}{(\gamma^{2}-4\omega^{2})}e^{-2\lambda_{2}t}\Big{|}\frac{4}{\gamma}e^{-(\lambda_{1}-\lambda_{2})t}-\frac{1}{\lambda_{1}}e^{-2(\lambda_{1}-\lambda_{2})t}-\frac{1}{\lambda_{2}}\Big{|}\] \[\leq Ce^{-2\lambda_{2}t},\]
for some constant \(C\), which can be computed explicitly (but it is not the focus of this part), where we have used the fact that \(\lambda_{1}>\lambda_{2}>0\). Thus
\[W_{2}(\mu_{t},\mu_{\infty})\leq Ce^{-\lambda_{2}t}.\]
Similarly
\[W_{2}(\bar{\mu}_{t},\bar{\mu}_{\infty})^{2} =(\bar{m}(t)-\bar{m}_{\infty})^{2}+(\sqrt{\bar{\sigma}(t)}-\sqrt{ \bar{\sigma}_{\infty}})^{2}\] \[\leq\bar{m}(t)^{2}+|\overline{\sigma}(t)-\overline{\sigma}_{\infty}|\] \[=e^{-2\lambda_{2}t}\Big{[}x_{0}^{2}+\frac{1}{\beta\omega^{2}} \Big{]}.\]
Thus we also obtain
\[W_{2}(\bar{\mu}_{t},\bar{\mu}_{\infty})\leq Ce^{-\lambda_{2}t}.\]
\((v)\) Now we estimate \(W_{2}^{2}(\mu_{t},\bar{\mu}_{t})\) in the large time regime. We only need to estimate the difference between the variances \(|\sigma(t)-\bar{\sigma}(t)|\). According to (21), we have
\[\sigma(t)-\bar{\sigma}(t) =\frac{1-e^{-2\lambda_{2}t}}{\beta\omega^{2}}\Big{[}\frac{1}{1-4 (\omega/\gamma)^{2}}-1\Big{]}+\frac{\gamma\beta^{-1}}{(\gamma^{2}-4\omega^{2} )}\Big{[}\frac{4}{\gamma}(e^{-\gamma t}-1)-\frac{e^{-2\lambda_{1}t}-e^{-2 \lambda_{2}t}}{\lambda_{1}}\Big{]}\] \[=-\frac{e^{-2\lambda_{2}t}}{\beta\omega^{2}}\Big{[}\frac{1}{1-4 (\omega/\gamma)^{2}}-1\Big{]}+\frac{\gamma\beta^{-1}}{(\gamma^{2}-4\omega^{2} )}\Big{[}\frac{4}{\gamma}e^{-\gamma t}-\frac{e^{-2\lambda_{1}t}-e^{-2\lambda _{2}t}}{\lambda_{1}}\Big{]}\]
where, to obtain the second line, we have used the following cancellation
\[\frac{1}{\beta\omega^{2}}\Big{[}\frac{1}{1-4(\omega/\gamma)^{2}}-1\Big{]}- \frac{\gamma\beta^{-1}}{(\gamma^{2}-4\omega^{2})}\frac{4}{\gamma}=0.\]
Therefore, it holds
\[|\sigma(t)-\bar{\sigma}(t)|\leq\frac{e^{-2\lambda_{2}t}}{\beta\omega^{2}} \Big{[}\frac{1}{1-4(\omega/\gamma)^{2}}-1\Big{]}+\frac{\gamma\beta^{-1}}{( \gamma^{2}-4\omega^{2})}\Big{[}\frac{4}{\gamma}e^{-\gamma t}+\frac{e^{-2 \lambda_{2}t}-e^{-2\lambda_{1}t}}{\lambda_{1}}\Big{]}.\]
Now, we can estimate the Wasserstein distance \(W_{2}^{2}(\mu_{t},\bar{\mu}_{t})\) to explore the long-time behaviour, viz.
\[W_{2}^{2}(\mu_{t},\bar{\mu}_{t}) =\big{(}m(t)-\bar{m}(t)\big{)}^{2}+\Big{(}\sqrt{\sigma(t)}-\sqrt{ \bar{\sigma}(t)}\Big{)}^{2}\] \[\leq\big{(}m_{x}(t)-\bar{m}(t)\big{)}^{2}+|\sigma_{xx}(t)-\bar{ \sigma}(t)|\] \[\leq\frac{e^{-\lambda_{2}t}-e^{-\lambda_{1}t}}{\lambda_{1}- \lambda_{2}}(\omega|x_{0}|+|v_{0}|)+\frac{e^{-2\lambda_{2}t}}{\beta\omega^{2}} \Big{[}\frac{1}{1-4(\omega/\gamma)^{2}}-1\Big{]}+\frac{\gamma\beta^{-1}}{( \gamma^{2}-4\omega^{2})}\Big{[}\frac{4}{\gamma}e^{-\gamma t}+\frac{e^{-2 \lambda_{2}t}-e^{-2\lambda_{1}t}}{\lambda_{1}}\Big{]}\] \[=\frac{4}{\beta(\gamma^{2}-4\omega^{2})}e^{-\gamma t}+\Big{[} \frac{\omega|x_{0}|+|v_{0}|}{\sqrt{\gamma^{2}-4\omega^{2}}}+\frac{4}{\beta( \gamma^{2}-4\omega^{2})}+\frac{\gamma}{\beta\lambda_{1}(\gamma^{2}-4\omega^{2 })}\Big{]}e^{-\lambda_{2}t}\] \[\qquad-\Big{[}\frac{\omega|x_{0}|+|v_{0}|}{\sqrt{\gamma^{2}-4 \omega^{2}}}+\frac{\gamma}{\beta\lambda_{1}(\gamma^{2}-4\omega^{2})}\Big{]}e^ {-\lambda_{1}t}\] \[\leq\Big{[}\frac{\omega|x_{0}|+|v_{0}|}{\sqrt{\gamma^{2}-4 \omega^{2}}}+\frac{8}{\beta(\gamma^{2}-4\omega^{2})}+\frac{\gamma}{\beta \lambda_{1}(\gamma^{2}-4\omega^{2})}\Big{]}e^{-\lambda_{2}t}\] \[\leq\Big{[}\frac{\omega|x_{0}|+|v_{0}|}{\sqrt{\gamma^{2}-4 \omega^{2}}}+\frac{10}{\beta(\gamma^{2}-4\omega^{2})}\Big{]}e^{-\lambda_{2}t}.\]
Here we have used the fact that \(\gamma\geq\lambda_{2}\) and \(\frac{\gamma}{\lambda_{1}}=\frac{2\gamma}{\gamma+\sqrt{\gamma^{2}-4\omega^{2} }}\leq 2\).
\((vi)\). Suppose that \(x_{0}\) is randomly distributed following an even distribution \(\rho_{0}\). Then the laws of \(x(t)\) and \(\bar{x}(t)\) are given by
\[\mu_{t}=\mathcal{N}(m(t),\sigma(t))*\rho_{0},\quad\bar{\mu}_{t}=\mathcal{N}(\bar{m}(t),\bar{\sigma}(t))*\rho_{0}.\]
Since \(\mathcal{N}(m(t),\sigma(t)),\mathcal{N}(\bar{m}(t),\bar{\sigma}(t))\in\mathcal{ P}_{2}(\mathbb{R})\), according to [1, Lemma 5.2] we have
\[W_{2}^{2}(\mu_{t},\bar{\mu}_{t})=W_{2}^{2}(\mathcal{N}(m(t),\sigma(t))*\rho_{0},\mathcal{N}(\bar{m}(t),\bar{\sigma}(t))*\rho_{0})\leq W_{2}^{2}(\mathcal{N}(m(t),\sigma(t)),\mathcal{N}(\bar{m}(t),\bar{\sigma}(t))),\]
thus the upper bound estimates in the two previous parts are still true.
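As a purely illustrative complement to the proof above (and not part of the original argument), the closed-form moments of Theorem 3.1(i) and the resulting Wasserstein distance (10) can be evaluated numerically; the parameter values below are arbitrary choices in the high-friction regime \(\gamma>2\omega\).
```python
import numpy as np

# Purely illustrative (hypothetical) parameter values with gamma > 2*omega.
omega, gamma, beta = 1.0, 10.0, 1.0
x0, v0 = 1.0, 0.5

lam1 = (gamma + np.sqrt(gamma**2 - 4 * omega**2)) / 2
lam2 = (gamma - np.sqrt(gamma**2 - 4 * omega**2)) / 2

def moments(t):
    """Means and variances of the original marginal and of the reduced OU process, Thm 3.1(i)."""
    m = ((lam1 * np.exp(-lam2 * t) - lam2 * np.exp(-lam1 * t)) * x0
         + (np.exp(-lam2 * t) - np.exp(-lam1 * t)) * v0) / (lam1 - lam2)
    sig = gamma / (beta * (lam1 - lam2) ** 2) * (
        (lam1 + lam2) / (lam1 * lam2)
        + 4 / (lam1 + lam2) * (np.exp(-(lam1 + lam2) * t) - 1)
        - np.exp(-2 * lam1 * t) / lam1
        - np.exp(-2 * lam2 * t) / lam2)
    m_bar = np.exp(-lam2 * t) * x0
    sig_bar = (1 - np.exp(-2 * lam2 * t)) / (omega**2 * beta)
    return m, sig, m_bar, sig_bar

ts = np.linspace(0.01, 5.0, 500)
w2_sq = np.array([(m - mb) ** 2 + (np.sqrt(s) - np.sqrt(sb)) ** 2
                  for m, s, mb, sb in map(moments, ts)])
bound = 4 / (gamma**2 - 4 * omega**2) * ((omega * abs(x0) + abs(v0)) ** 2 + 4 / beta)
print(w2_sq.max() <= bound)  # uniform-in-time bound (11); expected: True
```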
## 4 Model reduction of two coupled overdamped Brownian oscillators
We now proceed with the computation of the Wasserstein distance for a slightly more elaborate model, corresponding to a system of two coupled overdamped Brownian harmonic oscillators. The dynamics of the model can conveniently be written as follows:
\[\dot{x}_{1} =ax_{1}+k(x_{2}-x_{1})+\sigma_{1}\dot{W}_{1} \tag{22a}\] \[\dot{x}_{2} =-k(x_{2}-x_{1})+dx_{2}+\sigma_{2}\dot{W}_{2}, \tag{22b}\]
where \(\dot{W}\) denotes the formal derivative of a Wiener process, corresponding to a white noise, \(a,d<0\) are parameters characteristic of the individual oscillator (without loss of generality we also assume \(a\geq d\)), \(\sigma_{1},\sigma_{2}>0\) denote the noise strengths, and finally, \(k>0\) is the coupling parameter.
The system (22) represents the overdamped version of the coupled underdamped Langevin dynamics of the two oscillators. A contracted description for the deterministic case (i.e., with \(\sigma_{1}=\sigma_{2}=0\)) under a suitable assumption of scale separation is studied, with applications to relaxation dynamics in proteins, in [16]. We can derive a reduced system by eliminating the variable \(x_{2}\), in (22), using the procedure introduced in [13, 14]. This consists of two distinct steps: (i) the deterministic component of the dynamics is obtained using the Invariant Manifold method, then (ii) the diffusion terms are determined via fulfilling the Fluctuation-Dissipation relation.
### Deterministic evolution
Let \(\langle\mathcal{O}\rangle\) denote the average over noise of the variable \(\mathcal{O}\). The original dynamics can be written as
\[\dot{\mathbf{z}}=\mathbf{Q}\ \mathbf{z}\, \tag{23}\]
where \(\mathbf{z}=(\langle x_{1}\rangle,\langle x_{2}\rangle)\) and
\[\mathbf{Q}=\mathbf{Q}(k)=\begin{pmatrix}a-k&k\\ k&-k+d\end{pmatrix} \tag{24}\]
The characteristic polynomial of \(\mathbf{Q}\) is
\[\lambda^{2}-(a+d-2k)\lambda+(ad-ak-dk)=0.\]
Thus \(\mathbf{Q}\) has two real negative eigenvalues:
\[\lambda_{\pm}=\lambda_{\pm}(k):=\frac{(a+d-2k)\pm\sqrt{(a-d)^{2}+4k^{2}}}{2}\, \tag{25}\]
In this model, the time-scale separation is encoded in the difference \(\lambda_{+}-\lambda_{-}=\sqrt{(a-d)^{2}+4k^{2}}\), which grows with increasing \(k\), for fixed parameters \(a,d\). We seek a closure of the form \(\langle x_{2}\rangle=\alpha\langle x_{1}\rangle\), hence, following [1], we define a _macroscopic_ time derivative of \(\langle x_{2}\rangle\) via the chain rule:
\[\partial_{t}^{macro}\langle x_{2}\rangle :=\frac{\partial\langle x_{2}\rangle}{\partial\langle x_{1} \rangle}\langle\dot{x}_{1}\rangle\] \[=(\alpha(a-k)+\alpha^{2}k)\langle x_{1}\rangle\,\]
which expresses the _slaving principle_ mentioned in Sec. 1. Furthermore, we also define the _microscopic_ time derivative of \(\langle x_{2}\rangle\) in terms of the vector field given in Eq. (23), where \(\langle x_{2}\rangle\) is expressed through the aforementioned closure. We thus set:
\[\partial_{t}^{micro}\langle x_{2}\rangle :=k\langle x_{1}\rangle+(d-k)\langle x_{2}\rangle\] \[=(k+\alpha(d-k))\langle x_{1}\rangle\.\]
The Invariant Manifold method requires that microscopic and macroscopic time derivatives of \(\langle x_{2}\rangle\) coincide, independently of the values of the observable \(\langle x_{1}\rangle\). Thus, we obtain the following invariance equation
\[\alpha(a-k)+\alpha^{2}k=k+\alpha(d-k)\quad\Longleftrightarrow\quad k\alpha^{2}+(a-d)\alpha-k=0\, \tag{26}\]
which has two solutions
\[\alpha_{\pm}=\alpha_{\pm}(k):=\frac{-(a-d)\pm\sqrt{(a-d)^{2}+4k^{2}}}{2k}\.\]
The reduced dynamics for the deterministic part is
\[\langle\dot{x}_{1}\rangle=(a-k+k\hat{\alpha})\langle x_{1}\rangle\, \tag{27}\]
where \(\hat{\alpha}\in\{\alpha_{+},\alpha_{-}\}\) which will be specified later. It is noticeable that
\[a-k+k\alpha_{\pm}=\frac{(a+d-2k)\pm\sqrt{(a-d)^{2}+4k^{2}}}{2}\equiv\lambda_{ \pm}\.\]
Looking at (27), we notice that the coefficient multiplying \(\langle x_{1}\rangle\) coincides with one of the eigenvalues of the matrix \(\mathbf{Q}\). To pick up the right eigenvalue, we use the following criterion. We select \(\hat{\alpha}\) from solutions \(\alpha_{\pm}\) to the invariance equation (26) that satisfies \(a-k+k\hat{\alpha}\to a<0\) as \(k\to 0\), that is \(k\hat{\alpha}\to 0\) as \(k\to 0\). Since we assume that \(a\geq d\), we take
\[\hat{\alpha}=\alpha_{+}=\frac{-(a-d)+\sqrt{(a-d)^{2}+4k^{2}}}{2k}\.\]
### Incorporating the noise
To characterize the noise term, we employ the methodology proposed in [13]. Therefore, we first define the diffusion matrix \(\mathbf{D}\) as
\[\mathbf{D}=\begin{pmatrix}\sigma_{1}&0\\ 0&\sigma_{2}\end{pmatrix}\;, \tag{28}\]
and we also denote
\[\dot{\mathbf{W}}=(\dot{W}_{1},\dot{W}_{2})\;.\]
The solution of Eqs. (22) reads:
\[\mathbf{z}(t)=e^{\mathbf{Q}t}\mathbf{z}_{0}+\int_{0}^{t}e^{\mathbf{Q}(t-s)} \mathbf{D}\ \dot{\mathbf{W}}ds\;. \tag{29}\]
We thus find
\[\lim_{t\to\infty}\mathbb{E}[z_{1}(t)^{2}]\equiv\overline{\Sigma}_{11}=-\frac{1 }{2}\Big{(}\frac{1}{a-2k}+\frac{1}{a}\Big{)}.\]
The full reduced system takes hence the form
\[d\hat{x}(t)=\lambda_{+}\hat{x}(t)\,dt+\sqrt{2\hat{D}}\,dW_{t}\;, \tag{30}\]
where the drift coefficient \(\lambda_{+}\) is defined in (25) and the diffusion coefficient \(\hat{D}\) is given by
\[\hat{D}=-\lambda_{+}\overline{\Sigma}_{11}\;. \tag{31}\]
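For concreteness, the coefficients of the reduced model can be assembled as in the short sketch below; the numerical values are hypothetical, and \(\overline{\Sigma}_{11}\) is used in the form given above (identical oscillators with unit noise).
```python
import numpy as np

def reduced_model(a, d, k):
    """Coefficients of the reduced dynamics for hypothetical parameters a, d < 0 and
    coupling k > 0; Sigma_bar_11 is taken in the form given in the text, i.e. for the
    identical-oscillator, unit-noise setting."""
    disc = np.sqrt((a - d) ** 2 + 4 * k ** 2)
    lam_plus = (a + d - 2 * k + disc) / 2            # retained eigenvalue, Eq. (25)
    alpha_hat = (-(a - d) + disc) / (2 * k)          # slow-manifold closure <x2> = alpha_hat <x1>
    Sigma_bar_11 = -0.5 * (1 / (a - 2 * k) + 1 / a)  # stationary variance of x1
    D_hat = -lam_plus * Sigma_bar_11                 # fluctuation-dissipation, Eq. (31)
    return lam_plus, alpha_hat, Sigma_bar_11, D_hat

print(reduced_model(a=-1.0, d=-1.0, k=0.5))  # e.g. (-1.0, 1.0, 0.75, 0.75)
```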
### Quantification of errors and the long-time behaviour
In this section we will compute explicitly the Wasserstein distance between the laws of the original dynamics of \(x_{1}\) and of the reduced dynamics (30), and study their long-time behaviour. The Fokker-Planck equation associated with the full original dynamics (22) is given by the following linear drift-diffusion equation
\[\partial_{t}\rho=-\operatorname{div}(\mathbf{Q}\mathbf{x}\,\rho)+\operatorname{div}(\mathbf{D}\nabla\rho), \tag{32}\]
where \(\rho=\rho(t,x_{1},x_{2})\) is the joint probability density of \(\mathbf{x}=(x_{1},x_{2})^{T}\), and the drift matrix \(\mathbf{Q}\) and the diffusion matrix \(\mathbf{D}\) are given in (24) and (28) respectively. Note that the above system is a special case of the general drift-diffusion equation introduced in Section 2.2.
Since we are focusing on the role of the coupling parameter, for simplicity of presentation we consider identical oscillators, that is \(a=d<0\), and normalise \(\sigma_{1}=\sigma_{2}=1\), so that
\[\mathbf{Q}=\begin{pmatrix}a-k&k\\ k&a-k\end{pmatrix},\quad\text{and}\quad\mathbf{D}=I.\]
The main result of this section is the following theorem.
**Theorem 4.1**.: _Let \(\rho_{1}(t)\) be the distribution of \(x_{1}(t)\) for the original coupled dynamics (22) starting from the deterministic initial data \((x_{1},x_{2})(0)=(x_{1},x_{2})\), and \(\hat{\rho}_{1}(t)\) be the distribution of the reduced dynamics (30) starting from \(x_{1}\). Then there exists a constant \(C>0\) such that the following statements hold_
1. \(W_{2}(\rho_{1}(t),\hat{\rho}_{1}(t))^{2}\leq Ck\)_._
2. \(\max\{W_{2}(\rho_{1}(t),\rho_{\infty}),W_{2}(\hat{\rho}_{1}(t),\rho_{\infty} )\}\leq Ce^{at}\)_, where_ \(\rho_{\infty}=\mathcal{N}(0,\overline{\Sigma}_{11})\)__
3. \(W_{2}(\rho_{1}(t),\hat{\rho}_{1}(t))\leq Ce^{at}\)_._
Proof.: According to Lemma 2.1, the solution to (32) is given by \(\rho(t,x_{1},x_{2})=\mathcal{N}(\mu(t),\Sigma(t))\), where
\[\mu(t)=e^{t\mathbf{Q}}\begin{pmatrix}x_{1}\\ x_{2}\end{pmatrix},\quad\Sigma(t)=2\int_{0}^{t}e^{s\mathbf{Q}}e^{s\mathbf{Q}^{T }}\,ds.\]
Since \(\mathbf{Q}\mathbf{Q}^{T}=\mathbf{Q}^{T}\mathbf{Q}\) and \(\mathbf{Q}=\mathbf{Q}^{T}\), we have
\[e^{s\mathbf{Q}}e^{s\mathbf{Q}^{T}}=e^{s(\mathbf{Q}+\mathbf{Q}^{T})}=e^{2s \mathbf{Q}}.\]
Thus, we can simplify \(\Sigma(t)\) as
\[\Sigma(t)=2\int_{0}^{t}e^{2s\mathbf{Q}}\,ds.\]
Applying Lemma 2.2, we compute
\[e^{t\mathbf{Q}}=\frac{1}{\Delta}\begin{pmatrix}m_{11}&m_{12}\\ m_{21}&m_{22}\end{pmatrix},\quad\Delta=2kt,\] \[m_{11}=m_{22}=e^{(a-k)t}\Delta\cosh\frac{1}{2}\Delta=\frac{1}{2}e^{(a-k)t}\Delta(e^{kt}+e^{-kt})=\frac{1}{2}\Delta(e^{(a-2k)t}+e^{at}),\] \[m_{12}=m_{21}=2kte^{(a-k)t}\sinh\frac{1}{2}\Delta=\frac{1}{2}\Delta e^{(a-k)t}(e^{kt}-e^{-kt})=\frac{1}{2}\Delta(e^{at}-e^{(a-2k)t}).\]
Thus
\[e^{t\mathbf{Q}}=\frac{1}{2}\begin{pmatrix}e^{(a-2k)t}+e^{at}&e^{at}-e^{(a-2k) t}\\ e^{at}-e^{(a-2k)t}&e^{(a-2k)t}+e^{at}\end{pmatrix}.\]
Similarly
\[e^{2t\mathbf{Q}}=\frac{1}{2}\begin{pmatrix}e^{2(a-2k)t}+e^{2at}&e^{2at}-e^{2( a-2k)t}\\ e^{2at}-e^{2(a-2k)t}&e^{2(a-2k)t}+e^{2at}\end{pmatrix}.\]
Therefore,
\[\Sigma(t)=2\int_{0}^{t}e^{2sQ}\,ds=\frac{1}{2}\begin{pmatrix}\frac{e^{2(a-2k) t}-1}{a-2k}+\frac{e^{2at}-1}{a}&\frac{e^{2at}-1}{a}-\frac{e^{2(a-2k)t}-1}{a-2k} \\ \frac{e^{2at}-1}{a}-\frac{e^{2(a-2k)t}-1}{a-2k}&\frac{e^{2(a-2k)t}-1}{a-2k}+ \frac{e^{2at}-1}{a}\end{pmatrix}.\]
It follows that
\[\rho_{1}(t)=\mathcal{N}(\mu_{1}(t),\Sigma_{11}(t))=\mathcal{N}\Bigg{(}\frac{1 }{2}\Big{(}(e^{(a-2k)t}+e^{at})x_{1}+(e^{at}-e^{(a-2k)t})x_{2}\Big{)},\frac{1}{ 2}\Big{(}\frac{e^{2(a-2k)t}-1}{a-2k}+\frac{e^{2at}-1}{a}\Big{)}\Bigg{)},\]
Since \(\hat{x}\) is an OU process, we obtain
\[\hat{\rho}_{1}(t)=\mathcal{N}(\hat{\mu}_{1}(t),\hat{\Sigma}_{1}(t))=\mathcal{ N}\Big{(}e^{\lambda_{+}t}x_{1},-\frac{\hat{D}}{\lambda_{+}}(1-e^{2\lambda_{+}t}) \Big{)}=\mathcal{N}\Big{(}e^{\lambda_{+}t}x_{1},\overline{\Sigma}_{11}(1-e^ {2\lambda_{+}t})\Big{)},\]
recalling that, with \(a=d\)
\[\lambda_{+}=\frac{(a+d-2k)+\sqrt{(a-d)^{2}+4k^{2}}}{2}=a,\quad\overline{ \Sigma}_{11}=-\frac{1}{2}\Big{(}\frac{1}{a-2k}+\frac{1}{a}\Big{)}.\]
The Wasserstein distance between \(\rho_{1}\) and \(\hat{\rho}_{1}\) is given by
\[W_{2}(\rho_{1}(t),\hat{\rho}_{1}(t))^{2}=(\mu_{1}(t)-\hat{\mu}_{1}(t))^{2}+ \Big{(}\sqrt{\Sigma_{11}}(t)-\sqrt{\hat{\Sigma}_{1}}(t)\Big{)}^{2} \tag{33}\]
\((i)\) We compute
\[\begin{split}|\mu_{1}(t)-\hat{\mu}_{1}(t)|&=\frac{1}{2} \Big{|}\Big{(}e^{(a-2k)t}+e^{at})x_{1}+(e^{at}-e^{(a-2k)t})x_{2}\Big{)}-e^{at}x_ {1}\Big{|}\\ &=\frac{1}{2}|(e^{at}-e^{(a-2k)t})(x_{2}-x_{1})|\\ &=\frac{1}{2}e^{at}|x_{2}-x_{1}|(1-e^{-2kt})\\ &\leq|x_{2}-x_{1}|ke^{at}t\\ &\leq k\,|x_{2}-x_{1}|\frac{1}{|a|e},\end{split} \tag{34}\]
where in the first inequality we have used the elementary inequality \(1-e^{-x}\leq x\) for all \(x>0\), and in the last inequality we have used (noting that \(a<0\))
\[\max_{t>0}te^{at}=\frac{1}{|a|e}. \tag{35}\]
We also estimate
\[\begin{split}\Sigma_{11}(t)-\hat{\Sigma}_{1}(t)&= \frac{1}{2}\Bigg{[}\frac{1}{a-2k}\Big{(}e^{2(a-2k)t}-e^{2\lambda_{+}t}\Big{)} +\frac{1}{a}\Big{(}e^{2at}-e^{2\lambda_{+}t}\Big{)}\Bigg{]}\\ &=\frac{1}{2}\frac{1}{a-2k}\Big{(}e^{2(a-2k)t}-e^{2at}\Big{)}\\ &=\frac{1}{2}\frac{1}{2k-a}e^{2at}\Big{(}1-e^{-4kt}\Big{)}\\ &\leq\frac{2k}{2k-a}e^{2at}t\\ &\leq\frac{k}{a^{2}e},\end{split} \tag{36}\]
where to go from (36) to the next line, we have used \(1-e^{-4kt}\leq 4kt\) and (35) again (with \(a\) replaced by \(2a\)). Therefore, we have
\[W_{2}(\rho_{1}(t),\hat{\rho}_{1}(t))^{2} \leq(\mu_{1}(t)-\hat{\mu}_{1}(t))^{2}+\Big{|}\Sigma_{11}(t)-\hat{\Sigma}_{1}(t)\Big{|}\] \[\leq k^{2}\,|x_{2}-x_{1}|^{2}\frac{1}{|a|^{2}e^{2}}+\frac{k}{a^{2}e}\leq Ck,\]
for any bounded \(k\).
\((ii)\) Since \(a<0\),
\[\lim_{t\to\infty}\mu_{1}(t)=\lim_{t\to\infty}\hat{\mu}_{1}(t)=0,\quad\lim_{t \to\infty}\Sigma_{11}(t)=-\frac{1}{2}\Big{(}\frac{1}{a-2k}+\frac{1}{a}\Big{)}= \overline{\Sigma}_{11}.\]
This implies that
\[\lim_{t\to\infty}\rho_{1}(t)=\lim_{t\to\infty}\hat{\rho}_{1}(t)=\rho_{\infty}= \mathcal{N}(0,\overline{\Sigma}_{11}).\]
We can also compute explicitly the rates of convergence of these limits in the Wasserstein distance. We have
\[W_{2}(\rho_{1}(t),\rho_{\infty})^{2}=\mu_{1}(t)^{2}+\Big{(}\sqrt{\Sigma_{11}(t)}-\sqrt{\overline{\Sigma}_{11}}\Big{)}^{2}\leq\mu_{1}(t)^{2}+\Big{|}\Sigma_{11}(t)-\overline{\Sigma}_{11}\Big{|}. \tag{37}\]
We estimate each term on the right hand side of (37). For the first term, we get
\[\mu_{1}(t) =\frac{1}{2}\Big{(}(e^{(a-2k)t}+e^{at})x_{1}+(e^{at}-e^{(a-2k)t})x _{2}\Big{)}\] \[=\frac{1}{2}e^{at}\Big{(}(1+e^{-2kt})x_{1}+(1-e^{-2kt})x_{2}\Big{)}\] \[\leq Ce^{at}. \tag{38}\]
For the second term, we have
\[|\Sigma_{11}(t)-\overline{\Sigma}_{11}|=\frac{1}{2}\Big{|}\frac{e^{2(a-2k)t}}{a-2 k}+\frac{e^{2at}}{a}\Big{|}=\frac{1}{2}e^{2at}\Big{|}\frac{1}{a}+\frac{e^{-4kt}}{a-2k} \Big{|}\leq Ce^{2at}. \tag{39}\]
Substituting (38) and (39) to (37), we obtain
\[W_{2}(\rho_{1}(t),\rho_{\infty})\leq Ce^{at},\]
thus \(\rho_{1}\) exponentially converges, with a rate \(a\), to \(\rho_{\infty}\). Similarly,
\[W_{2}(\hat{\rho}_{1}(t),\rho_{\infty})^{2} =\hat{\mu}_{1}(t)^{2}+\Big{(}\sqrt{\hat{\Sigma}_{1}(t)}-\sqrt{\overline{\Sigma}_{11}}\Big{)}^{2}\] \[\leq\hat{\mu}_{1}(t)^{2}+\Big{|}\hat{\Sigma}_{1}(t)-\overline{\Sigma}_{11}\Big{|}\] \[=(x_{1}^{2}+\overline{\Sigma}_{11})e^{2at}.\]
Hence \(\hat{\rho}_{1}\) exponentially converges with the same rate \(a\) to \(\rho_{\infty}\).
\((iii)\) According to (34) and (36) we have
\[|\mu_{1}(t)-\hat{\mu}_{1}(t)|=\frac{1}{2}e^{at}|x_{2}-x_{1}|(1-e^ {-2kt})\leq Ce^{at},\] \[|\Sigma_{11}(t)-\hat{\Sigma}_{1}(t)|=\frac{1}{2}\frac{1}{|2k-a|}e ^{2at}\Big{(}1-e^{-4kt}\Big{)}\leq Ce^{2at}.\]
Thus
\[W_{2}(\rho_{1}(t),\hat{\rho}_{1}(t))^{2}\leq(\mu_{1}(t)-\hat{\mu}_{1}(t))^{2}+ |\Sigma_{11}(t)-\hat{\Sigma}_{1}(t)|\leq Ce^{2at},\]
that is
\[W_{2}(\rho_{1}(t),\hat{\rho}_{1}(t))\leq Ce^{at}.\]
This completes the proof of this theorem. We remark that we have assumed deterministic initial data, but the theorem can also be extended to the case where the initial data follow symmetric distributions as in Section 3.
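A simple numerical illustration of the closed-form laws used in the proof, with hypothetical parameter values, is sketched below; it merely evaluates \(W_{2}^{2}(\rho_{1}(t),\hat{\rho}_{1}(t))\) over a time grid and compares it with the \(O(k)\) bound derived in part (i).
```python
import numpy as np

# Hypothetical parameters: identical oscillators a = d < 0, unit noise.
a, k, x1, x2 = -1.0, 0.4, 1.0, -0.5
Sigma_bar_11 = -0.5 * (1 / (a - 2 * k) + 1 / a)

def w2_squared(t):
    """W_2^2 between the closed-form laws rho_1(t) and hat(rho)_1(t) from the proof."""
    mu1 = 0.5 * ((np.exp((a - 2 * k) * t) + np.exp(a * t)) * x1
                 + (np.exp(a * t) - np.exp((a - 2 * k) * t)) * x2)
    Sigma11 = 0.5 * ((np.exp(2 * (a - 2 * k) * t) - 1) / (a - 2 * k)
                     + (np.exp(2 * a * t) - 1) / a)
    mu1_hat = np.exp(a * t) * x1                        # lambda_+ = a when a = d
    Sigma1_hat = Sigma_bar_11 * (1 - np.exp(2 * a * t))
    return (mu1 - mu1_hat) ** 2 + (np.sqrt(Sigma11) - np.sqrt(Sigma1_hat)) ** 2

ts = np.linspace(0.01, 20.0, 2000)
sup_w2_sq = max(w2_squared(t) for t in ts)
bound = k**2 * (x2 - x1) ** 2 / (a**2 * np.e**2) + k / (a**2 * np.e)  # bound from part (i)
print(sup_w2_sq, bound, sup_w2_sq <= bound)
```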
## 5 Summary and outlook
In this work we have employed the reduction scheme recently introduced in [13, 14], which suitably combines the Invariant Manifold method with the Fluctuation-Dissipation relation, to derive a contracted description for two classical models of statistical physics, namely the underdamped Brownian harmonic oscillator and a system of two coupled overdamped Brownian harmonic oscillators. The present work significantly extends the previous results: we have succeeded here in quantifying explicitly the error between the original and the reduced dynamics, as well as their rates of convergence to equilibrium. The technical tool we used is the Wasserstein distance, which is widely employed in the theory of optimal transport. We have thus shown that the two dynamics are exponentially close at any time, share the same equilibrium measure, and exponentially converge to the same equilibrium measure with the same rate. Furthermore, the two dynamics are also found to coincide if the relevant parameter controlling the time-scale separation of the original model is sent to infinity. The linearity of the considered models has clearly played an important role in the analysis of this work, enabling the explicit computations of their solutions and of the involved Wasserstein distances. A key challenge for future developments is to generalize our analysis in order to deal with non-linear models, where explicit solutions and computations are not accessible. Another direction of research points toward the investigation of systems with a large number of degrees of freedom, e.g. models relevant to climate dynamics [1], or small systems of interest in modern nanotechnologies, such as biomolecular motors [15].
## Acknowledgements
MC's research was performed under the auspices of the Italian National Group of Mathematical Physics (GNFM) of INdAM. MHD's research was supported by EPSRC grants EP/W008041/1 and EP/V038516/1.
|
2307.16035 | Binary classification based Monte Carlo simulation | Acceptance-rejection (AR), Independent Metropolis Hastings (IMH) or
importance sampling (IS) Monte Carlo (MC) simulation algorithms all involve
computing ratios of probability density functions (pdfs). On the other hand,
classifiers discriminate labeled samples produced by a mixture of two
distributions and can be used for approximating the ratio of the two
corresponding pdfs. This bridge between simulation and classification enables us
to propose pdf-free versions of pdf-ratio-based simulation algorithms, where
the ratio is replaced by a surrogate function computed via a classifier. From a
probabilistic modeling perspective, our procedure involves a structured energy
based model which can easily be trained and is compatible with the classical
samplers. | Elouan Argouarc'h, François Desbouvries | 2023-07-29T17:53:31Z | http://arxiv.org/abs/2307.16035v2 | # Binary classification based Monte Carlo simulation
###### Abstract
Acceptance-rejection (AR), Independent Metropolis Hastings (IMH) or importance sampling (IS) Monte Carlo (MC) simulation algorithms all involve computing ratios of probability density functions (pdfs). On the other hand, classifiers discriminate labeled samples produced by a mixture of two distributions and can be used for approximating the ratio of the two corresponding pdfs. This bridge between simulation and classification enables us to propose pdf-free versions of pdf-ratio-based simulation algorithms, where the ratio is replaced by a surrogate function computed via a classifier. From a probabilistic modeling perspective, our procedure involves a structured energy based model which can easily be trained and is compatible with the classical samplers.
## 1 Introduction
If \(a\) and \(b\) are two positive numbers,
\[r=\frac{a}{a+b}\in(0,1)\Leftrightarrow\frac{r}{1-r}=\frac{a}{b}>0. \tag{1}\]
This identity has interesting consequences in Bayesian classification, machine learning and stochastic simulation. Indeed, if \(a\) and \(b\) are probabilities of two classes in a binary mixture context for a given sample, then the ratio \(\frac{a}{a+b}\) is the posterior probability, which provides the class probabilities for a given sample and can be approximated by a parametric classifier \(r_{\phi}\) trained to distinguish between the two probability distributions. On the other hand, positive ratios \(\frac{a}{b}\) play a key role in AR, IMH or IS techniques. Equation (1) relates \(r\) to such positive ratios, and tells us that the ratio \(\frac{a}{b}\) can be computed exactly from \(r\), or, in practice, approximately from \(r_{\phi}\), without necessarily knowing \(a\) nor \(b\). This observation enables us to propose approximate versions of these samplers which rely on weaker hypotheses.
Let \(\lambda,1\!-\!\lambda\in(0,1)\) be the prior probabilities of two categories \(k=1,0\), distributed resp. \(\sim p_{1}\) and \(p_{0}\). Binary classification tries to distinguish samples from mixture \(\lambda p_{1}+(1-\lambda)p_{0}\) by identifying the pdf which generated them. The appropriate way to classify relies on the posterior probability: \(x\) is a sample \(\sim p_{1}\) rather than \(\sim p_{0}\) with probability
\[\Pr(k=1|x,\lambda,p_{0},p_{1})=\frac{\lambda p_{1}(x)}{\lambda p_{1}(x)+(1- \lambda)p_{0}(x)}. \tag{2}\]
Indeed, as is well known (see e.g. [1, Chap. 11]), assigning a sample to the label with highest posterior probability is the optimal decision rule in the sense that it minimizes the probability of misclassification.
To compute this posterior probability, one needs to evaluate the pdfs \(p_{1},p_{0}\) and know the prior probability \(\lambda\). Unfortunately, \(\lambda\) is often unknown so (2) is intractable. If however we have at our disposal a set \(\mathcal{D}=\{(x_{i}^{(k_{i})},k_{i})\}_{i=1}^{N_{0}+N_{1}}\) of labelled observations, \(\lambda\) can be estimated by \(\frac{N_{1}}{N_{1}+N_{0}}\), where \(N_{1}\) and \(N_{0}\) are respectively the numbers of samples from \(p_{1}\) and \(p_{0}\). This leads to the (approximate) probability:
\[\Pr(k=1|x,\mathcal{D},p_{0},p_{1})=\frac{N_{1}p_{1}(x)}{N_{1}p_{1}(x)+N_{0}p_{ 0}(x)}. \tag{3}\]
However, in most cases \(p_{0}\) and \(p_{1}\) are unknown too, so (3) cannot be computed either. When we only have \(\mathcal{D}\) at our disposal, we can make use of a parametric classifier (in this paper we call classifier any function \(r_{\phi}(x)\), parameterized by \(\phi\), which mimics the unknown posterior pdf). So let us assume that we have at our disposal a function \(r_{\phi}\) such that
\[r_{\phi}(x)\approx\frac{N_{1}p_{1}(x)}{N_{1}p_{1}(x)+N_{0}p_{0}(x)}. \tag{4}\]
Our paper is based on the observation that (4) is equivalent to
\[\frac{N_{0}}{N_{1}}\frac{r_{\phi}(x)}{1-r_{\phi}(x)}\approx\frac{p_{1}(x)}{p_{ 0}(x)}, \tag{5}\]
which implies that (typically neural network based) classifiers can also be used for approximating pdf ratios.
Equation (5) has already been observed, and exploited in contexts where estimating a ratio of pdfs is relevant. First, classifiers are at the core of adversarial training techniques in which divergence measures involving a ratio are replaced by an approximation based on a classifier [2]. This enables to learn implicit generative models (i.e., with intractable pdfs) [3][4][5]. Moreover, classifier based pdf ratio approximation has been applied to estimation of such metrics as Mutual Information [6]. Finally, classifiers based ratios have been applied successfully in statistical hypothesis testing procedures [7], which heavily rely on likelihood-ratio tests.
If \(p_{0}\) is an instrumental distribution with tractable pdf, then (5) can be turned into an approximation of target pdf \(p_{1}\). So classifiers can be used for density estimation, conditional density estimation, or likelihood-to-evidence ratio estimation, making them especially relevant in a likelihood-free inference setting [8][9][10].
However, the question of sampling the corresponding model remains open, and this is precisely the point we discuss in this paper. We realize that pdf ratios also play a key role in such simulation techniques as the AR or Markov Chain Monte Carlo (MCMC) methods, in which samples from instrumental \(p_{0}\) are transformed into samples from the target \(p_{1}\) via the ratio of the two densities. This establishes a connection between classification and MC sampling, and will enable us to relax the assumption of these sampling algorithms that the pdfs \(p_{0},p_{1}\) are tractable, at the price of approximate sampling. Our approach is therefore completely pdf-free, and as such is especially relevant when the target distribution is unknown or has an intractable, noisy, or costly-to-evaluate pdf (see [11] for a review of MC techniques in this setting, and [12] for a review of likelihood-free Approximate Bayesian Computation techniques); and/or when the instrumental \(p_{0}\) is defined by a generative model with implicit pdf [13][14][15][16][17][3][4]. The rest of this paper is organized as follows. In §2 we recall classical ratio-based stochastic simulation algorithms, i.e. the AR, IMH and IS techniques. In §3 we show that classifiers computed via the Binary Cross Entropy (BCE) criterion indeed provide an approximation of the posterior (3). Finally in §4 we propose classification-based sampling methods, illustrate our method via simulations, and revisit it under the perspective of probabilistic modeling. We end the paper with a conclusion.
## 2 Classical ratio-based sampling algorithms
Stochastic simulation includes a variety of techniques, see e.g. [18]-[24]. In this section we focus on AR, IMH and IS which share in common that they all compute a ratio of pdfs.
### The AR algorithm
#### 2.1.1 A brief reminder of AR Sampling
AR Sampling [20, chap. 2][23, chap. 3] is a simulation algorithm that yields samples distributed according to a target distribution \(p\) via samples from a proposal distribution \(q\), which are accepted or rejected as valid samples from \(p\) via some acceptance probability. More precisely, assume that there exists a constant \(C\geq 1\) such that \(p(y)\leq Cq(y)\) for all \(y\in\mathbb{R}^{d}\) (in particular, the support of \(p\) is inside that of \(q\)). Let \(Y\sim q\), and let \(k\) be a Bernoulli random variable with parameter \(\alpha_{AR}(y)=\frac{p(y)}{Cq(y)}\). AR sampling is based on the fact that \(Y|k=1\) is distributed according to \(p\). Note that \(\Pr(k=1)=\frac{1}{C}\), so the lower the value of \(C\), the higher the acceptance rate.
In order to use the algorithm in practice, we thus need to know the pdf \(p\), and to build \(q\) such that: one can sample easily from \(q\); there exists \(C\) such that \(p(y)\leq Cq(y)\) for all \(y\); we can compute one such value of \(C\); and \(C\) is as small as possible. Note finally that the algorithm can easily be adapted to the cases where \(p\) and/or \(q\) are known up to a (not necessarily common) constant, see e.g. [22, Th. 4.5].
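The following sketch illustrates the AR mechanism on a toy one-dimensional target and proposal; both are arbitrary choices made only so that the snippet is self-contained, and the constant \(C\) below is a valid envelope constant for this particular pair (checked numerically).
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy example (hypothetical): target p is a two-component Gaussian mixture,
# proposal q is a wide Gaussian; C = 4 dominates p/q for this pair.
p = lambda y: 0.5 * stats.norm.pdf(y, -2, 0.7) + 0.5 * stats.norm.pdf(y, 2, 0.7)
q = stats.norm(0, 3)
C = 4.0

def ar_sample(n):
    out = []
    while len(out) < n:
        y = q.rvs(random_state=rng)
        if rng.uniform() < p(y) / (C * q.pdf(y)):   # accept with probability alpha_AR(y)
            out.append(y)
    return np.array(out)

samples = ar_sample(5_000)   # overall acceptance rate is 1/C = 0.25
```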
#### 2.1.2 Revisiting AR sampling as optimal binary classification
As we shall now see, AR sampling is indeed nothing but a binary classification procedure (see also [25, SS6] for an application of this principle).
Starting from the target pdf \(p(x)\), we find an easy-to-sample distribution \(Q\) and constant \(C>1\) s.t. \(Cq(x)\) envelopes \(p(x)\). Since \(Cq(x)-p(x)\) is non negative, we write \(Cq(x)\) as \(p(x)\) plus a positive reminder which, up to a constant, is also a pdf; so enveloping \(p(x)\) with \(Cq(x)\) is nothing but building the implicit binary mixture pdf (see also figure 1 below)
\[\underbrace{q(x)}_{\mathrm{proposal}}=\frac{1}{C}\ \underbrace{p(x)}_{ \mathrm{target}}\ +\ (1-\frac{1}{C})\ \underbrace{\frac{q(x)-\frac{1}{C}p(x)}{1-\frac{1}{C}}}_{ \mathrm{reminder}} \tag{6}\]
with a priori probabilities \(\frac{1}{C}\) and \(1-\frac{1}{C}\). The first component of the mixture is the target pdf \(p\), and the second one is the law of the rejected samples. The magic of the AR algorithm consists in drawing samples from mixture \(q\) without needing to sample from its two components (see the r.h.s. of (6)). Accepting (or rejecting) a sample depending on the ratio probability
\[\alpha_{AR}(x)=\frac{p(x)}{Cq(x)}=\frac{\frac{1}{C}p(x)}{\frac{1}{C}p(x)+(1- \frac{1}{C})\frac{q(x)-\frac{1}{C}p(x)}{1-\frac{1}{C}}} \tag{7}\]
then amounts to classifying the samples with the posterior pdf (compare (7) to (2)).
### Imh
MCMC algorithms build a Markov chain whose invariant distribution is the target distribution \(p\); so simulating the chain yields samples asymptotically distributed \(\sim p\). The Metropolis-Hastings (MH) algorithm [20][21] is a particular MCMC method which constructs the Markov chain \(x_{t}\) via a two-step procedure: given a current state \(x_{t}\), the algorithm draws a candidate \(x^{*}\) from a proposal distribution \(q(.|x_{t})\), and then calculates the acceptance probability \(\alpha_{MH}(x^{*},x_{t})=\min(1,\frac{p(x^{*})q(x_{t}|x^{*})}{p(x_{t})q(x^{*}|x_{t})})\). \(x^{*}\) is accepted as the new state \(x_{t+1}\) with probability \(\alpha_{MH}(x^{*},x_{t})\); if \(x^{*}\) is rejected then the chain remains in the current state \(x_{t}\). In practice, \(q(.|x_{t})\) plays a crucial role in the performance of the MH algorithm: if not well-tuned, the acceptance rate may be too low, leading to slow mixing of the chain, or too high, leading to poor exploration of the target distribution.
The IMH algorithm is a simplified version of MH which considers an independent transition. The new point \(x^{*}\) is hence proposed independently of the current state \(x_{t}\), according to an independent proposal \(q(.)\). In this case, the acceptance probability simplifies to \(\alpha_{IMH}(x^{*},x_{t})=\min(1,\frac{p(x^{*})q(x_{t})}{p(x_{t})q(x^{*})})\).
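A minimal IMH sketch, reusing the same toy target and proposal as in the AR example above (again, arbitrary choices for illustration only):
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Same hypothetical toy target/proposal as before.
p = lambda y: 0.5 * stats.norm.pdf(y, -2, 0.7) + 0.5 * stats.norm.pdf(y, 2, 0.7)
q = stats.norm(0, 3)

def imh_chain(n_steps, x_init=0.0):
    chain = [x_init]
    for _ in range(n_steps):
        x_t = chain[-1]
        x_star = q.rvs(random_state=rng)                          # independent proposal
        alpha = min(1.0, p(x_star) * q.pdf(x_t) / (p(x_t) * q.pdf(x_star)))
        chain.append(x_star if rng.uniform() < alpha else x_t)    # accept / stay
    return np.array(chain)

chain = imh_chain(10_000)   # asymptotically distributed according to p
```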
### Is
In many signal processing problems we want to compute the expectation of some function \(f\) with respect to pdf \(p\): \(\mu=\int f(x)p(x)\mathrm{d}x=\mathbb{E}_{P}\big{[}f(x)\big{]}\). In practice \(\mu\) can be very difficult to compute, so one needs to resort to approximations. IS is a variance reduction technique for integral MC estimates which can be traced back to the 1950's [26][27][24, SS5.4].
Figure 1: Enveloping the target pdf builds an implicit mixture.
The crude MC estimate of \(\mu\) reads \(\hat{\mu}^{\mathrm{MC}}=\frac{1}{N}\sum_{i=1}^{N}f(x_{i})\) with \(x_{i}\stackrel{{\mathrm{iid}}}{{\sim}}p\). However, it is generally difficult to sample directly from \(p\); moreover, \(\hat{\mu}^{\mathrm{MC}}\) can be a poor estimate, particularly when the regions where \(p\) is large do not coincide with those where \(f\) is large. Rewriting \(\mu=\int f(x)\frac{p(x)}{q(x)}q(x)\mathrm{d}x\), where \(q\) is some importance distribution, leads to the IS estimator \(\hat{\mu}^{\mathrm{IS}}(q)=\frac{1}{N}\sum_{i=1}^{N}\frac{p(x_{i})}{q(x_{i})}f(x_{i}),\;\;x_{i}\stackrel{{\mathrm{iid}}}{{\sim}}q\). As far as variance reduction is concerned, one can easily show that the importance pdf which minimizes \(\mathbb{V}\operatorname{ar}(\hat{\mu}^{\mathrm{IS}}(q))\) is \(q_{\mathrm{opt}}^{\mathrm{IS}}(x)\propto|f(x)|p(x)\). Even if in practice \(\hat{\mu}^{\mathrm{IS}}(q_{\mathrm{opt}}^{\mathrm{IS}})\) cannot be computed, this tells us that the regions where it is important to sample from (whence the term "importance distribution") are not those where \(p\) is large, but rather those where \(|f|p\) is large. Note that \(\hat{\mu}^{\mathrm{IS}}\) can be computed only if \(p\) and \(q\) are known exactly, or known up to a common constant; if this is not the case one can resort to self-normalized IS [28].
Besides being a variance reduction technique, IS can also be seen as a two-step sampling procedure for producing samples (approximately) drawn from \(p\), out of samples originally drawn from \(q\). The technique is known as Rubin's SIR mechanism [29], [30], [31], [32, §9.2]: Let \(\{x_{i}\}_{i=1}^{N}\) be \(N\) iid. samples from \(q(x)\), and given \(\{x_{i}\}_{i=1}^{N}\), let \(\{\tilde{x}^{i}\}_{i=1}^{M}\) be \(M\) iid. samples from \(\sum_{i=1}^{N}\frac{p(x_{i})/q(x_{i})}{\sum_{j=1}^{N}p(x_{j})/q(x_{j})}\delta_{x_{i}}(\mathrm{d}x)\) (in other words, we draw samples from \(q\), weight each of them with weight proportional to \(w^{u}(x_{i})=p(x_{i})/q(x_{i})\), and resample \(M\) iid. points from this random discrete probability mass function). Then \(\{\tilde{x}^{i}\}_{i=1}^{M}\) are dependent and are not \(p\)-distributed, but become iid. samples from \(p(x)\) if \(N\to\infty\).
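The IS estimator and Rubin's SIR step can be sketched as follows, again on a hypothetical one-dimensional target/importance pair:
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical toy target/importance pair, reused from the previous sketches.
p = lambda y: 0.5 * stats.norm.pdf(y, -2, 0.7) + 0.5 * stats.norm.pdf(y, 2, 0.7)
q = stats.norm(0, 3)

N, M = 20_000, 5_000
x = q.rvs(size=N, random_state=rng)       # draws from the importance distribution
w = p(x) / q.pdf(x)                       # unnormalised importance weights w^u(x_i)
mu_hat_is = np.mean(w * x**2)             # IS estimate of E_p[f(X)] with f(x) = x^2

w_norm = w / w.sum()
resampled = rng.choice(x, size=M, p=w_norm)   # SIR step: approximately distributed ~ p
```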
## 3 Parametric classifier by minimizing the BCE
From now on we consider the setting where \(\lambda\), \(p_{1}\) and \(p_{0}\) are unknown, and we only have the set \(\mathcal{D}\) of labeled samples from \(p_{0}\) (with label \(k=0\)) and \(p_{1}\) (with label \(k=1\)), see SS1. In this context, we should build a parametric function \(r_{\phi}(x)\) that approximates the posterior pdf from the recorded samples. The aim of this section is to show that minimizing a BCE criterion indeed yields such a suitable approximation, since the BCE, up to constants, is nothing but an MC approximation of a Kullback-Leibler Divergence (\(D_{\mathrm{KL}}\)) between the classifier and the unavailable posterior pdf.
To see this, let us first recall the BCE criterion:
\[\mathcal{L}_{\mathrm{BCE}}(\phi)=-\sum_{i=1}^{N_{1}}\log(r_{\phi}(x_{i}^{(1)} ))-\sum_{i=1}^{N_{0}}\log(1-r_{\phi}(x_{i}^{(0)})), \tag{8}\]
where \(r_{\phi}(x)=\Pr(k=1|x,\phi)\) is the probability (under model \(\phi\)) that the label associated to an observation \(x\) is \(1\).
Let \(h(x,k)\) be the joint distribution over observations and labels:
\[h(x,k)\!=\underbrace{\frac{N_{k}}{N_{1}+N_{0}}}_{h(k)}\underbrace{p_{k}(x)} _{h(x|k)},x\in\mathbb{R}^{\,d},k=0,1. \tag{9}\]
On the other hand, using \(r_{\phi}(x)\), we construct another joint probability distribution \(h_{\phi}(x,k)=h(x)r_{\phi}(x)^{k}(1-r_{\phi}(x))^{1-k}\), where \(h(x)\) is the \(x\)-marginal in (9). As the name Cross-Entropy suggests, the BCE loss is, up to additive and multiplicative constants, nothing but an MC approximation of \(D_{\mathrm{KL}}\Big{(}h(x,k)||h_{\phi}(x,k)\Big{)}=\mathbb{E}_{h(x)}\Big{(}D_ {\mathrm{KL}}(h(k|x)||r_{\phi}(x)^{k}(1-r_{\phi}(x))^{1-k})\Big{)}\), see appendix.
The interest of this interpretation is that, as is well known, a \(D_{\mathrm{KL}}\) reaches zero if and only if the two distributions are equal almost surely. So, if \(r_{\phi}\) could represent any arbitrary function, minimizing \(D_{\mathrm{KL}}\Big{(}h(x,k)||h_{\phi}(x,k)\Big{)}\) would ensure that \(r_{\phi}(x)^{k}(1-r_{\phi}(x))^{1-k}=h(k|x)\) for all \(x\in\mathbb{R}^{\,d}\) and for \(k=1,0\), i.e. that the classifier reaches the target posterior pdf. Of course, in practice, minimizing the BCE does not ensure that this \(D_{\mathrm{KL}}\) decreases to zero. First, since we only have a finite number of labeled observations at our disposal, minimizing an MC approximation of the \(D_{\mathrm{KL}}\) does not minimize the \(D_{\mathrm{KL}}\) itself. Next, the parametric family does not contain \(h(k|x)\) in general, in which case we can only ever reach a positive minimum of the \(D_{\mathrm{KL}}\). Lastly, standard optimization techniques would only guarantee convergence to a local minimum of the \(D_{\mathrm{KL}}\). Therefore in practice, minimizing the BCE loss only provides an \(r_{\phi}\) which approximates the unknown posterior.
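The training procedure described in this section can be sketched with a deliberately simple parametric family (a logistic model on the features \((1,x,x^{2})\)); the two sampling distributions below are hypothetical and serve only to make the snippet self-contained; their pdfs are never used by the training loop.
```python
import numpy as np

rng = np.random.default_rng(3)

# Labelled samples from two hypothetical 1-D densities p1 and p0.
x1 = rng.normal(1.0, 0.8, size=4_000)     # samples from p1, label k = 1
x0 = rng.normal(-0.5, 1.5, size=6_000)    # samples from p0, label k = 0
x = np.concatenate([x1, x0])
k = np.concatenate([np.ones_like(x1), np.zeros_like(x0)])

# Simple parametric classifier r_phi(x) = sigmoid(phi_0 + phi_1*x + phi_2*x^2),
# trained by gradient descent on the (mean) BCE loss (8).
feats = np.stack([np.ones_like(x), x, x**2], axis=1)
phi = np.zeros(3)
for _ in range(10_000):
    r = 1.0 / (1.0 + np.exp(-feats @ phi))
    phi -= 0.1 * feats.T @ (r - k) / len(x)   # gradient of the mean BCE w.r.t. phi

r_phi = lambda t: 1.0 / (1.0 + np.exp(-(phi[0] + phi[1] * t + phi[2] * t**2)))
```
Up to the factor \(N_{0}/N_{1}\), the quantity \(r_{\phi}(x)/(1-r_{\phi}(x))\) then approximates \(p_{1}(x)/p_{0}(x)\), as in (5).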
## 4 Using a binary classifier for (approximate) Sampling
We now come to the heart of this paper. If \(p_{1}\) is a pdf of interest in an MC sampling setting, and \(p_{0}\) a suitable easy-to-sample instrumental distribution - be it the proposal distribution in AR, the independent Markov transition kernel in IMH, or the importance distribution in IS - then the three sampling algorithms involve the pdf ratio \(p_{1}(x)/p_{0}(x)\), which is unknown when at least one pdf is intractable. As explained in Section 3, a parametric binary classifier trained from a set \(\mathcal{D}\) of labeled observations computes an approximation of the unknown posterior distribution. However, remember that (4) is equivalent to (5); we thus see that classifiers can also be used for approximating pdf ratios of interest, which enables us to propose approximate versions of the sampling algorithms based on this classifier-ratio approximation, and thus to relax the requirement of tractable pdfs, but at the cost of approximate sampling. Of course, the closer \(p_{0}\) is to \(p_{1}\), the more efficient the algorithms. However, here \(p_{0}\) is supposed to be given and hence our problem is not (as usual) to adjust \(p_{0}\) from a given \(p_{1}\), but to make the most of \(\mathcal{D}\) for fixed \(p_{0},p_{1}\).
Assumptions.
\(p_{1}\) is the distribution of interest and \(p_{0}\) a fixed instrumental distribution from which we can propose samples. The ratio \(p_{1}(x)/p_{0}(x)\) is unknown; we have at our disposal the labeled dataset \(\mathcal{D}\), and assume that we can train a binary classification model \(r_{\phi}\) which minimizes (8).
### Classifier-based sampling algorithms
Remember that a key ingredient for running the algorithms of §2 is the ratio \(p_{1}(x)/p_{0}(x)\), which appears in \(\alpha_{AR}(x),\alpha_{IMH}(x,x_{t})\) and in \(w^{u}(x)\). Following the idea expressed in (5), we can however make use of a classifier for approximating the unavailable ratio \(p_{1}(x)/p_{0}(x)\), and finally the quantities:
\[\alpha_{AR}(x)\leftarrow\frac{1}{\widetilde{C}}\frac{r_{\phi}(x)}{1-r_{\phi}(x)}\text{ where }\widetilde{C}=\max_{y\in\mathcal{D}}\frac{r_{\phi}(y)}{1-r_{\phi}(y)}; \tag{10}\]
\[\alpha_{IMH}(x,x_{t})\leftarrow\min\Biggl{(}1,\frac{r_{\phi}(x)(1-r_{\phi}(x_ {t}))}{(1-r_{\phi}(x))r_{\phi}(x_{t})}\Biggr{)}; \tag{11}\]
\[w^{u}(x)\leftarrow\frac{r_{\phi}(x)}{1-r_{\phi}(x)}. \tag{12}\]
Our procedure is summarized by Figure 2: we first train \(r_{\phi}\) from labeled samples from \(p_{1}\) and \(p_{0}\); we next use ratio \(r_{\phi}(x)/(1-r_{\phi}(x))\) as a surrogate of \(p_{1}(x)/p_{0}(x)\), which enables us to use the AR, IMH or IS procedure, and thus to turn samples from \(p_{0}\) into (approximate) samples from \(p_{1}\). A main advantage of our approach is that a distribution which is only defined by its sampling procedure and has implicit intractable pdf can be used as instrumental \(p_{0}\). Indeed, our approach does not require evaluating the pdf \(p_{0}\), either during the training of the classifier or during the three proposed sampling procedures.
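A schematic sketch of the resulting pdf-free samplers is given below. The function \(r_{\phi}\) stands for a classifier trained as in §3; here it is replaced by a fixed hypothetical function with values in \((0,1)\) so that the snippet runs on its own, and \(\widetilde{C}\) is taken as a maximum over the available proposal samples.
```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for a trained classifier: any function with values in (0,1) can play the
# role of r_phi; the fixed coefficients below are hypothetical.
r_phi = lambda x: 1.0 / (1.0 + np.exp(-(0.3 + 1.2 * x - 0.4 * x**2)))
ratio = lambda x: r_phi(x) / (1.0 - r_phi(x))    # surrogate ratio of Eqs. (10)-(12), proportional to p1/p0 (cf. Eq. (5))

# Samples from the instrumental p0: only a sampler is required, never its pdf.
x0 = rng.normal(0.0, 2.0, size=20_000)

# Classifier-based AR, Eq. (10); C_tilde taken here as a max over the available samples.
C_tilde = ratio(x0).max()
accepted = x0[rng.uniform(size=x0.size) < ratio(x0) / C_tilde]

# Classifier-based IS/SIR, Eq. (12): weight, normalise, resample.
w = ratio(x0)
resampled = rng.choice(x0, size=5_000, p=w / w.sum())
```
The classifier-based IMH sampler of Eq. (11) is obtained analogously, by replacing the exact ratio in the acceptance probability with the same surrogate.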
### Illustrating examples
We illustrate our approach (see fig. 3) on reference 2D examples, showing the mechanism of (i) obtaining an approximation of the pdf ratio from samples using a feed-forward neural network [33] with 3 hidden layers, 32 hidden units per layer and SiLU activation function that outputs \(\mathrm{logit}(r_{\phi}(x))\); and (ii) sampling from the target distribution via that pdf ratio using the AR, IMH or IS samplers. The instrumental \(p_{0}\) was set to be Gaussian with mean and covariance estimated from the samples from \(p_{1}\) (even though it can be computed, the pdf \(p_{0}\) was not used during the procedure).
### Probabilistic modelling
So far, we have presented our work as a technique to perform approximate MC sampling; let us now revisit it under the scope of probabilistic modelling. If we rewrite \(p_{1}\) as
\[p_{1}(x)=\frac{p_{0}(x)(p_{1}(x)/p_{0}(x))}{\int p_{0}(z)(p_{1}(z)/p_{0}(z)) \mathrm{d}z}, \tag{13}\]
then using (5) amounts to building an approximation \(p_{\phi}\) of \(p_{1}\):
\[p_{\phi}(x)=\frac{p_{0}(x)(r_{\phi}(x)/(1-r_{\phi}(x)))}{\int p_{0}(z)(r_{\phi }(z)/(1-r_{\phi}(z)))\mathrm{d}z}. \tag{14}\]
Our procedure consists in applying the AR, IMH or IS samplers to \(p_{\phi}\) with proposal \(p_{0}\) (at least up to the approximation of constant \(C\) in the AR case). This construction corresponds to a specific energy-based model [34][35][36] with energy function \(E_{\phi}(x)=-\log(p_{0}(x))-\mathrm{logit}(r_{\phi}(x))\). Model \(p_{\phi}\) inherits the advantages of this energy structure: (i) it can be trained without evaluating the gradient of the numerator of (14) nor of the intractable normalizing constant; (ii) it is structurally compatible with the AR, IMH or IS samplers with proposal \(p_{0}\).
Figure 3: Density ratio (middle-right) via classification of samples from \(p_{1}\) (left) and \(p_{0}\) (middle-left) - approximate samples from \(p_{1}\) (right) obtained via a ratio-based algorithm: AR (top), IMH (middle), IS (bottom)
Figure 2: Summary of the classifier based sampling approach
## 5 Conclusion
In this paper we proposed a version of the classical AR, IMH or IS samplers, with target \(p_{1}\) and proposal \(p_{0}\), in which the key \((p_{1}/p_{0})\) ratio is replaced by a surrogate function trained from a labelled dataset. From an MC perspective, the advantages of our approach are threefold: (i) it is completely pdf-free; (ii) training amounts to building a (typically neural network based) classifier; (iii) the instrumental pdf \(p_{0}\) does not need to be known explicitly. From a probabilistic modeling perspective, our approximate samplers coincide with the original ones when applied to some specific energy-based approximation of target \(p_{1}\) which, thanks to its specific structure, can both be trained easily via standard classification, and is structurally compatible with the AR, IMH or IS sampling techniques.
\[D_{\mathrm{KL}}(h(x,k)||h_{\phi}(x,k))=\mathbb{E}_{h(x,k)}[\log(h(x,k))]\] \[-\mathbb{E}_{h(x)}[\log\bigl{(}h(x)\bigr{)}]-\mathbb{E}_{h(k,x)}[\log(\Pr(k|x,\phi))]. \tag{15}\]
In (15), only the last term depends on \(\phi\); we get the BCE loss with an MC approximation of it (or, equivalently, replacing the expectation with one computed on the empirical distributions):
\[\mathbb{E}_{h(k,x)}[\log(\Pr(k|x,\phi))] \overset{(9)}{=}\sum_{k=0}^{1}\frac{N_{k}}{N_{1}+N_{0}}\int\log(\Pr(k|x,\phi))p_{k}(x)\mathrm{d}x\] \[\approx\sum_{k=0}^{1}\frac{1}{N_{1}+N_{0}}\sum_{i=1}^{N_{k}}\log(\Pr_{\phi}(k|x_{i}^{(k)}))\] \[=\frac{1}{N_{1}+N_{0}}\left(\sum_{i=1}^{N_{1}}\log(r_{\phi}(x_{i}^{(1)}))+\sum_{i=1}^{N_{0}}\log(1-r_{\phi}(x_{i}^{(0)}))\right).\]
So \(D_{\mathrm{KL}}\Bigl{(}h(x,k)||h_{\phi}(x,k)\Bigr{)}\approx A+B\mathcal{L}_{ \mathrm{BCE}}(\phi)\), and \(\arg\min_{\phi}D_{\mathrm{KL}}\Bigl{(}h(x,k)||h_{\phi}(x,k)\Bigr{)}\approx \arg\min_{\phi}\mathcal{L}_{\mathrm{BCE}}(\phi)\).
|
2302.13793 | Dr ChatGPT, tell me what I want to hear: How prompt knowledge impacts
health answer correctness | Generative pre-trained language models (GPLMs) like ChatGPT encode in the
model's parameters knowledge the models observe during the pre-training phase.
This knowledge is then used at inference to address the task specified by the
user in their prompt. For example, for the question-answering task, the GPLMs
leverage the knowledge and linguistic patterns learned at training to produce
an answer to a user question. Aside from the knowledge encoded in the model
itself, answers produced by GPLMs can also leverage knowledge provided in the
prompts. For example, a GPLM can be integrated into a retrieve-then-generate
paradigm where a search engine is used to retrieve documents relevant to the
question; the content of the documents is then transferred to the GPLM via the
prompt. In this paper we study the differences in answer correctness generated
by ChatGPT when leveraging the model's knowledge alone vs. in combination with
the prompt knowledge. We study this in the context of consumers seeking health
advice from the model. Aside from measuring the effectiveness of ChatGPT in
this context, we show that the knowledge passed in the prompt can overturn the
knowledge encoded in the model and this is, in our experiments, to the
detriment of answer correctness. This work has important implications for the
development of more robust and transparent question-answering systems based on
generative pre-trained language models. | Guido Zuccon, Bevan Koopman | 2023-02-23T22:14:01Z | http://arxiv.org/abs/2302.13793v1 | # Dr ChatGPT, tell me what I want to hear:
###### Abstract.
Generative pre-trained language models (GPLMs) like ChatGPT encode in the model's parameters knowledge the models observe during the pre-training phase. This knowledge is then used at inference to address the task specified by the user in their prompt. For example, for the question-answering task, the GPLMs leverage the knowledge and linguistic patterns learned at training to produce an answer to a user question. Aside from the knowledge encoded in the model itself, answers produced by GPLMs can also leverage knowledge provided in the prompts. For example, a GPLM can be integrated into a retrieve-then-generate paradigm where a search engine is used to retrieve documents relevant to the question; the content of the documents is then transferred to the GPLM via the prompt. In this paper we study the differences in answer correctness generated by ChatGPT when leveraging the model's knowledge alone vs. in combination with the prompt knowledge. We study this in the context of consumers seeking health advice from the model. Aside from measuring the effectiveness of ChatGPT in this context, we show that the knowledge passed in the prompt can overturn the knowledge encoded in the model and this is, in our experiments, to the detriment of answer correctness. This work has important implications for the development of more robust and transparent question-answering systems based on generative pre-trained language models.
overturning the model answer about a treatment, (2) when evidence contrary to the ground truth is provided and ChatGPT overturns an answer it generated when no evidence was provided, this is often for the worse (overall accuracy \(63\)%), i.e. incorrect evidence can deceive the model into providing an incorrect answer to a question that it could otherwise answer correctly.
Previous work has shown that prompt engineering has an important impact on the effectiveness of GPLMs like ChatGPT (Gil et al., 2017). This paper adds to that understanding by showing that it is not just the "form" of the prompt that matters (e.g. the clarity of the instructions contained in the prompt): the correctness of the evidence contained in the prompt can also strongly influence the quality of the output of the GPLMs. This is important when GPLMs are integrated into a retrieve-then-generate1 pipeline (Gil et al., 2017; Gil et al., 2017), where information related to the question is first retrieved from a corpus (e.g., the Web) and then passed to the model via the prompt to inform its output.
Footnote 1: Also called retrieve-then-read in the literature.
## 2. Related Work
Our study aims to evaluate the impact on ChatGPT responses of knowledge provided in the prompt vs. the knowledge encoded in the model. The effect of prompting pre-trained language models is attracting increasing attention, especially with respect to the so-called practice of prompt engineering, i.e. finding an appropriate prompt that makes the language model solve a target downstream task. Prompt engineering, as opposed to fine-tuning, does not modify the pre-trained model's weights when performing a downstream task (Gil et al., 2017). Prompt engineering is commonly used to enable language models, which have been pre-trained on large amounts of text, to execute few-shot or zero-shot learning tasks, reducing the need to fine-tune models and rely on supervised labels. In this prompt-learning approach, during inference, the input \(x\) is altered using a template to create a textual prompt \(x^{\prime}\), which is then provided as input to the language model to generate the output string \(y\). The typical prompt-learning setup involves constructing prompts with unfilled slots that the language model fills probabilistically to obtain a final string \(\tilde{x}\), which is then used to produce the output \(y\) (Gil et al., 2017).
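As a concrete illustration of the slot-filling mechanism just described (the template wording and slot below are hypothetical, not taken from any cited work), a prompt \(x^{\prime}\) can be built from an input \(x\) as follows.

```python
# Illustrative cloze-style template: the unfilled slot is what the language model
# completes probabilistically to yield the final string and, from it, the output y.
TEMPLATE = 'Question: "{x}" Overall, the answer to this question is {slot}.'

def build_prompt(x: str) -> str:
    """Turn a raw input x into a textual prompt x' with an unfilled slot."""
    return TEMPLATE.format(x=x, slot="___")

print(build_prompt("Does apple cider vinegar work to treat ear infections?"))
```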
In our study we do not perform prompt-learning: we instead use the prompt to pass external knowledge to the model and measure how this changes the answers it generates. Related to this direction, but in the context of few-shot learning, Zhao et al. (Zhao et al., 2018) observed that the use of prompts containing training examples for a GPLM (in particular GPT-3) provides unstable effectiveness, with the choice of prompt format, training examples and their order being major contributors to variability in effectiveness. We make similar observations: in RQ2 we vary the document provided as evidence in the prompt and find that two documents can have widely different effects on the model's answer despite having the same stance on the topic of the question.
In this paper, we empirically study the impact of prompts in changing ChatGPT responses in the context of a health information-seeking task. Other works have investigated the effectiveness of ChatGPT in answering health-related questions.
Gilson et al. (Gilson et al., 2017) presented a preliminary evaluation of ChatGPT on questions from the United States Medical Licensing Examination (USMLE) Step 1 and Step 2 exams. The prompt only included the exam question, and ChatGPT's answer was evaluated in terms of accuracy, along with other aspects related to the logical justification of the answer and the presence of information internal and external to the question. The model's performance was found to be comparable to that of a third-year medical student.
Nov et al. (Nov et al., 2018) compared ChatGPT responses to those supplied by a healthcare provider in 10 typical patient-provider interactions. 392 laypeople were asked to determine if the response they received was generated by ChatGPT or a healthcare provider. The study found that it was difficult for participants to distinguish between ChatGPT and healthcare provider responses to patient inquiries, and that they were comfortable using chatbots to address less serious health concerns - supporting the realistic setting of our study.
Benoit (Benoit, 2018) evaluated the correctness of ChatGPT in the diagnosis and triage of medical cases presented as vignettes2 and found that ChatGPT displays a diagnostic accuracy of 75.6% and a triage accuracy of 57.8%.
Footnote 2: A clinical vignette is a brief patient summary that includes relevant history, physical exam results, investigative data, and treatment.
De Angelis et al. (Angelis et al., 2017) recognise that ChatGPT can be used to spread misinformation on public health topics. In our study, we show how misinformation can be injected into ChatGPT's input through the prompt, and that this markedly impacts the output of the model, with potentially dire consequences.
## 3. RQ1 - General Effectiveness
Our first research question relates to how effective ChatGPT is in answering complex health information questions. Measuring this serves two purposes: on the one hand, it informs us about the reliability of this GPLM in supporting information-seeking tasks related to health advice; on the other hand, it allows us to establish the effectiveness of the model when it relies solely on the knowledge encoded in the model itself, i.e. without observing any additional knowledge in the prompt (the prompt only contains the question).
### Methods
We consider a total of 100 topics from the TREC 2021 and 2022 Health Misinformation track. Each topic relates to the efficacy of a treatment for a specific health issue. The topic includes a natural language question; e.g., "Does apple cider vinegar work to treat ear infections?" and a ground truth answer, either 'helpful' or 'unhelpful' for 2021 and 'yes' or 'no' for 2022. Ground truth answers were assigned based on current medical practice.
We issue the question to ChatGPT as part of the prompt shown in Figure 1, which instructs the model to provide a Yes/No answer and an associated explanation. We then evaluate the correctness of the answer by comparing ChatGPT's answer to the TREC Misinformation Track ground truth.
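A minimal sketch of this evaluation loop is given below; it is an illustration rather than the paper's actual code. The `ask_chatgpt` helper is a hypothetical stand-in for whatever API client is used, the prompt string only paraphrases Figure 1, and the example topic is one of the TREC questions quoted above.

```python
# Hypothetical (question, ground-truth) pairs in the style of the TREC Health
# Misinformation track; real topics would be loaded from the track's topic files.
topics = [
    ("Does apple cider vinegar work to treat ear infections?", "no"),
]

def ask_chatgpt(prompt: str) -> str:
    """Stub for the actual ChatGPT API call; replace with your own client code."""
    raise NotImplementedError

def yes_no_answer(question: str) -> str:
    # Prompt paraphrasing Figure 1: ask for a Yes/No answer plus an explanation.
    prompt = f'Answer the question "{question}" with Yes or No, then explain your answer.'
    reply = ask_chatgpt(prompt).strip().lower()
    return "yes" if reply.startswith("yes") else "no"

def accuracy(topics) -> float:
    """Fraction of topics for which the model's Yes/No answer matches the ground truth."""
    return sum(yes_no_answer(q) == truth for q, truth in topics) / len(topics)
```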
### Results
The effectiveness of ChatGPT is shown in Figure 2. Overall effectiveness was 80%. ChatGPT answered "Yes" and "No" a similar number of times and its error rate was similar when answering "Yes" and "No". With this simple prompt, ChatGPT always provided an answer of Yes or No - as opposed to when using the prompt of RQ2, as we
shall discuss below. When generating an answer, ChatGPT also produced an explanation to support its stance. We did not perform an in-depth analysis of these explanations, e.g., to verify whether the claims made in the explanation are true or are hallucinations (Beng et al., 2017; Chen et al., 2018), because we lack the medical expertise necessary for this. We plan to investigate this aspect in future work through engagement with medical experts. A brief analysis of these explanations, however, reveals that they often contain remarks about the presence of limited (or even conflicting) scientific evidence with respect to a treatment option for a condition, and details about specific conditions for which the answer is not valid. The answers also often contain a suggestion to contact a health practitioner for further review of the advice.
## 4. RQ2 - Evidence Biased Effectiveness
Our second research question relates to the impact on answer correctness when biasing ChatGPT by prompting with (1) supporting evidence; and (2) contrary evidence. Measuring this allows us to determine the impact of the prompt knowledge and whether this can overturn the knowledge encoded in the model when generating an answer.
### Methods
Supporting and contrary evidence is taken from individual document judgments (qrels) of the TREC Misinformation track. Documents judged as 2 ("Supportive") are selected as supporting evidence. Documents judged as 0 ("Dissuades") are taken as contrary. The process to issue the evidence-biased prompt to ChatGPT is as follows:
1. For each TREC topic, we select a maximum of 3 supportive evidence and 3 contrary evidence documents from the TREC qrels;3 Footnote 3: Some topics did not contain 3 contrary or supportive documents: in these cases we selected as many as there were available.
2. For each document, we generate the prompt shown in Figure 3, including both the question and the evidence text in quotes;
3. We capture the response from ChatGPT;
4. Although the prompt explicitly asks for a Yes/No answer, we found that in many cases ChatGPT did not use these words, but did provide the answer in another way (e.g., "Inhaling steam can help alleviate the symptoms of a common cold"). Thus we manually read the responses and determined whether the answer was Yes, No, or Unsure. All responses were assessed by two people, and discussion between the annotators took place to resolve label disagreements.
5. Once ChatGPT has provided an answer, we evaluate its correctness by comparing it to the TREC Misinformation Track ground truth.
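The sketch below illustrates steps 2-4 of this procedure; it is only an illustration under the same assumptions as the earlier sketch (the `ask_chatgpt` stub is hypothetical and the prompt text merely paraphrases Figure 3, whose exact wording is not reproduced here).

```python
def ask_chatgpt(prompt: str) -> str:
    """Hypothetical stub for the ChatGPT API call; replace with your own client code."""
    raise NotImplementedError

def evidence_biased_answer(question: str, evidence_text: str) -> str:
    """Quote an evidence passage next to the question (paraphrasing Figure 3) and
    map the free-text reply to yes/no/unsure. In the paper this mapping was done
    manually by two annotators; the prefix check below is only a rough automation."""
    prompt = (
        f'Consider the following passage: "{evidence_text}"\n'
        f'Based on it, answer the question "{question}" with Yes or No, then explain why.'
    )
    reply = ask_chatgpt(prompt).strip().lower()
    if reply.startswith("yes"):
        return "yes"
    if reply.startswith("no"):
        return "no"
    return "unsure"
```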
We used the 35 topics from TREC 2021 Health Misinformation track that contained document-level relevance assessments. (Document-level qrels are not available for TREC 2022 at the time of writing.)
The actual documents are web pages from the noclean version of the C4 dataset: \(\approx\)1B English text extracts from the April 2019 snapshot of Common Crawl. Some passages are long and exceed the maximum token limit of ChatGPT. We trimmed long texts, using the NLTK word_tokenize method to count the number of tokens in the document. Tokens from this tokenization do not necessarily match tokens from ChatGPT, as ChatGPT's tokenisation is similar to that of BERT (BERT tokenization matches candidate tokens against a controlled vocabulary and divides out-of-vocabulary tokens into matching subtokens). Through experimentation, we identified that a limit of 2,200 NLTK tokens from the document was about the maximum we could use to concatenate with the remainder of the prompt and issue to ChatGPT without encountering problems with the input size limit.
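A minimal sketch of this trimming step is shown below, assuming NLTK and its 'punkt' tokenizer data are available; the 2,200-token cap is the empirical limit reported above, and re-joining tokens with spaces is only an approximation of how the trimmed text might be reassembled.

```python
import nltk
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)  # tokenizer model required by word_tokenize

MAX_NLTK_TOKENS = 2200  # empirical limit that still fits ChatGPT's input size

def trim_document(text: str, limit: int = MAX_NLTK_TOKENS) -> str:
    """Count tokens with NLTK's word_tokenize and keep only the first `limit`.
    NLTK tokens do not correspond one-to-one to ChatGPT's subword tokens."""
    tokens = word_tokenize(text)
    if len(tokens) <= limit:
        return text
    return " ".join(tokens[:limit])  # approximate reconstruction of the trimmed text
```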
### Results
The effectiveness of ChatGPT with evidence-biased prompting is shown in Figure 4. Accuracy was lower than with the simpler, question-only prompt (shown previously in Figure 2).
Figure 5 shows a breakdown of effectiveness after evidence-biasing. Specifically, this indicates how the answer changed compared to RQ1's question-only condition: "Flipped" indicates that
Figure 1. ChatGPT prompt format for determining general effectiveness (RQ1) on TREC Misinformation topics.
Figure 3. ChatGPT Prompt used to determine what impact a supportive or contrary passage has on answer correctness.
Figure 2. Effectiveness of ChatGPT when prompting for “Yes/No” answers to TREC Misinformation questions.
ChatGPT answered the opposite after evidence-biasing; "Unchanged" means the answer matched RQ1's question-only condition; "Unsure" indicates where ChatGPT did not provide a Yes/No answer. We observe that when ChatGPT changes its answer (i.e., Flipped), it generally gets the answer wrong.
Figure 6 provides a detailed analysis of the answer behaviour of ChatGPT with respect to the two conditions: question-only and evidence-biased. This analysis provides a notable insight. Consider the incorrect answers provided in the evidence-biased condition (top-right of the Sankey diagram). First, we note that the evidence rarely corrects an answer that the model got wrong in the question-only condition. Second, we note that about half of the errors occur regardless of any evidence being provided - that is, they occurred also in the question-only condition (the two top green lines). Thus, in these cases, the evidence had no effect in changing the model's response, regardless of whether the stance of the evidence matched the ground truth. The other half of the errors are cases in which the question-only setting produced the correct answer, but the evidence-biased prompt steered the model to an incorrect response. This occurs more often when supporting evidence is provided rather than contrary evidence, but the difference is minimal.
## 5. Conclusions
In this paper we have examined the correctness of ChatGPT, a rapidly emerging large pre-trained generative language model, when answering complex health information questions regarding the effectiveness of a treatment for a condition. We did this in two settings: when only the question is presented to ChatGPT (question-only) and when the question is presented along with evidence (evidence-biased), i.e. a web search result retrieved when searching for the question. Importantly, we controlled whether the evidence is in favour of or against the treatment. This in turn allowed us to understand the effect of providing evidence in the prompt and the impact that prompt knowledge can have on model knowledge and, consequently, on the generated answer. We found that ChatGPT answers \(80\%\) of the questions correctly when relying solely on model knowledge. On the other hand, the evidence presented in the prompt can heavily influence the answer - and, more importantly, it affects the correctness of the answer, reducing ChatGPT's accuracy in our task to only \(63\%\). Often, in fact, if evidence is provided, the model tends to agree with the stance of the evidence, even if it would produce an answer with the contrary stance when no evidence is provided in input.
Our study has a number of limitations that we plan to address in future work. In ChatGPT, as in other GPLMs, answer generation is stochastic. However, we did not study the variability of answer generation (and the associated effectiveness) across multiple runs of the same question. Similarly, we did not analyse which characteristics of the evidence we insert in prompts trigger
Figure 4. Effectiveness of ChatGPT when prompting with either a supporting or contrary evidence passage from TREC Misinformation qrels.
Figure 5. Breakdown of how ChatGPT's answers changed after evidence-biasing compared to the question-only condition (Flipped, Unchanged, Unsure).
Figure 6. Sankey diagram showing the overall breakdown of all results. From the left, topics are divided by ground truth answer (Yes or No); next topics are divided according to RQ1 question prompting (Correct or Not Correct); next the prompt is evidence biased (Supporting and Contrary); finally, post evidence-biased breakdown is shown.
divergent answers despite having identical stance, nor did we study the effect of different prompt formats (including the extraction of key passages from the evidence documents used) - aspects that are known to lead to variability in effectiveness in prompt learning [13].
A key feature of ChatGPT is its interaction ability: it can hold a multi-turn conversation. In our experiments we discounted the multi-turn setting and only considered single-turn answers. The ability to hold multi-turn conversations would allow us to, e.g., provide multiple evidence passages, demand a binary decision instead of an unsure position, and clarify aspects of the answer that may be unclear.
Finally, when producing an answer, we instructed ChatGPT to also explain the reason for providing the specific advice. These explanations often included claims about research studies and medical practices, which we did not validate, i.e. we did not check whether they hold or are hallucinations of the model. In addition, we did not ask the model to attribute such claims to sources [3, 10, 12]: correct attribution appears to be a critical requirement to improve the quality, interpretability and trustworthiness of these methods.
|
2305.07999 | Charm content of the proton: An analytic calculation | According to general understanding, the proton as one of the main ingredients
of the nucleus is composed of one down and two up quarks bound together by
gluons, described by Quantum Chromodynamics (QCD). In this view, heavy quarks
do not contribute to the primary wave function of the proton. Heavy quarks
arise in the proton perturbatively by gluon splitting and the probability
gradually increases as $Q^2$ increases (extrinsic heavy quarks). In addition,
the existence of non-perturbative intrinsic charm quarks in the proton has also
been predicted by QCD. In this picture, the heavy quarks also exist in the
proton's wave function. In fact, the wave function has a five-quark structure $
\vert u u d c \bar{c}\rangle $ in addition to the three-quark bound state $
\vert u u d\rangle $. So far, many studies have been done to confirm or reject
this additional component. One of the recent studies has been done by the NNPDF
collaboration. They established the existence of an intrinsic charm component
at the 3-standard-deviation level in the proton from the structure function
measurements. Most of the studies performed to calculate the contribution of
the intrinsic charm so far have been based on the global analyses of the
experimental data. In this article, for the first time we directly calculate
this contribution by an analytic method. We estimate a $x^{c\bar{c}} = (1.36
\pm 0.67)\% $ contribution for the $ \vert u u d c \bar{c}\rangle $ component
of the proton. | A. R. Olamaei, S. Rostami, K. Azizi | 2023-05-13T20:42:09Z | http://arxiv.org/abs/2305.07999v4 | # Charm content of proton: A QCD sum rule analysis
###### Abstract
According to general understanding, the proton as one of the main ingredients of the nucleus is composed of one down and two up quarks bound together by gluons, described by Quantum Chromodynamics (QCD). In this view, heavy quarks do not contribute to the primary wave function of the proton. Heavy quarks arise in the proton perturbatively by gluon splitting and the probability gradually increases as \(Q^{2}\) increases (extrinsic heavy quarks). In addition, the existence of non-perturbative intrinsic charm quarks in the proton has also been predicted by QCD. In this picture, the heavy quarks also exist in the proton's wave function. In fact, the wave function has a five-quark structure \(|uudc\bar{c}\rangle\) in addition to the three-quark bound state \(|uud\rangle\). So far, many studies have been done to confirm or reject this additional component. One of the recent studies has been done by the NNPDF collaboration. They established the existence of an intrinsic charm component at the 3-standard-deviation level in the proton from the structure function measurements. Most of the studies performed to calculate the contribution of the intrinsic charm so far have been based on the global analyses of the experimental data. In this article, for the first time we directly calculate this contribution by using QCD sum rules. We estimate a \(x^{c\bar{c}}=(1.36\pm 0.67)\%\) contribution for the \(|uudc\bar{c}\rangle\) component of the proton.
_Introduction_ The existence of a non-perturbative intrinsic charm quark component in the nucleon plays an increasingly important role in hadron physics. Although the structure of the proton is known in the form of a three-quark bound state, QCD predicts the existence of a non-perturbative intrinsic heavy charm quark contribution to the fundamental structure of the proton. A Fock state of the proton's wave function with a five-quark structure \(|uudc\bar{c}\rangle\) was proposed for the first time by Brodsky, Hoyer, Peterson, and Sakai (BHPS) in Refs. [1; 2] to explain the large cross-section measured for forward open charm production in \(pp\) collisions at the energies of the Intersecting Storage Rings (ISR) at CERN [3; 4; 5; 6]. According to the BHPS model, charm quarks in the nucleon can be either extrinsic or intrinsic. Perturbative extrinsic charm quarks arise in the proton when a gluon splits into a charm-anti-charm pair in the DGLAP \(Q^{2}\) evolution, and they are produced more and more copiously as the \(Q^{2}\) scale increases. On the other hand, non-perturbative intrinsic charm quarks emerge through the fluctuations of the nucleon state to five-quark or virtual meson-baryon states. In recent years, intrinsic charm has been an interesting subject of research from the theoretical, phenomenological, and experimental points of view [7; 8; 9; 10; 11; 12; 13].
In addition to the BHPS model, there have also been other models to explain the intrinsic charm distribution inside the proton. For instance, in the meson cloud model (MCM), which is more dynamical than the BHPS model, the nucleon can fluctuate to virtual states composed of a charmed baryon plus a charmed meson [14; 15]. This picture can also be extended to the intrinsic strange content of the nucleon [16]. The main difference between the BHPS and MCM models is that the charm and anti-charm distributions are different in the MCM [17], while they are the same in the BHPS. The scalar five-quark model is another approach, presented by Pumplin [18]. In this approach, the distribution for the state \(|uudc\bar{c}\rangle\) can be derived from Feynman rules (see [18; 19] for a review of these models).
Although QCD effectively describes the shape of the intrinsic charm distribution, it has nothing to say quantitatively about the probability of finding the nucleon in the configuration \(|uudc\bar{c}\rangle\). However, experiment can shed light on this issue. From the historical point of view, attempts to determine the probability of the intrinsic charm content of the proton started in 1983, when the BHPS model was used to explain the data of the European Muon Collaboration (EMC) [20; 21; 22]. Although these first analyses were not global, they indicated that an intrinsic charm component with probability \((0.86\pm 0.6)\%\) can exist in the nucleon. The first global analysis of Parton Distribution Functions (PDFs) considering an intrinsic charm component for the nucleon was performed by the CTEQ collaboration, with the aim of determining the probability of finding an intrinsic charm state in the proton [23; 24; 27]. Utilizing a wide range of hard-scattering experimental data, they demonstrated that the charm content can be 2-3 times larger than the value predicted by the BHPS model (\(1\%\)), while this probability was estimated to be about \(0.3\%\) by the MSTW group [26]. In 2014, the CTEQ collaboration followed up their previous work and found a probability of \(2\%\) for the intrinsic charm within the BHPS model [27]. After that, a global analysis was done by Jimenez-Delgado et al. [28] in which, to show how large the intrinsic charm component could be, they used looser kinematic cuts to include low-\(Q\) and high-\(x\) data. Their prediction was that the intrinsic charm contribution in the proton is about \(0.5\%\). Finally, in the most recent global analysis, performed by the NNPDF collaboration [29], the intrinsic charm contribution to the flavor content of the proton is estimated at the 3-standard-deviation level to be about \((0.62\pm 0.28)\%\).
In addition to these global analyses, there have been studies constraining the upper limit on the intrinsic charm content. For instance, an upper limit of \(1.93\%\) on the intrinsic charm content of the proton is set in [30], using ATLAS data on measurements of differential cross sections of isolated prompt photons produced in association with a c-jet in \(pp\) collisions. According to [31], using the ratio of \(\Lambda_{QCD}\) to the difference in the energies of the pentaquark and proton, the upper bound on the \(uudc\bar{c}\) state is about \(1\%\).
The existence of intrinsic charm inside the proton has remarkable and growing experimental support. Several past and upcoming experiments have been or will be conducted to look for evidence of intrinsic charm. One of the most recent results concerns the LHCb data on the ratio of Z+c-jets to Z+jets at forward rapidity [32]. The data can be described very well after including a \(1\%\) intrinsic charm contribution in the proton. Searching for intrinsic charm is also an interesting subject at future experiments like AFTER@LHC [33; 34; 35] and at ongoing ones. The AFTER@LHC experiment is a particularly suitable laboratory for studying the properties of doubly heavy baryons, and it will be interesting to investigate how, and to what extent, intrinsic charm affects the results.
The main goal of this article is to directly calculate, for the first time, the charm component of proton in a five-quark \(|uudc\bar{c}\rangle\) structure analytically using the two-point version of the QCD Sum Rules (QCDSR). The remainder of the paper is organized as follows: In the next part we describe the formalism to calculate the proton mass via the two-point QCDSR. Next, the numerical analyses and results are presented. The final part is devoted to the concluding notes.
_Formalism_ The starting point to calculate any quantity in the QCDSR approach is to write a suitable correlation function (CF). In this case, we write the two-point CF as
\[\Pi(q)=i\int d^{4}xe^{iqx}\langle 0|{\cal T}\{\eta(x)\bar{\eta}(0)\}|0\rangle, \tag{1}\]
where \(\eta(x)\) is the interpolating current of the proton, which is the dual of the wave function in the quark model. \({\cal T}\) represents the time ordering operator and \(q\) is the four-momentum of the proton. The time-ordered product of currents is sandwiched between two QCD vacuum states; this corresponds to the creation of the proton at one spacetime point (which, by translational invariance, can be chosen to be the origin) and its annihilation at the spacetime point \(x\); the result is then Fourier transformed to momentum space.
The interpolating current has two components. The first one corresponds to the ordinary \(|uud\rangle\) part of the proton with the spin-parity \((\frac{1}{2})^{+}\):
\[\eta^{(3q)}(x)=2\varepsilon^{abc}\sum_{\ell=1}^{2}\Big{[}\Big{(}u^{Ta}(x)CA_{ 1}^{\ell}d^{b}(x)\Big{)}A_{2}^{\ell}u^{c}(x)\Big{]}, \tag{2}\]
where \(a\), \(b\), \(c\) are the color indices and \(C\) is the charge conjugation operator. The coefficients are \(A_{1}^{1}=I\), \(A_{1}^{2}=A_{2}^{1}=\gamma_{5}\) and \(A_{2}^{2}=\beta\), where \(\beta\) is an auxiliary parameter which for the Ioffe current is given by \(\beta=-1\). The second part corresponds to the intrinsic charm component in terms of the five-quark structure \(|uudc\bar{c}\rangle\). It has a scalar-diquark-scalar-diquark-antiquark type current:
\[\eta^{(5q)}(x) = \varepsilon^{ila}\varepsilon^{ijk}\varepsilon^{lmn}\] \[\times u_{j}^{T}(x)C\gamma_{5}d_{k}(x)\,u_{m}^{T}(x)C\gamma_{5}c_{n}(x )\,\gamma_{5}C\bar{c}_{a}^{T}(x)\,,\]
where \(i\), \(j\), \(k\), \(\cdots\) are again color indices. This current component has the spin-parity \((\frac{1}{2})^{+}\), as well.
The state of the proton including the intrinsic charm component is a superposition of the \(3q\)- and \(5q\)-components:
\[|P\rangle={\cal N}\Big{(}|uud\rangle+\alpha|uudc\bar{c}\rangle\Big{)}\,, \tag{4}\]
where \(\alpha\) is a number which indicates the amplitude of the intrinsic charm contribution of the proton and \({\cal N}=(1+|\alpha|^{2})^{-1/2}\) is the normalization constant. Therefore the whole interpolating current is
\[\eta(x)={\cal N}\Big{[}\eta^{(3q)}(x)+\alpha\,\frac{\eta^{(5q)}(x)}{m_{P}^{3} }\Big{]}. \tag{5}\]
The factor \(m_{P}^{3}\) is introduced to ensure that both terms have the same mass dimension. Therefore, in the CF, \(\langle 0|{\cal T}\{\eta(x)\bar{\eta}(0)\}|0\rangle\) contains four terms where the cross terms \(\langle 0|{\cal T}\{\eta^{(3q)}(x)\bar{\eta}^{(5q)}(0)\}|0\rangle\) and \(\langle 0|{\cal T}\{\eta^{(5q)}(x)\bar{\eta}^{(3q)}(0)\}|0\rangle\) give zero contributions according to Wick's theorem for the contraction of the quark fields. Therefore, one has:
\[\Pi(q) = i\int d^{4}xe^{iqx}{\cal N}^{2}\Big{[}\langle 0|{\cal T}\{\eta^{(3q) }(x)\bar{\eta}^{(3q)}(0)\}|0\rangle \tag{6}\] \[+\frac{|\alpha|^{2}}{m_{P}^{6}}\langle 0|{\cal T}\{\eta^{(5q)}(x) \bar{\eta}^{(5q)}(0)\}|0\rangle\Big{]}.\]
The intrinsic charm contribution, i.e. the probability that the charm component is found in the proton [1], is defined as \(x^{c\bar{c}}={\cal N}^{2}|\alpha|^{2}\); this is the quantity we are going to determine.
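As a purely illustrative numerical check of this normalisation (the values of \(\alpha\) below are made up, not results of this work), the relation \(x^{c\bar{c}}=|\alpha|^{2}/(1+|\alpha|^{2})\) and its inverse can be evaluated as follows.

```python
import math

def x_ccbar(alpha: float) -> float:
    """Weight of the 5q component: N^2 |alpha|^2 with N = (1 + |alpha|^2)^(-1/2)."""
    return alpha**2 / (1.0 + alpha**2)

def alpha_from_x(x: float) -> float:
    """Invert the relation: |alpha| = sqrt(x / (1 - x))."""
    return math.sqrt(x / (1.0 - x))

print(x_ccbar(0.10))         # an illustrative small admixture gives ~0.99%
print(alpha_from_x(0.0136))  # the central result x^{ccbar} = 1.36% corresponds to |alpha| ~ 0.117
```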
The only two independent Lorentz structures which can contribute to the CF are \(\not{q}\) and \(U\), and therefore we have
\[\Pi(q)=\not{q}\,\Pi_{1}(q^{2})+U\Pi_{2}(q^{2}), \tag{7}\]
where the invariant functions \(\Pi_{1}(q^{2})\) and \(\Pi_{2}(q^{2})\) have to be calculated. To relate physical observables such as the mass to the QCD calculations, the above-mentioned CF has to be computed in two different regimes. One, expressed in terms of hadronic parameters, is called the physical side; it corresponds to the real part of the CF and is calculated in the timelike region of the light cone. The other, called the QCD side, is evaluated in terms of quarks and gluons using the Operator Product Expansion (OPE) of the CF; it corresponds to the imaginary part of the CF and is calculated in the spacelike region of the light cone. These two sides are related to each other via a dispersion integral using the quark-hadron duality assumption, which eventually gives us the corresponding sum rule for the mass of the proton. There are contributions from higher states and the continuum which contaminate the ground state. To suppress these and to enhance the ground-state contribution, we apply the Borel transformation as well as continuum subtraction on both sides of the CF.
The Borel transformation technically enlarges the radius of convergence of the CF integral while leaving the observables unaffected. This transformation along with continuum subtraction introduce two auxiliary parameters, the Borel parameter \(M^{2}\) and the continuum threshold \(s_{0}\), into the calculation, the working regions of which have to be determined considering the standard prescriptions of the method.
To calculate the physical side, we insert a complete set of hadronic state with the same quantum numbers as the interpolating current of the proton. We obtain:
\[\Pi(q)=\frac{\langle 0|\eta(0)|P(q,s)\rangle\langle P(q,s)|\bar{\eta}(0)|0 \rangle}{q^{2}-m_{P}^{2}}+..., \tag{8}\]
where the first term is the isolated ground state proton and dots refer to the higher states and continuum contributions. The matrix element \(\langle 0|\eta(0)|P(q,s)\rangle\) is determined as
\[\langle 0|\eta(0)|P(q,s)\rangle=\lambda_{P}u(q,s), \tag{9}\]
where \(u(q,s)\) is the spinor of the proton with spin \(s\) and \(\lambda_{P}\) is its residue. Inserting (9) into (8) and summing over the proton's spin one finds the final expression for the physical side of the CF as follows:
\[\Pi(q)=\frac{\lambda_{P}^{2}(\not{q}+m_{P}U)}{q^{2}-m_{P}^{2}}+..., \tag{10}\]
where the only two independent Lorentz structures \(\not{q}\) and \(U\) emerged as is expected from (7).
The QCD side of the CF is calculated in the spacelike sector of the light cone which is the deep Euclidean region. Applying the OPE and using Wick's theorem to contract the quark-antiquark pairs, one can calculate the QCD side of the CF in terms of quark propagators. The propagator for the light quarks (\(u\) and \(d\)) reads [36; 37]:
\[\mathcal{S}_{q}^{ij}(x)=\frac{i\not{x}}{2\pi^{2}x^{4}}\delta_{ij} -\frac{m_{q}}{4\pi^{2}x^{2}}\delta_{ij}-\frac{\langle\overline{q}q\rangle}{12} (1-i\frac{m_{q}}{4}\not{x})\delta_{ji}\] \[-\frac{x^{2}}{192}\langle\overline{q}g_{s}\sigma Gq\rangle(1-i \frac{m_{q}}{6}\not{x})\delta_{ij}\] \[-ig_{s}\int_{0}^{1}du\left\{\frac{\not{x}}{6\pi^{2}x^{2}}G_{ij}^ {\mu\nu}(ux)\sigma_{\mu\nu}-\frac{iux_{\mu}}{4\pi^{2}x^{2}}G_{ij}^{\mu\nu}(ux )\gamma_{\nu}\right.\] \[\left.-\frac{im_{q}}{32\pi^{2}}G_{ij}^{\mu\nu}(ux)\sigma_{\mu\nu} \left[\ln(\frac{-x^{2}\Lambda^{2}}{4})+2\gamma_{E}\right]\right\}, \tag{11}\]
where the subscript \(q\) stands for either \(u\) or \(d\) quark. It is written up to dimension 5, where \(\langle\overline{q}q\rangle\) is the quark condensate and \(\langle\overline{q}g_{s}\sigma Gq\rangle\) the quark-gluon mixed condensate. The first two terms correspond to the free part and the third and forth terms to the quark and mixed condensates, respectively. The integral part represents the one-gluon emission contribution and \(\Lambda\) is a cutoff which separates the perturbative and non-perturbative parts. The propagator of the charm quark can be written as [36; 37]:
\[S_{c}^{ij}(x)=\] \[\frac{m_{c}^{2}}{4\pi^{2}}\frac{K_{1}\left(m_{c}\sqrt{-x^{2}} \right)}{\sqrt{-x^{2}}}\delta_{ij}+i\frac{m_{c}^{2}}{4\pi^{2}}\frac{\not{x}K_ {2}\left(m_{c}\sqrt{-x^{2}}\right)}{\left(\sqrt{-x^{2}}\right)^{2}}\delta_{ij}\] \[-\frac{g_{s}m_{c}}{16\pi^{2}}\int_{0}^{1}dvG_{ij}^{\mu\nu}(vx) \Big{[}i(\sigma_{\mu\nu}\not{x}+\not{x}\sigma_{\mu\nu})\frac{K_{1}\left(m_{c }\sqrt{-x^{2}}\right)}{\sqrt{-x^{2}}}\] \[+2\sigma^{\mu\nu}K_{0}\left(m_{c}\sqrt{-x^{2}}\right)\Big{]}- \frac{\delta_{ij}\langle g_{s}^{2}G^{2}\rangle}{576(2\pi)^{2}}\Big{\{}(i \not{x}m_{c}\] \[-6)\frac{K_{1}(m_{c}\sqrt{-x^{2}})}{\sqrt{-x^{2}}}(-x^{2})+m_{c}( x^{2})^{2}\frac{K_{2}(m_{c}\sqrt{-x^{2}})}{(\sqrt{-x^{2}})^{2}}\Big{\}}, \tag{12}\]
where \(K_{n}(z)\) is the \(n\)-th order modified Bessel function of the second kind. In (11) and (12) we have
\[G_{ij}^{\mu\nu}=G_{A}^{\mu\nu}\lambda_{ij}^{A}/2, \tag{13}\]
where \(A=1,\,2\,\ldots 8\) and \(\lambda^{A}\) are the Gell-Mann matrices. Here we use the exponential representation of \(K_{n}(z)\) as:
\[\frac{K_{n}(m_{c}\sqrt{-x^{2}})}{(\sqrt{-x^{2}})^{n}}=\frac{1}{2}\int\frac{dt} {t^{n+1}}\exp\left[-\frac{m_{c}}{2}\left(t-\frac{x^{2}}{t}\right)\right], \tag{14}\]
where \(-x^{2}>0\) for the spacelike sector (deep Euclidean region).
In (7), the coefficients \(\Pi_{i}(q^{2})\) can be written as a dispersion integral as follows:
\[\Pi_{i}(q^{2})=\int\frac{\rho_{i}(s)}{s-q^{2}}ds. \tag{15}\]
Here \(\rho_{i}\) are the spectral densities and can be calculated from the imaginary parts of the \(\Pi_{i}\) functions as:
\[\rho_{i}(s)=\frac{1}{\pi}Im\Big{\{}\Pi_{i}(s)\Big{\}}. \tag{16}\]
We do not show the very lengthy expressions for the spectral densities in this study. According to (6), the spectral densities can be decomposed into ordinary and intrinsic charm contributions as:
\[\rho_{i}(s)=\mathcal{N}^{2}\big{[}\rho_{i}^{(3q)}(s)+\frac{|\alpha|^{2}}{m_{P}^{ 6}}\rho_{i}^{(5q)}(s)\big{]}. \tag{17}\]
The last step is to match the coefficients of the corresponding Lorentz structures in (7), in both the QCD and physical sides, which gives us the sum rules for the mass and residue of the proton. Considering the ordinary \(|uud\rangle\) part only, after applying the Borel transformation and continuum subtraction, as well as using quark hadron duality assumption, one finds the following sum rules:
\[\lambda_{P}^{2}e^{\frac{-m_{P}^{2}}{M^{2}}} = \int_{s_{L}}^{s_{0}}ds\rho_{1}^{(3q)}(s)e^{\frac{-s}{M^{2}}},\] \[\lambda_{P}^{2}m_{P}e^{\frac{-m_{P}^{2}}{M^{2}}} = \int_{s_{L}}^{s_{0}}ds\rho_{2}^{(3q)}(s)e^{\frac{-s}{M^{2}}}, \tag{18}\]
where \(M^{2}\) and \(s_{0}\) are the auxiliary Borel and continuum threshold parameters for the ordinary \(|uud\rangle\) part, respectively, and \(s_{L}=(2m_{u}+m_{d})^{2}\). The mass can then be extracted from either of the equations (18) (i.e. from either Lorentz structure, \(\not{q}\) or \(U\)) by differentiating the corresponding equation with respect to \(z=-\frac{1}{M^{2}}\) and dividing the result by the equation itself, which gives:
\[\left[m_{P}^{(3q)}\right]^{2}=\frac{\int_{s_{L}}^{s_{0}}dss\rho_{i}^{(3q)}(s)e^{ \frac{-s}{M^{2}}}}{\int_{s_{L}}^{s_{0}}ds\rho_{i}^{(3q)}(s)e^{\frac{-s}{M^{2}}}}. \tag{19}\]
A similar equation holds for \(m_{P}^{(5q)}\), the mass of the intrinsic charm contribution \(|uudc\bar{c}\rangle\), with the corresponding Borel parameter \(M^{\prime 2}\) and continuum threshold \(s^{\prime}_{0}\).
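The mechanics of the ratio in Eq. (19) can be illustrated with the toy calculation below. The spectral density used here is an arbitrary smooth stand-in (not the lengthy \(\rho^{(3q)}\) of this work), so the numbers it produces have no physical meaning; the point is only how the Borel-weighted moments are combined.

```python
import numpy as np
from scipy.integrate import quad

def rho_toy(s):
    """Illustrative stand-in spectral density; the real rho^{(3q)} is far lengthier."""
    return s**2

def mass_squared(M2, s_lower, s0):
    """Eq. (19): ratio of the first to the zeroth Borel-weighted moment of rho."""
    num, _ = quad(lambda s: s * rho_toy(s) * np.exp(-s / M2), s_lower, s0)
    den, _ = quad(lambda s: rho_toy(s) * np.exp(-s / M2), s_lower, s0)
    return num / den

# Scan part of the Borel window quoted later in the text to inspect the M^2 (in)sensitivity.
for M2 in (1.15, 1.30, 1.50):                   # GeV^2
    m2 = mass_squared(M2, s_lower=0.0, s0=2.2)  # s_L ~ 0 for light quarks, s0 in GeV^2
    print(f"M^2 = {M2:.2f} GeV^2  ->  m = {np.sqrt(m2):.3f} GeV")
```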
To add the intrinsic charm contribution of the proton, following the above recipe, one has to differentiate the sum of \(3q\)- and \(5q\)-spectral contributions with respect to \(z\). Since the Borel parameter for the \(5q\)-part is \(z^{\prime}=-\frac{1}{M^{\prime 2}}\), we need to apply the chain rule
\[\frac{\partial}{\partial z}=\frac{\partial z^{\prime}}{\partial z}\frac{ \partial}{\partial z^{\prime}}\;. \tag{20}\]
To this end, using
\[\frac{z}{z^{\prime}}=\frac{M^{\prime 2}}{M^{2}}\simeq\left(\frac{m^{(5q)}}{m ^{(3q)}}\right)^{2}=b\;, \tag{21}\]
one finds the final sum rule for the proton mass as follows:
\[m_{P}^{2}= \tag{22}\] \[\frac{\int_{s_{L}}^{s_{0}}dss\rho_{i}^{(3q)}(s)e^{\frac{-s}{M^{2}} }+\frac{|\alpha|^{2}}{m_{P}^{2}}\int_{s_{L}}^{s^{\prime}_{0}}ds(\frac{s}{b}) \rho_{i}^{(5q)}(s)e^{\frac{-s}{M^{2}}}}{\int_{s_{L}}^{s_{0}}ds\rho_{i}^{(3q)}(s )e^{\frac{-s}{M^{2}}}}\;.\]
At low energies, \(s_{L}\leq s\leq s_{0}\), the charm component behaves as sea quarks and the probability of being observed is considerably low. But at high energies, \(s_{L}\ll s\leq s^{\prime}_{0}\), the charm component emerges as valence-like component and can be detected with an observable probability.
_Numerical Analysis_ The input parameters in the final sum rule (22) include quark masses and different quark, gluon and mixed condensates. The condensates are universal non-perturbative parameters, which are determined according to the analyses of many hadronic processes. The values of these parameters are listed as follows [38; 39; 40]:
\[m_{u}=2.2^{+0.5}_{-0.4}\;{\rm MeV},\;m_{d}=4.7^{+0.5}_{-0.3}\; {\rm MeV},\] \[m_{c}=1.27\pm 0.02\;{\rm GeV},\] \[\langle\overline{q}q\rangle=-(0.24\pm 0.01)^{3}\;{\rm GeV}^{3},\] \[\langle\overline{q}g_{s}\sigma Gq\rangle=m_{0}^{2}\langle\overline {q}q\rangle, \tag{23}\] \[m_{0}^{2}=(0.8\pm 0.2)\;{\rm GeV}^{2},\] \[\langle\alpha_{s}G^{2}/\pi\rangle=(0.012\pm 0.004)\;{\rm GeV}^{4}.\]
Moreover, there are four auxiliary parameters (\(M^{2}\), \(s_{0}\), \(s^{\prime}_{0}\) and \(\beta\)) that enter the calculations, and their working regions have to be determined. We choose their working windows such that the physical quantities are, as far as possible, independent of, or only weakly dependent on, these parameters. The residual dependencies appear as uncertainties in the final results.
The interval for \(\beta\) can be evaluated as follows. Defining a new parameter \(\theta\) as \(\beta=\tan\theta\), and plotting \(m^{(3q)}\) and the OPE of the sum rule for the \(3q\)-part as a function of \(\cos\theta\), one can find the interval for \(\cos\theta\) where the variations of physical quantities are relatively small. As an example, we depict the variation of \(m^{(3q)}\) with respect to \(\cos\theta\) in Fig. (1) at average values of other auxiliary parameters.
From our analyses we find the working region \(-0.49\leq\cos\theta\leq-0.36\) which corresponds to \(-2.60\leq\beta\leq-1.80\).
To determine the working regions of \(M^{2}\), \(s_{0}\), and \(s^{\prime}_{0}\), we demand the dominance of the Pole Contribution (PC) as well as convergence of the OPE. These requirements can be quantified by introducing
\[{\rm PC}^{(j)}=\frac{\Pi^{(j)}(M_{j}^{2},s_{0}^{j})}{\Pi^{(j)}(M_{j}^{2},\infty )}, \tag{24}\]
and
\[R^{(j)}(M^{2})=\frac{\Pi^{\rm Dim-n(j)}(M_{j}^{2},s_{0}^{j})}{\Pi^{(j)}(M_{j}^ {2},s_{0}^{j})}, \tag{25}\]
where \(j\) stands for either the \(3q\)- or the \(5q\)-component and \(\Pi^{\rm Dim-n(j)}(M_{j}^{2},s_{0}^{j})\) is the sum of the contributions of the three highest-dimensional operators entering the OPE expansion of the CF, which are of dimensions 13, 14, 15 for \(j=3q\) and 17, 18, 19 for \(j=5q\). For the non-perturbative contributions, we follow the principles that _the perturbative part exceeds the total non-perturbative contribution and the higher the dimension of the operator, the lower its contribution to the OPE expansion_. Quantitatively, we require that the PC obeys \({\rm PC}^{(3q)}\geq 0.5\), which is used to determine \(M_{(\rm max)}^{2}\). The lower limit, \(M_{(\rm min)}^{2}\), can be found by imposing the condition \(R^{(3q)}(M_{(\rm min)}^{2})\leq 0.05\) on the sum of the three highest-dimensional contributions. The above recipes lead to the following windows for the auxiliary parameters:
\[1.15\leq M^{2}\leq 1.50\;({\rm GeV}^{2}),\;\;2.15\leq s_{0}\leq 2.30\;({ \rm GeV}^{2}),\] \[24\leq s^{\prime}_{0}\leq 26\;({\rm GeV}^{2}).\]
Figure 1: \(m^{(3q)}\) (\(MeV\)) as a function of \(\cos\theta\) at average values of other auxiliary parameters in their working windows.
To check the mutual dependence of the auxiliary parameters and the physical quantities, we perform a Python analysis and plot the resulting heatmap in Fig. (2). Skipping the last column and row of the figure, it is evident that there is a correlation between \(M^{2}\) and \(\beta\), and also between \(M^{2}\) and \(s_{0}\) (which are related to the dominant \(3q\) component), as is expected from consistency considerations [41]. The other auxiliary parameters are independent of each other. From the last column and row, we see how \(x^{c\bar{c}}\) depends on the auxiliary parameters.
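A schematic version of this kind of correlation analysis is sketched below. The sampled values are random placeholders (they are not the scan actually performed in this work); the snippet only shows how such a parameter table can be turned into a correlation heatmap.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder samples drawn uniformly from the working windows quoted above;
# in the real analysis every row would carry the sum-rule output for that parameter point.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "M2":   rng.uniform(1.15, 1.50, 200),
    "s0":   rng.uniform(2.15, 2.30, 200),
    "s0p":  rng.uniform(24.0, 26.0, 200),
    "beta": rng.uniform(-2.60, -1.80, 200),
    "x_ccbar": rng.normal(1.36, 0.30, 200),  # placeholder response column
})

corr = df.corr()  # pairwise correlations between all columns
plt.imshow(corr, vmin=-1, vmax=1, cmap="coolwarm")
plt.xticks(range(len(corr)), corr.columns, rotation=45)
plt.yticks(range(len(corr)), corr.columns)
plt.colorbar(label="correlation")
plt.tight_layout()
plt.savefig("aux_parameter_heatmap.png")
```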
We now proceed to answer the main question of this work: what is the percentage of the charm content of the proton? To this end, considering the working windows of all auxiliary parameters and the values of the other inputs, we equate the mass of the proton in (22) to the world average of the experimental proton mass reported by the PDG [38]. This leads to \(x^{c\bar{c}}=(1.36\pm 0.67)\%\), which is, within the uncertainties, in accord with the estimate of the NNPDF collaboration [29]. The error in this result is due to the uncertainties in the determination of the windows for the auxiliary parameters as well as the errors of the other inputs. We should note that, in our method, \(x^{c\bar{c}}\) is the weight of the \(5q\) component inside the proton and not exactly the \(c\bar{c}\) contribution. To calculate the latter, one should develop a method to separate the \(c\bar{c}\) contribution from the \(5q\) component. The \(c\bar{c}\) contribution would be the expectation value \(\langle P|c\bar{c}|P\rangle\), which would correspond to the integrated probability of the \(c\bar{c}\) wave function inside the proton in a non-relativistic quark model.
_Conclusions_ In this work, for the first time, we used the method of QCD sum rules to determine the contribution of the \(|uudc\bar{c}\rangle\) component in the structure of the proton. Using the two-point correlation function, we found \(x^{c\bar{c}}=(1.36\pm 0.67)\%\) for the contribution of this component in the proton. This result, within the presented uncertainties, matches the prediction of the NNPDF collaboration, and its upper limit is compatible with the one obtained from the global analysis of the CTEQ collaboration. Our result motivates further dedicated studies of intrinsic charm at future experiments like AFTER@LHC [33; 34; 35] and the fixed-target programs of LHCb [42]. If confirmed experimentally, it will provide further insight into the structure and properties of the proton. Moreover, the investigation of atmospheric neutrino measurements presents a viable opportunity to explore the presence of intrinsic charm. As demonstrated in [43], such measurements have revealed a noteworthy contribution of intrinsic charm to the atmospheric neutrino flux.
_Acknowledgements_ K. Azizi and S. Rostami are thankful to the Iran Science Elites Federation (Saramadan) for the financial support provided under the grant number ISEF/M/99171. S. Rostami is grateful to the CERN-TH division for their warm hospitality.
|
2307.00657 | Rainbow Greedy Matching Algorithms | We consider the problem of finding a large rainbow matching in a random graph
with randomly colored edges. In particular we analyze the performance of two
greedy algorithms for this problem. The algorithms we study are colored
versions of algorithms that were previously used to find large matchings in
random graphs (i.e. the color-free version of our present problem). | Patrick Bennett, Colin Cooper, Alan Frieze | 2023-07-02T20:21:52Z | http://arxiv.org/abs/2307.00657v1 | # Rainbow Greedy Matching Algorithms
###### Abstract
We consider the problem of finding a large rainbow matching in a random graph with randomly colored edges. In particular we analyze the performance of two greedy algorithms for this problem. The algorithms we study are colored versions of algorithms that were previously used to find large matchings in random graphs (i.e. the color-free version of our present problem).
## 1 Introduction
In this short note, we discuss greedy algorithms for finding rainbow matchings in sparse random graphs. Thus we start with the random graph \(G_{n,m},m=cn/2\) where \(c>0\) is a constant and then color each edge uniformly at random (u.a.r.) from a set of colors \(Q=[q]\). A set \(S\) of edges is said to be _rainbow colored_ if every edge in \(S\) has a different color. The decision problem for whether a colored graph has a rainbow matching of size \(k\) is NP-complete [2]. Here we discuss the efficacy of simple greedy algorithms for finding large rainbow matchings.
The color-free version of this problem has been studied in several previous papers. Dyer, Frieze and Pittel [5] studied two greedy algorithms for finding large matchings. The first algorithm (Greedy Matching) repeatedly chooses an edge \(\{x,y\}\) u.a.r., adds it to the current matching and deletes the vertices \(x,y\). This continues until the remaining graph has no edges. They showed that with high probability (i.e. with probability tending to \(1\) as \(n\) grows, henceforth abbreviated w.h.p.) this algorithm produces a matching of size asymptotic to \(\frac{n}{2}\left(1-\frac{1}{c+1}\right)\). They also considered a variation (Modified Greedy Matching) where the algorithm first chooses a vertex uniformly at random, then chooses a uniform random incident edge, and then updates the matching. This algorithm does slightly better than the first: it produces a matching of size asymptotic to \(\frac{n}{2}\left(1-\frac{\log(2-e^{-c})}{2c}\right)\). Further improvements were obtained by Karp and Sipser [7], using KSGreedy, a modification of Greedy Matching. KSGreedy chooses a random vertex of degree one, if there is one, and adds its incident edge to the matching; otherwise it chooses a random edge and adds it to the matching. Karp and Sipser studied this algorithm in \(G_{n,p},p=c/n\) and showed that it produced a matching of asymptotically maximal size. The Karp-Sipser algorithm was further studied for \(G_{n,c/n}\) by Aronson, Frieze and Pittel [1], who showed that the algorithm found a matching within \(O(n^{1/5}\log^{O(1)}n)\) of the maximum.
We will prove the following theorems, which we prefix with formal statements of the algorithms.
Greedy Algorithm. Formally the algorithm proceeds as follows:
**GREEDY**
**begin**
\(M\leftarrow\emptyset\);
**while**\(E(G)\neq\emptyset\)**do**
**begin**
Choose \(e=\{u,v\}\in E\) u.a.r.; \(M\gets M\cup\{e\}\);
\(V\gets V\setminus\{u,v\}\) ; \(G\gets G[V]\)
\(F\leftarrow\{f\in E:c(f)=c(e)\}\) ; \(E\gets E\setminus F\);
**end**;
Output \(M\)
**end**
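As an illustration only (this sketch is ours, not part of the paper), the following Python snippet simulates GREEDY on \(G_{n,m}\) with \(m=cn/2\) randomly colored edges; the function name and parameters are our own choices. Processing the edges in a uniformly random order and skipping edges that have already been deleted is equivalent to choosing a uniform random edge of the remaining graph at every step.

```python
import random

def greedy_rainbow_matching(n, c, q, seed=0):
    """Simulate GREEDY on G_{n,m} with m = c*n/2 edges, each edge colored u.a.r. from [q]."""
    rng = random.Random(seed)
    m = int(c * n / 2)
    edges, seen = [], set()
    while len(edges) < m:                                  # sample m distinct random edges
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and (min(u, v), max(u, v)) not in seen:
            seen.add((min(u, v), max(u, v)))
            edges.append((u, v, rng.randrange(q)))         # third entry is the color
    rng.shuffle(edges)          # a random order of the edges = repeated u.a.r. edge choices
    matched, used_colors, size = set(), set(), 0
    for u, v, col in edges:
        if u in matched or v in matched or col in used_colors:
            continue                                       # this edge was already deleted
        size += 1
        matched.update((u, v))
        used_colors.add(col)
    return size

if __name__ == "__main__":
    n, c = 200_000, 1.0
    print(greedy_rainbow_matching(n, c, q=n // 2) / n)
    # Theorem 1(a) predicts roughly (1/2)*(1 - 1/(2*c + 1)**0.5) ~ 0.211 for c = 1, kappa = 1/2
```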
**Theorem 1**.: _Suppose that \(q=\kappa n\). Let \(\mu\) denote the size of the matching produced by GREEDY, then following hold w.h.p._
1. _If_ \(\kappa=1/2\) _then_ \(\mu\sim\frac{1}{2}\left(1-\frac{1}{(2c+1)^{1/2}}\right)n\)_._
2. _If_ \(\kappa=\frac{1}{2}(1+\varepsilon)\)_, where_ \(|\varepsilon|>0\) _then_ \(\mu\sim\frac{1}{2}\left(1-\frac{1+O(\varepsilon)}{(2c+1)^{1/2}}\right)n\)_._
3. _If_ \(\kappa<1/2c\) _and_ \(c>5\) _then_ \(\kappa(1-e^{-c(1/2\kappa-2)})n<\mu<\kappa(1-e^{-c/2\kappa})n\)_._
4. _If_ \(\kappa\gg 1\) _then_ \(\mu\sim\frac{1}{2}\left(1-\frac{1}{c+1+\varepsilon_{\kappa}}\right)n\) _where_ \(\varepsilon_{\kappa}=\frac{\log(c+1)}{2\kappa-1}-\frac{c}{2\kappa}+O\left(\frac{c}{\kappa^{2}}\right)\)_._
Modified greedy algorithm. The modified greedy algorithm is formally described as
MODIFIED GREEDY
**begin**
\(M\leftarrow\emptyset\);
**while \(E(G)\neq\emptyset\) do begin**
Choose \(v\in V\) u.a.r.;
If \(N(v)=\emptyset\), \(V\gets V\setminus\{v\}\) ;
Else if \(N(v)\neq\emptyset\) choose \(u\in N(v)\) u.a.r.;
Let \(e=\{u,v\}\); \(M\gets M\cup\{e\}\); \(V\gets V\setminus\{u,v\}\) ;
\(F\leftarrow\{f\in E:c(f)=c(e)\}\) ; \(E\gets E\setminus F\);
\(G\gets G[V]\);
**end**;
Output \(M\)
**end**
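Again purely as an illustration (our own sketch, with our own naming), here is a self-contained simulation of MODIFIED GREEDY; it keeps explicit adjacency and color classes so that the vertex-first choice and the color deletions mirror the pseudocode above. The loop also clears any remaining isolated vertices, which does not affect the matching size.

```python
import random
from collections import defaultdict

def modified_greedy_rainbow_matching(n, c, q, seed=0):
    """Simulate MODIFIED GREEDY: pick a uniform remaining vertex, then a uniform neighbor."""
    rng = random.Random(seed)
    m = int(c * n / 2)
    adj = defaultdict(set)        # vertex -> ids of surviving incident edges
    by_color = defaultdict(set)   # color  -> ids of surviving edges of that color
    edge, seen = {}, set()
    while len(edge) < m:                                   # sample m distinct colored edges
        u, v = rng.randrange(n), rng.randrange(n)
        if u == v or (min(u, v), max(u, v)) in seen:
            continue
        seen.add((min(u, v), max(u, v)))
        e = len(edge)
        edge[e] = (u, v, rng.randrange(q))
        adj[u].add(e); adj[v].add(e); by_color[edge[e][2]].add(e)

    def delete(e):
        a, b, col = edge[e]
        adj[a].discard(e); adj[b].discard(e); by_color[col].discard(e)

    alive, pos = list(range(n)), {v: v for v in range(n)}
    def remove_vertex(v):                                  # O(1) swap-and-pop removal
        i, last = pos[v], alive[-1]
        alive[i], pos[last] = last, i
        alive.pop(); del pos[v]

    size = 0
    while alive:
        v = alive[rng.randrange(len(alive))]               # uniform remaining vertex
        if not adj[v]:
            remove_vertex(v)                               # isolated vertex: just delete it
            continue
        e = rng.choice(tuple(adj[v]))                      # uniform surviving edge at v
        a, b, col = edge[e]
        u = b if a == v else a
        size += 1
        for f in set(adj[u]) | set(adj[v]) | set(by_color[col]):
            delete(f)                                      # remove incident and same-colored edges
        remove_vertex(v); remove_vertex(u)
    return size

if __name__ == "__main__":
    n, c = 100_000, 1.0
    print(modified_greedy_rainbow_matching(n, c, q=n // 2) / n)
    # compare with the prediction (1 - tau_0) of Theorem 2(a)
```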
**Theorem 2**.: _Suppose that \(q=\kappa n\). Let \(\mu\) be the size of the matching produced by MODIFIED GREEDY. Then the following hold w.h.p._
1. _Let_ \(N(\tau)\) _be the solution of_ \[N(\tau)=1-2\tau+\int_{0}^{\tau}\exp\left\{-\frac{c}{\kappa}N(\sigma)(N(\sigma) +\sigma+\kappa-1)\right\}\;d\sigma,\] _and let_ \(\tau_{0}\in[0,1]\) _be the solution to_ \(N(\tau)=0\)_. Then_ \(\mu\sim(1-\tau_{0})n\)_._
2. _If_ \(\kappa\geq 1/2\) _then_ \[\mu\leq\frac{c-1+e^{-c}}{2c-1+e^{-c}}\;n.\]
Numerical calculations suggest that the matching obtained by MODIFIED GREEDY is significantly larger than by GREEDY. In the following we have fixed \(\kappa=1/2\) so that we can use Theorem 1(a).
| \(c\) | GREEDY | MODIFIED GREEDY |
| --- | --- | --- |
| 0.5 | 0.092 | 0.148 |
| 1.0 | 0.146 | 0.216 |
| 1.5 | 0.184 | 0.257 |
| 2.0 | 0.211 | 0.285 |
| 2.5 | 0.233 | 0.316 |
| 3.0 | 0.250 | 0.322 |
| 3.5 | 0.264 | 0.334 |
| 4.0 | 0.276 | 0.345 |
| 4.5 | 0.287 | 0.355 |
| 5.0 | 0.296 | 0.361 |
**Conjecture:** Given the above table, we conjecture that if \(\mu_{G}\) and \(\mu_{MG}\) are the sizes of the matchings produced by GREEDY and MODIFIED GREEDY respectively, then w.h.p. \(\mu_{G}\leq\mu_{MG}\).
## 2 Greedy
Let \(G(t)\) denote the unmatched graph remaining after \(t\) iterations, let \(\nu(t)\) be the number of vertices, and let \(\mu(t)\) denote the number of edges in \(G(t)\). At each step \(t\), we choose a random edge \(\{x,y\}\), add it to the matching \(M(t)\), delete the vertices \(x\) and \(y\) from \(V(t)\), and delete all edges of the same color as \(\{x,y\}\). Let \(d_{t}(\cdot)\) denote degree in \(G_{t}\). For the sake of our analysis we reveal the random graph and colors as we run the algorithm. More specifically, at each step \(t\) we reveal our matching edge \(e_{t}\) by choosing a random pair of distinct vertices. Then for each of the other \(\mu(t)-1\) other edges \(e^{\prime}\) we reveal whether or not \(e^{\prime}\) shares an endpoint with \(e\). Any \(e^{\prime}\) meeting \(e\) is deleted. Conditional on the matching edge \(e\) and the (say) \(k\) deleted edges, the remaining edges comprise a uniform random set of \(\mu(t)-1-k\) edges on the remaining set of \(\nu(t)-2\) vertices.
A priori we do not know the degrees of any of the vertices. We just know that at step \(t\) we have \(\nu(t)\) vertices and a uniform random set of \(\mu(t)\) edges. We reveal the location of one of these edges, which is equally likely to have any two distinct endpoints among the \(\nu(t)\) vertices. We know there is an edge there just because we said we were revealing the location of an edge. Of course, after we reveal the location of that edge, we know that its two endpoints must have degree at least 1. But we only know that because we revealed the edge.
We reveal the color of \(e_{t}\) by choosing a random color from among the unused colors. Finally we reveal any other edges of that same color and delete them. Thus we have
\[\mathbb{E}[\mu(t+1)\mid\mu(t)]=\left(\mu(t)-\mathbb{E}(d_{t}(x)+d_{t}(y)-1\mid \mu(t))\right)\left(1-\frac{1}{q-t}\right). \tag{1}\]
Note that the number of vertices at step \(t\) is \(\nu(t)=n-2t\). We will assume (justified later) that
\[\frac{t}{n}\leq\min\left\{\frac{1}{2},\kappa\right\}-\Omega(1)\]
so that \(\nu(t),q-t\geq\Omega(n)\). Then
\[\mathbb{E}(d_{t}(x)\mid\mu(t))=\mathbb{E}(d_{t}(y)\mid\mu(t)) =1+(\nu(t)-2)\cdot\frac{\mu(t)-1}{\binom{\nu(t)}{2}-1}\] \[=1+\frac{2(\mu(t)-1)}{\nu(t)+1}\] \[=1+\frac{2\mu(t)}{\nu(t)}+O\left(n^{-1}\right).\]
So, picking up from (1) we have
\[\mathbb{E}[\mu(t+1)\mid\mu(t)] =\left(\mu(t)-1-\frac{4\mu(t)}{\nu(t)}+O\left(n^{-1}\right)\right) \left(1-\frac{1}{q-t}\right)\] \[=\mu(t)-1-\frac{4\mu(t)}{\nu(t)}-\frac{\mu(t)}{q-t}+O\left(n^{-1}\right) \tag{2}\]
This leads us to consider the following differential equation (with \(t=\tau n\) and \(M(\tau)=\mu(t)/n\)), whose solution w.h.p. tracks the process.
\[\frac{dM}{d\tau}=-1-\frac{4M(\tau)}{1-2\tau}-\frac{M(\tau)}{\kappa-\tau}, \quad M(0)=\frac{c}{2}\text{ where }\kappa=\frac{q}{n}. \tag{3}\]
We let \(\tau_{0}\) be the smallest positive root of \(M(\tau)=0\) where \(M\) is the solution to the above initial value problem. We will show that w.h.p. the process ends with a rainbow matching with \(\tau_{0}n+o(n)\) edges. Unsurprisingly, we will not come close to a perfect matching, nor will we come close to using every color. In particular we will see in Section 2.2 that for fixed \(c,k\) we have
\[\tau_{0}\leq\min\left\{\frac{1}{2},\kappa\right\}-\Omega(1).\]
To do this we will apply the following theorem of Warnke [8].
**Theorem 3** (Warnke Theorem 2 and Lemma 11).: _Let \(a,n>1\) be integers. Let \(\mathcal{D}\subseteq\mathbb{R}^{a+1}\) be a connected and bounded open set. Let \((F_{k})_{1\leq k\leq a}\) be functions with \(F_{k}:\mathcal{D}\rightarrow\mathbb{R}\). Let \(\mathcal{F}_{0}\subseteq\mathcal{F}_{1}\subseteq\ldots\) be \(\sigma\)-fields. Suppose that the random variables \(((Y_{k}(t))_{1\leq k\leq a}\) are nonnegative and \(\mathcal{F}_{t}\)-measurable for \(t>0\). Furthermore, assume that, for all \(t>0\) and \(1\leq k\leq a\), the following conditions hold whenever \((t/n,Y_{1}(t)/n,...,Y_{a}(t)/n)\in\mathcal{D}\):_
1. \(|\mathbb{E}[Y_{k}(t+1)-Y_{k}(t)|\mathcal{F}_{t}]-F_{k}(t/n,Y_{1}(t)/n,...,Y_{ a}(t)/n)|\leq\delta\)_, where the function_ \(F_{k}\) _is_ \(L\)_-Lipschitz-continuous on_ \(\mathcal{D}\) _(the 'Trend hypothesis' and 'Lipschitz hypothesis'),_
2. \(|Y_{k}(t+1)-Y_{k}(t)|\leq\theta\) _and_ \(\mathbb{P}(|Y_{k}(t+1)-Y_{k}(t)|>\phi\mid\mathcal{F}_{t})\leq\gamma\) _(the 'Boundedness hypothesis'), and that the following condition holds initially:_
3. \(\max_{1\leq k\leq a}|Y_{k}(0)-\hat{y}_{k}n|\leq\varepsilon n\) _for some_ \((0,\hat{y}_{1},\ldots,\hat{y}_{a})\in\mathcal{D}\) _(the 'Initial condition')._
_Suppose \(R\in[1,\infty)\) and \(T\in(0,\infty)\) satisfy \(t\leq T\) and \(|F_{k}(z)|\leq R\) for all \(1\leq k\leq a\) and \(z=(\tau,y_{1},\ldots,y_{a})\in\mathcal{D}\). Then for \(\varepsilon>(\delta+\gamma\theta)\min\{T,L^{-1}\}+R/n\), with probability at least_
\[1-2a\exp\left\{-\frac{n\varepsilon^{2}}{8T\phi^{2}}\right\}-aTn\gamma \tag{4}\]
_we have_
\[\max_{0\leq t\leq\sigma n}\max_{1\leq k\leq a}|Y_{k}(t)-y_{k}(t/n)n|<3\exp\{LT \}\varepsilon n\]
_where \((y_{k}(\tau))_{1\leq k\leq a}\) is the unique solution to the system of differential equations \(y_{k}^{\prime}(\tau)=F_{k}(\tau,y_{1}(\tau),...,y_{a}(\tau))\) with \(y_{k}(0)=\hat{y}_{k}\) for \(1\leq k\leq a\), and \(\sigma=\sigma(\hat{y}_{1},...\hat{y}_{a})\in[0,T]\) is any choice of \(\sigma>0\) with the property that \((\tau,y_{1}(\tau),...y_{a}(\tau))\) has \(\mbox{\it L}_{\infty}\)-distance at least \(3\exp\{LT\}\varepsilon\) from the boundary of \(\mathcal{D}\) for all \(\tau\in[0,\sigma)\)._
The above theorem is a version of Theorem 5.1 of Wormald [9]. We use Warnke's version here because of condition (ii). The probability bound given by Wormald's theorem is not good enough for us if our variables could see a large one-step change. In particular, for our process our matching edge could have an endpoint with linear degree, causing the loss of a linear number of edges in a single step. However, since the probability of that event is so small, Warnke's version can handle it. Of course, Wormald [9] discusses similar situations and describes how to handle them, but for us it is more convenient to use Warnke's version since it allows us to apply a single theorem as a black box.
We apply Theorem 3 with \(a=1\) to the random variable \(Y_{1}(t)=\mu(t)\). We let
\[\mathcal{D}=\left\{(\tau,M):0\leq\tau\leq\tau_{0},\;0\leq M\leq c\right\}.\]
Now we make sure to satisfy condition (i). By (2) we can let \(\delta=O\left(n^{-1}\right)\). From (3) we have
\[F_{1}(\tau,M)=-1-\frac{4M}{1-2\tau}-\frac{M}{\kappa-\tau}\]
which is \(L\)-Lipschitz on \(\mathcal{D}\) for \(L=O(1)\) (here we use the fact that the denominators \(1-2\tau,\kappa-\tau\) are bounded away from \(0\)). We now move on to condition (ii). We have to take \(\theta=O(n)\) since it is possible (though very unlikely) to have a vertex of linear degree. We let \(\phi=2n^{0.1}\), and to find a suitable \(\gamma\) we bound
\[\mathbb{P}(|\mu(t+1)-\mu(t)|>2n^{0.1}\mid\mu(t)) \leq 2\frac{\binom{\nu(t)-1}{n^{0.1}}\binom{\binom{\nu(t)}{2}-n^{0.1}}{\mu(t)-n^{0.1}}}{\binom{\binom{\nu(t)}{2}}{\mu(t)}}\] \[=2\frac{\binom{\nu(t)-1}{n^{0.1}}(\mu(t))_{n^{0.1}}}{\left(\binom {\nu(t)}{2}\right)_{n^{0.1}}}\] \[\leq 2\frac{\binom{\nu(t)e}{n^{0.1}}\binom{n^{0.1}}{n^{0.1}} \left(\mu(t)\right)^{n^{0.1}}}{\left(\binom{\nu(t)}{2}-n^{0.1}\right)^{n^{0.1 }}}\] \[=2\left[O\left(\frac{\mu(t)}{n^{0.1}\nu(t)}\right)\right]^{n^{0.1}}\] \[=\exp\{-\Omega(n^{0.1})\}=\gamma. \tag{5}\]
Now for condition (iii), any positive \(\varepsilon\) will do (we will have to choose \(\varepsilon\) more carefully later to satisfy future conditions). We can very comfortably choose \(T=1\), and \(R=O(1).\) Now we choose
\[\varepsilon=n^{-0.1}>(\delta+\gamma\theta)\min\{T,L^{-1}\}+R/n\]
and so the probability bound in (4) goes to 1. Thus with high probability we have
\[\mu(t)=M(\tau)n+O(n^{0.9}) \tag{6}\]
uniformly for \(0\leq t\leq\tau_{0}n+O\left(n^{0.9}\right)\), where \(M(\tau)\) is the solution to (3) and \(\tau=t/n\). So w.h.p. the Greedy algorithm will produce a rainbow matching of size \(\tau_{0}n+O(n^{0.9})\).
### Solution to the differential equation
When \(q=n/2\) (i.e. \(\kappa=1/2\)) the solution to (3) is
\[M(\tau)=\frac{(2c+1)(1-2\tau)^{3}-(1-2\tau)}{4}.\]
The smallest root of the above is
\[\tau_{0}=\frac{1}{2}-\frac{1}{2(2c+1)^{1/2}}. \tag{7}\]
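As a quick numerical cross-check (ours, assuming SciPy is available), one can integrate (3) with an event at \(M=0\) and compare the resulting root with the closed form (7):

```python
import numpy as np
from scipy.integrate import solve_ivp

def tau0_numeric(c, kappa):
    """First zero of M(tau) for the GREEDY ODE (3), located via an integration event."""
    rhs = lambda t, M: [-1.0 - 4.0 * M[0] / (1.0 - 2.0 * t) - M[0] / (kappa - t)]
    hit_zero = lambda t, M: M[0]
    hit_zero.terminal, hit_zero.direction = True, -1
    sol = solve_ivp(rhs, (0.0, min(0.5, kappa) - 1e-9), [c / 2.0],
                    events=hit_zero, rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0]

c = 2.0
print(tau0_numeric(c, 0.5))               # numerical root of M(tau) = 0
print(0.5 - 0.5 / np.sqrt(2 * c + 1))     # closed form (7), approximately 0.2764
```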
When \(\kappa\neq 1/2\) we write (3) as
\[\frac{dM}{d\tau}+M(\tau)\left(\frac{4}{1-2\tau}+\frac{1}{\kappa-\tau}\right)=-1.\]
Let \(I=-2\log(1-2\tau)-\log(\kappa-\tau)\), then
\[M(\tau)=e^{-I}\left(B-\int e^{I}d\tau\right),\]
where \(B\) solves \(M(0)=c/2\). The solution for \(\kappa=1/2\) is given above.
For \(\kappa\neq 1/2\), let \(A=1/(2\kappa-1)^{2}\); then
\[\int e^{I}d\tau= \int\frac{1}{(\kappa-\tau)(1-2\tau)^{2}}\] \[= \int A\left(\frac{1}{\kappa-\tau}-\frac{2}{1-2\tau}+\frac{4 \kappa-2}{(1-2\tau)^{2}}\right)\] \[= A\left(-\log(\kappa-\tau)+\log(1-2\tau)+\frac{2\kappa-1}{1-2 \tau}\right).\]
Thus \(B=c/2\kappa+A((2\kappa-1)-\log\kappa)\) and for \(\kappa\neq 1/2\)
\[M(\tau)=\frac{(\kappa-\tau)(1-2\tau)^{2}}{(2\kappa-1)^{2}}\left(\frac{c(2 \kappa-1)^{2}}{2\kappa}+\log\frac{\kappa-\tau}{\kappa(1-2\tau)}-\frac{2(2 \kappa-1)\tau}{1-2\tau}\right). \tag{8}\]
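The formula (8) can also be spot-checked with SymPy (our own check, assuming SymPy is installed): differentiate (8), form the residual of (3), and evaluate it at a few admissible parameter values, together with the initial condition \(M(0)=c/2\).

```python
import sympy as sp

tau, kap, c = sp.symbols('tau kappa c', positive=True)
M = (kap - tau) * (1 - 2 * tau) ** 2 / (2 * kap - 1) ** 2 * (
        c * (2 * kap - 1) ** 2 / (2 * kap)
        + sp.log((kap - tau) / (kap * (1 - 2 * tau)))
        - 2 * (2 * kap - 1) * tau / (1 - 2 * tau))

# residual of the ODE (3); it should vanish identically for kappa != 1/2
residual = sp.diff(M, tau) + 1 + 4 * M / (1 - 2 * tau) + M / (kap - tau)

for c0, k0, t0 in [(1.0, 0.7, 0.10), (3.0, 0.3, 0.05), (0.5, 2.0, 0.20)]:
    print(float(residual.subs({c: c0, kap: k0, tau: t0})))  # ~0 up to rounding error
print(sp.simplify(M.subs(tau, 0) - c / 2))                   # initial condition: prints 0
```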
### Asymptotics for \(\tau_{0}\) from \(M(\tau)=0\)
From (8) let
\[f_{\kappa}(\tau)=\left(\frac{c(2\kappa-1)^{2}}{2\kappa}+\log\frac{\kappa-\tau}{ \kappa(1-2\tau)}-\frac{2(2\kappa-1)\tau}{1-2\tau}\right). \tag{9}\]
We consider the solution \(\tau_{0}\) to \(f_{\kappa}(\tau)=0\) in three cases.
**Case 1**: As \(\kappa\to 1/2\), in Lemma 4 we show that \(\tau_{0}\) tends to the value in (7), and give the rate of convergence.
**Case 2**: For \(\kappa\) large, \(M(\tau)=0\) in (8) satisfies \(c=\frac{2\tau}{1-2\tau}+O(1/\kappa)\), which implies that w.h.p. the Greedy algorithm will produce a rainbow matching of size \(\sim\left(\frac{1}{2}-\frac{1}{2(c+1)}+O(1/\kappa)\right)n\). In Lemma 5 we give a detailed asymptotic which also bounds \(\tau_{0}\) for finite \(\kappa\geq 1\). Thus in the limit as \(\kappa\to\infty\) we obtain a matching of size \(nc/(2(c+1))\), the value obtained without the coloring constraint.
**Case 3**: For \(\kappa\ll 1/2\), and fixed \(c\) the solution to \(M(\tau)=0\) in (8) satisfies \(\tau\sim\kappa\left(1-e^{-c/2\kappa}\right)\). Lemma 6 gives more detail.
**Lemma 4**.: _If \(\kappa=\frac{1}{2}(1+\varepsilon)\), where \(|\varepsilon|>0\), the solution to \(M(\tau)=0\) in (8) is \(\tau_{0}=\frac{1}{2}\left(1-\frac{1+O(\varepsilon)}{\sqrt{2c+1}}\right)\)._
Proof.: Put \(h=(2\kappa-1)/(1-2\tau)\) so that \(\tau=\frac{1}{2}(1-(2\kappa-1)/h)\). From (9),
\[f_{\kappa}(h)=\alpha-h+\log(1+h),\qquad\mbox{where}\qquad\alpha=c\frac{(2 \kappa-1)^{2}}{2\kappa}+\log\frac{1}{2\kappa}+(2\kappa-1).\]
**Case 1a: \(2\kappa<1\).**
We have \(\tau\leq\kappa\) and so \(2\kappa\geq(1-(2\kappa-1)/h)\) or \(2\kappa-1\geq-(2\kappa-1)/h\). If \(2\kappa-1<0\) then this implies that \(0>h>-1\).
Then
\[h-\frac{h^{2}}{2}+\frac{h^{3}}{2}\leq\log(1+h)\leq h-\frac{h^{2}}{2}.\]
The lower bound term \(h^{3}/2\) holds for \(|h|\leq 1/3\), by comparison with a geometric series. Thus
\[\alpha-\frac{|h|^{2}}{2}-\frac{|h|^{3}}{2}\leq 0\leq\alpha-\frac{|h|^{2}}{2},\]
so that
\[|h|\leq\sqrt{2\alpha},\qquad\mbox{and}\qquad|h|\geq\sqrt{\frac{2\alpha}{1+|h|} }\geq\sqrt{\frac{2\alpha}{1+\sqrt{2\alpha}}}.\]
**Case 1b: \(2\kappa>1,0<h<1\).**
Then \(h>0\), and now
\[h-\frac{h^{2}}{2}\leq\log(1+h)\leq h-\frac{h^{2}}{2}+\frac{h^{3}}{2}.\]
Note that these inequalities hold for all \(h>0\).
So
\[\alpha-\frac{h^{2}}{2}\leq 0\leq\alpha-\frac{h^{2}}{2}+\frac{h^{3}}{2},\]
leading to
\[\sqrt{2\alpha}\leq h\leq\sqrt{\frac{2\alpha}{1-h}}\leq\sqrt{\frac{2\alpha}{1- \sqrt{2\alpha}}}. \tag{10}\]
Finally
\[\alpha =\varepsilon^{2}(c+\tfrac{1}{2})-\varepsilon^{3}(c+1/3)+O(c\varepsilon^{4}).\] \[h =(2\alpha)^{1/2}(1+O(\alpha^{1/2}))=\varepsilon\sqrt{2c+1}(1+O(\varepsilon)).\] \[\tau_{0} =\tfrac{1}{2}\left(1-\frac{\varepsilon}{h}\right)=\tfrac{1}{2}\left(1-\frac{1+O(\varepsilon)}{\sqrt{2c+1}}\right).\]
**Lemma 5**.: _Assume \(\kappa\geq 1\) and \(c\geq c^{*}=(e-1)\left(\frac{2\kappa}{2\kappa-1}\right)^{2}\). The solution to \(M(\tau)=0\) in (8) is_
\[\tau_{0}=\frac{1}{2}\left(1-\frac{1}{(c+1)-\frac{c}{2\kappa}+\frac{\log(c+1)} {2\kappa-1}+O\left(\frac{c}{\kappa^{2}}\right)}\right).\]
Proof.: Put \(z=(\kappa-\tau)/(\kappa(1-2\tau))\) so that \(\tau=\kappa(z-1)/(2\kappa z-1)\). Provided \(\kappa\geq 1/2\), \(z\geq 1\) and \(z\to 1/(1-2\tau)\) as \(\kappa\to\infty\). Then (9) becomes
\[f_{\kappa}(z)=c\frac{(2\kappa-1)^{2}}{2\kappa}+2\kappa+\log z-2\kappa z, \tag{11}\]
and \(f_{\kappa}(z)=0\) iff
\[z=\beta+\frac{\log z}{2\kappa}\qquad\text{where}\qquad\beta=c\left(\frac{2 \kappa-1}{2\kappa}\right)^{2}+1. \tag{12}\]
For \(\delta<1\) put
\[z=\beta+\frac{(1+\delta)}{2\kappa}\log\beta,\]
and equate. At equality (12) becomes
\[\beta+\frac{(1+\delta)}{2\kappa}\log\beta= \beta+\frac{1}{2\kappa}\log\left(\beta+\frac{(1+\delta)}{2 \kappa}\log\beta\right)\] \[= \beta+\frac{1}{2\kappa}\log\beta+\frac{1}{2\kappa}\log\left(1+ \frac{(1+\delta)}{2\kappa\beta}\log\beta\right).\]
Thus
\[\delta\log\beta=\log\left(1+\frac{(1+\delta)}{2\kappa\beta}\log\beta\right).\]
The conditions \(\delta<1\), \(\kappa\geq 1\) and \(c\geq c^{*}\) imply that \(\beta\geq e\) so that \((\log\beta)/\beta\leq 1/e\). Thus \(\frac{(1+\delta)}{2\kappa\beta}\log\beta<1\). If \(x<1\), then
\[\frac{x}{1+x}\leq\log(1+x)\leq x,\]
which (after canceling a \(\log\beta\) term) means \(\delta\) must satisfy
\[\frac{1+\delta}{2\kappa\beta+(1+\delta)\log\beta}\leq\delta\leq\frac{1+\delta }{2\kappa\beta}.\]
The RHS inequality is true if \(\delta\leq 1/(2\kappa\beta-1)\).
For \(c\geq c^{*}\), \(\log\beta\geq 1\), and we can strengthen the LHS to
\[\frac{1+\delta}{2\kappa\beta+(1+\delta)}\leq\delta.\]
This reduces to \(1\leq 2\delta\kappa\beta+\delta^{2}\), which is satisfied for \(\delta\geq 1/2\kappa\beta\).
We conclude that
\[z_{0}=\beta+\frac{(1+\delta)}{2\kappa}\log\beta,\qquad\text{where}\qquad\frac{ 1}{2\kappa\beta}\leq\delta\leq\frac{1}{2\kappa\beta-1}.\]
Thus
\[\tau_{0}= \frac{\kappa(z_{0}-1)}{2\kappa z_{0}-1}=\frac{1}{2}\left(1-\frac {2\kappa-1}{2\kappa z_{0}-1}\right)\] \[= \frac{1}{2}\left(1-\frac{2\kappa-1}{2\kappa\beta+(1+\delta)\log \beta-1}\right)\] \[= \frac{1}{2}\left(1-\frac{2\kappa-1}{c\frac{(2\kappa-1)^{2}}{2 \kappa}+2\kappa-1+\frac{1+\delta}{\delta}\log\left(1+\frac{1+\delta}{2\kappa \beta}\log\left(c+1-\frac{1}{2\kappa}+\frac{1}{\kappa^{2}}\right)\right)}\right)\] \[= \frac{1}{2}\left(1-\frac{1}{c+1-\frac{c}{2\kappa}+\frac{\log(c+1 )}{2\kappa-1}+O\left(\frac{c}{\kappa^{2}}\right)}\right).\]
**Lemma 6**.: _Let \(\kappa<1/2c\) where \(c>c_{0}=5\). Let \(\tau_{0}\) be the smallest positive root of (9). Then_
\[\kappa(1-e^{-c/2\kappa+2c})<\tau_{0}<\kappa(1-e^{-c/2\kappa}).\]
Proof.: Let \(\tau=\kappa(1-e^{-c/2\kappa+\varepsilon})\), then
\[f_{\kappa}(\tau) =\frac{c}{2\kappa}-2c+2c\kappa+\log(e^{-c/2\kappa+\varepsilon})+ \log\frac{1}{1-2\tau}+\frac{2\tau(1-2\kappa)}{1-2\tau}\] \[=\varepsilon-2c+2c\kappa+\log\frac{1}{1-2\tau}+\frac{2\tau(1-2 \kappa)}{1-2\tau},\]
where the last two terms on the RHS are positive.
Referring to (9) we see that \(f_{\kappa}(0)=c(1-2\kappa)^{2}/2\kappa>0\) for any \(\kappa<1/2\). If \(\varepsilon=0\) then, assuming \(\tau<\kappa<1/2c\) and \(c>c_{0}\), we have \(f_{\kappa}(\tau)<0\), so \(\tau>\tau_{0}\), since \(f_{\kappa}\) is monotone increasing in \(\tau\). On the other hand, if \(\varepsilon\geq 2c\) then \(f_{\kappa}(\tau)>0\), so \(\tau<\tau_{0}\).
## 3 Modified Greedy
Here we choose a random vertex \(x\) and then a random neighbor \(y\), add \(\{x,y\}\) to the matching \(M\), and delete the vertices of the edge from \(V(t)\) and all edges of the same color as the edge. If \(d_{t}(x)=0\) then we just delete \(x\). Let \(\nu(t)=|V(t)|\). We let \(q(t)\) denote the number of unused colors. Let \(t_{0}\) be the number of steps when a single vertex \(x\) of degree zero is deleted, and \(t_{1}\) the number of steps when a pair of vertices \(x,y\) are deleted. As \(t=t_{0}+t_{1}\), \(\nu(t)=n-t_{0}-2t_{1}\), and \(q(t)=q-t_{1}\), it follows that
\[q(t)=t+\nu(t)+q-n. \tag{13}\]
If we condition on \(\mu(t),\nu(t)\), the remaining unrevealed random graph is uniform over all graphs with \(\mu(t)\) edges and \(\nu(t)\) vertices. Let
\[\lambda=\lambda(t):=\frac{2\mu(t)}{\nu(t)}\]
be the average degree of the unrevealed graph. Recall (see, for example, [4]) that if \(b=o(a^{2/3})\) then
\[\binom{a}{b}\sim\frac{a^{b}}{b!}\exp\left\{-\frac{b^{2}}{2a}\right\}.\]
Thus if \(\mu,\nu\) are at least \(n/\log\log n\) (and observing they are at most \(O(n)\)) then the probability the first chosen vertex has degree \(k\) for \(k\leq\log^{2}n\) is
\[\frac{\binom{\nu-1}{k}\binom{\binom{\nu-1}{2}}{\mu-k}}{\binom{ \binom{\nu}{2}}{\mu}} =\frac{(\nu-1)^{k}}{k!}\frac{\binom{\nu-1}{2}^{\mu-k}}{(\mu-k)! \;\binom{\nu}{2}^{\mu}}\exp\left\{-\frac{k^{2}}{2(\nu-1)}-\frac{(\mu-k)^{2}}{ 2\binom{\nu-1}{2}}+\frac{\mu^{2}}{2\binom{\nu}{2}}\right\}\] \[=\frac{\mu}{k!}\left(\frac{\nu-1}{\binom{\nu}{2}}\right)^{k} \left(\frac{\binom{\nu-1}{2}}{\binom{\nu}{2}}\right)^{\mu}\exp\left\{\tilde{ O}\left(\frac{1}{n}\right)+\frac{\mu^{2}}{\nu(\nu-1)}\left(1-\frac{(1-k/\mu)^{2}}{1-2 /\nu}\right)\right\}\] \[=\frac{\mu^{k}}{k!}\cdot\left(\frac{2}{\nu}\right)^{k}\cdot \left(1-\frac{2}{\nu}\right)^{\mu}\exp\left\{\tilde{O}\left(\frac{1}{n}+\frac {k\mu}{\nu^{2}}+\frac{\mu^{2}}{\nu^{3}}\right)\right\}\] \[=\frac{\lambda^{k}}{k!}\exp\left\{\left(-\frac{2}{\nu}+O\left( \frac{1}{\nu^{2}}\right)\right)\mu+\tilde{O}\left(\frac{1}{n}\right)\right\}\] \[=\frac{\lambda^{k}}{k!}e^{-\lambda}+\tilde{O}\left(\frac{1}{n} \right). \tag{14}\]
Conditional on the first chosen vertex having degree \(k\) for \(1\leq k\leq\log^{2}n\), the expected number of additional neighbors of the second vertex is
\[\frac{2(\mu-k)}{\nu-1}=\lambda+\tilde{O}\left(\frac{1}{n}\right). \tag{15}\]
Conditional on a total of \(\leq 2\log^{2}n\) edges being adjacent to the two chosen vertices, the expected number of edges deleted due to being the same color as the matching edge is
\[\frac{\mu-O(\log^{2}n)}{q}=\frac{\mu}{q}+\tilde{O}\left(\frac{1}{n}\right) \tag{16}\]
where the big-O term on the last line follows since (as we will see later) we have \(q(t)=\Omega(n)\).
Let \(\xi(t)=(\mu(t),\nu(t),q(t))\). Then (explanation follows)
\[\mathbb{E}(\nu(t+1)\mid\xi(t)) =\nu(t)-2+e^{-\lambda}+\tilde{O}\left(\frac{1}{n}\right) \tag{17}\] \[\mathbb{E}(\mu(t+1)\mid\xi(t)) =\mu(t)-\lambda-(1-e^{-\lambda})\left(\lambda+\frac{\mu(t)}{q(t) }\right)+\tilde{O}\left(\frac{1}{n}\right). \tag{18}\]
Indeed, for the first line note that we lose 2 vertices unless the first vertex has degree 0 (which happens with probability about \(e^{-\lambda}\) by (14)) in which case we lose only 1. For the second line, note that we expect to lose \(\lambda\) edges adjacent to the first vertex. Then, if the first vertex has positive degree (probability about \(1-e^{-\lambda}\) by (14)), we expect to lose about \(\lambda\) additional edges adjacent to the second vertex by (15), as well as about \(\frac{\mu(t)}{q(t)}\) edges of the same color as the matching edge by (16). The big-O term comes from the approximations (14), (15) and (16).
Substituting \(\nu/n=N,\mu/n=M\) and using (13) with \(\kappa=q(0)/n\), \(\tau=t/n\), this leads to the equations
\[\frac{dM}{d\tau} =\lambda(e^{-\lambda}-2)-\frac{M(1-e^{-\lambda})}{N+\tau+\kappa- 1}. \tag{19}\] \[\frac{dN}{d\tau} =e^{-\lambda}-2, \tag{20}\]
where \(\lambda=2M/N\).
Substituting (20) into (19) gives the following.
\[M^{\prime}= \frac{2M}{N}N^{\prime}+M\frac{N^{\prime}+1}{N+\tau+\kappa-1}\] \[\frac{M^{\prime}}{M}= 2\frac{N^{\prime}}{N}+\frac{(N+\tau+\kappa-1)^{\prime}}{N+\tau+ \kappa-1}\qquad\text{on division by }M\] \[\implies\log M= 2\log N+\log(N+\tau+\kappa-1)+\log B\] \[M= BN^{2}(N+\tau+\kappa-1).\]
Assuming \(M(0)=c/2\) and \(N(0)=1\) this gives
\[M=\frac{c}{2\kappa}N^{2}(N+\tau+\kappa-1),\qquad\lambda=\frac{c}{\kappa}N(N+\tau+ \kappa-1).\]
Interestingly, the above expression for \(M\) is intuitive in a sense. In particular, it essentially says that the number of edges is approximately
\[\mu(t)\approx\binom{\nu(t)}{2}\cdot\frac{c}{n}\cdot\frac{q(t)}{q},\]
i.e. the edge density in the remaining graph is about the original edge density \(c/n\) times the probability that a given color is unused, \(q(t)/q\).
Substituting this expression for \(\lambda\) into (20) gives
\[N^{\prime}=e^{-\frac{c}{\kappa}N(N+\tau+\kappa-1)}-2,\qquad N(0)=1. \tag{21}\]
Let \(\tau_{0}\) be the smallest positive solution to \(M(\tau)=0\), so either \(N(\tau_{0})=0\) or \(N(\tau_{0})+\tau_{0}+\kappa-1=0\). We will show that \(N(\tau_{0})=0\), i.e. we run out of vertices before we run out of colors. This should be unsurprising since we ought to never run out of colors (e.g. a positive proportion of colors simply never appear).
Indeed, letting \(Q(\tau)=N(\tau)+\tau+\kappa-1\), we have
\[Q^{\prime}=N^{\prime}+1=e^{-\frac{c}{\kappa}NQ}-1\geq-\frac{c}{\kappa}NQ\]
and \(Q(0)=\kappa>0\) and so
\[Q(\tau)\geq\kappa e^{-\frac{c}{\kappa}\int_{0}^{\tau}N(x)\;dx}>0.\]
Therefore \(N(\tau_{0})=0\) and \(Q(\tau_{0})>0\).
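For illustration (our own numerics, assuming SciPy), the prediction of Theorem 2(a) can be computed by integrating (21) until \(N\) first hits zero:

```python
import numpy as np
from scipy.integrate import solve_ivp

def modified_greedy_prediction(c, kappa):
    """Return tau_0 with N(tau_0) = 0 for (21); Theorem 2(a) then gives a rainbow
    matching of size asymptotic to (1 - tau_0) n."""
    rhs = lambda t, y: [np.exp(-(c / kappa) * y[0] * (y[0] + t + kappa - 1.0)) - 2.0]
    hit_zero = lambda t, y: y[0]
    hit_zero.terminal, hit_zero.direction = True, -1
    sol = solve_ivp(rhs, (0.0, 1.0), [1.0], events=hit_zero, rtol=1e-10, atol=1e-12)
    tau0 = sol.t_events[0][0]
    return tau0, 1.0 - tau0

for c in (0.5, 1.0, 2.0, 5.0):
    print(c, modified_greedy_prediction(c, 0.5))
```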
We will apply Theorem 3 again. This time \(a=2\) and our two random variables are \(\nu(t)\) and \(\mu(t)\). We let
\[\mathcal{D}=\left\{(\tau,N,M):0\leq\tau\leq\frac{1}{2},\;\frac{1}{\log\log n} \leq N\leq 2,\;\frac{1}{\log\log n}\leq M\leq c\right\}.\]
Now we make sure to satisfy condition (i). By (17) and (18) we can let \(\delta=\tilde{O}\left(n^{-1}\right)\). The functions on the right hand sides of (19) and (21) are \(L\)-Lipschitz on \(\mathcal{D}\) for \(L=O(\log\log n)\). We now move on to condition (ii). We have to take \(\theta=O(n)\) since it is possible (though very unlikely) to have a vertex of linear degree. We let \(\phi=2n^{0.1}\), and then the same calculation from (5) indicates we can choose \(\gamma=\exp\{-\Omega(n^{0.1})\}\). Now for condition (iii), any positive \(\varepsilon\) will do (we will have to choose \(\varepsilon\) more carefully later to satisfy future conditions). We choose \(T=1\), \(R=O(\log\log n)\), \(\varepsilon=n^{-0.1}\). Thus with high probability we have
\[\mu(t)=M(\tau)n+\tilde{O}(n^{0.9}),\qquad\nu(t)=N(\tau)n+\tilde{O}(n^{0.9}) \tag{22}\]
uniformly for \(0\leq t\leq\tau_{0}n+\tilde{O}\left(n^{0.9}\right)\). This implies that w.h.p. our process ends when it runs out of vertices, which happens after \(\tau_{0}n+\tilde{O}(n^{0.9})\) steps. Now observe that at any step \(t\) the number of remaining vertices \(\nu(t)\) is equal to \(n-t-x\) where \(x\) is the number of edges in the matching. So when the process terminates we have \(0=n-\tau_{0}n+\tilde{O}(n^{0.9})-x\) and so the number of edges in the final matching \(\mu\) is
\[\mu=(1-\tau_{0})n+\tilde{O}(n^{0.9}). \tag{23}\]
### Upper bound on the value of \(\mu\)
Integrating (21) subject to \(N(0)=1\), we have
\[N(\tau)=1-2\tau+\int_{0}^{\tau}e^{-\frac{c}{\kappa}N(\sigma)(N(\sigma)+\sigma+\kappa-1)}\;d\sigma. \tag{24}\]
Let \(\tau_{0}\) be the smallest positive solution to \(N(\tau)=0\). Assume the result of the previous section that the algorithm terminates when this condition is met (asymptotically).
**Lemma 7**.: _Assuming \(\kappa\geq 1/2\), the function \(F(\tau)=N(N+\tau+\kappa-1)\) is convex in \([0,\tau_{0}]\)._
Proof.: We first prove that \(N^{\prime\prime}(\tau)>0\) for \(\tau\in[0,\tau_{0})\). From (21) it is clear that \(N^{\prime}+1<0\) in this interval. Thus
\[N^{\prime\prime}=e^{-\frac{c}{\kappa}N(N+\tau+\kappa-1)}-\frac{c}{\kappa} \cdot[N(N+\tau+\kappa-1)]^{\prime},\]
where
\[[N(N+\tau+\kappa-1)]^{\prime}=N^{\prime}(N+\tau+\kappa-1)+N(N^{\prime}+1).\]
We show below that this derivative is negative for \(\kappa\geq 1/2\). The result that \(N^{\prime\prime}>0\) follows from this. As \(N>0\) and \(N^{\prime}+1<0\) we only need to show that \(N+\tau+\kappa-1\geq 0\).
(i) As \(\nu(t)=n-2t_{1}-t_{0}=n-2t+t_{0}\), \(\nu(t)\geq n-2t\). So if \(\tau\leq 1/2\) and \(\kappa\geq 1/2\),
\[N(\tau)+\tau+\kappa-1\geq 1-2\tau+\tau+\kappa-1=\kappa-\tau\geq 0.\]
(ii) Also if \(1/2\leq\tau<\tau_{0}\), then as \(N(\tau)>0\),
\[N+\tau+\kappa-1\geq\tau+\kappa-1\geq\tau-\tfrac{1}{2}\geq 0.\]
Thus if
\[F= N(N+\tau+\kappa-1),\] \[F^{\prime}= N^{\prime}(N+\tau+\kappa-1)+N(N^{\prime}+1)\leq 0,\] \[F^{\prime\prime}= N^{\prime\prime}(N+\tau+\kappa-1)+2N^{\prime}(N^{\prime}+1)+NN^ {\prime\prime}\geq 0, \tag{25}\]
as all the terms on the RHS of (25) are non-negative.
Noting that \(F\) is convex in \([0,\tau_{0}]\) and that \(F(0)=\kappa,F(\tau_{0})=0\), let \(C(\tau)=\kappa-\kappa\tau/\tau_{0}\) be the chordal line of \(F\) such that \(F(\tau)\leq C(\tau)\) in that interval. This implies that
\[e^{-\frac{c}{\kappa}F(\tau)}\geq e^{-\frac{c}{\kappa}C(\tau)}=e^{-c+c\tau/ \tau_{0}}.\]
Inserting this into (24) we have
\[N(\tau)\geq 1-2\tau+\int_{0}^{\tau}e^{-c+c\sigma/\tau_{0}}\;d\sigma.\]
Put \(\tau=\tau_{0}\) so that \(N(\tau)=0\) to obtain
\[0\geq 1-2\tau_{0}+\frac{\tau_{0}}{c}(1-e^{-c})\qquad\implies\qquad\tau_{0}\geq \frac{1}{2-\frac{1}{c}(1-e^{-c})}.\]
From (23) it follows asymptotically that
\[\mu\leq\frac{1-\frac{1}{c}(1-e^{-c})}{2-\frac{1}{c}(1-e^{-c})}\;n.\]
## 4 Final thoughts
We have made progress in understanding the performance of simple greedy algorithms for finding large rainbow matchings in sparse random graphs. Our result is precise for GREEDY in the case \(\kappa=1/2\). This is not an easy question, given that it is related to finding large matchings in sparse random 3-uniform hypergraphs. Here an edge \(\{u,v\}\) of color \(c\) can be thought of as a triple \(\{u,v,c\}\). Of course, the decision problem of whether a 3-uniform hypergraph has a matching of size \(k\) is on Karp's famous list of 21 NP-complete problems [6].
We have reduced the analysis of algorithms GREEDY and MODIFIED GREEDY to the analysis of some differential equations. While these differential equations are not terribly complex, in general they lack an explicit elementary solution. We hope that further study will lead to a better understanding. We briefly looked at a colored version of the Karp-Sipser algorithm and constructed the appropriate differential equations. We will not pursue this line here, but instead wait until we can better understand these equations.
It would be nice to prove the conjecture, mentioned in Section 1, that MODIFIED GREEDY performs better than GREEDY. In the color-free setting, Dyer, Frieze and Pittel [5] proved that the Modified Greedy Matching algorithm performs better than Greedy Matching on a random graph. However the rainbow version is complicated by the lack of explicit elementary solutions to the differential equations.
|
2310.03949 | Negative discrete moments of the derivative of the Riemann zeta-function | We obtain conditional upper bounds for negative discrete moments of the
derivative of the Riemann zeta-function averaged over a subfamily of zeros of
the zeta function which is expected to have full density inside the set of all
zeros. For $k\leq 1/2$, our bounds for the $2k$-th moments are expected to be
almost optimal. Assuming a conjecture about the maximum size of the argument of
the zeta function on the critical line, we obtain upper bounds for these
negative moments of the same strength while summing over a larger subfamily of
zeta zeros. | Hung M. Bui, Alexandra Florea, Micah B. Milinovich | 2023-10-05T23:57:12Z | http://arxiv.org/abs/2310.03949v1 | # Negative discrete moments of the derivative of the Riemann zeta-function
###### Abstract.
We obtain conditional upper bounds for negative discrete moments of the derivative of the Riemann zeta-function averaged over a subfamily of zeros of the zeta function which is expected to have full density inside the set of all zeros. For \(k\leq 1/2\), our bounds for the \(2k\)-th moments are expected to be almost optimal. Assuming a conjecture about the maximum size of the argument of the zeta function on the critical line, we obtain upper bounds for these negative moments of the same strength while summing over a larger subfamily of zeta zeros.
Key words and phrases: Riemann zeta-function, moments, negative moments, discrete moments. 2010 Mathematics Subject Classification: 11M06, 11M50.
## 1. Introduction
Let \(\zeta(s)\) denote the Riemann zeta-function. In this paper, we study negative discrete moments of the derivative of \(\zeta(s)\) of the form
\[J_{-k}(T)=\sum_{T<\gamma\leq 2T}|\zeta^{\prime}(\rho)|^{-2k},\]
where \(k\in\mathbb{R}\) and \(\rho=\beta+i\gamma\) runs over the non-trivial zeros of \(\zeta(s)\). We will be interested in the case \(k>0\), where we note that \(J_{-k}(T)\) is only defined if the zeros \(\rho\) are all simple.
Gonek [8] and Hejhal [12] independently conjectured that
\[J_{-k}(T)\asymp T(\log T)^{(k-1)^{2}}, \tag{1.1}\]
as \(T\to\infty\) and for every real number \(k\). Using a random matrix theory model for \(\zeta(s)\), Hughes, Keating, and O'Connell [13] have refined this conjecture to predict that
\[J_{-k}(T)\sim C_{k}\,T(\log T)^{(k-1)^{2}},\]
for a specific constant \(C_{k}\), when \(k\in\mathbb{C}\) and \(\Re(k)<3/2\). This conjecture was recovered by Bui, Gonek, and Milinovich [3] using a hybrid Euler-Hadamard product for \(\zeta(s)\). The random matrix model for \(\zeta(s)\), as well as unpublished work of Gonek, suggests that there are infinitely many zeros \(\rho\) such that \(|\zeta^{\prime}(\rho)|^{-1}\gg|\gamma|^{1/3-\varepsilon}\), for any \(\varepsilon>0\). This, in turn, would imply that for \(k>0\), \(J_{-k}(T)=\Omega(T^{2k/3-\varepsilon})\), suggesting that (1.1) only holds for \(k<3/2\). However, it is noted in [13] that while (1.1) is not expected to hold for \(k\geq 3/2\), it is plausible that (1.1) could hold when summing over a subfamily of zeros of full density inside the set of zeros with ordinates between \(T\) and \(2T\). Indeed, if one were to redefine \(J_{-k}(T)\) to exclude a set of rare points where \(|\zeta^{\prime}(\rho)|\) is very close to \(0\), then (1.1) should still predict the behavior of the redefined sum \(J_{-k}(T)\). In Theorems 1.1 and 1.2, below, we make the first progress towards this idea, by obtaining upper bounds for \(J_{-k}(T)\) over certain subfamilies of the nontrivial zeros of \(\zeta(s)\) that expected to have full density inside the set of all nontrivial zeros of the zeta function.
Results towards Conjecture (1.1) are known in a few cases. Assuming the Riemann Hypothesis (RH), Gonek [7] has shown that \(J_{1}(T)\sim\frac{1}{24\pi}T(\log T)^{4}\), while Ng [21] has shown that \(J_{2}(T)\asymp T(\log T)^{9}\). Under RH and the assumption that the zeros of \(\zeta(s)\) are all simple, Gonek [8] has also shown that \(J_{-1}(T)\gg T\) (see also [17]), which is a lower bound consistent with (1.1). Furthermore, Gonek conjectured [8] that
\[J_{-1}(T)\sim\frac{3}{\pi^{3}}T.\]
This agrees with the conjecture of Hughes, Keating, and O'Connell in the case \(k=1\).
While asymptotic formulas are known only in the cases mentioned above, progress has been made towards upper and lower bounds in certain cases. When \(k<0\) (i.e., for positive discrete moments), Ng and Milinovich [18] obtained sharp lower bounds consistent with (1.1) for integer \(k\) under the Generalized Riemann Hypothesis (GRH) for Dirichlet \(L\)-functions. Milinovich [16] obtained almost sharp upper bounds for positive discrete moments when \(k\) is an integer under the RH, and Kirila [14] extended this result to all \(k<0\), obtaining sharp upper bounds of the order conjectured in (1.1). These results settle, on GRH, conjecture (1.1) for negative integers \(k\).
When \(k>0\) (i.e., in the case of negative discrete moments), less is known. Heap, Li, and J. Zhao [11] obtained lower bounds of the size in (1.1) for all fractional \(k\) under RH and the assumption of simple zeros, while recently Gao and L. Zhao [6] extended this result to all \(k>0\) under the same assumptions. As mentioned above, these lower bounds are expected to be sharp for \(k<3/2\), but not for \(k\geq 3/2\).
### Main results
No upper bounds are known for \(J_{-k}(T)\) in the case when \(k>0\). The main results of this paper are to obtain upper bounds for the negative discrete moments when summing over two different subfamilies of the nontrivial zeros of \(\zeta(s)\), both of which are expected to have full density inside the set of all zeros. Our first result assumes RH, while our second result assumes RH and a conjectural upper bound on \(|S(t)|\), where \(S(t)=\pi^{-1}\arg\zeta(1/2+it)\) and the argument is obtained by continuous variation along the straight line segments joining the points \(2,2+it\), and \(1/2+it\) starting with the value \(\arg\zeta(2)=0\).
Let
\[\mathcal{F}=\Big{\{}\gamma\in(T,2T]:|\gamma-\gamma^{\prime}|\gg\frac{1}{\log T },\text{ for any other ordinate }\gamma^{\prime}\Big{\}}.\]
We expect \(\mathcal{F}\) to have full density inside the set of zeros in \((T,2T]\). Indeed, for any fixed \(\beta>0\), Montgomery's Pair Correlation Conjecture [19] implies that if \(\gamma^{+}\) denotes the next largest ordinate of a nontrivial zero of \(\zeta(s)\) above \(\gamma\), then
\[\#\Big{\{}\gamma\in(T,2T]:\,\gamma^{+}-\gamma\leq\frac{2\pi\beta}{\log T} \Big{\}}\ll\beta^{3}N(T),\]
where \(N(T)\) denotes the number of nontrivial zeros of \(\zeta(s)\) with ordinates \(\gamma\in(0,T]\). Therefore, the set of excluded zeros whose ordinates do not belong to the family \(\mathcal{F}\) conjecturally has measure \(0\).
We will prove the following theorem.
**Theorem 1.1**.: _Assume the Riemann hypothesis and let \(\varepsilon>0\). Then, for any \(\delta>0\), we have that_
\[\sum_{\gamma\in\mathcal{F}}|\zeta^{\prime}(\rho)|^{-2k}\ll\left\{\begin{aligned} & T^{1+\delta},& \text{ if }\ 2k\,(1+\varepsilon)\leq 1,\\ & T^{k+\frac{1}{2}+\delta},&\text{ if }\ 2k\,(1+ \varepsilon)>1.\end{aligned}\right. \tag{1.2}\]
The proof of Theorem 1.1 relies on Littlewood's classical estimate [15] that
\[|S(t)|\ll\frac{\log t}{\log\log t},\]
assuming RH, as \(t\to\infty\). It is expected that the maximum size of \(|S(t)|\) is much smaller than this bound. In [4], Farmer, Gonek, and Hughes conjecture that
\[|S(t)|\ll\sqrt{\log t\log\log t}. \tag{1.3}\]
Now let \(f(t)\) be any function slowly going to infinity and let
\[\mathcal{F}^{\mathrm{enl}}=\Big{\{}\gamma\in(T,2T]:|\gamma-\gamma^{\prime}| \gg\frac{1}{\exp\Big{(}\frac{\sqrt{\log T}}{f(T)\sqrt{\log\log T}}\Big{)}}, \text{ for any other ordinate }\gamma^{\prime}\Big{\}}. \tag{1.4}\]
Then we will prove the following theorem.
**Theorem 1.2**.: _Let \(\varepsilon>0\), and assume both the Riemann hypothesis and the conjecture in (1.3). Then, for any \(\delta>0\), we have that_
\[\sum_{\gamma\in\mathcal{F}^{\mathrm{enl}}}|\zeta^{\prime}(\rho)|^{-2k}\ll \left\{\begin{array}{ll}T^{1+\delta},&\text{ if }\ 2k\,(1+\varepsilon)\leq 1,\\ T^{k+\frac{1}{2}+\delta},&\text{ if }\ 2k\,(1+\varepsilon)>1.\end{array}\right.\]
Remark. Note that if we instead assume a conjectural bound for \(S(t)\) of the form
\[|S(t)|\ll h(t)\]
for some function \(h(t)\) such that \(h(t)=o(\frac{\log t}{\log\log t})\), then we can define a family of zeros
\[\mathcal{F}^{{}^{\prime}}=\Big{\{}\gamma\in(T,2T]:|\gamma-\gamma^{\prime}|\gg \frac{1}{\exp\Big{(}\frac{\log T}{f(T)h(T)}\Big{)}},\text{ for any other ordinate }\gamma^{\prime}\Big{\}},\]
where, as before, \(f(t)\) is any function slowly going to infinity. Then, modifying our proof of Theorems 1.1 and 1.2 in a straightforward manner, we can similarly prove that (1.2) holds when summing over the zeros with \(\gamma\in\mathcal{F}^{\prime}\).
### Bounds for \(J_{-k}(T)\) assuming the Weak Mertens Conjecture
If one were to sum over the full family of zeros of \(\zeta(s)\), the problem of obtaining upper bounds for the negative discrete moments becomes more delicate. In order to obtain an upper bound, it seems that simply assuming RH and the simplicity of zeros is not enough. Indeed, from equation (2.1) in the proof of Lemma 2.1, one sees that \(\log|\zeta^{\prime}(\rho)|\) is closely connected to a sum over the zeros \(\rho^{\prime}\neq\rho\), and in order to control \(\log|\zeta^{\prime}(\rho)|\) we need to understand the small differences \(\gamma^{\prime}-\gamma\). Hence, one would need to assume a lower bound on the size of the smallest difference between consecutive zeros of \(\zeta(s)\).
It is possible to obtain upper bounds on the negative discrete moments \(J_{-k}(T)\) when summing over the full family of zeros of \(\zeta(s)\) assuming a well-known conjecture for the distribution of partial sums of the Mobius function. Defining \(M(x)=\sum_{n\leq x}\mu(n)\), the Weak Mertens Conjecture (WMC) states that
\[\int_{1}^{X}\Big{(}\frac{M(x)}{x}\Big{)}^{2}\,dx\ll\log X.\]
Titchmarsh [22, Chapter 14] has shown that the WMC implies RH, that the zeros of \(\zeta(s)\) are all simple, and that the sum
\[\sum_{\rho}\frac{1}{|\rho\,\zeta^{\prime}(\rho)|^{2}} \tag{1.5}\]
over all nontrivial zeros of \(\zeta(s)\) is convergent. From these facts, we now observe that WMC implies that
\[J_{-k}(T)=\left\{\begin{aligned} & o\big{(}T^{k+1}(\log T)^{1-k}\big{)} \,,&\text{ if }\ 0<k<1,\\ & o\big{(}T^{2k}\big{)}\,,&\text{ if }\ k\geq 1. \end{aligned}\right.\]
Though these estimates apply to sums over all zeros, we note that both of these bounds are weaker than the bounds obtained in Theorem 1.1 for zeros restricted to the set \(\mathcal{F}\).
If \(k\geq 1\), then using Cauchy's inequality and WMC, we have
\[\begin{split} J_{-k}(T)=\sum_{T<\gamma\leq 2T}\frac{1}{|\zeta^{ \prime}(\rho)|^{2k}}&\leq\Big{(}\sum_{\gamma\in(T,2T]}\frac{1}{| \rho\,\zeta^{\prime}(\rho)|^{2k}}\Big{)}^{1/2}\Big{(}\sum_{\gamma\in(T,2T]} \frac{|\rho|^{2k}}{|\zeta^{\prime}(\rho)|^{2k}}\Big{)}^{1/2}\\ &=o\Big{(}T^{k}J_{-k}(T)^{1/2}\Big{)},\end{split} \tag{1.6}\]
as \(T\to\infty\), where we used the fact that the first sum on the right-hand side of the inequality in the first line is \(o(1)\), which follows from (1.5) and the fact that \(k\geq 1\). This implies that \(J_{-k}(T)=o\big{(}T^{2k}\big{)}\) for \(k\geq 1\). On the other hand, if \(k\in(0,1)\), then using Holder's inequality, we have that
\[\begin{split} J_{-k}(T)=\sum_{T<\gamma\leq 2T}\frac{1}{|\zeta^{ \prime}(\rho)|^{2k}}&\leq\Big{(}\sum_{\gamma\in(T,2T]}\frac{1}{| \zeta^{\prime}(\rho)|^{2}}\Big{)}^{k}\Big{(}\sum_{\gamma\in(T,2T]}1\Big{)}^{1-k} \\ &=O\Big{(}J_{-1}(T)^{k}(T\log T)^{1-k}\Big{)}=o\Big{(}T^{k+1}( \log T)^{1-k}\Big{)},\end{split}\]
as \(T\to\infty\), which follows by using the bound \(J_{-1}(T)=o(T^{2})\) derived in (1.6) and the fact that \(N(2T)-N(T)\ll T\log T\).
### Overview of the paper
To prove Theorems 1.1 and 1.2, we start by relating \(\log|\zeta^{\prime}(\rho)|\) to \(\log|\zeta(\rho+1/\log T)|\) and a sum over the zeros \(\rho^{\prime}\neq\rho\), which we then bound using the lower bound on the difference between consecutive zeros provided by the families \(\mathcal{F}\) and \(\mathcal{F}^{\text{enl}}\). The problem of bounding the discrete negative moments of the derivative of \(\zeta(s)\) then reduces to bounding negative discrete shifted moments of \(\zeta(s)\). To do that, we build on work of the first two authors [1] on upper bounds for the negative continuous moments of \(\zeta(s)\), which in turn builds on work of Harper [10] and Soundararajan [23] on upper bounds for the positive moments of \(\zeta(s)\) on the critical line. Our analysis uses a discrete mean-value theorem for the mean-square of Dirichlet polynomials averaged over zeros of \(\zeta(s)\), similar in flavor to the Montgomery-Vaughan mean-value theorem for continuous averages of Dirichlet polynomials [20, Corollary 3]. The proof of this discrete mean-value estimate relies on the Landau-Gonek explicit formula [9].
In Section 2, we relate \(\log|\zeta^{\prime}(\rho)|\) to \(\log|\zeta(\rho+1/\log T)|\) building upon previous observations of Kirila [14]. Then we gather a few facts that we will need from [1] and state the main propositions which evaluate moments of certain Dirichlet polynomials. In Section 3.1, we prove our discrete mean-value theorem for Dirichlet polynomials averaged over zeros of \(\zeta(s)\), and in Section 4 we prove the main propositions stated in Section 2. We finally prove Theorems 1.1 and 1.2 in Section 5.
## 2. First steps
We first express \(\log|\zeta^{\prime}(\rho)|\) in terms of \(\log|\zeta(\rho+1/\log T)|\).
**Lemma 2.1**.: _Assume RH. For \(\gamma\in\mathcal{F}\), we have that_
\[\log\frac{1}{|\zeta^{\prime}(\rho)|}=\log\frac{1}{|\zeta(\rho+\frac{1}{\log T} )|}+O\Big{(}\frac{\log T}{\log\log T}\Big{)}.\]
Proof.: Equation 3.3 in Kirila [14] states that
\[\log|\zeta^{\prime}(\rho)| =\log|\zeta(\sigma+i\gamma)|+\Big{(}\sigma-\frac{1}{2}\Big{)} \Big{(}\frac{\log T}{2}+O(1)\Big{)}-\log\Big{|}\sigma-\frac{1}{2}\Big{|}\] \[\qquad-\frac{1}{2}\sum_{\rho^{\prime}\neq\rho}\log\frac{(\sigma- \frac{1}{2})^{2}+(\gamma-\gamma^{\prime})^{2}}{(\gamma-\gamma^{\prime})^{2}}, \tag{2.1}\]
for any \(\sigma>1/2\). We will obtain an upper bound for the sum over \(\rho^{\prime}\). Let \(\sigma-1/2=\alpha\). We write
\[\sum_{\rho^{\prime}\neq\rho}\log\frac{\alpha^{2}+(\gamma-\gamma^{\prime})^{2} }{(\gamma-\gamma^{\prime})^{2}}=\sum_{|\gamma^{\prime}-\gamma|<\alpha}\log \frac{\alpha^{2}+(\gamma-\gamma^{\prime})^{2}}{(\gamma-\gamma^{\prime})^{2}}+ \sum_{|\gamma^{\prime}-\gamma|\geq\alpha}\log\frac{\alpha^{2}+(\gamma-\gamma^ {\prime})^{2}}{(\gamma-\gamma^{\prime})^{2}}.\]
Let \(M_{1}(\gamma)\) denote the first term on the right-hand side, and let \(M_{2}(\gamma)\) denote the second.
We pick
\[\alpha=\frac{1}{\log T}.\]
We first bound \(M_{1}(\gamma)\). We have
\[M_{1}(\gamma)\ll\sum_{|\gamma-\gamma^{\prime}|<\frac{1}{\log T}}1\ll\frac{ \log T}{\log\log T}, \tag{2.2}\]
where we used the fact that \(\gamma\in\mathcal{F}\), and that
\[N\Big{(}\gamma+\frac{1}{\log T}\Big{)}-N\Big{(}\gamma-\frac{1}{\log T}\Big{)}\ll 1+\Big{|}S\Big{(}\gamma+\frac{1}{\log T}\Big{)}-S\Big{(}\gamma-\frac{1}{\log T}\Big{)}\Big{|}\ll\frac{\log T}{\log\log T}.\]
Standard estimates imply that
\[M_{2}(\gamma)=\sum_{|\gamma-\gamma^{\prime}|\geq\frac{1}{\log T}}\log\Big{(} 1+\frac{\alpha^{2}}{(\gamma-\gamma^{\prime})^{2}}\Big{)}\ll\frac{\log T}{ \log\log T}. \tag{2.3}\]
Hence for \(\gamma\in\mathcal{F}\), using (2.2) and (2.3), we have that
\[\log\frac{1}{|\zeta^{\prime}(\rho)|}=\log\frac{1}{|\zeta(\rho+1/\log T)|}+O \Big{(}\frac{\log T}{\log\log T}\Big{)}.\]
This completes the proof of the lemma.
We similarly obtain the following estimate.
**Lemma 2.2**.: _For \(\gamma\in\mathcal{F}^{\text{enl}}\), we have that_
\[\log\frac{1}{|\zeta^{\prime}(\rho)|}=\log\frac{1}{|\zeta(\rho+\frac{1}{\log T} )|}+O\Big{(}\frac{\log T}{f(T)}\Big{)},\]
_where \(f\) is the function appearing in (1.4)._
Proof.: Upon noticing that
\[M_{1}(\gamma)\ll\frac{\log T}{f(T)}\quad\text{ and }\quad M_{2}(\gamma)\ll\sqrt{ \log T\log\log T},\]
the proof is very similar to the proof of Lemma 2.1.
Throughout the paper, we rely on the following lemma proved in [1, Lemma 2.1 and (3.2)].
**Lemma 2.3**.: _Assume RH. Then we have that_
\[\log|\zeta(\tfrac{1}{2}+\tfrac{1}{\log T}+i\gamma)|\geq\frac{\log \frac{\gamma}{2\pi}}{2\pi\Delta}\log\big{(}1-e^{-2\pi\Delta/\log T}\big{)}- \Re\Big{(}\sum_{p\leq e^{2\pi\Delta}}\frac{b(p;\Delta)}{p^{1/2+1/\log T+i \gamma}}\Big{)}\] \[\qquad\qquad-\frac{\log\log\log T}{2}-\Big{(}\log\frac{\log T}{ \Delta}\Big{)}^{\eta(\Delta)}+O\Big{(}\frac{\Delta^{2}e^{\pi\Delta}}{1+\Delta T }+\frac{\Delta\log(1+\Delta\sqrt{T})}{\sqrt{T}}+1\Big{)},\]
_where the coefficients \(b(p;\Delta)\) can be written down explicitly, and_
\[\eta(\Delta)=\left\{\begin{array}{ll}1&\text{ if }\Delta=o(\log T),\\ 0&\text{ if }\Delta\gg\log T.\end{array}\right..\]
_Moreover, the coefficients \(b(p;\Delta)\) satisfy the bound_
\[|b(p;\Delta)|\leq b(\Delta)\Big{(}\log\frac{\log T}{\Delta}\Big{)}^{\eta( \Delta)}, \tag{2.4}\]
_where, for any \(\varepsilon>0\),_
\[b(\Delta)=\left\{\begin{array}{ll}\frac{1}{2}+\varepsilon,&\text{ if }\Delta=o(\log T),\\ \frac{1}{1-e^{-2\pi\Delta/\log T}},&\text{ if }\Delta\gg\log T.\end{array}\right. \tag{2.5}\]
Now we introduce some of the notation in [1]. Let
\[I_{0}=(1,T^{\beta_{0}}],\ I_{1}=(T^{\beta_{0}},T^{\beta_{1}}],\ \ldots,\ I_{K}=(T^{\beta_{K-1}},T^{\beta_{K}}]\]
for a sequence \(\beta_{0},\ldots,\beta_{K}\) to be chosen later such that \(\beta_{j+1}=r\beta_{j}\), for some \(r>1\). Also, let \(\ell_{j}\) be even parameters which we will also choose later on. Let \(s_{j}\) be integers. For now, we can think of \(s_{j}\beta_{j}\asymp 1\), and \(\sum_{h=0}^{K}\ell_{h}\beta_{h}\ll 1\). We let \(T^{\beta_{j}}=e^{2\pi\Delta_{j}}\) for every \(0\leq j\leq K\) and let
\[P_{u,v}(\gamma)=\sum_{p\in I_{u}}\frac{b(p;\Delta_{v})}{p^{1/2+1/\log T+i \gamma}},\]
and we extend \(b(p;\Delta)\) to a completely multiplicative function in the first variable. Let
\[\mathcal{T}_{u}=\Big{\{}\gamma\in(T,2T]:\max_{u\leq v\leq K}|P_{u,v}(\gamma)| \leq\frac{\ell_{u}}{ke^{2}}\Big{\}}.\]
Denote the set of \(\gamma\) such that \(\gamma\in\mathcal{T}_{u}\) for all \(u\leq K\) by \(\mathcal{T}^{\prime}\). For \(0\leq j\leq K-1\), let \(\mathcal{S}_{j}\) denote the subset of \(\gamma\in(T,2T]\) such that \(\gamma\in\mathcal{T}_{h}\) for all \(h\leq j\), but \(\gamma\notin\mathcal{T}_{j+1}\).
Now for \(\ell\) an integer, and for \(z\in\mathbb{C}\), let
\[E_{\ell}(z)=\sum_{s\leq\ell}\frac{z^{s}}{s!}.\]
If \(z\in\mathbb{R}\), then for \(\ell\) even, \(E_{\ell}(z)>0\). We will often use the fact that for \(\ell\) an even integer and \(z\) a complex number such that \(|z|\leq\ell/e^{2}\), we have
\[e^{\Re(z)}\leq\max\Big{\{}1,|E_{\ell}(z)|\Big{(}1+\frac{1}{15e^{\ell}}\Big{)} \Big{\}},\]
whose proof can be found in [1, Lemma 2.4].
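As a purely numerical illustration of the displayed inequality (our own check, not a proof), the following snippet samples points with \(|z|\leq\ell/e^{2}\) for \(\ell=10\) and verifies the bound on each sample.

```python
import cmath, math, random

def E(ell, z):
    """Truncated exponential E_ell(z) = sum_{s <= ell} z^s / s!."""
    term, total = 1 + 0j, 1 + 0j
    for s in range(1, ell + 1):
        term *= z / s
        total += term
    return total

rng = random.Random(1)
ell = 10                                        # an even parameter, as in the text
for _ in range(10_000):
    r = rng.uniform(0.0, ell / math.e ** 2)
    z = cmath.rect(r, rng.uniform(0.0, 2 * math.pi))
    lhs = math.exp(z.real)
    rhs = max(1.0, abs(E(ell, z)) * (1 + 1 / (15 * math.e ** ell)))
    assert lhs <= rhs * (1 + 1e-12), (z, lhs, rhs)   # tiny slack for rounding
print("inequality verified on all sampled points")
```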
Let \(\nu(n)\) be the multiplicative function given by
\[\nu(p^{j})=\frac{1}{j!},\]
for \(p\) a prime and any integer \(j\). We will frequently use the following fact: For any interval \(I\), \(s\in\mathbb{N}\) and \(a(n)\) a completely multiplicative function, we have
\[\Big{(}\sum_{p\in I}a(p)\Big{)}^{s}=s!\sum_{\begin{subarray}{c}p|n\to p\in I \\ \Omega(n)=s\end{subarray}}a(n)\nu(n), \tag{2.6}\]
where \(\Omega(n)\) denotes the number of prime factors of \(n\), counting multiplicity.
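The identity (2.6) is the multinomial theorem written multiplicatively; the following small numerical check (ours, with a toy choice of "interval" and completely multiplicative function) illustrates it.

```python
from itertools import combinations_with_replacement
from math import factorial, prod
from collections import Counter

I = [2, 3, 5, 7]                        # a toy "interval" of primes
a = {2: 0.3, 3: -1.1, 5: 0.7, 7: 2.0}   # values of a completely multiplicative function at p
s = 4

lhs = sum(a[p] for p in I) ** s

rhs = 0.0
for combo in combinations_with_replacement(I, s):      # each n with Omega(n) = s, p | n => p in I
    expo = Counter(combo)                               # the prime factorization of n
    a_n = prod(a[p] ** j for p, j in expo.items())      # a extended completely multiplicatively
    nu_n = prod(1.0 / factorial(j) for j in expo.values())   # nu(p^j) = 1/j!
    rhs += a_n * nu_n
rhs *= factorial(s)

print(lhs, rhs)   # both sides of (2.6) agree up to rounding
```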
We now state the following result, which is similar to [1, Lemma 3.1].
**Lemma 2.4**.: _For \(\gamma\in(T,2T]\), we either have_
\[\max_{0\leq v\leq K}|P_{0,v}(\gamma)|>\frac{\ell_{0}}{ke^{2}},\]
_or_
\[|\zeta(\tfrac{1}{2}+\tfrac{1}{\log T}+i\gamma)|^{-2k}\leq S_{1}(\gamma)+S_{2} (\gamma),\]
_where_
\[S_{1}(\gamma) =(\log\log T)^{k}\Big{(}\frac{1}{1-e^{-\beta_{K}}}\Big{)}^{\frac {2k}{\beta_{K}}}\exp\Big{(}2k\Big{(}\log\frac{\log T}{\Delta_{K}}\Big{)}^{\eta (\Delta_{K})}\Big{)}\] \[\qquad\times\prod_{h=0}^{K}\max\Big{\{}1,|E_{\ell_{h}}(kP_{h,K}( \gamma))|^{2}\Big{(}1+\frac{1}{15e^{\ell_{h}}}\Big{)}^{2}\Big{\}}\] \[\qquad\times\exp\Big{(}O\Big{(}\frac{\Delta_{K}^{2}e^{\pi\Delta_ {K}}}{1+\Delta_{K}T}+\frac{\Delta_{K}\log(1+\Delta_{K}\sqrt{T})}{\sqrt{T}}+1 \Big{)}\Big{)}\]
_and_
\[S_{2}(\gamma) =(\log\log T)^{k}\sum_{j=0}^{K-1}\sum_{v=j+1}^{K}\Big{(}\frac{1} {1-e^{-\beta_{j}}}\Big{)}^{\frac{2k}{\beta_{j}}}\exp\Big{(}2k\Big{(}\log\frac {\log T}{\Delta_{j}}\Big{)}^{\eta(\Delta_{j})}\Big{)}\] \[\qquad\times\exp\Big{(}O\Big{(}\frac{\Delta_{j}^{2}e^{\pi\Delta_ {j}}}{1+\Delta_{j}T}+\frac{\Delta_{j}\log(1+\Delta_{j}\sqrt{T})}{\sqrt{T}}+1 \Big{)}\Big{)}.\]
Proof.: The proof follows as the proof of [1, Lemma 3.1], by noticing that for \(\gamma\in(T,2T]\), either \(\gamma\notin\mathcal{T}_{0}\), or \(\gamma\in\mathcal{T}^{\prime}\), or \(\gamma\in\mathcal{S}_{j}\) for some \(0\leq j\leq K-1\), and then using Lemma 2.3.
We also have the following propositions, whose proofs we postpone to Section 4.
**Proposition 2.5**.: _Assume RH. For \(0\leq v\leq K\) and \(\beta_{0}s_{0}\leq 1-\log\log T/\log T\), we have_
\[\sum_{\gamma\in(T,2T]}|P_{0,v}(\gamma)|^{2s_{0}}\ll N(T)s_{0}!b(\Delta_{v})^{2s_ {0}}\Big{(}\log\frac{\log T}{\Delta_{v}}\Big{)}^{2s_{0}\eta(\Delta_{v})}(\log \log T^{\beta_{0}})^{s_{0}}.\]
**Proposition 2.6**.: _Assume RH. Let \(0\leq j\leq K-1\). For \(\beta_{0}\) as given in (5.1) or (5.6), and for \(\sum_{h=0}^{j}\ell_{h}\beta_{h}+s_{j+1}\beta_{j+1}\leq 1-\log\log T/\log T\) and \(j+1\leq v\leq K\), we have_
\[\sum_{\gamma\in(T,2T]}\prod_{h=0}^{j}|E_{\ell_{h}}(kP_{h,j}(\gamma ))|^{2}|P_{j+1,v}(\gamma)|^{2s_{j+1}}\ll N(T)s_{j+1}!\big{(}\log T^{\beta_{j}} \big{)}^{k^{2}b(\Delta_{j})^{2}(\log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{ j})}}\] \[\qquad\qquad\times b(\Delta_{j+1})^{2s_{j+1}}\Big{(}\log\frac{ \log T}{\Delta_{j+1}}\Big{)}^{2s_{j+1}\eta(\Delta_{j+1})}(\log r)^{s_{j+1}}.\]
**Proposition 2.7**.: _Assume RH. For \(\beta_{0}\) as given in (5.1) or (5.6), and for \(\sum_{h=0}^{K}\ell_{h}\beta_{h}\leq 1-\log\log T/\log T\), we have_
\[\sum_{\gamma\in(T,2T]}\prod_{h=0}^{K}|E_{\ell_{h}}(kP_{h,K}(\gamma))|^{2}\ll N (T)\big{(}\log T^{\beta_{K}}\big{)}^{k^{2}b(\Delta_{K})^{2}(\log\frac{\log T}{ \Delta_{K}})^{2\eta(\Delta_{K})}}.\]
## 3. A mean-value theorem
We now prove the following mean-value theorem, which is of independent interest.
**Theorem 3.1**.: _Assume RH. Then for \(x\geq 2\) and \(T>1\), we have_
\[\sum_{0<\gamma\leq T}\Big{|}\sum_{n\leq x}\frac{a_{n}}{n^{\rho}}\Big{|}^{2}=N( T)\sum_{n\leq x}\frac{|a_{n}|^{2}}{n}-\frac{T}{\pi}\Re\Big{(}\sum_{kn\leq x} \frac{\overline{a_{kn}}a_{n}\Lambda(k)}{kn}\Big{)}+O\Big{(}x\,(\log(xT))^{2} \sum_{n\leq x}\frac{|a_{n}|^{2}}{n}\Big{)},\]
_where \(\{a_{n}\}\) is any sequence of complex numbers._
The proof relies on the following version of the Landau-Gonek explicit formula [9, Theorem 1].
**Lemma 3.2**.: _Let \(y,T>1\). Then_
\[\sum_{0<\gamma\leq T}y^{\rho}=-\frac{T}{2\pi}\Lambda(y)+O\Big{(}y \log(2yT)\log\log(3y)\Big{)}\] \[\qquad\qquad\qquad+O\bigg{(}\log y\min\Big{\{}T,\frac{y}{\langle y \rangle}\Big{\}}\bigg{)}+O\bigg{(}\log(2T)\min\Big{\{}T,\frac{1}{\log y}\Big{\}} \bigg{)},\]
_where \(\langle y\rangle\) denotes the distance from \(y\) to the nearest prime power other than \(y\). Here, if \(y\in\mathbb{N}\), \(\Lambda(y)\) denotes the von Mangoldt function and \(\Lambda(y)=0\), otherwise._
Proof of Theorem 3.1.: Assuming RH, we note that \(1-\rho=\overline{\rho}\). So the sum on the left-hand side of the expression in Theorem 3.1 is equal to
\[\sum_{0<\gamma\leq T}\sum_{m\leq x}\frac{\overline{a_{m}}}{m^{1-\rho}}\sum_{n \leq x}\frac{a_{n}}{n^{\rho}}=N(T)\sum_{n\leq x}\frac{|a_{n}|^{2}}{n}+2\Re \bigg{(}\sum_{n\leq x}\sum_{n<m\leq x}\frac{\overline{a_{m}}a_{n}}{m}\sum_{0< \gamma\leq T}\Big{(}\frac{m}{n}\Big{)}^{\rho}\,\bigg{)}.\]
Applying Lemma 3.2 with \(y=m/n\), we have
\[\sum_{n\leq x}\sum_{n<m\leq x}\frac{\overline{a_{m}}a_{n}}{m}\sum_{0 <\gamma\leq T}\Big{(}\frac{m}{n}\Big{)}^{\rho}\] \[\qquad=-\frac{T}{2\pi}\sum_{n\leq x}\sum_{n<m\leq x}\frac{ \overline{a_{m}}a_{n}\Lambda(m/n)}{m}+O\Big{(}\log(xT)\log\log(3x)\sum_{m,n \leq x}\frac{|a_{m}a_{n}|}{n}\Big{)}\] \[\qquad\qquad+O\Big{(}\sum_{n\leq x}\sum_{n<m\leq x}\frac{|a_{m}a_ {n}|\log(m/n)}{n\left\langle m/n\right\rangle}\Big{)}+O\Big{(}\log(2T)\sum_{n \leq x}\sum_{n<m\leq x}\frac{|a_{m}a_{n}|}{m\log(m/n)}\Big{)}\] \[\qquad=\Sigma_{1}+\Sigma_{2}+\Sigma_{3}+\Sigma_{4},\]
say. In \(\Sigma_{1}\), the only nonzero terms occur if \(n\) divides \(m\) (and \(m/n\) is a prime power). Thus, making the substitution \(m=nk\), it follows that
\[\Sigma_{1}=-\frac{T}{2\pi}\sum_{kn\leq x}\frac{\overline{a_{kn}}a_{n}\Lambda( k)}{kn}.\]
Next we estimate \(\Sigma_{2}\). Observe that for any \(\Delta>0\), we have
\[|a_{m}a_{n}|\leq\frac{1}{2}\left(\frac{|a_{m}|^{2}}{\Delta}+\Delta|a_{n}|^{2} \right). \tag{3.1}\]
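To see (3.1), note that it is simply the arithmetic-geometric mean inequality applied to the factors \(|a_{m}|/\sqrt{\Delta}\) and \(\sqrt{\Delta}|a_{n}|\):
\[|a_{m}a_{n}|=\frac{|a_{m}|}{\sqrt{\Delta}}\cdot\sqrt{\Delta}\,|a_{n}|\leq\frac{1}{2}\Big{(}\frac{|a_{m}|^{2}}{\Delta}+\Delta|a_{n}|^{2}\Big{)}.\]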
This implies that
\[\sum_{m,n\leq x}\frac{|a_{m}a_{n}|}{n} \ll\frac{1}{\Delta}\sum_{m\leq x}|a_{m}|^{2}\sum_{n\leq x}\frac{1 }{n}+\Delta\sum_{n\leq x}\frac{|a_{n}|^{2}}{n}\sum_{m\leq x}1\] \[\ll\frac{x\log x}{\Delta}\sum_{m\leq x}\frac{|a_{m}|^{2}}{m}+ \Delta\,x\sum_{n\leq x}\frac{|a_{n}|^{2}}{n}\ll x\,(\log x)^{1/2}\sum_{n\leq x }\frac{|a_{n}|^{2}}{n},\]
upon choosing \(\Delta=(\log x)^{1/2}\). From this calculation, we see that
\[\Sigma_{2}\ll x\,(\log(xT))^{3/2}\log\log(3x)\,\sum_{n\leq x}\frac{|a_{n}|^{2 }}{n}.\]
We next estimate \(\Sigma_{3}\). Again using (3.1), for any \(\Delta>0\), we have
\[\Sigma_{3}\ll\Delta\sum_{n\leq x}|a_{n}|^{2}\sum_{n<m\leq x}\frac{\log(m/n)}{ n\left\langle m/n\right\rangle}+\frac{1}{\Delta}\sum_{m\leq x}|a_{m}|^{2}\sum_{n<m} \frac{\log(m/n)}{n\left\langle m/n\right\rangle}=\Delta\Sigma_{31}+\frac{ \Sigma_{32}}{\Delta}, \tag{3.2}\]
say. Write \(m=qn+r\) with \(-\frac{n}{2}<r\leq\frac{n}{2}\). Then \(\left\langle m/n\right\rangle=\left\langle q+\frac{r}{n}\right\rangle=\frac{| r|}{n}\) if \(q\) is a prime power, and \(\left\langle m/n\right\rangle\geq\frac{1}{2}\) if \(q\) is not a prime power or \(r=0\). Then
\[\Sigma_{31} \ll\sum_{n\leq x}|a_{n}|^{2}\sum_{q\leq\lfloor\frac{x}{n}\rfloor +1}\Lambda(q)\sum_{r\leq n/2}\frac{1}{r}+\sum_{n\leq x}\frac{|a_{n}|^{2}}{n} \sum_{q\leq\lfloor\frac{x}{n}\rfloor+1}\log(q+1)\sum_{r\leq n/2}1\] \[\ll\log x\sum_{n\leq x}|a_{n}|^{2}\sum_{q\leq\lfloor\frac{x}{n} \rfloor+1}\Lambda(q)+\sum_{n\leq x}|a_{n}|^{2}\sum_{q\leq\lfloor\frac{x}{n} \rfloor+1}\log(q+1)\ll x\log x\sum_{n\leq x}\frac{|a_{n}|^{2}}{n}.\]
Now again writing \(m=qn+r\) with \(-\frac{n}{2}<r\leq\frac{n}{2}\) implies that \(q|(m-r)\) and that
\[\frac{\log(m/n)}{n\left\langle m/n\right\rangle}=\frac{\log(m/n)}{n\left\langle q +r/n\right\rangle}\ll\frac{\log m}{|r|}\]
if \(q\) is a prime power, and that
\[\frac{\log(m/n)}{n\left\langle m/n\right\rangle}\ll\frac{\log m}{n}\]
if \(q\) is not a prime power or \(r=0\). The number of prime powers dividing \(n\) equals \(\Omega(n)\). Since \(\Omega(n)\ll\log n\), it follows that
\[\Sigma_{32} \ll\log x\sum_{m\leq x}|a_{m}|^{2}\sum_{\begin{subarray}{c}n<m\\ qn+r=m\\ -\frac{n}{2}<r\leq\frac{n}{2},r\neq 0\\ q\text{ prime power}\end{subarray}}\frac{1}{|r|}+\log x\sum_{m\leq x}|a_{m}|^{2} \sum_{n<m}\frac{1}{n}\] \[\ll\log x\sum_{m\leq x}|a_{m}|^{2}\sum_{\begin{subarray}{c}-\frac {m}{2}<r\leq\frac{m}{2}\\ r\neq 0\end{subarray}}\frac{1}{|r|}\sum_{\begin{subarray}{c}q|(m-r)\\ q\text{ prime power}\end{subarray}}1+(\log x)^{2}\sum_{m\leq x}|a_{m}|^{2}\] \[\ll\log x\sum_{m\leq x}|a_{m}|^{2}\sum_{\begin{subarray}{c}-\frac {m}{2}<r\leq\frac{m}{2}\\ r\neq 0\end{subarray}}\frac{\Omega(m-r)}{|r|}+(\log x)^{2}\sum_{m\leq x}|a_{m}|^{2}\] \[\ll(\log x)^{2}\sum_{m\leq x}|a_{m}|^{2}\sum_{r\leq\frac{m}{2}} \frac{1}{r}\ll(\log x)^{3}\sum_{m\leq x}|a_{m}|^{2}\ll x(\log x)^{3}\sum_{m\leq x }\frac{|a_{m}|^{2}}{m}.\]
Choosing \(\Delta=\log x\) in (3.2), we see that
\[\Sigma_{3}\ll x(\log x)^{2}\sum_{n\leq x}\frac{|a_{n}|^{2}}{n}.\]
Finally, we estimate \(\Sigma_{4}.\) Note that if \(m>n\), then \(\log(m/n)=-\log\left(1-\frac{m-n}{m}\right)>\frac{m-n}{m}\), and it therefore follows that
\[\Sigma_{4}\ll\log(2T)\sum_{n\leq x}\sum_{n<m\leq x}\frac{|a_{m}a_{n}|}{m\!-\!n }\leq\frac{\log(2T)}{2}\sum_{\begin{subarray}{c}m,n\leq x\\ m\neq n\end{subarray}}\frac{|a_{m}a_{n}|}{|m\!-\!n|}.\]
We note that
\[\sum_{\begin{subarray}{c}m,n\leq x\\ m\neq n\end{subarray}}\frac{|a_{m}a_{n}|}{|m\!-\!n|}\leq\frac{1}{2}\sum_{ \begin{subarray}{c}m,n\leq x\\ m\neq n\end{subarray}}\frac{(|a_{m}|^{2}+|a_{n}|^{2})}{|m\!-\!n|}=\sum_{n\leq x }|a_{n}|^{2}\sum_{\begin{subarray}{c}m\leq x\\ m\neq n\end{subarray}}\frac{1}{|m\!-\!n|}\]
and
\[\sum_{n\leq x}|a_{n}|^{2}\sum_{\begin{subarray}{c}m\leq x\\ m\neq n\end{subarray}}\frac{1}{|m\!-\!n|}=\sum_{n\leq x}|a_{n}|^{2}\left(\sum_{ r\leq n-1}\frac{1}{r}+\sum_{r\leq x-n}\frac{1}{r}\right)\ll\log x\sum_{n\leq x }|a_{n}|^{2}.\]
Hence
\[\Sigma_{4}\ll(\log(xT))^{2}\sum_{n\leq x}|a_{n}|^{2}\ll x\,(\log(xT))^{2}\sum _{n\leq x}\frac{|a_{n}|^{2}}{n}.\]
Combining estimates, the theorem follows.
## 4. Proofs of Propositions 2.5, 2.6, and 2.7
Using Theorem 3.1, we are now ready to prove Propositions 2.5, 2.6, and 2.7.
Proof of Proposition 2.5.: Using (2.6), we have
\[P_{0,v}(\gamma)^{s_{0}}=s_{0}!\sum_{\begin{subarray}{c}p|n\Rightarrow p\leq T ^{\beta_{0}}\\ \Omega(n)=s_{0}\end{subarray}}\frac{b(n;\Delta_{v})\nu(n)}{n^{1/2+1/\log T+i \gamma}}.\]
Then it follows that
\[\sum_{\gamma\in(T,2T]}|P_{0,v}(\gamma)|^{2s_{0}}=(s_{0}!)^{2}\sum_{\gamma\in( T,2T]}\Big{|}\sum_{\begin{subarray}{c}p|n\Rightarrow p\leq T^{\beta_{0}}\\ \Omega(n)=s_{0}\end{subarray}}\frac{b(n;\Delta_{v})\nu(n)}{n^{1/2+1/\log T+i \gamma}}\Big{|}^{2}. \tag{4.1}\]
Using Theorem 3.1 twice for \(0<\gamma\leq 2T\) and \(0<\gamma\leq T\) and then differencing, we have that
\[(4.1)=(s_{0}!)^{2}\bigg{(}\big{(}N(2T)-N(T)\big{)}\sum_{\begin{subarray}{c}p|n\Rightarrow p\leq T^{\beta_{0}}\\ \Omega(n)=s_{0}\end{subarray}}\frac{b(n;\Delta_{v})^{2}\nu(n)^{2}}{n^{1+2/\log T}}\] \[\qquad\qquad-\frac{T}{\pi}\Re\Big{(}\sum_{\begin{subarray}{c}p|kn\Rightarrow p\leq T^{\beta_{0}}\\ \Omega(kn)=s_{0}\end{subarray}}\frac{\overline{b(kn;\Delta_{v})}b(n;\Delta_{v})\nu(kn)\nu(n)\Lambda(k)}{k^{1+1/\log T}n^{1+2/\log T}}\Big{)}\] \[\qquad\qquad+O\Big{(}T^{\beta_{0}s_{0}}(\log T)^{2}\sum_{\begin{subarray}{c}p|n\Rightarrow p\leq T^{\beta_{0}}\\ \Omega(n)=s_{0}\end{subarray}}\frac{b(n;\Delta_{v})^{2}\nu(n)^{2}}{n^{1+2/\log T}}\Big{)}\bigg{)}.\]
In the second sum above, note that any nonzero term must have \(k=1\), since \(\Omega(kn)=s_{0}=\Omega(n)\) forces \(\Omega(k)=0\); but \(\Lambda(1)=0\), so the second sum vanishes. Then it follows that
\[(4.1)=(s_{0}!)^{2}\bigg{(}\Big{(}N(2T)-N(T)\Big{)}\sum_{\begin{subarray}{c}p|n\Rightarrow p\leq T^{\beta_{0}}\\ \Omega(n)=s_{0}\end{subarray}}\frac{|b(n;\Delta_{v})|^{2}\nu(n)^{2}}{n^{1+2/\log T}}\] \[\qquad\qquad+O\Big{(}T^{\beta_{0}s_{0}}(\log T)^{2}\sum_{\begin{subarray}{c}p|n\Rightarrow p\leq T^{\beta_{0}}\\ \Omega(n)=s_{0}\end{subarray}}\frac{|b(n;\Delta_{v})|^{2}\nu(n)^{2}}{n^{1+2/\log T}}\Big{)}\bigg{)}\] \[\ll(s_{0}!)^{2}\Big{(}N(T)+O\big{(}T^{\beta_{0}s_{0}}(\log T)^{2}\big{)}\Big{)}\sum_{\begin{subarray}{c}p|n\Rightarrow p\leq T^{\beta_{0}}\\ \Omega(n)=s_{0}\end{subarray}}\frac{|b(n;\Delta_{v})|^{2}\nu(n)^{2}}{n^{1+2/\log T}},\]
where we have used the fact that \(N(2T)\ll N(T)\). Now using the condition that \(\beta_{0}s_{0}\leq 1-\log\log T/\log T\) and the bound \(\nu(n)^{2}\leq\nu(n)\), we get that
\[(4.1)\ll N(T)s_{0}!\Big{(}\sum_{p\in I_{0}}\frac{|b(p;\Delta_{v})|^{2}}{p^{1+2/\log T}}\Big{)}^{s_{0}}\] \[\ll N(T)s_{0}!b(\Delta_{v})^{2s_{0}}\Big{(}\log\frac{\log T}{\Delta_{v}}\Big{)}^{2s_{0}\eta(\Delta_{v})}(\log\log T^{\beta_{0}})^{s_{0}},\]
where we use the bound (2.4) to derive the estimate in the second line. This completes the proof.
Proof of Proposition 2.6.: Using (2.6) and arguing similarly to [1], we have
\[\sum_{\gamma\in(T,2T]}\prod_{h=0}^{j}|E_{\ell_{h}}(kP_{h,j}(\gamma) )|^{2}|P_{j+1,v}(\gamma)|^{2s_{j+1}}=(s_{j+1}!)^{2}\] \[\quad\times\bigg{(}\Big{(}N(2T)-N(T)+O\big{(}T^{\sum_{h=0}^{j} \ell_{h}\beta_{h}+s_{j+1}\beta_{j+1}}(\log T)^{2}\big{)}\Big{)}\] \[\quad\quad\times\Big{(}\prod_{h=0}^{j}\sum_{\begin{subarray}{c}p \in I_{h}\\ \Omega(n_{h})\leq\ell_{h}\end{subarray}}\frac{|b(n_{h};\Delta_{j})|^{2}k^{2 \Omega(n_{h})}\nu(n_{h})^{2}}{n_{h}^{1+2/\log T}}\Big{)}\sum_{\begin{subarray}{ c}p|n_{j+1}\Rightarrow p\in I_{j+1}\\ \Omega(n_{j+1})=s_{j+1}\end{subarray}}\frac{|b(n_{j+1};\Delta_{v})|^{2}\nu(n_{ j+1})^{2}}{n_{j+1}^{1+2/\log T}}\] \[\quad-\frac{T}{\pi}\Re\Big{(}\sum_{m=0}^{j}\sum_{\begin{subarray} {c}p\in I_{m}\\ 1\leq a\leq\ell_{m}\end{subarray}}\frac{(\log p)\overline{b(p^{a};\Delta_{j})} k^{a}}{p^{a(1+1/\log T)}}\prod_{\begin{subarray}{c}h=0\\ h\neq m\end{subarray}}^{j}\sum_{\begin{subarray}{c}p|n_{h}\Rightarrow p\in I_{h }\\ \Omega(n_{h})\leq\ell_{h}\end{subarray}}\frac{|b(n_{h};\Delta_{j})|^{2}k^{2 \Omega(n_{h})}\nu(n_{h})^{2}}{n_{h}^{1+2/\log T}}\] \[\quad\quad\times\sum_{\begin{subarray}{c}p|n_{m}\Rightarrow p\in I _{m}\\ \Omega(n_{m})\leq\ell_{m}-a\end{subarray}}\frac{|b(n_{m};\Delta_{j})|^{2}k^{2 \Omega(n_{m})}\nu(n_{m})\nu(p^{a}n_{m})}{n_{m}^{1+2/\log T}}\sum_{ \begin{subarray}{c}p|n_{j+1}\Rightarrow p\in I_{j+1}\\ \Omega(n_{j+1})=s_{j+1}\end{subarray}}\frac{|b(n_{j+1};\Delta_{v})|^{2}\nu(n_{ j+1})^{2}}{n_{j+1}^{1+2/\log T}}\Big{)}\bigg{)}.\]
By (2.4) and the work in [1] we get that
\[\sum_{\gamma\in(T,2T]}\prod_{h=0}^{j}|E_{\ell_{h}}(kP_{h,j}(\gamma ))|^{2}|P_{j+1,v}(\gamma)|^{2s_{j+1}}\ll N(T)s_{j+1}!b(\Delta_{j+1})^{2s_{j+1} }\Big{(}\log\frac{\log T}{\Delta_{j+1}}\Big{)}^{2s_{j+1}\eta(\Delta_{j+1})}\] \[\quad\quad\times(\log r)^{s_{j+1}}\bigg{(}\big{(}\log T^{\beta_{ j}}\big{)}^{k^{2}b(\Delta_{j})^{2}(\log\frac{\log T}{\Delta_{j}})^{2\eta( \Delta_{j})}}+\sum_{m=0}^{j}\beta_{m}\sum_{\begin{subarray}{c}p\in I_{m}\\ 1\leq a\leq\ell_{m}\end{subarray}}\frac{b(\Delta_{j})^{a}(\log\frac{\log T}{ \Delta_{j}})^{a\eta(\Delta_{j})}k^{a}}{p^{a(1+1/\log T)}}\] \[\quad\quad\times\prod_{\begin{subarray}{c}h=0\\ h\neq m\end{subarray}}^{j}\sum_{\begin{subarray}{c}p|n_{h}\Rightarrow p\in I_{h }\\ \Omega(n_{h})\leq\ell_{h}\end{subarray}}\frac{b(\Delta_{j})^{2\Omega(n_{h})}( \log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})\Omega(n_{h})}k^{2\Omega(n_ {h})}\nu(n_{h})\nu(p^{a}n_{m})}{n_{h}^{1+2/\log T}}\] \[\quad\quad\times\sum_{\begin{subarray}{c}p|n_{m}\Rightarrow p\in I _{m}\\ \Omega(n_{m})\leq\ell_{m}-a\end{subarray}}\frac{b(\Delta_{j})^{2\Omega(n_{m})}( \log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})\Omega(n_{m})}k^{2\Omega(n_ {m})}\nu(n_{m})\nu(p^{a}n_{m})}{n_{m}^{1+2/\log T}}\bigg{)}. \tag{4.2}\]
Now we use the fact that
\[\beta_{m}\sum_{\begin{subarray}{c}p\in I_{m}\\ 1\leq a\leq\ell_{m}\end{subarray}}\frac{b(\Delta_{j})^{a}(\log\frac{\log T}{\Delta_{j}})^{a\eta(\Delta_{j})}k^{a}}{p^{a(1+1/\log T)}}\times\sum_{\begin{subarray}{c}p|n_{m}\Rightarrow p\in I_{m}\\ \Omega(n_{m})\leq\ell_{m}-a\end{subarray}}\frac{b(\Delta_{j})^{2\Omega(n_{m})}(\log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})\Omega(n_{m})}k^{2\Omega(n_{m})}\nu(n_{m})\nu(p^{a}n_{m})}{n_{m}^{1+2/\log T}}\]
\[\ll\beta_{m}\sum_{\begin{subarray}{c}p\mid n_{m}\Rightarrow p\in I_{m}\\ \Omega(n_{m})\leq\ell_{m}\end{subarray}}\frac{b(\Delta_{j})^{2\Omega(n_{m})}( \log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})\Omega(n_{m})}k^{2\Omega(n_{m} )}\nu(n_{m})}{n_{m}^{1+2/\log T}}. \tag{4.3}\]
Indeed, if \(m=0\) and if \(b(\Delta_{j})(\log\frac{\log T}{\Delta_{j}})^{\eta(\Delta_{j})}k<1\), then
\[\sum_{\begin{subarray}{c}p\in I_{0}\\ 1\leq a\leq\ell_{0}\end{subarray}}\frac{b(\Delta_{j})^{a}(\log\frac{\log T}{ \Delta_{j}})^{a\eta(\Delta_{j})}k^{a}}{p^{a(1+1/\log T)}}\ll\sum_{p\leq T\beta _{0}}\frac{1}{p}\ll\log(\beta_{0}\log T),\]
and with either choice (5.1) or (5.6) of \(\beta_{0}\), we get that
\[\beta_{0}\sum_{\begin{subarray}{c}p\in I_{0}\\ 1\leq a\leq\ell_{0}\end{subarray}}\frac{b(\Delta_{j})^{a}(\log\frac{\log T}{ \Delta_{j}})^{a\eta(\Delta_{j})}k^{a}}{p^{a(1+1/\log T)}}=O(1).\]
We also have that
\[\sum_{\begin{subarray}{c}p\mid n_{0}\Rightarrow p\in I_{0}\\ \Omega(n_{0})\leq\ell_{0}-a\end{subarray}}\frac{b(\Delta_{j})^{2\Omega(n_{0})}(\log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})\Omega(n_{0})}k^{2\Omega(n_{0})}\nu(n_{0})\nu(p^{a}n_{0})}{n_{0}^{1+2/\log T}}\] \[\leq\sum_{\begin{subarray}{c}p\mid n_{0}\Rightarrow p\in I_{0}\\ \Omega(n_{0})\leq\ell_{0}\end{subarray}}\frac{b(\Delta_{j})^{2\Omega(n_{0})}(\log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})\Omega(n_{0})}k^{2\Omega(n_{0})}\nu(n_{0})}{n_{0}^{1+2/\log T}},\]
which shows (4.3) in the case \(m=0\).
When \(m\geq 1\), if \(b(\Delta_{j})(\log\frac{\log T}{\Delta_{j}})^{\eta(\Delta_{j})}k<1\), then
\[\beta_{m}\sum_{\begin{subarray}{c}p\in I_{m}\\ 1\leq a\leq\ell_{m}\end{subarray}}\frac{b(\Delta_{j})^{a}(\log\frac{\log T}{ \Delta_{j}})^{a\eta(\Delta_{j})}k^{a}}{p^{a(1+1/\log T)}}\ll\sum_{p\in I_{m}} \frac{1}{p}=O(1),\]
and again
\[\sum_{\begin{subarray}{c}p\mid n_{m}\Rightarrow p\in I_{m}\\ \Omega(n_{m})\leq\ell_{m}-a\end{subarray}}\frac{b(\Delta_{j})^{2\Omega(n_{m} )}(\log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})\Omega(n_{m})}k^{2\Omega( n_{m})}\nu(n_{m})\nu(p^{a}n_{m})}{n_{m}^{1+2/\log T}}\] \[\leq\sum_{\begin{subarray}{c}p\mid n_{m}\Rightarrow p\in I_{m}\\ \Omega(n_{m})\leq\ell_{m}\end{subarray}}\frac{b(\Delta_{j})^{2\Omega(n_{m} )}(\log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})\Omega(n_{m})}k^{2\Omega( n_{m})}\nu(n_{m})}{n_{m}^{1+2/\log T}}.\]
The two bounds above establish (4.3) when \(b(\Delta_{j})(\log\frac{\log T}{\Delta_{j}})^{\eta(\Delta_{j})}k<1\).
On the other hand, if \(b(\Delta_{j})(\log\frac{\log T}{\Delta_{j}})^{\eta(\Delta_{j})}k\geq 1\), then we have that
\[b(\Delta_{j})^{a}(\log\frac{\log T}{\Delta_{j}})^{a\eta(\Delta_{j})}k^{a}\leq b (\Delta_{j})^{2a}(\log\frac{\log T}{\Delta_{j}})^{2a\eta(\Delta_{j})}k^{2a},\]
and then
\[\beta_{m}\sum_{\begin{subarray}{c}p\in I_{m}\\ 1\leq a\leq\ell_{m}\end{subarray}}\frac{b(\Delta_{j})^{a}(\log\frac{\log T}{ \Delta_{j}})^{a\eta(\Delta_{j})}k^{a}}{p^{a(1+1/\log T)}}\\ \times\sum_{\begin{subarray}{c}p|n_{m}\Rightarrow p\in I_{m}\\ \Omega(n_{m})\leq\ell_{m}-a\end{subarray}}\frac{b(\Delta_{j})^{2\Omega(n_{m})} (\log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})\Omega(n_{m})}k^{2\Omega(n_{ m})}\nu(n_{m})\nu(p^{a}n_{m})}{n_{m}^{1+2/\log T}}\\ \ll\beta_{m}\sum_{\begin{subarray}{c}p\in I_{m}\\ 1\leq a\leq\ell_{m}\end{subarray}}\frac{b(\Delta_{j})^{2a}(\log\frac{\log T}{ \Delta_{j}})^{2a\eta(\Delta_{j})}k^{2a}}{p^{a(1+1/\log T)}}\\ \times\sum_{\begin{subarray}{c}p|n_{m}\Rightarrow p\in I_{m}\\ \Omega(n_{m})\leq\ell_{m}\end{subarray}}\frac{b(\Delta_{j})^{2\Omega(n_{m})} (\log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})\Omega(n_{m})}k^{2\Omega(n_ {m})}\nu(n_{m})\nu(p^{a}n_{m})}{n_{m}^{1+2/\log T}}\\ \ll\beta_{m}\sum_{\begin{subarray}{c}p|n_{m}\Rightarrow p\in I_{m} \\ \Omega(n_{m})\leq\ell_{m}\end{subarray}}\frac{b(\Delta_{j})^{2\Omega(n_{m})} (\log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})\Omega(n_{m})}k^{2\Omega(n_ {m})}\nu(n_{m})}{n_{m}^{1+2/\log T}},\]
which follows by rewriting \(n_{m}\mapsto n_{m}p^{a}\) and noting that \(p^{a/\log T}\ll 1\). Hence (4.3) also follows in this case.
Now from (4.2) and (4.3), it follows that
\[\sum_{\gamma\in(T,2T]} \prod_{h=0}^{j}|E_{\ell_{h}}(kP_{h,j}(\gamma))|^{2}|P_{j+1,v}( \gamma)|^{2s_{j+1}}\ll N(T)s_{j+1}!b(\Delta_{j+1})^{2s_{j+1}}\Big{(}\log\frac {\log T}{\Delta_{j+1}}\Big{)}^{2s_{j+1}\eta(\Delta_{j+1})}\] \[\times(\log r)^{s_{j+1}}\bigg{(}\big{(}\log T^{\beta_{j}}\big{)} ^{k^{2}b(\Delta_{j})^{2}(\log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})}} \\ +\prod_{h=0}^{j}\sum_{\begin{subarray}{c}p|n_{h}\Rightarrow p\in I _{h}\\ \Omega(n_{h})\leq\ell_{h}\end{subarray}}\frac{b(\Delta_{j})^{2\Omega(n_{h})} (\log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})\Omega(n_{h})}k^{2\Omega(n_ {h})}\nu(n_{h})}{n_{h}^{1+2/\log T}}\bigg{)}.\]
Now using the work in [1] to deal with the product over \(h\leq j\), we get that
\[\sum_{\gamma\in(T,2T]}\prod_{h=0}^{j}|E_{\ell_{h}}(kP_{h,j}(\gamma ))|^{2}|P_{j+1,v}(\gamma)|^{2s_{j+1}}\ll N(T)s_{j+1}!b(\Delta_{j+1})^{2s_{j+ 1}}\Big{(}\log\frac{\log T}{\Delta_{j+1}}\Big{)}^{2s_{j+1}\eta(\Delta_{j+1})}\\ \times(\log r)^{s_{j+1}}\big{(}\log T^{\beta_{j}}\big{)}^{k^{2}b( \Delta_{j})^{2}(\log\frac{\log T}{\Delta_{j}})^{2\eta(\Delta_{j})}}.\]
This completes the proof of the proposition.
The proof of Proposition 2.7 is very similar to the proof of Proposition 2.6, so we leave the details to the interested reader.
## 5. Proof of Theorems 1.1 and 1.2
We now prove Theorem 1.1. The proof of Theorem 1.2 will follow in exactly the same way.
Proof of Theorem 1.1.: If \(2k(1+\varepsilon)\leq 1\), we choose
\[\beta_{0}=\frac{a(2d-1)\log\log T}{(1+2\varepsilon)k\log T},\,s_{0}=\Big{[} \frac{a}{\beta_{0}}\Big{]},\,\ell_{0}=2\Big{[}\frac{s_{0}^{d}}{2}\Big{]}, \tag{5.1}\]
and
\[\beta_{j}=r^{j}\beta_{0},\,s_{j}=\Big{[}\frac{a}{\beta_{j}}\Big{]},\,\ell_{j}= 2\Big{[}\frac{s_{j}^{d}}{2}\Big{]}, \tag{5.2}\]
where we can choose
\[a=\frac{4-3k\varepsilon}{2(2-k\varepsilon)},\quad r=\frac{2}{2-k\varepsilon},\quad d=\frac{8-7k\varepsilon}{2(4-3k\varepsilon)},\]
so that
\[\frac{a(2d-1)}{r}=1-k\varepsilon. \tag{5.3}\]
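For the reader's convenience, we record the short computation verifying (5.3) with these choices of \(a\), \(r\), and \(d\):
\[2d-1=\frac{8-7k\varepsilon}{4-3k\varepsilon}-1=\frac{4-4k\varepsilon}{4-3k\varepsilon},\qquad\frac{a(2d-1)}{r}=\frac{4-3k\varepsilon}{2(2-k\varepsilon)}\cdot\frac{4-4k\varepsilon}{4-3k\varepsilon}\cdot\frac{2-k\varepsilon}{2}=\frac{4-4k\varepsilon}{4}=1-k\varepsilon.\]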
We choose \(K\) such that
\[\beta_{K}\leq c, \tag{5.4}\]
for \(c\) a small constant such that
\[c^{1-d}\Big{(}\frac{a^{d}r^{1-d}}{r^{1-d}-1}+\frac{2r}{r-1}\Big{)}\leq 1-a- \frac{\log\log T}{\log T}. \tag{5.5}\]
The conditions above ensure that
\[\sum_{h=0}^{j}\ell_{h}\beta_{h}+s_{j+1}\beta_{j+1}\leq 1-\frac{\log\log T}{ \log T},\]
for any \(j<K\), and that
\[\sum_{h=0}^{K}\ell_{h}\beta_{h}\leq 1-\frac{\log\log T}{\log T}.\]
If \(2k(1+\varepsilon)>1\), we choose the parameters as follows. Let
\[\beta_{0}=\frac{(2k+2d-1-\frac{a(2d-1)}{r})\log\log T}{(1+\delta)k\log T}, \quad s_{0}=\Big{[}\frac{1}{\beta_{0}}\Big{]},\quad\ell_{0}=2\Big{[}\frac{s_{ 0}^{d}}{2}\Big{]}, \tag{5.6}\]
where we pick
\[a=\frac{1-3k\varepsilon}{1-2k\varepsilon},\quad r=\frac{1}{1-2k\varepsilon}, \quad d=\frac{2-7k\varepsilon}{2(1-3k\varepsilon)},\]
so that
\[\frac{a(2d-1)}{r}=1-4k\varepsilon. \tag{5.7}\]
We choose \(\beta_{K}\) as in (5.4) and (5.5).
We have that
\[\sum_{\gamma\in\mathcal{F}}|\zeta^{\prime}(\rho)|^{-2k}=\sum_{\begin{subarray} {c}\gamma\in\mathcal{F}\\ \gamma\notin\mathcal{T}_{0}\end{subarray}}|\zeta^{\prime}(\rho)|^{-2k}+\sum_{ \begin{subarray}{c}\gamma\in\mathcal{F}\\ \gamma\in\mathcal{T}_{0}\end{subarray}}|\zeta^{\prime}(\rho)|^{-2k}.\]
Using Lemma 2.1, we have that for some \(0\leq v\leq K\),
\[\sum_{\begin{subarray}{c}\gamma\in\mathcal{F}\\ \gamma\notin\mathcal{T}_{0}\end{subarray}}|\zeta^{\prime}(\rho)|^{-2k}\leq\exp \Big{(}O\Big{(}\frac{\log T}{\log\log T}\Big{)}\Big{)}\sum_{\begin{subarray}{c }\gamma\in\mathcal{F}\\ \gamma\notin\mathcal{T}_{0}\end{subarray}}|\zeta(1/2+1/\log T+i\gamma)|^{-2k}\]
\[\leq\exp\Big{(}O\Big{(}\frac{\log T}{\log\log T}\Big{)}\Big{)}\sum_{ \gamma\in(T,2T]}|\zeta(1/2+1/\log T+i\gamma)|^{-2k}\Big{(}\frac{ke^{2}}{\ell_{0} }|P_{0,v}(\gamma)|\Big{)}^{2s_{0}}\] \[\leq\exp\Big{(}O\Big{(}\frac{\log T}{\log\log T}\Big{)}\Big{)} \Big{(}\frac{ke^{2}}{\ell_{0}}\Big{)}^{2s_{0}}\Big{(}\frac{1}{1-(\log T)^{-2/ \log T}}\Big{)}^{\frac{(1+\varepsilon)k\log T}{\log\log T}}\sum_{\gamma\in(T,2 T]}|P_{0,v}(\gamma)|^{2s_{0}},\]
where we used the pointwise bound
\[|\zeta(1/2+1/\log T+i\gamma)|^{-1}\leq\Big{(}\frac{1}{1-(\log T)^{-2/\log T}} \Big{)}^{\frac{(1+\varepsilon)\log T}{2\log\log T}}, \tag{5.8}\]
which is proven in [1, Lemma 2.2]. Using Proposition 2.5, we have that the above is
\[\ll\exp\Big{(}O\Big{(}\frac{\log T}{\log\log T}\Big{)}\Big{)}N(T)s_{0}!b( \Delta_{v})^{2s_{0}}\Big{(}\log\frac{\log T}{\Delta_{v}}\Big{)}^{2s_{0}\eta( \Delta_{v})}(\log\log T^{\beta_{0}})^{s_{0}}\Big{(}\frac{ke^{2}}{\ell_{0}} \Big{)}^{2s_{0}}T^{(1+\varepsilon)k}.\]
Now, arguing similarly to [1] and using Stirling's formula, we have that the above is
\[\ll\exp\Big{(}O\Big{(}\frac{\log T}{\log\log T}\Big{)}\Big{)}N(T) T^{(1+\varepsilon)k}\sqrt{s_{0}}\exp\bigg{(}-(2d-1)s_{0}\log s_{0}\] \[+2s_{0}\log\Big{(}ke^{3/2}b(\Delta_{0})\Big{(}\log\frac{\log T}{ \Delta_{0}}\Big{)}^{\eta(\Delta_{0})}\sqrt{\log\log T^{\beta_{0}}}\Big{)} \bigg{)}.\]
If \(2k(1+\varepsilon)\leq 1\) then using the choice of parameters (5.1) and the fact that \(\eta(\Delta_{0})=1\), we get that
\[\sum_{\begin{subarray}{c}\gamma\in\mathcal{F}\\ \gamma\notin\mathcal{T}_{0}\end{subarray}}|\zeta^{\prime}(\rho)|^{-2k}=o(N(T)). \tag{5.9}\]
If \(2k(1+\varepsilon)>1\), then using the choice (5.6), again the fact that \(\eta(\Delta_{0})=1\), and the pointwise bound (5.8) (with \(\varepsilon\) replaced by \(\delta\) on the right-hand side of the inequality) we get that
\[\sum_{\begin{subarray}{c}\gamma\in\mathcal{F}\\ \gamma\notin\mathcal{T}_{0}\end{subarray}}|\zeta^{\prime}(\rho)|^{-2k}\ll T^{ 1+(1+\delta)k\frac{2k-\frac{a(2d-1)}{r}}{2k-\frac{a(2d-1)}{r}+2d-1}}\exp\Big{(} \frac{\log T\log\log\log T}{\log\log T}\Big{)}. \tag{5.10}\]
Now assume that \(\gamma\in\mathcal{T}_{0}\). Using Lemma 2.4, we have that
\[\sum_{\begin{subarray}{c}\gamma\in\mathcal{F}\\ \gamma\in\mathcal{T}_{0}\end{subarray}}|\zeta^{\prime}(\rho)|^{-2k}\leq\exp \Big{(}O\Big{(}\frac{\log T}{\log\log T}\Big{)}\Big{)}\Big{(}\sum_{\gamma\in \mathcal{F}}S_{1}(\gamma)+\sum_{\gamma\in\mathcal{F}}S_{2}(\gamma)\Big{)}. \tag{5.11}\]
Using again Lemma 2.4, the choice (5.4) for \(\beta_{K}\), and noting that \(\eta(\Delta_{K})=0\), we have that
\[\sum_{\gamma\in\mathcal{F}}\!S_{1}(\gamma)\ll(\log\log T)^{k}\sum_{\gamma\in( T,2T]}\prod_{h=0}^{K}\max\Big{\{}1,|E_{\ell_{h}}(kP_{h,K}(\gamma))|^{2}\Big{(}1+ \frac{1}{15e^{\ell_{h}}}\Big{)}^{2}\Big{\}}.\]
Note that in the inequality above, we can assume without loss of generality that
\[\max\Big{\{}1,|E_{\ell_{h}}(kP_{h,K}(\gamma))|^{2}\Big{(}1+\frac{1}{15e^{\ell _{h}}}\Big{)}^{2}\Big{\}}=|E_{\ell_{h}}(kP_{h,K}(\gamma))|^{2}\Big{(}1+\frac{1} {15e^{\ell_{h}}}\Big{)}^{2}.\]
Using Proposition 2.7 it follows that
\[\sum_{\gamma\in\mathcal{F}}S_{1}(\gamma)\ll N(T)(\log T)^{O(1)}, \tag{5.12}\]
in both cases \(2k(1+\varepsilon)\leq 1\) and \(2k(1+\varepsilon)>1\).
Now we evaluate the contribution from \(S_{2}(\gamma)\). Similarly to equation (4.9) in [1], by using Proposition 2.6 we have that
\[\sum_{\gamma\in\mathcal{F}}S_{2}(\gamma)\ll N(T)(\log\log T)^{k} \sum_{j=0}^{K-1}(K-j)\sqrt{\frac{1}{\beta_{j+1}}}\exp\bigg{(}\frac{\log\log T} {\beta_{j}}\Big{(}2k-\frac{a(2d-1)}{r}\Big{)}\] \[\quad+\frac{\log(\beta_{j}\log T)}{\beta_{j}}\Big{(}\frac{a(2d-1) }{r}-2k\Big{)}+\frac{2a}{r\beta_{j}}\log\Big{(}ke^{3/2}b(\Delta_{j+1})\Big{(} \log\frac{\log T}{\Delta_{j+1}}\Big{)}^{\eta(\Delta_{j+1})}\Big{)}\]
\[\quad+\frac{2k}{\beta_{j}}\log\frac{1}{1-c/2}+\frac{a(2d-1)}{r\beta_{j}}\log \frac{r}{a}+2k\Big{(}\log\frac{\log T}{\Delta_{j}}\Big{)}^{\eta(\Delta_{j})} \Big{)}\big{(}\log T^{\beta_{j}}\big{)}^{k^{2}b(\Delta_{j})^{2}(\log\frac{\log T }{\Delta_{j}})^{2\eta(\Delta_{j})}}.\]
Rearranging, we have that
\[\sum_{\gamma\in\mathcal{F}}S_{2}(\gamma)\ll N(T)(\log\log T)^{k} \sum_{j=0}^{K-1}(K-j)\sqrt{\frac{1}{\beta_{j+1}}}\exp\bigg{(}\frac{\log\beta_{ j}}{\beta_{j}}\Big{(}\frac{a(2d-1)}{r}-2k\Big{)}\] \[\quad+\frac{2}{\beta_{j}}\log\Big{(}\log\frac{\log T}{\Delta_{j+ 1}}\Big{)}^{\eta(\Delta_{j+1})}+2k\Big{(}\log\frac{\log T}{\Delta_{j}}\Big{)} ^{\eta(\Delta_{j})}+k^{2}b(\Delta_{j})^{2}\Big{(}\log\frac{1}{\Delta_{j} \alpha}\Big{)}^{2\eta(\Delta_{j})}\log(\beta_{j}\log T)\] \[\quad+O\Big{(}\frac{1}{\beta_{j}}\Big{)}\bigg{)}. \tag{5.13}\]
First assume that \(2k(1+\varepsilon)\leq 1\). Then note that
\[2k-\frac{a(2d-1)}{r}\leq-k\varepsilon.\]
We first consider the contribution from those \(j\) for which \(\Delta_{j}=o(\log T)\), i.e., those \(j\) for which \(\beta_{j}\to 0\), in which case \(\eta(\Delta_{j})=1\). Let \(R_{1}\) denote this contribution. Note that the first term inside the exponential is negative, so we have
\[R_{1}\ll T(\log\log T)^{k+1}\sum_{j=0}^{K-1}\exp\Big{(}k^{2}\Big{(}\frac{1}{4} +\varepsilon\Big{)}\Big{(}\log\frac{1}{\beta_{j}}\Big{)}\log(\beta_{j}\log T) +O\Big{(}\log\frac{1}{\beta_{j}}\Big{)}\Big{)}.\]
Since \(\beta_{j}\geq\beta_{0}\) and \(\beta_{0}\asymp\frac{\log\log T}{\log T}\) (see the choice (5.1)), we get that
\[R_{1}\ll N(T)(\log\log T)^{k+1}\sum_{j=0}^{K-1}\exp\Big{(}k^{2}\Big{(}\frac{1} {4}+\varepsilon\Big{)}\Big{(}\log\frac{1}{\beta_{j}}\Big{)}\log(\beta_{j}\log T )+O(\log\log T)\Big{)}.\]
Now if we let
\[f(x)=\Big{(}\log\frac{1}{x}\Big{)}\log(x\log T),\]
we note that \(f(x)\) attains its maximum value \(\frac{1}{4}(\log\log T)^{2}\) at \(x=(\log T)^{-1/2}\), and then
\[R_{1}\ll N(T)\exp\Big{(}k^{2}(\log\log T)^{2}\Big{)}\ll T^{1+\delta}. \tag{5.14}\]
We now consider the contribution, which we denote by \(R_{2}\), from those \(j\) in (5.13) for which \(\eta(\Delta_{j})=0\), i.e., those \(j\) for which \(\beta_{j}\gg 1\). It is easy to see that in this case, we have
\[R_{2}\ll N(T)(\log T)^{O(1)}. \tag{5.15}\]
Combining the bounds (5.14) and (5.15) shows that
\[\sum_{\gamma\in\mathcal{F}}S_{2}(\gamma)\ll T^{1+\delta}, \tag{5.16}\]
when \(2k(1+\varepsilon)\leq 1\). Combining the bounds (5.16), (5.12), and (5.9) proves Theorem 1.1 in the case \(2k(1+\varepsilon)\leq 1\).
We now consider the case when \(2k(1+\varepsilon)>1\). From equation (5.13), we get that
\[\sum_{\gamma\in\mathcal{F}}S_{2}(\gamma)\ll T(\log\log T)^{k+1}\sum_{j=0}^{K-1 }\exp\Big{(}\frac{\log(1/\beta_{j})}{\beta_{j}}\Big{(}2k-\frac{a(2d-1)}{r} \Big{)}+O\Big{(}\frac{\log T\log\log\log T}{\log\log T}\Big{)}\Big{)}.\]
Now notice that since
\[2k-\frac{a(2d-1)}{r}>\frac{\varepsilon}{2},\]
the term inside the exponential is decreasing as a function of \(j\), and hence it attains its maximum at \(\beta_{0}\). Using the expression (5.6), we get that
\[\sum_{\gamma\in\mathcal{F}}S_{2}(\gamma)\ll T^{1+(1+\delta)k\frac{2k-\frac{a( 2d-1)}{r}}{2k-\frac{a(2d-1)}{r}+2d-1}}\exp\Big{(}\frac{\log T\log\log\log T}{ \log\log T}\Big{)}. \tag{5.17}\]
Now, combining the bounds (5.17), (5.12), and (5.10), and after relabeling \(\varepsilon\) and \(\delta\), Theorem 1.1 in the case \(2k(1+\varepsilon)>1\) follows.
The proof of Theorem 1.2 follows in exactly the same way, the only difference being the use of Lemma 2.2 instead of Lemma 2.1.
## Acknowledgments
AF was supported by the NSF grant DMS-2101769 and MBM was supported by the NSF grant DMS-2101912.
|
2305.11123 | Satellite Optical Brightness | The apparent brightness of satellites is calculated as a function of
satellite position as seen by a ground-based observer in darkness. Both direct
illumination of the satellite by the Sun as well as indirect illumination due
to reflection from the Earth are included. The reflecting properties of the
satellite components and of the Earth must first be estimated (the
Bidirectional Reflectance Distribution Function, or BRDF). The reflecting
properties of the satellite components can be found directly using lab
measurements or accurately inferred from multiple observations of a satellite
at various solar angles. Integrating over all scattering surfaces leads to the
angular pattern of flux from the satellite. Finally, the apparent brightness of
the satellite as seen by an observer at a given location is calculated as a
function of satellite position. We develop an improved model for reflection of
light from Earth's surface using aircraft data. We find that indirectly
reflected light from Earth's surface contributes significant increases in
apparent satellite brightness. This effect is particularly strong during civil
twilight. We validate our approach by comparing our calculations to multiple
observations of selected Starlink satellites and show significant improvement
on previous satellite brightness models. Similar methodology for predicting
satellite brightness has already informed mitigation strategies for
next-generation Starlink satellites. Measurements of satellite brightness over
a variety of solar angles widens the effectiveness of our approach to virtually
all satellites. We demonstrate that an empirical model in which reflecting
functions of the chassis and the solar panels are fit to observed satellite
data performs very well. This work finds application in satellite design and
operations, and in planning observatory data acquisition and analysis. | Forrest Fankhauser, J. Anthony Tyson, Jacob Askari | 2023-05-18T17:10:56Z | http://arxiv.org/abs/2305.11123v2 | # Satellite Optical Brightness
###### Abstract
The apparent brightness of satellites is calculated as a function of satellite position as seen by a ground-based observer in darkness. Both direct illumination of the satellite by the sun as well as indirect illumination due to reflection from the Earth are included. The reflecting properties of each satellite component and of the Earth must first be estimated (the Bidirectional Reflectance Distribution Function, BRDF). Integrating over all scattering surfaces leads to the angular pattern of the flux reflected from the satellite. Finally, the apparent brightness of the satellite as seen by an observer at a given location is calculated as a function of satellite position. We validate our calculations by comparing to observations of selected Starlink satellites and show significant improvement on previous satellite brightness models. With multiple observations of a satellite at various solar angles and with minimal assumptions regarding the satellite, BRDF model coefficients for each satellite component can be accurately inferred, obviating the need to import direct BRDF lab measurements. This widens the effectiveness of this model approach to virtually all satellites. This work finds application in satellite design and operations, and in planning observatory data acquisition and analysis. Similar methodology for predicting satellite brightness has already informed mitigation strategies for next generation Starlink satellites.
Artificial satellites (68); Night sky brightness (1112); Optical astronomy (1776); Photometry (1234); Astronomical techniques (1684); Astronomy data analysis (1858)
Forrest Fankhauser, J. Anthony Tyson, Jacob Askari
## 1 Introduction
In recent years, numerous large Low Earth Orbit (LEO) satellite constellations have been proposed. There are currently more than 6,000 LEO satellites in operation, a 6-fold increase over just two years. This is expected to increase exponentially over the next decade. The impact on astronomy research Tyson et al. (2020); Hu et al. (2022) and on the night sky environment Venkatesan et al. (2020); Lawrence et al. (2022); Barentine et al. (2023) has been discussed widely. Technical mitigation involves innovation in satellite design, satellite operations, and astronomy data processing and analysis. The science pursued by ground-based wide-field sky surveys such as Rubin Observatory's Legacy Survey of Space and Time (LSST) Ivezic et al. (2019), as well as all other optical observatories, large and small, is impacted by satellite streaks.
After dusk and before dawn, LEO satellites scatter sunlight onto the Earth's surface. This sunlight is both direct and indirect (reflected from Earth). This scattered light can interfere with both casual stargazing and large ground-based observatories. The net effect depends on several variables including: satellite geometry, satellite material properties, satellite orientation, wavelength, satellite location, observatory location, satellite range, and number of satellites. In order to quantify this effect, it is necessary to predict satellite brightness. To make this prediction, we must measure the material properties of satellite surfaces, either directly in the lab
or indirectly using satellite brightness observations. We must also know both the orientations and areas of the satellite's surfaces.
This paper presents techniques for calculating the brightness of satellites seen by observers on the Earth's surface. We consider two sources of light which can be scattered by a satellite. First, there is light directly incident from the sun. Second, we include light scattered from the portion of Earth's surface illuminated by the sun and visible to the satellite. We refer to the latter as \(earthshine\). We treat the sun as a plane-wave source and the Earth as a sphere. Since the geometry and the light sources are relatively simple, we can directly calculate fluxes incident on the satellite. Then, using a simple model for the satellite's reflectance and the position of the satellite in the sky, we calculate the overall satellite brightness. This technique offers a respectable improvement to previous diffuse sphere models of satellites and is computationally efficient. A diagram of the geometry and the light sources is shown in Figure 1.
We model a satellite as a collection of opaque surfaces. The light scattered from each surface is defined by an isotropic Bidirectional Reflectance Distribution Function (BRDF) Greynolds (2015). Physically, this means that a rotation of a surface about its normal vector does not change scattering properties. Even though individual surfaces are isotropic, in most cases the overall effective scatter from the satellite will be anisotropic. The BRDF depends on both the surface's material and surface finish. For example, a smooth metallic surface such as bare aluminum is very specular, but a rough painted surface is mostly diffuse. At this time, we do not consider shadowing between surfaces (mutual shadowing) nor anisotropic BRDFs. If there are complex components on a satellite, a raytracing analysis can be used to determine an 'effective BRDF' for individual components. Raytracing allows the inclusion of shadowing and scatter from multiple bounces. We find that this complexity is not required to achieve good observation correlation for Starlink satellite architecture. If lab measured BRDFs are not available, the BRDF can also be estimated by best-fitting to satellite brightness observations taken over a variety of solar angles. This essentially corresponds to an indirect measurement of the BRDF.
These calculations have been wrapped in a publicly available Python package called Lumos-Sat. Tools for predicting satellite brightness are critical for both constellation operators and observatories. Constellation operators can use Lumos-Sat to include satellite brightness as a design constraint, by quantifying the brightness effects of changing satellite material, geometry, or orientation. Meanwhile, Lumos-Sat lets observatories predict and mitigate the impact of existing satellites on science.
## 2 Light Transport
First, we compute a simplified light transport equation. Since the distances between the sun, the earth, and the satellite are much larger than the scale of a satellite, we can make a variety of simplifying assumptions. Given a distant light source, a surface, and a distant observer our goal is to find the flux scattered from the source onto the observer. For a given geometry, the BRDF is a function of the unit vector to the light source \(\hat{w}_{i}\) and the unit vector to the observer \(\hat{w}_{o}\). The BRDF is defined as follows:
\[\text{BRDF}=f_{r}(\hat{w}_{i},\hat{w}_{o})\equiv\frac{1}{\cos(\phi_{o})\cos( \phi_{i})L_{i}}\frac{\partial L_{o}}{\partial\hat{w}_{i}} \tag{1}\]
Figure 1: Both sunlight and earthshine are scattered by the satellite onto the night side of Earth's surface.
The in-going radiance is \(L_{i}\) and the out-going radiance is \(L_{o}\). \(\phi_{i}\) is defined as the angle between the surface normal \(\hat{n}\) and the vector to the source \(\hat{w}_{i}\). Likewise, \(\phi_{o}\) is defined as the angle between the surface normal \(\hat{n}\) and the vector to the observer \(\hat{w}_{o}\). In this analysis, we consider a very distant point source, which can be treated as a plane wave with flux \(I_{in}\) at the surface. This geometry is shown in Figure 2. We can therefore rearrange Equation 1 to find:
\[\begin{split} L_{o}&=\int f_{r}\cos(\phi_{i})\cos( \phi_{o})L_{i}dw_{i}\\ &=\int f_{r}\cos(\phi_{i})\cos(\phi_{o})AI_{in}\delta(w_{i})dw_{i} \\ &=f_{r}\cos(\phi_{i})\cos(\phi_{o})AI_{in}\end{split} \tag{2}\]
The fraction of flux scattered by the surface from the source to an observer at distance \(d\) is given:
\[\begin{split}& G(\text{source}\rightarrow\text{observer})\\ &\equiv\frac{I_{out}}{I_{in}}=(\hat{w}_{i}\cdot\hat{n})(\hat{w}_ {o}\cdot\hat{n})f_{r}(\hat{w}_{i},\hat{w}_{o})\frac{A}{d^{2}}\end{split} \tag{3}\]
Recall that \(f_{r}\) is the BRDF of the surface. The fraction of light scattered to the observer increases with surface area perpendicular to the source or observer and decreases with distance as an inverse square law. The angular distribution is determined by the BRDF. We will use the light transport equation given in Equation 3 to calculate both the flux of light scattered by the Earth's surface and the flux scattered by the satellite. The next step in our analysis is to find good BRDF models, so that our light transport equation is accurate.
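As an illustration, Equation 3 can be evaluated in just a few lines; the sketch below is our own Python (not the Lumos-Sat interface), where `brdf` stands for any callable implementing \(f_{r}(\hat{w}_{i},\hat{w}_{o})\) and back-facing geometry is clamped to zero.

```python
import numpy as np

def scatter_fraction(w_in, w_out, normal, brdf, area, distance):
    """Fraction of incident flux scattered toward the observer (Equation 3).

    w_in     : unit vector from the surface toward the light source
    w_out    : unit vector from the surface toward the observer
    normal   : unit normal vector of the surface
    brdf     : callable, brdf(w_in, w_out) -> BRDF value [1/sr]
    area     : surface area [m^2]
    distance : range from the surface to the observer [m]
    """
    cos_i = np.dot(w_in, normal)
    cos_o = np.dot(w_out, normal)
    if cos_i <= 0.0 or cos_o <= 0.0:
        return 0.0  # source or observer is behind the surface
    return cos_i * cos_o * brdf(w_in, w_out) * area / distance**2
```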
## 3 BRDF Models
Our brightness calculations will only be as accurate as our BRDF models. We recommend fitting most measured data to a binomial BRDF model. The model parameters are \(b_{ik}\), \(c_{ik}\), and \(d\).
\[\begin{split}\text{log}(\text{BRDF})=\\ &\sum_{k=0}^{n}\bigg{\{}\sum_{i=0}^{m}b_{ik}D^{i}+\frac{1}{2}\sum _{i=l_{1}}^{l_{2}}c_{ik}\log(1+d^{i}D^{2})\bigg{\}}V^{k}\end{split} \tag{4}\]
\[D^{2}=\|\vec{\rho}-\vec{\rho_{0}}\|^{2}\ \ \ \ \ V=\vec{\rho}\cdot\vec{\rho_{0}} \tag{5}\]
The vectors \(\vec{\rho}\) and \(\vec{\rho_{0}}\) are the projection of the outgoing unit vector onto the surface and the projection of the specularly reflected unit vector \(\hat{s}\) onto the surface respectively. Both of these vectors can be written as functions of the incoming vector \(\hat{w}_{i}\), the outgoing vector \(\hat{w}_{o}\), and the surface normal vector \(\hat{n}\):
\[\begin{split}\vec{\rho}&=\hat{w}_{o}-(\hat{w}_{o}\cdot\hat{n})\hat{n}\\ \vec{\rho_{0}}&=\hat{s}-(\hat{s}\cdot\hat{n})\hat{n}\\ \hat{s}&=2(\hat{w}_{i}\cdot\hat{n})\hat{n}-\hat{w}_{i}\end{split} \tag{6}\]
Note that \(\hat{s}\) is the specularly reflected unit vector.
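As a sketch of how Equations 4-6 can be evaluated numerically, the snippet below is our own Python; the coefficient layout, the use of the natural logarithm, and the function name are assumptions, and a fitted model must supply `b`, `c`, `d` and the index bounds `l1`, `l2`.

```python
import numpy as np

def binomial_brdf(w_in, w_out, normal, b, c, d, l1, l2):
    """Evaluate the binomial BRDF of Equations 4-6.

    b : array of shape (m+1, n+1) with coefficients b_ik
    c : array of shape (l2-l1+1, n+1) with coefficients c_ik
    d : scalar model parameter
    """
    # Specular direction and in-plane projections (Equation 6)
    s = 2.0 * np.dot(w_in, normal) * normal - w_in
    rho = w_out - np.dot(w_out, normal) * normal
    rho0 = s - np.dot(s, normal) * normal
    # Scalars of Equation 5
    D2 = np.dot(rho - rho0, rho - rho0)
    D = np.sqrt(D2)
    V = np.dot(rho, rho0)
    # Double sum of Equation 4 (natural log assumed)
    log_brdf = 0.0
    for k in range(b.shape[1]):
        poly = sum(b[i, k] * D**i for i in range(b.shape[0]))
        lobe = 0.5 * sum(c[i - l1, k] * np.log(1.0 + d**i * D2)
                         for i in range(l1, l2 + 1))
        log_brdf += (poly + lobe) * V**k
    return np.exp(log_brdf)
```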
The binomial model is ideal because it enforces physical-realism and has been used extensively in commercial optical analysis Greynolds (2015). Fitting binomial models does require a degree of caution. The number of coefficients should be kept as small as possible to avoid over fitting. Binomial fits should always be reviewed carefully before using them in a calculation. In general however, any BRDF model could be used, such as the Harvey-Shack model Nattinger (2020), Ross-Li model Wanner et al. (1995) or Phong BRDF Phong (1973). BRDFs can also be interpolated from measured data, however, data is usually taken at a small number of incident angles, so interpolation can lead to extrapolation errors or enhance measurement noise. Additionally, directly interpolating in spherical coordinates will not work, since the specular peak of the BRDF shifts. For successful interpolation it is necessary to first make a coordinate transform. We find the best results by interpolating the BRDF as a function of \((\theta_{o},\phi_{o}-\phi_{s})\), where \((\theta_{o},\phi_{o})\) is the outgoing direction of scattered light and \((\theta_{s},\phi_{s})\) is specularly reflected direction. BRDFs can also be interpolated as a function of \(D\), given in Equation 5. When done cautiously, binomial model development will overcome the shortfalls and complexity of interpolation.
The BRDF of each surface on a satellite can be experimentally measured Germer & Asmail (1997) or estimated from a catalog of known material BRDFs Matusik et al. (2003). An effective BRDF for Earth's surface can be fit to data gathered from remote sensors.
Figure 2: The geometry for scattering of light from a point source off of a surface with area \(A\) to an observer at range \(d\).
In particular, we use measurements from NASA's Cloud Absorption Radiometer Gatebe & King (2016). The aircraft's radiometer has one degree angular resolution in 14 spectral bands 340 - 2300 \(nm\). We use the 479 \(nm\) data averaged over hundreds of images, uncorrected for atmospheric absorption. This data is then fit to a Phong BRDF, of the following form:
\[BRDF=\frac{K_{d}}{\pi}+K_{s}\frac{n+2}{2\pi}(\hat{w}_{r}\cdot\hat{w}_{o})^{n} \tag{7}\]
The parameter \(K_{d}\) controls the magnitude of the diffuse component of the BRDF, while \(K_{s}\) controls the magnitude of a specular lobe and \(n\) controls the width of the specular peak. The vectors \(\hat{w}_{r}\) and \(\hat{w}_{o}\) are the specularly reflected unit vector and the outgoing unit vector respectively. We find that the Phong model yields more reliable results when fit to aggregate data than a binomial model.
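A minimal sketch of Equation 7 in Python, together with the fitted Earth-surface parameters reported in Section 6 (the clamping of the specular lobe to the upper hemisphere is our own choice):

```python
import numpy as np

def phong_brdf(w_in, w_out, normal, k_d, k_s, n):
    """Phong BRDF of Equation 7: a diffuse term plus a specular lobe."""
    # Specular reflection of the incoming direction about the surface normal
    w_r = 2.0 * np.dot(w_in, normal) * normal - w_in
    lobe = max(np.dot(w_r, w_out), 0.0) ** n
    return k_d / np.pi + k_s * (n + 2.0) / (2.0 * np.pi) * lobe

# Fitted 'effective BRDF' parameters for Earth's surface (see Section 6)
VEGETATION = dict(k_d=0.53, k_s=0.28, n=7.31)   # CLASIC mission, Oklahoma
OCEAN = dict(k_d=0.48, k_s=0.08, n=16.45)       # CLAMS mission, US East Coast
```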
## 4 Calculations in the satellite-centered frame
Our goal now is to use the equation for the fraction of scattered flux \(G\)(source \(\rightarrow\) observer) given in Equation 3 to calculate the flux scattered by a satellite onto an observer on Earth's surface. We start by introducing a coordinate frame which simplifies this calculation. Then, we find the contribution of light directly scattered by the satellite from the sun. Finally, we include the contribution of light scattered from earthshine.
### The Satellite-Centered Frame
To simplify calculations, we'd like to use a reference frame which reduces dependent variables as much as possible. Our choice of frame is called the satellite-centered frame2. The z-axis points along geodetic zenith. The y-axis is in the plane defined by the center of the earth, the sun, and the satellite and is perpendicular to the z-axis. The angle between the y-axis and the vector from the center of the earth to the sun must be less than 180\({}^{\circ}\). The x-axis is defined by the right-hand rule. This frame is shown in Figure 3.
Footnote 2: The satellite-centered frame should not be confused with a satellite body frame, which is fixed to the satellite’s chassis and has an origin at the satellite’s center of mass
In the satellite-centered frame, the flux seen by an observer depends only on the angle of the satellite past terminator \(\alpha\), the vector from the satellite to the observer \(\vec{v}\), the radius of Earth \(R_{E}\) and the geodetic height of the satellite \(h\).
### Flux from the Sun
The fraction of flux scattered by a single surface has been derived in Equation 3. We simply sum over all \(N_{s}\) surfaces in a satellite to arrive at:
\[\begin{split}& I_{\rm observer}=I_{\rm sun}\sum_{s=1}^{N_{s}}G_{s}( \text{sun}\rightarrow\text{observer})\\ & G_{s}(\text{sun}\rightarrow\text{observer})=\frac{A_{s}f_{s}( \hat{v}_{\rm sat\rightarrowsun},\hat{v}_{\rm sat\rightarrowobs})\mathcal{N}_{ s}}{\|\vec{x}_{\rm obs}-\vec{x}_{\rm sat}\|^{2}}\\ &\mathcal{N}_{s}=(\hat{n}_{s}\cdot\hat{v}_{\rm sat\rightarrowsun })(\hat{n}_{s}\cdot\hat{v}_{\rm sat\rightarrowobs})\end{split} \tag{8}\]
The area, BRDF, and normal of satellite surface \(s\) are \(A_{s}\), \(f_{s}\), and \(\hat{n}_{s}\) respectively. The distance from the satellite to the observer is \(\|\vec{x}_{\rm obs}-\vec{x}_{\rm sat}\|\). The unit vector from the satellite to the sun is \(\hat{v}_{\rm sat\rightarrowsun}\) and the unit vector from the satellite to the observer is \(\hat{v}_{\rm sat\rightarrowobs}\).
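A sketch of Equation 8, repeating the per-surface factor of Equation 3 for each element of a simple surface list (the list layout and the function name are our own):

```python
import numpy as np

def sun_flux_at_observer(I_sun, surfaces, v_sat_to_sun, v_sat_to_obs, distance):
    """Flux at the observer from sunlight scattered by the satellite (Equation 8).

    surfaces : iterable of (area, normal, brdf) describing the satellite,
               where brdf(w_in, w_out) returns the surface BRDF [1/sr]
    """
    total = 0.0
    for area, normal, brdf in surfaces:
        cos_i = max(np.dot(v_sat_to_sun, normal), 0.0)  # shadowed face contributes nothing
        cos_o = max(np.dot(v_sat_to_obs, normal), 0.0)
        total += cos_i * cos_o * brdf(v_sat_to_sun, v_sat_to_obs) * area / distance**2
    return I_sun * total
```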
### Flux from Earthshine
The flux seen by the observer caused by earthshine can be calculated similarly. Light is scattered first by the Earth's surface, then by the satellite. The flux scattered by a single satellite surface is calculated by integrating over the portion of earth's surface \(E\) which is illuminated by the sun and is visible to the satellite. This is then summed over the number of satellite surfaces \(N_{s}\). We find the flux seen by an observer due to earthshine:
\[\begin{split}& I_{\rm observer}=\\ &\quad I_{\rm sun}\sum_{s=0}^{N_{s}}\iint\limits_{E}G_{s}(\partial A \rightarrow\text{obs})\cdot\partial G(\text{sun}\rightarrow\text{sat})\end{split} \tag{9}\]
Figure 3: The geometry of the satellite-centered frame. \(\hat{x}\) is out of the page.
\(\partial A\) is a differential area of Earth's surface. \(G_{s}\) is the fraction of earthshine flux scattered from \(\partial A\) to the observer by a surface. \(\partial G\) is the differential fraction of sun flux scattered from the sun to the satellite by \(\partial A\). We can approximate Equation 9 by discretizing the integral. This yields:
\[I_{\rm observer}=\\ I_{\rm sun}\sum_{s=0}^{N_{s}}\sum_{p=0}^{N_{p}}G_{s}(\Delta A_{p} \rightarrow\rm obs)\cdot\Delta G_{p}(\rm sun\rightarrow sat) \tag{10}\]
We refer to each area element on the Earth's surface \(\Delta A_{p}\) as a pixel. As the number of pixels on the Earth's visible and illuminated surface, \(N_{p}\), goes to infinity, Equation 9 and Equation 10 become equivalent. The fraction of earthshine scattered to the observer is then given by:
\[G_{s}(\Delta A_{p}\rightarrow\rm obs)=\frac{A_{s}f_{s}(\hat{v}_{\rm sat \rightarrowpixel},\hat{v}_{\rm sat\rightarrowobs})\mathcal{N}_{s}}{\|\vec{x} _{\rm obs}-\vec{x}_{\rm sat}\|^{2}} \tag{11}\]
Recall again that the area, BRDF, and normal of satellite surface \(s\) are \(A_{s}\), \(f_{s}\), and \(\hat{n}_{s}\) respectively. The distance from the satellite to the observer is \(\|\vec{x}_{\rm obs}-\vec{x}_{\rm sat}\|\). The unit vector from the satellite to the pixel on the Earth's surface is \(\hat{v}_{\rm sat\rightarrowpixel}\) and the unit vector from the satellite to the observer is \(\hat{v}_{\rm sat\rightarrowobs}\).
The fraction of sun flux scattered from the sun to the satellite by a differential area of Earth's surface is:
\[\Delta G_{p}(\rm sun\rightarrow sat)=\frac{\Delta A_{p}f_{p}( \hat{v}_{\rm sun\rightarrowpixel},\hat{v}_{\rm pixel\rightarrow sat}) \mathcal{N}_{p}}{\|\vec{x}_{\rm sat}-\vec{x}_{\rm pixel}\|^{2}} \tag{12}\] \[\mathcal{N}_{p}=(\hat{n}_{p}\cdot\hat{v}_{\rm pixel\rightarrow sun })(\hat{n}_{p}\cdot\hat{v}_{\rm pixel\rightarrow sat})\]
The area, BRDF, and normal of a pixel \(p\) on the Earth's surface are \(\Delta A_{p}\), \(f_{p}\), and \(\hat{n}_{p}\) respectively. The distance from the pixel to the satellite is \(\|\vec{x}_{\rm sat}-\vec{x}_{\rm pixel}\|\). The unit vector from the pixel to the sun is \(\hat{v}_{\rm pixel\rightarrow sun}\) and the unit vector from the pixel to the satellite is \(\hat{v}_{\rm pixel\rightarrow sat}\).
### Discretization of Earth's Surface
It is now necessary to discretize the portion of Earth which is visible to the satellite and illuminated by the sun. At first glance, using standard spherical coordinates seems like the easiest solution. Unfortunately, this results in a discretization which is heavily weighted at the poles of the Earth. We instead propose the following coordinate system:
\[x =z\tan\psi \tag{13}\] \[y =z\tan\Omega\] \[z =\frac{R_{E}}{\sqrt{1+\tan^{2}\psi+\tan^{2}\Omega}}\]
The variables \(\psi\) and \(\Omega\) represent angle-off-plane and angle-on-plane respectively, where the plane is defined by \(\hat{y}\) and \(\hat{z}\). The radius of the Earth is given as \(R_{E}\).
Using this coordinate system results in pixels which have much more even spacing. Figure 4 shows 400-point meshes of a quarter-sphere. We can see that using standard spherical coordinates results in 'bunching up' at the poles.
In order to calculate the flux we need to know the area \(\Delta A_{p}\) of each pixel \(p\) in the mesh. This is approximated using the Jacobian determinant as follows:
\[\Delta A_{p}=\frac{\partial(x,y,z)}{\partial(\psi,\Omega,R_{E})}\Delta\psi \Delta\Omega \tag{14}\]
Finally, we must only include pixels which are both visible to the satellite and illuminated by the sun. Consider a pixel at the point \((x,y,z)\), measured in the satellite-centered frame. The satellite is visible to a pixel if the following is true:
\[\frac{R_{E}}{R_{E}+h}<\frac{z}{R_{E}} \tag{15}\]
A pixel is illuminated by the sun if \(y>0\). Using Equation 13, these two constraints can be related back to the angle-off-plane and the angle-on-plane, \((\psi,\Omega)\).
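The discretization of Equations 13-15 can be sketched as follows (our own Python; the closed-form pixel area is our own evaluation of the Jacobian in Equation 14, and the grid sizes are arbitrary defaults):

```python
import numpy as np

def earth_pixels(h, n_psi=60, n_omega=60, R_E=6.371e6):
    """Pixels of Earth's surface that are visible to the satellite and sunlit.

    h : geodetic height of the satellite [m]
    Returns pixel centres (N, 3) [m] and areas (N,) [m^2] in the
    satellite-centered frame.
    """
    # Cell-centred grids in angle-off-plane (psi) and angle-on-plane (omega)
    d_psi, d_omega = np.pi / n_psi, np.pi / n_omega
    psi = -np.pi / 2 + (np.arange(n_psi) + 0.5) * d_psi
    omega = -np.pi / 2 + (np.arange(n_omega) + 0.5) * d_omega
    psi, omega = np.meshgrid(psi, omega, indexing="ij")

    t, u = np.tan(psi), np.tan(omega)
    z = R_E / np.sqrt(1.0 + t**2 + u**2)   # Equation 13
    x, y = z * t, z * u

    # Pixel area: Jacobian of Equation 14 evaluated in closed form
    area = (R_E**2 * d_psi * d_omega
            / (np.cos(psi)**2 * np.cos(omega)**2 * (1.0 + t**2 + u**2)**1.5))

    visible = z / R_E > R_E / (R_E + h)    # Equation 15
    sunlit = y > 0.0                       # pixel is illuminated by the sun
    keep = visible & sunlit
    centres = np.stack([x[keep], y[keep], z[keep]], axis=-1)
    return centres, area[keep]
```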
In practice, we find that discretization causes some noise in our brightness calculations. The amplitude of the noise decreases with number of pixels and the frequency increases. We recommend applying some smoothing to results for calculations which include earthshine.
### Converting Flux to AB Magnitude
Figure 4: We compare 400 pixel discretizations of a quarter-sphere, representing a portion of Earth's surface. Our coordinate system creates a more evenly distributed discretization.
It is useful to convert from incident flux to AB magnitude. This allows for comparison between satellite brightness and the brightness of celestial objects. This conversion is simply:
\[\text{AB magnitude}=-2.5\log_{10}\left(\frac{I}{f}\right)-56.1 \tag{16}\]
The flux incident on the observer from the satellite is \(I\) in units of W/m\({}^{2}\) and \(f\) is the frequency of the light in Hz. AB magnitude is defined for a flat spectrum, such as a very hot star. The apparent AB magnitude in a given spectral filter can be found by integrating the flux over the filter's bandpass.
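A one-line converter implementing Equation 16 (the default frequency, corresponding to roughly 500 nm, is our own assumption):

```python
import numpy as np

def flux_to_ab_magnitude(flux, frequency=6.0e14):
    """Convert an incident flux [W/m^2] at a frequency [Hz] to AB magnitude (Equation 16)."""
    return -2.5 * np.log10(flux / frequency) - 56.1
```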
### Other Light Sources
Although it is beyond the scope of this paper to include them in our simulations, we would also like to offer a 'back-of-the-envelope' calculation for the brightness contributions of other light sources. These potentially include celestial objects such as stars, planets, and the Moon. We can modify Equation 3 to estimate the flux scattered by a surface from a source to an observer:
\[I_{observer}=I_{source}\frac{\cos\phi_{i}\cdot\cos\phi_{o}\cdot BRDF\cdot A}{ d^{2}} \tag{17}\]
Recall that \(\phi_{i}\) is the angle between the surface normal and the source and \(\phi_{o}\) the angle between the surface normal and the observer, \(A\) is the area of the surface, and \(d\) is the range of the satellite. To find an order-of-magnitude calculation for the worst case scenario, \(\cos\phi_{i}\approx\cos\phi_{o}\approx 1\), \(A\approx 1\,m^{2}\) and \(d\approx 250\,km\) (very low Earth orbit). We assume light from our celestial source is very specularly reflected, so that \(BRDF\approx 10^{4}\). We can then use Equation 16 to convert Equation 17 to AB magnitude and find:
\[(\text{AB mag})_{observer}\sim(\text{AB mag})_{source}+17 \tag{18}\]
From this, we see that in a worst case scenario, light scattered from a celestial source onto an observer will be 17 AB magnitude dimmer than the incident light from the source. This means that light from the full Moon could potentially cause brightness up to 4 AB magnitude and Venus up to 12 AB magnitude. The brightest stars like Vega could cause satellite brightness of only around 17 AB magnitude. Using data from NASA's Visible Infrared Imaging Radiometer Suite (VIIRS) Elvidge et al. (2017), we estimate that city lights cause brightness of roughly 15 AB magnitude. Since LEO satellites are not point sources and move quickly across the focal plane of a telescope, it is unlikely that light from stars or city lights will be detectable by observatories like LSST.
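A quick numeric check of Equations 17 and 18 under the worst-case assumptions above (the source magnitudes are the approximate values quoted in the text):

```python
import numpy as np

# BRDF ~ 1e4 sr^-1, A ~ 1 m^2, d ~ 250 km
delta_mag = -2.5 * np.log10(1.0e4 * 1.0 / 250.0e3**2)
print(round(delta_mag, 1))   # ~17.0 magnitudes fainter than the source

for name, m_source in [("full Moon", -12.7), ("Venus", -4.6), ("Vega", 0.0)]:
    print(name, round(m_source + delta_mag, 1))   # ~4.3, ~12.4, ~17.0
```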
For a typical LEO satellite at 500 km, the motion across the focal plane is about 0.5 degrees per second. This means the effective exposure time on a 0.7 arcsecond PSF footprint is just 0.5 milliseconds, independent of the camera exposure time. For LSST, a 0.7 arcsecond PSF has 50 pixels. This means the peak LEO satellite trail surface brightness of a 15 AB magnitude satellite is only 1.8 electrons per pixel. This is buried in the night sky noise from a 15 second LSST exposure.
It is outside the scope of this paper, but future work in satellite brightness modeling should seek to incorporate incident light from the Moon and possibly Venus.
## 5 Calculations in the Observer Frame
The satellite-centered frame is ideal for calculations, but we need to know what brightness a ground-based observer will see. We use the horizontal coordinate system shown in Figure 5. Note that \(\hat{z}\) corresponds to an altitude angle of 90\({}^{\circ}\). Given the position of an observer on Earth and the position of a satellite in the sky, our goal is to transform variables from the satellite-centered frame \(S\) to the observer frame \(O\).3 We are given a unit vector from the observer to the satellite \([\hat{v}_{\text{sat}}]_{O}\) and the satellite's geodetic height \(h\). The unit vector from the observer to the satellite as well as the satellite's height can either be measured by an observer or calculated using the satellite's orbital elements. We also know the unit vector towards the sun \([\hat{v}_{\text{sun}}]_{O}\), which can be calculated using the latitude and longitude of the observer and the time of observation. The basis vectors in the observer's frame are:
Footnote 3: Note that a similar process to the one shown below can be used to convert to or from the satellite-centered frame. In particular, satellite operators can use these transformations to convert surface normal vectors from a satellite body frame to our satellite centered frame. This allows brightness calculations to be generated in simulations or in real time using data from onboard satellite sensors
\[[\hat{x}]_{O} =(1,0,0) \tag{19}\] \[[\hat{y}]_{O} =(0,1,0)\] \[[\hat{z}]_{O} =(0,0,1)\]
Figure 5: Geometry as seen in the observer's frame \(O\).
Moving forward, we will drop the basis notation for vectors in the observer's frame, \(O\). Quantities which are measured in the satellite-centered frame will be marked with an \(S\). We first need to find the basis vectors of the satellite-centered frame, as measured in the observer's frame.
From geometric inspection, we find the following:
\[\begin{split}\phi&=\arccos\big{(}\hat{z}\cdot\hat{v}_{ \text{sat}}\big{)}\\ &\qquad-\arcsin\bigg{(}\|\hat{v}_{\text{sat}}\times\hat{z}\| \cdot\frac{R_{E}}{R_{E}+h}\bigg{)}\\ \theta&=\arctan_{2}(\hat{y}\cdot\hat{v}_{\text{sat} },\hat{x}\cdot\hat{v}_{\text{sat}})\\ d^{2}&=R_{E}^{2}+(R_{E}+h)^{2}-2R_{E}(R_{E}+h) \cos\phi\end{split} \tag{20}\]
Remember that all vectors are in the observer's frame. The range of the satellite is \(d\). The angle \(\phi\) is the angular separation between the z-axis of the satellite-centered frame and the z-axis of the observer's frame. Angle \(\theta\) is the rotation of the satellite-centered frame's z-axis as measured from the x-axis of the observer frame. Using these values, we can find an expression for \([\hat{z}_{S}]_{O}\), which is the \(\hat{z}\) of the satellite-centered basis measured in the observer frame:
\[[\hat{z}_{S}]_{O}=(\sin\phi\cos\theta)\hat{x}+(\sin\phi\sin\theta)\hat{y}+( \cos\phi)\hat{z} \tag{23}\]
By definition, \(\hat{y}_{S}\) lies in the plane containing the vector towards the sun and \(\hat{z}_{S}\). Additionally, \(\hat{y}_{S}\) is orthogonal to \(\hat{z}_{S}\) and has unit length. This gives three constraints which fully define \(\hat{y}_{S}\).
\[\hat{y}_{S}=a\hat{z}_{S}+b\hat{v}_{\text{sun}}\qquad\hat{y}_{S}\cdot\hat{z}_{S }=0\qquad\|\hat{y}_{S}\|=1 \tag{24}\]
We can solve the three constraints given in Equation 24 to find \(a\) and \(b\). Physically, \(a\) and \(b\) are the components of \(\hat{y}_{S}\) in the \(\hat{z}_{S}\) direction and the \(\hat{v}_{\text{sun}}\) direction respectively:
\[b=\frac{1}{\sqrt{1-(\hat{z}_{S}\cdot\hat{v}_{\text{sun}})^{2}}}\qquad\quad a= -b(\hat{z}_{S}\cdot\hat{v}_{\text{sun}}) \tag{25}\]
Finally, \(\hat{x}_{S}\) is defined by the right hand rule:
\[\hat{x}_{S}=\hat{y}_{S}\times\hat{z}_{S} \tag{26}\]
The transform from the observer reference frame to the satellite-centered frame is:
\[\begin{split} T&=T_{\text{observer}\to \text{satellite}}\\ &=T_{\text{satellite}\to\text{observer}}^{-1}\\ &=[\hat{x}_{S},\hat{y}_{S},\hat{z}_{S}]^{-1}\end{split} \tag{27}\]
We can then find the quantities we need to do calculations in the satellite-centered frame. The vector from the satellite to the observer in the satellite-centered frame is:
\[[\hat{v}_{\text{sat}\to\text{obs}}]_{S}=(R_{E}+h)\hat{z}-d(T\cdot[\hat{v}_{ \text{sat}}]_{O}) \tag{28}\]
Second, the angle of the satellite past the terminator is given by:
\[\alpha=-\arcsin(\hat{z}\cdot T\cdot[\hat{v}_{\text{sun}}]_{O}) \tag{29}\]
Recall that \(R_{E}\) is the radius of the earth, \(h\) is the geodetic height of the satellite, and \(d\) is the satellite's range. \([\hat{v}_{\text{sat}}]_{O}\) is the vector from the observer to the satellite as measured in the observer's frame. \([\hat{v}_{\text{sun}}]_{O}\) is the vector from the observer to the sun as measured in the observer's frame.
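The chain of Equations 20-27 and 29 can be sketched compactly (our own Python; the function name is arbitrary, and the degenerate case in which the satellite lies exactly along the sun direction is not handled):

```python
import numpy as np

def observer_to_satellite_frame(v_sat, v_sun, h, R_E=6.371e6):
    """Satellite-centered-frame quantities from observer-frame inputs.

    v_sat : unit vector from the observer to the satellite (observer frame)
    v_sun : unit vector from the observer to the sun (observer frame)
    h     : geodetic height of the satellite [m]
    Returns the rotation T (observer frame -> satellite-centered frame),
    the satellite range d [m], and the angle past terminator alpha [rad].
    """
    z_hat = np.array([0.0, 0.0, 1.0])
    # Equation 20: central angle phi, azimuthal angle theta, and range d
    phi = (np.arccos(np.dot(z_hat, v_sat))
           - np.arcsin(np.linalg.norm(np.cross(v_sat, z_hat)) * R_E / (R_E + h)))
    theta = np.arctan2(v_sat[1], v_sat[0])
    d = np.sqrt(R_E**2 + (R_E + h)**2 - 2.0 * R_E * (R_E + h) * np.cos(phi))

    # Equation 23: z-axis of the satellite-centered frame, in the observer frame
    z_s = np.array([np.sin(phi) * np.cos(theta),
                    np.sin(phi) * np.sin(theta),
                    np.cos(phi)])
    # Equations 24-26: remaining basis vectors
    b = 1.0 / np.sqrt(1.0 - np.dot(z_s, v_sun)**2)
    a = -b * np.dot(z_s, v_sun)
    y_s = a * z_s + b * v_sun
    x_s = np.cross(y_s, z_s)
    # Equation 27: change-of-basis matrix
    T = np.linalg.inv(np.column_stack([x_s, y_s, z_s]))
    # Equation 29: angle of the satellite past the terminator
    alpha = -np.arcsin(np.dot(z_hat, T @ v_sun))
    return T, d, alpha
```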
If we are interested in a particular satellite, we must know that satellite's position in the sky relative to an observer and its geodetic height. These quantities can be found using a satellite's orbital elements. SpaceX and other constellation operators publish traditional TLEs (Two-Line Element Sets) on Space-Track.org. For Starlink, more accurate supplemental TLEs are published on celestrak.org. These supplemental TLEs are fit to Starlink propagated ephemerides and covariances which are available from 'Public Files' on Space-Track.org.
Once a satellite's orbital parameters are known from a TLE, its position at a past or future time can be calculated using a Simplified General Perturbations algorithm. In particular, we use the SGP4 Python package. A satellite's altitude and azimuth as seen from a given location on Earth are then found using tools provided by the Astropy software Astropy Collaboration et al. (2022).
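As a rough illustration of this step (not the exact pipeline used in our software), the sketch below propagates one TLE with the sgp4 package and converts the resulting TEME position to altitude and azimuth with Astropy. The direct ITRS-to-AltAz transform assumes a reasonably recent Astropy release, and the observer coordinates in the commented example are approximate values for Mount Lemmon.

```python
import numpy as np
from astropy import units as u
from astropy.time import Time
from astropy.coordinates import TEME, ITRS, AltAz, EarthLocation, CartesianRepresentation
from sgp4.api import Satrec

def tle_to_altaz(line1, line2, when_utc, site):
    """Propagate a TLE with SGP4 and return the satellite's altitude and azimuth in degrees.

    line1, line2 : the two lines of a TLE (e.g. from Space-Track or Celestrak)
    when_utc     : ISO time string, e.g. "2023-06-01T11:30:00"
    site         : astropy EarthLocation of the observer
    """
    sat = Satrec.twoline2rv(line1, line2)
    t = Time(when_utc, scale="utc")
    err, r, _ = sat.sgp4(t.jd1, t.jd2)  # position in the TEME frame, km
    if err != 0:
        raise RuntimeError(f"SGP4 propagation error code {err}")
    teme = TEME(CartesianRepresentation(np.array(r) * u.km), obstime=t)
    itrs = teme.transform_to(ITRS(obstime=t))
    altaz = itrs.transform_to(AltAz(obstime=t, location=site))
    return altaz.alt.deg, altaz.az.deg

# Example usage (supply real TLE lines; observer coordinates are approximate):
# site = EarthLocation(lat=32.44 * u.deg, lon=-110.79 * u.deg, height=2790 * u.m)
# alt, az = tle_to_altaz(line1, line2, "2023-06-01T11:30:00", site)
```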
## 6 Model Validation
In order to validate our calculations, we create a simple brightness model for an existing satellite - selected configurations of Starlink V1.5 - and compare our AB magnitude calculations to observations. We show that the BRDFs for each satellite surface can either be measured in a laboratory or found by best-fitting to brightness measurements. We also compare our calculations and observations to the previously standard satellite brightness model - a diffuse sphere.
### Starlink v1.5 Brightness Model
The two largest surfaces on Starlink v1.5 are the solar array and the chassis. The solar array has an area of 22.68 \(m^{2}\) and the chassis nadir has an area of 3.64 \(m^{2}\). SpaceX has provided BRDF data for each surface and information about the normal vectors of these surfaces in the brightness regime of interest. Using this information we can create a brightness model for a subset of the Starlink v1.5 satellites. The Starlink v1.5
satellites have gone through a variety of design changes to reduce brightness, so we only consider the Starlink v1.5 satellites with the latest brightness configuration. These satellites have a reflective sticker on the chassis nadir and dark pigmented backsheet on the solar arrays SpaceX (2022). In nominal operations, the chassis points directly nadir and the solar array is perpendicular to the chassis and in the direction of the sun.
SpaceX has also provided experimentally measured BRDF data for each of these surfaces. This SpaceX contracted BRDF data was measured by Scatterworks using an SS4 scatterometer operating at a wavelength of 532 nanometers. The data is taken at an angular resolution of \(1^{\circ}\). This BRDF data is fit to binomial models, using the methods described in Section 3. The data and the fits are shown in Figure 6.
To find a representative BRDF for Earth's surface, we fit Phong models to CARs data, a technique also described in Section 3. We use data from two missions. First, the CLASIC mission which gathered BRDF data for vegetation over Oklahoma. Using this data to construct an 'effective BRDF' for generic vegetation, we find parameters \(K_{d}=0.53\), \(K_{s}=0.28\), and \(n=7.31\). Second, the CLAMS mission which gathered BRDF data for the ocean off the East Coast of the United States. For ocean water, we find \(K_{d}=0.48\), \(K_{s}=0.08\), and \(n=16.45\).
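For reference, a Phong BRDF of the kind fit here can be evaluated with a few lines of NumPy. The sketch below uses one common normalized-Phong convention; the exact parameterization we fit is the one described in Section 3 and may differ in normalization.

```python
import numpy as np

def phong_brdf(w_in, w_out, normal, K_d, K_s, n):
    """Normalized Phong BRDF (one common convention), in 1/sr.

    w_in   : unit vector from the surface toward the light source
    w_out  : unit vector from the surface toward the viewer
    normal : unit surface normal
    """
    # Mirror direction of the incoming light about the surface normal
    w_spec = 2.0 * np.dot(w_in, normal) * normal - w_in
    cos_alpha = max(float(np.dot(w_spec, w_out)), 0.0)
    return K_d / np.pi + K_s * (n + 2.0) / (2.0 * np.pi) * cos_alpha ** n

# Vegetation fit quoted above (CLASIC): K_d = 0.53, K_s = 0.28, n = 7.31
zenith = np.array([0.0, 0.0, 1.0])
view = np.array([np.sin(0.5), 0.0, np.cos(0.5)])  # viewer about 29 degrees from zenith
print(phong_brdf(zenith, view, zenith, 0.53, 0.28, 7.31))
```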
We can then feed our knowledge of the satellite's primary surfaces and BRDF models for the Earth's surface into our software and calculate satellite brightness.
### Starlink v1.5 Model without BRDF Data
Figure 6: Measured data and BRDF best fits for surfaces of Starlink v1.5 Satellite. Data is shown at four incident angles. The in-going and out-going directions are given in spherical coordinates \((\theta_{i},\phi_{i})\) and \((\theta_{o},\phi_{o})\) respectively, where \(\theta\) is the azimuthal angle and \(\phi\) is the zenith angle. All BRDF data is ‘in-plane’, so \(\theta_{i}=180^{\circ}\) and \(\theta_{o}=0^{\circ}\). Dashed grey lines show the BRDF at intermediate angles, spaced \(10\) degrees apart. The solar array fit misses the specular peak, but since there is no orientation where light is specularly reflected by the solar array onto observers, this is not a problem.
For many satellites, BRDF measurements for primary surfaces may not be available. Not only do BRDF measurements require specialized equipment, but collaboration between some satellite operators and astronomers may also prove difficult. In this case, we need a different approach to find an accurate satellite brightness model. We assume minimal information about the satellite: just the pointing directions of its primary surfaces. For Starlink v1.5 these surfaces are the chassis, which points directly nadir, and the solar array, which is perpendicular to the chassis and towards the sun. We then assign each of these two surfaces a Phong BRDF, which has three free parameters. The areas of each surface are set to \(1\)\(m^{2}\) - this means that the best-fit albedo of each surface may be greater than \(1\). In total, our satellite model has \(6\) unknown parameters. These parameters can be found by fitting to observed Starlink brightness data. As long as there are a variety of observations over many different solar angles, this makes for an accurate brightness model. This technique essentially uses brightness observations to indirectly measure the overall effective BRDF of the satellite. The satellite's BRDF is then decomposed into the individual surface BRDFs by best-fitting. The ability to find effective BRDFs for satellite surfaces from only ground-based brightness data is extremely important to astronomers. It makes our brightness modeling approach viable for most satellites.
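The fitting step can be set up with standard tools. The following is a deliberately simplified sketch, assuming a hypothetical list of observation geometries (unit vectors toward the sun and observer, facet normals, and range in meters) and a single-bounce sunlight term for each of the two facets; it is not the model used to produce the fits reported below.

```python
import numpy as np
from scipy.optimize import least_squares

def facet_flux(K_d, K_s, n, w_sun, w_obs, normal, area=1.0):
    """Sunlight scattered toward the observer by one flat facet (single bounce),
    per unit solar flux and before the 1/range^2 falloff."""
    cos_i = max(float(np.dot(w_sun, normal)), 0.0)
    cos_o = max(float(np.dot(w_obs, normal)), 0.0)
    w_spec = 2.0 * np.dot(w_sun, normal) * normal - w_sun
    brdf = K_d / np.pi + K_s * (n + 2.0) / (2.0 * np.pi) * max(float(np.dot(w_spec, w_obs)), 0.0) ** n
    return area * brdf * cos_i * cos_o

def model_mags(params, geometries, m_sun=-26.74):
    """Magnitudes of a two-facet (chassis + solar array) satellite model."""
    kd1, ks1, n1, kd2, ks2, n2 = params
    mags = []
    for g in geometries:  # g: dict with 'w_sun', 'w_obs', 'n_chassis', 'n_array', 'range'
        flux = (facet_flux(kd1, ks1, n1, g["w_sun"], g["w_obs"], g["n_chassis"])
                + facet_flux(kd2, ks2, n2, g["w_sun"], g["w_obs"], g["n_array"])) / g["range"] ** 2
        mags.append(m_sun - 2.5 * np.log10(max(flux, 1e-30)))
    return np.array(mags)

def fit_surface_brdfs(geometries, observed_mags):
    """Best-fit the six Phong parameters to observed magnitudes."""
    x0 = np.array([0.3, 0.3, 5.0, 0.3, 0.3, 5.0])
    bounds = (np.zeros(6), np.array([5.0, 5.0, 100.0, 5.0, 5.0, 100.0]))
    result = least_squares(lambda p: model_mags(p, geometries) - observed_mags, x0, bounds=bounds)
    return result.x
```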
### Diffuse Sphere Model
We can also compare our brightness model to the commonly used diffuse sphere model. For this model, the flux of light scattered by the satellite and incident on an observer is simply:
\[I=\frac{I_{sun}}{d^{2}}\frac{2}{3\pi^{2}}A\rho\big{(}(\pi-\phi)\cos\phi+\sin \phi\big{)} \tag{30}\]
Here, \(\phi\) is the solar phase angle - the angle between the observer, the satellite, and the sun. The effective area and effective albedo of the satellite are \(A\) and \(\rho\) respectively. The satellite's range is \(d\). The flux of the sun incident on the satellite is \(I_{sun}\). There is one free parameter, the albedo-area product \(\rho A\), which must be best fit to satellite brightness observations. While this model is extremely simple, it has little basis in reality and does not correlate well with brightness measurements.
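For a sense of scale, Equation 30 can be converted to an apparent magnitude by taking \(-2.5\log_{10}\) of the flux ratio relative to the Sun. The short sketch below does this, assuming an apparent solar magnitude of about \(-26.7\); with the best-fit \(\rho A=0.65\)\(m^{2}\) reported later in this section, a 550 km range, and a \(90\arcdeg\) phase angle, it gives roughly 5th to 6th magnitude.

```python
import numpy as np

def diffuse_sphere_mag(rho_A, d, phase_angle, m_sun=-26.74):
    """Apparent magnitude of a diffuse sphere (Eq. 30).

    rho_A       : albedo-area product in m^2
    d           : satellite range in m
    phase_angle : solar phase angle in radians
    m_sun       : assumed apparent magnitude of the Sun
    """
    ratio = (rho_A / d**2) * (2.0 / (3.0 * np.pi**2)) * (
        (np.pi - phase_angle) * np.cos(phase_angle) + np.sin(phase_angle))
    return m_sun - 2.5 * np.log10(ratio)

# Best-fit rho*A = 0.65 m^2, 550 km range, 90 degree phase angle -> about magnitude 5.4
print(diffuse_sphere_mag(0.65, 550e3, np.pi / 2))
```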
### Brightness Distributions on the Night Sky
We run our calculations for sun altitudes ranging from \(-27\arcdeg\) to \(-3\arcdeg\). The sun's azimuth is fixed at \(90\arcdeg\). This corresponds to a period of time prior to sunrise for an observer on the Equator. Satellite altitude is fixed at 550 kilometers4. We use our new brightness models both with and without the earthshine contributions. For calculations with earthshine, we use an earthshine discretization of 151 x 151 pixels. The diffuse sphere calculation is also shown for comparison. Results are shown in Figures 8 - 11. Note that these plots show the brightness at which a Starlink v1.5 satellite would appear if it were at a given point on the sky. The center of each image corresponds to a satellite directly at zenith. The outer edge is the horizon. Cardinal directions are marked - North is towards the top of the image and East is towards the left side. The grid lines show altitude increments of \(10\arcdeg\) and azimuth increments of \(90\arcdeg\).
Footnote 4: Typical Starlink v1.5 orbital altitude per FCC filings.
Our brightness modeling shows significantly different patterns over the night sky compared to the diffuse sphere model. The diffuse sphere model shows little change in brightness with respect to sun altitude or satellite position. Additionally, the diffuse model predicts that the brightest satellites will be in the western sky throughout dawn. On the other hand, our models show how the location of peak satellite brightness changes from the eastern to the western sky as the sun rises. Earlier in the morning, bright satellites in the eastern sky are caused by light forward-scattering off of the specular chassis. Near dawn, bright satellites in the western sky are caused by light back-scattered by the solar array. We see that earthshine causes additional brightness in the eastern sky, particularly during civil and nautical twilight.
### Comparison to Observed Brightness
Using SpaceX contracted data gathered at Mount Lemmon in Arizona by the Pomenis Observatory Pearce et al. (2018), we can compare our brightness calculations to actual observations of satellite brightness. A scatter plot of these observations is shown in Figure 7. For this correlation, we once again only use observations of Starlink satellites with a pigmented solar array backsheet and reflective chassis sticker. We also exclude observations of satellites which do not have their solar array and chassis in the orientation we used for brightness modeling. The majority of these excluded observations are from satellites which were raising or lowering their orbit at the time of observation. Starlink satellites performing orbital maneuvers use a distinct brightness mitigation orientation as much as possible. The albedo-area product in the diffuse sphere model is found using a best-fit to minimize RMS error between observations and the model. For these observations we find \(\rho A=0.65\). The parameters for our model without BRDF lab measurements are similarly found using a best-fit to observations. For the chassis, we find Phong parameters \(K_{d}=0.34\), \(K_{s}=0.40\), and \(n=8.9\). For the solar array, we find \(K_{d}=0.15\), \(K_{s}=0.25\), and \(n=0.26\). On the other hand, our model with BRDF lab measurements uses no prior knowledge of brightness observations. Rather, it is modeled using only the geometry and BRDF data provided by SpaceX. Figure 12 shows a comparison between the correlation of our models5 with observation and the correlation of the diffuse sphere model.
Figure 7: Plot of measured Starlink v1.5 brightness vs. solar phase angle. At low solar phase angles, brightness is dominated by backscatter from the solar array. At intermediate phase angles, light is diffusely scattered by the chassis. At high phase angles, light is more specularly reflected by the chassis. Note that 2nd Generation Starlinks will off-point solar arrays and use a chassis material with lower diffuse reflection. These two changes reduce brightness at low and intermediate phase angles.
Figure 8: The diffuse sphere model shows little variation in brightness. Satellites in the western sky are shadowed by Earth (**1**). The controlling factors of brightness are satellite range and the illuminated fraction of the sphere. Satellite brightness falls off with satellite range (**2**).
Figure 9: In our model, two distinct brightness peaks are clearly visible. The first is visible earlier in the morning and is caused by light forward-scattered from the chassis nadir. This specular peak is visible just above the eastern horizon (**3**). The second brightness peak is caused by light back-scattering from the solar array. It is most prominent in the western sky near dawn (**5**). There is also a transition where both peaks are visible (**4**).
Figure 10: Earthshine from vegetation adds an additional component of brightness, low in the eastern sky (**6**). This effect becomes more and more pronounced as the sun rises.
Footnote 5: Correlation does not change significantly when including earthshine because the Pomenis dataset does not include many satellite observations where earthshine dominates brightness. For simplicity, our models are shown without earthshine
We see that our models are roughly a 50% improvement over the diffuse sphere model, and both can be used to accurately predict satellite brightness. We notice that our model using laboratory measured BRDFs generally under-predicts brightness. This is likely due to brightness driven by satellite components we did not model in our analysis. Astronomers could account for the under-estimation by best-fitting one free parameter (which accounts for light scatter from unmodeled surfaces) to observations. The under-prediction is not an issue for satellite operators, as they are most interested in how different satellite designs increase or decrease brightness. It is important to note that the model built from laboratory-measured BRDFs does not incorporate any prior knowledge of observed brightness, while our model with best-fit BRDFs and the diffuse sphere model require many brightness observations of satellites over a range of solar angles. Additionally, we note that the diffuse sphere model only coincidentally fits the Starlink v1.5 data well, because the solar array back-scatters light in a similar manner to a diffuse sphere. Starlink V2 off-points solar arrays, so the dominant brightness is from forward-scattering. A diffuse sphere model will not represent these satellites well. Our model with measured BRDFs is good enough to provide satellite operators with directional knowledge about how changing satellite design can reduce brightness. Additionally, both of our new models can be used by astronomers to estimate which areas of the night sky are least impacted by existing and future satellite constellations.
## 7 Discussion
Our motivation for this paper is to provide both satellite constellation operators and astronomers with tools for modeling satellite optical brightness. In particular, we develop a software package known as Lumos-Sat for satellite optical brightness predictions. In order to validate our predictions, we chose a selection of satellites for which there are time resolved ground-based brightness observations and for which we also have BRDF data for both the satellite components and the Earth. Using SpaceX's existing Starlink v1.5 satellites, we show that our models have better predictive power than the traditional diffuse sphere calculation. For the Starlink v1.5 satellites we find that our models are about 50% better than a calibrated diffuse sphere model. Our modeling technique can use either laboratory measured BRDFs or BRDFs which are best-fit to satellite observation data. We also note that the diffuse grey sphere model does not capture the specular nature of scattering from the satellite, in particular from the chassis deck. Despite this, the diffuse sphere model has been used in a variety of papers Lawler et al. (2021); Hainaut and Williams (2020). This is largely because the diffuse sphere model is very simple to implement and no better brightness calculation was previously known.
Improving correlation with observation is the next major challenge for this work. Moving forward, our brightness model could be improved by including mutual shadowing between satellite surfaces. For example, our current modeling does not include the effect of the chassis nadir blocking the solar array from an observer's view. This is a logical next step for the work. Another shortcoming of this technique is that many satellite parameters will not be known by the public. Our model requires the normal vectors, areas, and BRDFs of the primary surfaces of a satellite. While it is known that during nominal operation the Starlink v1.5 chassis deck points directly nadir and that the solar array is perpendicular to the chassis deck in the brightness regime of concern, this information may not be known during orbit raising or lowering, or other off-nominal operations. Additionally, there is a limit to how much BRDF and geometric data will be known about satellites in the future. Generating these brightness models will require some degree of collaboration between satellite operators and astronomers. As a workaround if cooperation is not possible, BRDFs for a satellite's primary surfaces could be calibrated based on on-orbit brightness observations and limited knowledge of satellite geometry made available by FCC filings. Many LEO satellites, by necessity, will have a solar array that points towards the sun to
generate power and a chassis which points towards nadir to provide internet or other communications.
One of the most difficult parts of this analysis is accurate modeling of earthshine. It is beyond the scope of this paper, but earthshine deserves further investigation. We suggest two possible avenues for improvement. First, BRDF data for the Earth's surface could be gathered from the MODIS instrument which operates on NASA's Terra and Aqua satellites. The MODIS instrument gathers Ross-Li BRDF coefficients for a variety of terrain Strahler et al. (1999). Another option would be to implement analytic BRDFs, such as those used in the SHARM radiative transfer software Lyapustin (2005).
Even when there is good collaboration with satellite operators, accounting for every small component of a satellite will be very difficult. Specularly reflecting objects on the scale of centimeters can cause 'glints' (rapid changes in satellite brightness on short time-scales). This includes MLI thermal blankets or even small pieces of aluminum. For these specular materials, scattered flux can change by orders of magnitude over a few degrees or less. This issue is exacerbated by additional moving parts on a satellite bus - parabolic dishes, laser connections, or other components may rotate quickly and cause glints which are extremely difficult to predict. These glints, if not mitigated by design, are a potentially serious contributor to bogus alerts6. Bright glints such as Iridium flares are so obvious that they would be rejected in the science analysis. Faint glints or flares are more problematic: they can imitate astrophysical flares or interfere with asteroid detection. The only mitigation is better satellite design, which can be driven by our software. Designs can be improved to reduce glints by eliminating or covering offending specular surfaces and using diffuse coatings or materials on moving components and complex geometries. While satellite glints have been detected and are a known issue, little has been done to quantify or predict the impact of glints on time-domain astronomy.
Footnote 6: False detection of time-domain events by an observatory
The ability to predict satellite brightness has major impacts for constellation operators. It allows satellite operators to use brightness as a constraint during the design process. If satellite brightness is readily predictable, operators can choose to use satellite materials and configurations or satellite conops which reduce brightness. Observations presented in this paper correspond to the 1st generation Starlink satellites. While these satellites did not include brightness as a design constraint early in development, SpaceX has used brightness analysis similar to that presented in this paper to inform brightness mitigating designs on the recently developed 2nd Generation Starlinks. Using this predictive analysis ensured that the most effective and efficient brightness mitigations were implemented on 2nd Generation Starlinks. These mitigations include the use of specular material on the chassis which have order of magnitude reductions in diffuse scatter, dark paint where specular materials are not effective, and off-pointing of the solar arrays SpaceX (2022). These improvements can be quantified using Lumos-Sat before satellites are in space.
Our calculations show that the sky near zenith can have bright satellite trails in astronomical twilight during dawn and dusk night observatory operations Ivezic et al. (2019). In our simulations, earthshine causes notable satellite brightness on the eastern horizon. This possibly impacts the search for potentially hazardous asteroids (PHAs), which scans the sky towards the sun at dawn and dusk to find asteroids interior to the earth's orbit. This works by connecting candidate detections in pairs of exposures during a night (tracklets) and extending the tracklets to additional nights Schwamb et al. (2023). PHAs can be detected higher than \(30\arcdeg\) above the horizon. Unfortunately, earthshine can produce visible satellite trails in this region of the sky up to \(40\arcdeg\) above the horizon. Bogus detections caused by satellites create noise in the tracklet extrapolation process, which causes PHA detections to fail. Our model makes it possible to quantify the impact of satellites on the PHA detection process and telescope operations more generally. Improved brightness analysis in this regime can be incorporated into PHA detection processes and satellite design and operation.
Figure 12: Comparison of satellite brightness models. For our calculations using BRDFs measured in the lab, we find a Pearson correlation of \(R=0.69\). For our model with Phong BRDFs best-fit to brightness observations we find \(R=0.68\). Both of these models are a respectable improvement over the optimal diffuse sphere model, which has a correlation of \(R=0.47\).
For the astronomy community, accurate brightness modeling is important to predict when and where satellites are most likely to interfere with observatory operations and data quality. Satellites have complex light scattering properties, so their optical brightness is highly dependent on their location in the sky. This knowledge can be used to inform better telescope scheduling algorithms to 'dodge' LEO satellites. Satellite brightness predictions could even be used to create better satellite streak removal software. Using orbital parameters, as well as brightness prediction, the width and magnitude of a satellite streak in an image can be estimated ahead of time. Additionally, astronomers can better quantify the scientific impacts of LEO satellites.
As this is one of the first attempts to accurately predict satellite brightness, not all potential applications are known. Brightness models still need to be developed for other satellites, such as Starlink's V2 and V2 Mini, Amazon's Project Kuiper, AST's SpaceMobile, and OneWeb. We hope this promising technique and the open-source software will be further developed and utilized in the future to improve satellite design, and astronomy operations and data analysis.
## Acknowledgements
We acknowledge support from NSF/AURA/LSST grant N56981CC and NSF grant AST-2205095 to UC Davis. We acknowledge useful discussions with Adam Snyder, Craig Lage, Daniel Polin and David Goldstein. We are grateful to Jared Greene, Perry Vargas, Andrew Fann, Michael Sholl, and Doug Knox for their early work on predicting Starlink brightness. We thank Tony Hobza, Veronica Rafla, and Kezhen Yin for their work in materials science which make brightness mitigations possible. We thank Harry Krantz and Eric Pearce for measurements from the Pomenis observatory.
Software: Astropy Astropy Collaboration et al. (2022), NumPy Harris et al. (2020), SciPy Virtanen et al. (2020), Matplotlib Hunter (2007), pandas McKinney (2010), SGP4, Celestrak, Space-Track.
## Data Availability
The software written for this project is available at Github: [https://github.com/Forrest-Fankhauser/satellite-optical-brightness](https://github.com/Forrest-Fankhauser/satellite-optical-brightness). Lumos-Sat is an open-source Python package that was created for this project. Documentation and installation instructions for Lumos-Sat can be found at: [https://lumos-sat.readthedocs.io/](https://lumos-sat.readthedocs.io/). Development of Lumos-Sat is ongoing and collaboration is encouraged. |
2310.01186 | Hypergraph anti-Ramsey theorems | The anti-Ramsey number $\mathrm{ar}(n,F)$ of an $r$-graph $F$ is the minimum
number of colors needed to color the complete $n$-vertex $r$-graph to ensure
the existence of a rainbow copy of $F$. We prove a general upper bound for
$\mathrm{ar}(n,F)$ when $F$ is the expansion of a hypergraph with smaller
uniformality, which refines the general bound $\mathrm{ar}(n,F) =
\mathrm{ex}(n,F_{-}) + o(n^r)$ by Erd{\H o}s--Simonovits--S{\' o}s. Here
$F_{-}$ is the family of $r$-graphs obtained from $F$ by removing one edge. We
also determine the exact value of $\mathrm{ar}(n,F)$ for large $n$ when $F$ is
the expansion of a complete graph, extending a result of Erd{\H
o}s--Simonovits--S{\' o}s from graphs to hypergraphs. | Xizhi Liu, Jialei Song | 2023-10-02T13:26:14Z | http://arxiv.org/abs/2310.01186v2 | # Hypergraph anti-Ramsey theorems
###### Abstract
The anti-Ramsey number \(\mathrm{ar}(n,F)\) of an \(r\)-graph \(F\) is the minimum number of colors needed to color the complete \(n\)-vertex \(r\)-graph to ensure the existence of a rainbow copy of \(F\). We prove a general upper bound for \(\mathrm{ar}(n,F)\) when \(F\) is the expansion of a hypergraph with smaller uniformality, which refines the general bound \(\mathrm{ar}(n,F)=\mathrm{ex}(n,F_{-})+o(n^{r})\) by Erdos-Simonovits-Sos. Here \(F_{-}\) is the family of \(r\)-graphs obtained from \(F\) by removing one edge. We also determine the exact value of \(\mathrm{ar}(n,F)\) for large \(n\) when \(F\) is the expansion of a complete graph, extending a result of Erdos-Simonovits-Sos from graphs to hypergraphs.
**Keywords:** anti-Ramsey problem, hypergraph Turan problem, expansion of hypergraphs, splitting hypergraphs, stability.
## 1 Introduction
Fix an integer \(r\geq 2\). An \(r\)-graph \(\mathcal{H}\) is a collection of \(r\)-subsets of some finite set \(V\). We identify a hypergraph \(\mathcal{H}\) with its edge set and use \(V(\mathcal{H})\) to denote its vertex set. The **anti-Ramsey number**\(\mathrm{ar}(n,F)\) of an \(r\)-graph \(F\) is the minimum number \(m\) such that any surjective map \(\chi\colon K_{n}^{r}\to[m]\) contains a rainbow copy of \(F\), i.e. a copy of \(F\) in which every edge receives a unique value under \(\chi\). Given a family \(\mathcal{F}\) of \(r\)-graphs, we say \(\mathcal{H}\) is \(\mathcal{F}\)**-free** if it does not contain any member of \(\mathcal{F}\) as a subgraph. The **Turan number**\(\mathrm{ex}(n,\mathcal{F})\) of \(\mathcal{F}\) is the maximum number of edges in an \(\mathcal{F}\)-free \(r\)-graph on \(n\) vertices. The study of \(\mathrm{ex}(n,\mathcal{F})\) and its variants has been a central topic in Extremal graph and hypergraph theory since the seminal work of Turan [34], and we refer the reader to surveys [6, 1, 30, 16] for more related results.
In [3], Erdos-Simonovits-Sos initiated the study of anti-Ramsey problems and proved various results for graphs and hypergraphs. In particular, they proved that for every
\(r\)-graph \(F\), \(r\geq 2\),
\[\operatorname{ar}(n,F)\leq\operatorname{ex}(n,F_{-})+o(n^{r}), \tag{1}\]
where \(F_{-}\) denotes the family of \(r\)-graphs obtained from \(F\) by removing exactly one edge. For complete graphs, they proved that for fixed \(\ell\geq 2\) and sufficiently large \(n\),
\[\operatorname{ar}(n,K_{\ell+1})=\operatorname{ex}(n,K_{\ell})+2. \tag{2}\]
Later, Montellano-Ballesteros and Neumann-Lara [23] refined their result by showing that (2) holds for all integers \(n\geq\ell\).
Determine the value of \(\operatorname{ar}(n,F)\) for graphs \(F\) has received a lot of attention and there has been substantial progress since the work of Erdos-Simonovits-Sos. Taking a more comprehensive perspective, Jiang [13] showed that \(\operatorname{ar}(n,F)\leq\operatorname{ex}(n,F_{-})+O(n)\) for subdivided1 graphs \(F\). For further results on various classes of graphs, we refer the reader to a survey [5] by Fujita-Magnant-Ozeki for more details. In contrast, the value of \(\operatorname{ar}(n,F)\) is only known for a few classes of hypergraphs such as matchings (see e.g. [27, 4, 15, 12]), linear paths and cycles (see e.g. [11, 32, 31]), and the augmentation of certain linear trees (see e.g. [22]). In this note, we contribute to hypergraph anti-Ramsey theory by refining (1) and extending (2) to hypergraph expansions.
Footnote 1: Here, ‘subdivided’ means every edge in \(F\) is incidence to a vertex of degree two.
Let \(r>k\geq 2\) be integers. The **expansion**\(H_{F}^{r}\) of a \(k\)-graph \(F\) is the \(r\)-graph obtained from \(F\) by adding a set of \(r-k\) new vertices to each edge, ensuring that different edges receive disjoint sets (see Figure 1). The expansion \(H_{\mathcal{F}}^{r}\) of a family \(\mathcal{F}\) is simply the collections of expansions of all elements in \(\mathcal{F}\). Expansions are important objects in Extremal set theory and Hypergraph Turan problems. Its was introduced by Mubayi in [24] as a way to extend Turan's theorem to hypergraphs. There has been lots of progress in the study of expansion over the last few decades, and we refer the reader to the survey [25] by Mubyai-Verstraete for more related results.
### Expansion of hypergraphs
In this subsection, we present a refinement of (1). The following definitions will play a crucial role in our results.
Given a \(k\)-graph \(F\), \(k\geq 2\), and a vertex \(u\in V(F)\), the \(u\)**-splitting** of \(F\) is a \(k\)-graph, denoted by \(F\lor u\), obtained from \(F\) by
* removing the vertex \(u\) and all edges containing \(u\) from \(F\), and then
Figure 1: Expansions of \(P_{4}\) and \(C_{6}\) for \(r=3\).
* adding a \(d_{F}(u)\)-set \(\{v_{e}\colon e\in L_{F}(u)\}\) of new vertices to the vertex set and adding \(\{e\cup\{v_{e}\}\colon e\in L_{F}(u)\}\) to the edge set.
In other words,
\[F\lor u:=\{e\cup\{v_{e}\}\colon e\in L_{F}(u)\}\cup(F-u)\,,\]
where \(\{v_{e}\colon e\in L_{F}(u)\}\) is a \(d_{F}(u)\)-set of new vertices outside \(V(F)\).
Given an independent set \(I:=\{u_{1},\ldots,u_{\ell}\}\) in \(F\), the \(I\)**-splitting** of \(F\), denoted by \(F\lor I\), is defined inductively by letting \(F_{0}:=F\) and \(F_{i}:=F_{i-1}\lor u_{i}\) for \(i\in[\ell]\) (see Figure 3). It is easy to see that since \(I\) is independent, the ordering of vertices in \(I\) does not affect the resulting \(k\)-graph. The **splitting family**\(\operatorname{Split}(F)\) of \(F\) is defined as
\[\operatorname{Split}(F):=\left\{\hat{F}\colon\exists\text{ an independent set $I$ in $F$ such that $\hat{F}=F\lor I$}\right\}.\]
In the definition above, we allow \(I\) to be empty. Hence, \(F\in\operatorname{Split}(F)\). Note that \(|\hat{F}|=|F|\) for all \(\hat{F}\in\operatorname{Split}(F)\).
Given a coloring \(\chi\colon K_{n}^{r}\to\mathbb{N}\), we say a subgraph \(\mathcal{H}\subseteq K_{n}^{r}\) is **rainbow** if no two edges in \(\mathcal{H}\) have the same color. The coloring \(\chi\) is a **rainbow-\(\mathcal{F}\)-free** for some family \(\mathcal{F}\) if any rainbow subgraph of \(K_{n}^{r}\) is \(\mathcal{F}\)-free.
The main result of this subsection is as follows.
**Theorem 1.1**.: _Let \(n\geq r>k\geq 2\) be integers and \(F\) be a \(k\)-graph. Suppose that \(\chi\colon K_{n}^{r}\to\mathbb{N}\) is a rainbow-\(H_{F}^{r}\)-free coloring. Then every rainbow subgraph \(\mathcal{H}\subseteq K_{n}^{r}\) can be made \(H_{F_{-}}^{r}\)-free by removing at most \((|F|-1)\cdot\operatorname{ex}\left(n,\operatorname{Split}(F)\right)\) edges. In particular, for every integer \(n\geq r\),_
\[\operatorname{ar}(n,H_{F}^{r})\leq\operatorname{ex}(n,H_{F_{-}}^{r})+(|F|-1) \cdot\operatorname{ex}\left(n,\operatorname{Split}(F)\right).\]
Since \(\operatorname{Split}(F)\) is a family of \(k\)-graphs (and \(k\leq r-1\)), we have \(\operatorname{ar}(n,H_{F}^{r})\leq\operatorname{ex}(n,H_{F_{-}}^{r})+O(n^{k})\), which improves the bound given by (1).
Observe that if the graph \(F\) is obtained from a bipartite graph by adding a forest to one part, then the family \(\operatorname{Split}(F)\) contains a forest (obtained by splitting the other part in \(F\)). Therefore, we have the following corollary.
**Corollary 1.2**.: _Let \(n\geq r\geq 3\) be integers and \(F\) be a graph obtained from a bipartite graph by adding a forest to one part. Suppose that \(\chi\colon K_{n}^{r}\to\mathbb{N}\) is a rainbow-\(H_{F}^{r}\)-free coloring. There exists a constant \(C_{F}\) depending only on \(F\) such that every rainbow subgraph \(\mathcal{H}\subseteq K_{n}^{r}\) can be made \(H_{F_{-}}^{r}\)-free by removing at most \(C_{F}\cdot n\) edges. In particular, for all integers \(n\geq r\),_
\[\operatorname{ar}(n,H_{F}^{r})\leq\operatorname{ex}(n,H_{F_{-}}^{r})+C_{F}\cdot n.\]
Corollary 1.2 together with stability theorems in [9, 8, 7, 17, 18] implies [11, Lemma 2.7], [31, Theorem 5], and [22, Lemma 4.2].
Notice that if a \(k\)-graph \(F\) is \(p\)-partite, then \(\operatorname{Split}(F)\) contains a \((p-1)\)-partite \(k\)-graph (obtained by splitting an arbitrary part in \(F\)). Combined with the (hypergraph) Kovari-Sos-Turan theorem [19, 2], we obtain the following corollary.
**Corollary 1.3**.: _Let \(n\geq r>k\geq 2\) be integers and \(F\) be a \((k+1)\)-partite \(k\)-graph. Suppose that \(\chi\colon K_{n}^{r}\to\mathbb{N}\) is a rainbow-\(H_{F}^{r}\)-free coloring. There exists constants \(C_{F},\alpha_{F}>0\) depending only on \(F\) such that every rainbow subgraph \(\mathcal{H}\subseteq K_{n}^{r}\) can be made \(H_{F_{-}}^{r}\)-free by removing at most \(C_{F}\cdot n^{k-\alpha_{F}}\) edges. In particular, for all integers \(n\geq r\),_
\[\operatorname{ar}(n,H_{F}^{r})\leq\operatorname{ex}(n,H_{F_{-}}^{r})+C_{F} \cdot n^{k-\alpha_{F}}.\]
Theorem 1.1 will be proved in Section 2.
### Expansion of graphs
In this subsection, we present an extension of (2) to the expansion of graphs. For convenience, we let \(H_{\ell}^{r}:=H_{K_{\ell}}^{r}\).
Let \(t\geq 1\) be an integer. We use \(F[t]\) to denote the \(t\)**-blowup** of \(F\), i.e. \(F[t]\) is the \(r\)-graph obtained from \(F\) by replacing vertices with disjoint \(t\)-sets, and replacing each edge in \(F\) with the corresponding complete \(r\)-partite \(r\)-graph. Given a family \(\mathcal{F}\) of \(r\)-graphs, we let
\[\mathcal{F}[t]:=\left\{F[t]\colon F\in\mathcal{F}\right\}.\]
Let \(\ell\geq 2\) and \(t\geq 4\) be integers.
* Let \(K_{\ell}^{\alpha}[t]\) denote the graph obtained from the blowup \(K_{r}[t]\) by adding a path of length two into one part (see Figure 2 (a)).
* Let \(K_{\ell}^{\beta}[t]\) denote the graph obtained from the blowup \(K_{r}[t]\) by adding two disjoint edges into one part (see Figure 2 (b)).
* Let \(K_{\ell}^{\gamma}[t]\) denote the graph obtained from the blowup \(K_{r}[t]\) by adding two edges into two different parts (see Figure 2 (c)).
The motivation for these definitions is that if \(F\) is obtained from an edge-critical graph by adding one edge, then \(F\) can be found within one of the previously defined graphs for sufficiently large \(t\).
Given integers \(n\geq r\geq 2\). Let \(V_{1}\sqcup\cdots\sqcup V_{\ell}=[n]\) be a partition such that \(|V_{n}|+1\geq|V_{1}|\geq|V_{2}|\geq\cdots\geq|V_{n}|\). Let \(T_{r}(n,\ell)\) denote the \(r\)-graph whose edge set consists of all \(r\)-subsets of \([n]\) that contain at most one vertex from each \(V_{i}\). Let \(t_{r}(n,\ell)\) denote the number of edges in \(T_{r}(n,\ell)\) and notice that \(t_{r}(n,\ell)\sim\binom{\ell}{r}\left(\frac{n}{r}\right)^{r}\). Extending the classical Turan Theorem to hypergraphs, Mubayi proved in [24] that \(\operatorname{ex}(n,H_{\ell+1}^{r})=(1+o(1))t_{r}(n,\ell)\)
for all \(\ell\geq r\geq 3\). Building on a stability theorem of Mubayi, Pikhurko [28] later proved that for fixed \(\ell\geq r\geq 3\), \(\operatorname{ex}(n,H^{r}_{\ell+1})=t_{r}(n,\ell)\) holds for all sufficiently large \(n\).
The main results in this subsection are as follows.
**Theorem 1.4**.: _Let \(\ell\geq 3\) and \(t\geq 4\) be fixed integers. For all sufficiently large \(n\),_
\[\operatorname{ar}(n,H^{3}_{F})=\left\{\begin{aligned} & t_{3}(n,\ell)+\ell+1,&& \text{if}& F\in\left\{K^{\alpha}_{\ell}[t],\ K^{\beta}_{\ell}[t] \right\},\\ & t_{3}(n,\ell)+2,&&\text{if}& F=K^{\gamma}_{\ell}[t].\end{aligned}\right.\]
The situation for \(r\geq 4\) is simpler.
**Theorem 1.5**.: _Let \(\ell\geq r\geq 4\) and \(t\geq 4\) be fixed integers. For all sufficiently large \(n\),_
\[\operatorname{ar}(n,H^{r}_{F})=t_{r}(n,\ell)+2\quad\text{for all}\quad F\in \left\{K^{\alpha}_{\ell}[t],\ K^{\beta}_{\ell}[t],\ K^{\gamma}_{\ell}[t] \right\}.\]
We would like to remind the reader that the case \(r=2\) can be handled by a result of Jiang-Pikhurko in [14].
Observe that for every \(r\geq 3\), the \(r\)-graph \(H^{r}_{\ell+1}\) is contained in \(K^{\gamma}_{\ell}[4]\). Hence, we obtain the following corollary, which is an extension of (2) to hypergraphs.
**Corollary 1.6**.: _Let \(\ell\geq r\geq 3\) be fixed integers. For all sufficiently large \(n\),_
\[\operatorname{ar}(n,H^{r}_{\ell+1})=t_{r}(n,\ell)+2.\]
Proofs for Theorems 1.4 and 1.5 are presented in Section 3.
## 2 Proof of Theorem 1.1
Proof of Theorem 1.1.: Let \(n\geq r>k\geq 2\) be integers and \(F\) be a \(k\)-graph. Let \(\chi\colon K^{r}_{n}\to\mathbb{N}\) is a rainbow-\(H^{r}_{F}\)-free coloring and \(\mathcal{H}\subseteq K^{r}_{n}\) be a rainbow subgraph. Let \(\mathcal{C}\) be a maximal collection of pairwise edge-disjoint copies of members in \(H^{r}_{F_{-}}\). In other words, members in \(\mathcal{C}\) are pairwise edge-disjoint, and if \(H\subseteq\mathcal{H}\) is a copy of some member in \(H^{r}_{F_{-}}\), then \(H\) contains at least one edge from some member in \(\mathcal{C}\). For convenience, let us assume that
Figure 3: From \(F\) to \(F\vee\{v_{1},v_{2}\}\) and then to \(F^{4}\), where pairs with the same color form a hyperedge.
\(\mathcal{C}=\{Q_{1},\ldots,Q_{m}\}\), where \(m\geq 0\) is an integer and \(Q_{i}\subseteq\mathcal{H}\) is a copy of some member in \(H^{r}_{F_{-}}\) for \(i\in[m]\).
Let \(i\in[m]\) and \(S\subseteq[n]\setminus V(Q_{i})\) be an \((r-k)\)-set. Since \(Q_{i}\) is a copy of some member in \(H^{r}_{F_{-}}\), there exists a \(k\)-set \(e_{i}\subseteq V(Q_{i})\) such that \(Q_{i}\cup\{\{e_{i}\cup S\}\}\) is a copy of \(H^{r}_{F}\). We let \(\mathcal{A}\) be an auxiliary multi-\(k\)-graph whose edge set is the collection of \(e_{i}\) for all \(i\in[m]\). For \(i\in[m]\) let \(\chi(Q_{i}):=\{\chi(e)\colon e\in Q_{i}\}\). Since \(\mathcal{H}\) is rainbow and \(Q_{i}\subseteq\mathcal{H}\) for \(i\in[m]\), we know that
\[\chi(Q_{i})\cap\chi(Q_{j})=\emptyset\quad\text{for all}\quad 1\leq i<j\leq m. \tag{3}\]
**Claim 2.1**.: _For every \(i\in[m]\) and for every \((r-k)\)-set \(S\subseteq[n]\setminus V(Q_{i})\), we have \(\chi(e_{i}\cup S)\in\chi(Q_{i})\)._
Proof.: Suppose to the contrary that \(\chi(e_{i}\cup S)\not\in\chi(Q_{i})\), then the \(r\)-graph \(Q_{i}\cup\{\{e_{i}\cup S\}\}\) would be a rainbow copy of \(H^{r}_{F}\), a contradiction.
**Claim 2.2**.: _The set \(\mathcal{A}\) does not contain multi-sets. In other words, \(e_{i}\neq e_{j}\) for all \(1\leq i<j\leq m\)._
Proof.: Suppose to the contrary that \(e_{i}=e_{j}=:e\) for some \(i\neq j\). Let \(S\subseteq[n]\setminus(V(Q_{i})\cup V(Q_{j}))\) be an \((r-k)\)-set. It follows from Claim 2.1 and (3) that \(\chi(e\cup S)\in\chi(Q_{i})\cap\chi(Q_{j})=\emptyset\), a contradiction.
**Claim 2.3**.: _The \(k\)-graph \(\mathcal{A}\) is \(\operatorname{Split}(F_{+})\)-free. In particular, \(m\leq\operatorname{ex}(n,\operatorname{Split}(F_{+}))\)._
Proof.: Suppose to the contrary that \(\mathcal{A}\) contains some member \(\hat{F}\in\operatorname{Split}(F)\). By the definition of \(\operatorname{Split}(F)\), there exists an independent set \(I=\{v_{1},\ldots,v_{p}\}\) in \(F\) such that \(\hat{F}=F\lor I\), where \(p\geq 0\) is an integer. Let \(d_{i}:=d_{F}(v_{i})\) for \(i\in[p]\), and let \(d:=\sum_{i\in[p]}d_{i}\). Assume that \(\hat{F}=\{f_{1},\ldots,f_{\ell}\}\), where \(\ell:=|\hat{F}|\). Let \(\psi\colon\hat{F}\to\mathcal{A}\) be an embedding. By relabelling members in \(\mathcal{C}\) if necessary, we may assume that \(\psi(f_{i})=e_{i}\) for \(i\in[\ell]\). Let \(U:=\bigcup_{i\in[\ell]}V(Q_{i})\).
* For \(i\in[p]\), choose a \(d_{i}\)-star2\(S_{d_{i}}\) from \(\binom{[n]\setminus U}{r-k}\), and Footnote 2: A \(d\)-star is a collection of \(d\) edges such that there is a unique vertex (called center) that is contained in all edges and two edges intersect only on this vertex.
* for \(j\in[d+1,\ell]\), choose an \((r-k)\)-subset \(e^{\prime}_{j}\) of \([n]\setminus U\)
such that elements in \(\big{\{}S_{d_{1}},\ldots,S_{d_{p}},e^{\prime}_{d+1},\ldots,e^{\prime}_{\ell} \big{\}}\) are pairwise vertex-disjoint. We will use \(\{e_{1},\ldots,e_{\ell}\}\) and \(\big{\{}S_{d_{1}},\ldots,S_{d_{p}},e^{\prime}_{d+1},\ldots,e^{\prime}_{\ell} \big{\}}\) to build a rainbow copy of \(F^{r}\).
By relabelling members in \(\hat{F}\) if necessary, we may assume that
\[\{f_{1},\ldots,f_{d_{1}}\},\ldots,\{f_{d_{1}+\cdots+d_{p-1}+1},\ldots,f_{d}\} \subseteq\hat{F}\]
are edge sets obtained by splitting \(v_{1},\ldots,v_{p}\), respectively. In other words,
\[\{f_{d_{1}+\cdots+d_{i-1}+1},\ldots,f_{d_{1}+\cdots+d_{i}}\}=(F\lor v_{i}) \setminus F\quad\text{for}\quad i\in[p].\]
We further assume that \(S_{d_{i}}=\{e^{\prime}_{d_{1}+\cdots+d_{i-1}+1},\ldots,e^{\prime}_{d_{1}+ \cdots+d_{i}}\}\) for \(i\in[p]\). Now let \(E_{j}:=e_{j}\cup e^{\prime}_{j}\) for \(j\in[\ell]\). It is easy to observe that \(\{E_{1},\ldots,E_{\ell}\}\) is a copy of \(F^{r}\) with the center of \(S_{d_{i}}\) playing the role of \(v_{i}\) for \(i\in[p]\) (see Figure 3). In addition, it follows from Claim 2.1 and (3) that \(\{E_{1},\ldots,E_{\ell}\}\) is rainbow, which contradicts the rainbow-\(H^{r}_{F}\)-freeness of the coloring \(\chi\).
Let \(\mathcal{H}^{\prime}:=\mathcal{H}\setminus\left(\bigcup_{i\in[m]}Q_{i}\right)\). It follows from the maximality of \(\mathcal{C}\) that \(\mathcal{H}^{\prime}\) is \(H^{r}_{F_{-}}\)-free. In addition, it follows from Claim 2.3 that \(|\mathcal{H}^{\prime}|\geq|\mathcal{H}|-|F|\cdot m\geq|\mathcal{H}|-|F|\cdot \mathrm{ex}(n,\mathrm{Split}(F_{+}))\), completing the proof of Theorem 1.1.
## 3 Proofs of Theorems 1.4 and 1.5
We prove Theorems 1.4 and 1.5 in this section. Before that, we prove some useful lemmas.
**Lemma 3.1**.: _Let \(r\geq 2\), \(F\) be an \(r\)-graph, and \(\chi\colon K^{r}_{n}\to\mathbb{N}\) be a rainbow-\(F\)-free coloring. Every rainbow subgraph \(\mathcal{H}\subseteq K^{r}_{n}\) can be made \(F_{-}\)-free by removing \(o(n^{r})\) edges._
Proof.: The lemma follows easily from the Hypergraph Removal Lemma (see e.g. [26, 29, 33, 10]) and the observation of Erdos-Simonovits-Sos [3] that every rainbow subgraph \(\mathcal{H}\subseteq K^{r}_{n}\) is \(F_{-}[2]\)-free.
We also need the following strengthen of [20, Lemma 4.5].
Let \(\mathcal{G}\) be an \(r\)-graph with vertex set \([m]\) and let \(V_{1}\sqcup\cdots\sqcup V_{m}=V\) be a partition of some vertex set \(V\). We use \(\mathcal{G}[V_{1},\ldots,V_{m}]\) to denote the \(r\)-graph obtained by replacing vertex \(i\) in \(\mathcal{G}\) with the set \(V_{i}\) for \(i\in[m]\), and by replacing each edge in \(\mathcal{G}\) with a corresponding complete \(r\)-partite \(r\)-graph. We call \(\mathcal{G}[V_{1},\ldots,V_{m}]\) a **blowup** of \(\mathcal{G}\).
**Lemma 3.2**.: _Fix a real \(\eta\in(0,1)\) and integers \(m,n\geq 1\). Let \(\mathcal{G}\) be an \(r\)-graph with vertex set \([m]\) and let \(\mathcal{H}\) be a further \(r\)-graph with \(v(\mathcal{H})=n\). Consider a vertex partition \(V(\mathcal{H})=\bigcup_{i\in[m]}V_{i}\) and the associated blow-up \(\widehat{\mathcal{G}}:=\mathcal{G}[V_{1},\ldots,V_{m}]\) of \(\mathcal{G}\). Suppose that two sets \(T\subseteq[m]\) and \(S\subseteq V(\mathcal{H})\) have the properties_
1. \(|V^{\prime}_{j}|\geq 2q(|S|+1)|T|\eta^{1/r}n\) _for all_ \(j\in T\)_,_
2. \(|\mathcal{H}[V^{\prime}_{j_{1}},\ldots,V^{\prime}_{j_{r}}]|\geq|\widehat{ \mathcal{G}}[V^{\prime}_{j_{1}},\ldots,V^{\prime}_{j_{r}}]|-\eta n^{r}\) _for all_ \(\{j_{1},\ldots,j_{r}\}\in\binom{T}{r}\)_, and_
3. \(|L_{\mathcal{H}}(v)[V^{\prime}_{j_{1}},\ldots,V^{\prime}_{j_{r-1}}]|\geq|L_{ \widehat{\mathcal{G}}}(v)[V^{\prime}_{j_{1}},\ldots,V^{\prime}_{j_{r-1}}]|- \eta n^{r-1}\) _for all_ \(v\in S\) _and for all_ \(\{j_{1},\ldots,j_{r-1}\}\in\binom{T}{r-1}\)_,_
_where \(V^{\prime}_{i}:=V_{i}\setminus S\) for \(i\in[m]\). Then there exists a selection of \(q\)-set \(U_{i}\subseteq V_{j}\) for all \(j\in[T]\) such that \(U:=\bigcup_{j\in T}U_{j}\) satisfies \(\widehat{\mathcal{G}}[U]\subseteq\mathcal{H}[U]\) and \(L_{\widehat{\mathcal{G}}}(v)[U]\subseteq L_{\mathcal{H}}(v)[U]\) for all \(v\in S\). In particular, if \(\mathcal{H}\subseteq\widehat{\mathcal{G}}\), then \(\widehat{\mathcal{G}}[U]=\mathcal{H}[U]\) and \(L_{\widehat{\mathcal{G}}}(v)[U]=L_{\mathcal{H}}(v)[U]\) for all \(v\in S\)._
Proof.: By shrinking \(V^{\prime}_{j}\) if necessary, we may assume that \(|V^{\prime}_{j}|=n_{1}:=q(|S|+1)|T|\eta^{1/r}n\). Choose for each \(j\in T\) a \(q\)-set \(U_{j}\subseteq V^{\prime}_{j}\) independently and uniformly at random. Let \(U:=\bigcup_{j\in T}U_{j}\). For every \(\{j_{1},\ldots,j_{r}\}\in\mathcal{G}\), let \(\mathbb{P}_{j_{1},\ldots,j_{r}}\) denote the probability that \(\mathcal{H}[U_{j_{1}},\ldots,U_{j_{r}}]\neq\widehat{\mathcal{G}}[U_{j_{1}}, \ldots,U_{j_{r}}]\). Then it follows from Assumption (b) that
\[\mathbb{P}_{j_{1},\ldots,j_{r}}=1-\frac{N\left(K_{q,\ldots,q}, \mathcal{H}[U_{j_{1}},\ldots,U_{j_{r}}]\right)}{N\left(K_{q,\ldots,q},\widehat{ \mathcal{G}}[U_{j_{1}},\ldots,U_{j_{r}}]\right)} \leq\eta n^{r}\left\{\binom{n_{1}-1}{q-1}\right\}^{r}\!\left\{ \binom{n_{1}}{q}\right\}^{r}\] \[=\eta n^{r}\left(\frac{q}{2q(|S|+1)|T|\eta^{1/r}n}\right)^{r}\leq \frac{1}{2|T|^{r}}.\]
For every \(v\in S\) and \(\{j_{1},\ldots,j_{r-1}\}\in L_{\mathcal{G}}(v)\), let \(\mathbb{P}_{v;j_{1},\ldots,j_{r-1}}\) denote the probablity that \(L_{\mathcal{H}}(v)[U_{j_{1}},\ldots,U_{j_{r-1}}]\neq L_{\widehat{\mathcal{G}}} (v)[U_{j_{1}},\ldots,U_{j_{r-1}}]\). Then it follows from Assumption (c) that
\[\mathbb{P}_{v;j_{1},\ldots,j_{r-1}}=1-\frac{N\left(K_{q,\ldots,q},L_{\mathcal{H}}[U_{j_{1}},\ldots,U_{j_{r-1}}]\right)}{N\left(K_{q,\ldots,q},L_ {\widehat{\mathcal{G}}}[U_{j_{1}},\ldots,U_{j_{r-1}}]\right)} \leq\eta n^{r-1}\left\{\binom{n_{1}-1}{q-1}\right\}^{r-1}\!\! \left\{\binom{n_{1}}{q}\right\}^{r-1}\] \[=\eta n^{r-1}\left(\frac{q}{2q(|S|+1)|T|\eta^{1/r}n}\right)^{r-1}\] \[\leq\frac{1}{2(|S|+1)|T|^{r-1}}.\]
Therefore, the probability that \(U\) fails to have the desired properties is at most
\[\binom{|T|}{r}\times\frac{1}{2|T|^{r}}+|S|\binom{|T|}{r-1}\times\frac{1}{2(|S| +1)|T|^{r-1}}\leq\frac{1}{4}+\frac{1}{2}=\frac{3}{4}.\]
So the probability that \(U\) has these properties is positive.
Another ingredient that we need is the following stability result for \(H^{r}_{\ell+1}[t]\)-free \(r\)-graphs.
**Lemma 3.3**.: _Let \(\ell\geq r\geq 3\) and \(t\geq 1\) be fixed integers. For every \(\epsilon>0\) there exist \(\delta>0\) and \(n_{0}\) such that the following holds for all \(n\geq n_{0}\). Suppose that \(\mathcal{H}\) is a \(H^{r}_{\ell+1}[t]\)-free \(r\)-graph with \(n\geq n_{0}\) vertices and at least \(t_{r}(n,\ell)-\delta n^{r}\) edges. Then \(\mathcal{H}\) can be made \(\ell\)-partite by removing at most \(\epsilon n^{r}\) edges._
Proof.: This lemma follows easily from the Hypergraph Removal Lemma and the stability theorem of Mubayi on \(H^{r}_{\ell+1}\)-free \(r\)-graphs [24, Theorem 3].
Now we are ready to prove Theorem 1.4.
Proof of Theorem 1.4.: Fix integers \(\ell\geq 3\) and \(t\geq 4\). The lower bound for the case \(F\in\left\{K^{\beta}_{\ell}[t],\ K^{\gamma}_{\ell}[t]\right\}\) follows the well-known fact
\[\mathrm{ar}(n,F)\geq\mathrm{ex}(n,F_{-})+2\quad\text{holds for all $n\geq 1$ and for all $r$-graphs $F$.}\]
The lower bound for the case \(F=K^{\alpha}_{\ell}\) follows from the following construction.
Fix \(n\geq\ell\geq 3\). Let \(m:=t_{3}(n,\ell)+\ell\). Let \(V_{1}\sqcup\cdots\sqcup V_{\ell}=[n]\) be a partition such that \(|V_{\ell}|+1\geq|V_{1}|\geq|V_{2}|\geq\cdots\geq|V_{\ell}|\).
* For \(i\in[\ell]\) color all triples that contain at least two vertices from \(V_{i}\) using color \(i\), and
* fix an arbitrary rainbow coloring for the rest \(t_{3}(n,\ell)\) triples using colors that were not used in the previous step.
We leave the verification of the rainbow-\(H^{3}_{F}\)-freeness of the coloring defined above to interested readers.
Now we focus on the upper bound. Fix \(F\in\left\{K^{\alpha}_{\ell}[t],\ K^{\beta}_{\ell}[t],\ K^{\gamma}_{\ell}[t]\right\}\). Let \(\delta>0\) be sufficiently small and \(n\) be a sufficiently large. Let \(\chi\colon K^{3}_{n}\to\mathbb{N}\) be a rainbow \(H^{3}_{F}\)-free
coloring, and let \(\mathcal{H}\subseteq K_{n}^{3}\) be a maximum rainbow subgraph. Suppose to the contrary that
\[|\mathcal{H}|\geq\left\{\begin{aligned} & t_{3}(n,\ell)+\ell+1,& \text{if}\quad F\in\left\{K_{\ell}^{\alpha}[t],\ K_{\ell}^{\beta}[t] \right\},\\ & t_{3}(n,\ell)+2,&\text{if}\quad F=K_{\ell}^{ \gamma}[t].\end{aligned}\right.\]
Let \(F_{1}:=K_{\ell}^{+}[t]\), where \(K_{\ell}^{+}[t]\) denote the graph obtained from \(K_{\ell}[t]\) by adding one edge into some part. Note that \(F_{1}\in F_{-}\). Let \(\mathcal{H}^{\prime}\subseteq\mathcal{H}\) be a maximum \(H_{F_{1}}^{3}\)-free subgraph. It follows from Lemma 3.1 that
\[|\mathcal{H}^{\prime}|\geq|\mathcal{H}|-\frac{\delta n^{3}}{100}\geq\binom{ \ell}{3}\left(\frac{n}{\ell}\right)^{3}-\frac{\delta n^{3}}{50}. \tag{4}\]
Let \(V_{1}\sqcup\cdots\sqcup V_{\ell}=[n]\) be partition such that the number of edges in the induced \(\ell\)-partite subgraph3\(\mathcal{H}^{\prime\prime}:=\mathcal{H}[V_{1},\ldots,V_{\ell}]\) is maximized. Since \(H_{F_{1}}^{3}\) is contained in \(H_{\ell+1}^{3}[t]\), it follows from (4) and Lemma 3.3 that
Footnote 3: Here, \(\mathcal{H}[V_{1},\ldots,V_{\ell}]\) consists of all edges in \(\mathcal{H}\) that contain at most one vertex from each \(V_{i}\).
\[|\mathcal{H}^{\prime\prime}|\geq|\mathcal{H}^{\prime}|-\frac{\delta n^{3}}{25} \geq\binom{\ell}{3}\left(\frac{n}{\ell}\right)^{3}-\frac{\delta n^{3}}{18}. \tag{5}\]
Combined with the inequality (see [21, Lemma 2.2])
\[|\mathcal{H}^{\prime\prime}|\leq\sum_{1\leq i<j<k\leq\ell}|V_{i}||V_{j}||V_{k }|\leq\binom{\ell}{3}\left(\frac{n}{\ell}\right)^{3}-\frac{\ell-2}{6\ell}\sum _{i\in[\ell]}\left(|V_{i}|-\frac{n}{\ell}\right)^{2}n,\]
we obtain
\[\frac{n}{\ell}-\sqrt{\delta}n\leq|V_{i}|\leq\frac{n}{\ell}+\sqrt{\delta}n \quad\text{for all}\quad i\in[\ell]. \tag{6}\]
Let \(\mathcal{G}\) denote the complete \(r\)-graph with \(\ell\) vertices, and let \(\widehat{\mathcal{G}}:=\mathcal{G}[V_{1},\ldots,V_{\ell}]\) denote the complete \(\ell\)-partite \(r\)-graph4 with parts \(V_{1},\ldots,V_{\ell}\). Let
Footnote 4: In other words, \(\widehat{\mathcal{G}}\) consists of all \(r\)-subset of \([n]\) that contain at most one vertex from each \(V_{i}\).
\[\mathcal{B}:=\mathcal{H}\setminus\widehat{\mathcal{G}},\quad\text{and}\quad \mathcal{M}:=\widehat{\mathcal{G}}\setminus\mathcal{H}=\widehat{\mathcal{G}} \setminus\mathcal{H}^{\prime\prime}.\]
It follows from (5) that
\[|\mathcal{M}|=|\widehat{\mathcal{G}}|-|\mathcal{H}^{\prime\prime}|\leq\binom{ \ell}{3}\left(\frac{n}{\ell}\right)^{3}-\left(\binom{\ell}{3}\left(\frac{n}{ \ell}\right)^{3}-\frac{\delta n^{3}}{18}\right)\leq\delta n^{3}. \tag{7}\]
For \(i\in[\ell]\) let
\[D_{i}:=\left\{v\in V_{i}\colon|L_{\widehat{\mathcal{G}}}(v)|-|L_{\mathcal{H}^ {\prime\prime}}(v)|\leq 3\delta^{1/3}n^{2}\right\},\quad\text{and}\quad\overline{D}_{i}:= V_{i}\setminus D_{i}.\]
For convenience, let \(D:=\bigcup_{i\in[\ell]}D_{i}\) and \(\overline{D}:=\bigcup_{i\in[\ell]}\overline{D}_{i}\).
**Claim 3.4**.: _We have \(|\mathcal{M}|\geq\delta^{1/3}n^{2}|\overline{D}|\) and \(|\overline{D}|\leq\delta^{2/3}n\)._
Proof.: It follows from the definition that every vertex in \(\overline{D}\) contributes at least \(\delta n^{2}\) elements in \(\mathcal{M}\). Therefore,
\[|\mathcal{M}|\geq\frac{1}{3}\times|\overline{D}|\times 3\delta^{1/3}n^{2}= \delta^{1/3}n^{2}|\overline{D}|.\]
Combined with (7), we obtain \(|\overline{D}|\leq\delta n^{3}/(\delta^{1/3}n^{2})=\delta^{2/3}n\), which completes the proof of Claim 3.4.
The most crucial part in the proof is the following claim.
**Claim 3.5**.: _If \(F\in\left\{K_{\ell}^{\alpha}[t],\ K_{\ell}^{\beta}[t]\right\}\). Then \(\mathcal{B}\) does not contain two edges \(e,e^{\prime}\) such that_
\[\min\left\{|e\cap D_{i}|,\ |e^{\prime}\cap D_{i}|\right\}\geq 2\quad\text{holds for some }i\in[\ell]. \tag{8}\]
_If \(F=K_{\ell}^{\gamma}[t]\), then the set \(\mathcal{B}\) does not contain two edges \(e,e^{\prime}\) such that_
\[\max\left\{|e\cap D_{i}|\colon i\in[\ell]\right\}\geq 2\quad\text{and} \quad\max\left\{|e^{\prime}\cap D_{i}|\colon i\in[\ell]\right\}\geq 2. \tag{9}\]
Proof.: First consider the case \(F=K_{\ell}^{\alpha}[t]\). Suppose to the contrary that there exist two distinct edges \(e,e^{\prime}\in\mathcal{B}\) such that (8) holds. By symmetry, we may assume that \(\min\left\{|e\cap D_{1}|,\ |e^{\prime}\cap D_{1}|\right\}\geq 2\). Choose a \(3\)-set \(f\subseteq D_{1}\) such that \(|f\cap e|=|f\cap e^{\prime}|=1\). By symmetry, we may assume that \(\chi(f)\neq\chi(e)\). Let \(\{v_{1}\}:=f\cap e\). Fix \(v_{2}\in(e\cap D_{1})\setminus\{v_{1}\}\) and \(v_{3}\in f\setminus\{v_{1}\}\). Let \(\mathcal{H}^{\prime}\) be the \(3\)-graph obtained from \(\mathcal{H}\) by removing an edge (if there exists such an edge) with color \(\chi(f)\). Then apply Lemma 3.2 to \(\mathcal{H}^{\prime}\) with \(V_{i}^{\prime}=D_{i}\setminus(e\cup f)\) for \(i\in[\ell]\), \(T=[\ell]\), \(S=\{v_{1},v_{2},v_{3}\}\), and \(q=2\binom{\ell}{2}t^{2}\), we obtain a \(q\)-set \(U_{i}\subseteq D_{i}\setminus(e\cup f)\) for each \(i\in[\ell]\) such that the induced \(\ell\)-partite \(3\)-graph of \(\mathcal{H}^{\prime}\) on \(U_{1},\ldots,U_{\ell}\) is complete, and for every \(v\in\{v_{1},v_{2},v_{3}\}\) the induced \((\ell-1)\)-partite graph of \(L_{\mathcal{H}^{\prime}}(v)\) on \(U_{2},\ldots,U_{\ell}\) is also complete. Note that Lemma 3.2 (a) is guaranteed by (6), Lemma 3.2 (b) is guaranteed by the definition of \(D_{i}\), and Lemma 3.2 (c) is guaranteed by (7). Let \(U:=\{v_{1},v_{2},v_{3}\}\cup\bigcup_{i\in[\ell]}U_{i}\) It is easy to see that the expansion of \(K_{\ell}^{\alpha}[t]\) is contained in \(\{e,f\}\cup\mathcal{H}^{\prime}[U]\). This is a contradiction, since \(\{e,f\}\cup\mathcal{H}^{\prime}[U]\) is rainbow.
The case \(F=K_{\ell}^{\beta}[t]\) can be proved similarly by choosing a \(3\)-set \(f\subseteq D_{1}\) such that \(|f\cap e|=|f\cap e^{\prime}|=0\). So we omit the details here.
Now we consider the case \(F=K_{\ell}^{\gamma}[t]\). Suppose to the contrary that there exist two distinct edges \(e,e^{\prime}\in\mathcal{B}\) such that (9) holds. Let us assume that \(|e\cap D_{i_{1}}|\geq 2\) and \(|e^{\prime}\cap D_{i_{2}}|\geq 2\), where \(i_{1},i_{2}\in[\ell]\). Fix \(\{u_{1},v_{1}\}\subset e\cap D_{i_{1}}\) and \(\{u^{\prime}_{1},v^{\prime}_{1}\}\subset e^{\prime}\cap D_{i_{2}}\). Then apply Lemma 3.2 to \(\mathcal{H}\) with \(V_{i}^{\prime}=D_{i}\setminus(e\cup f)\) for \(i\in[\ell]\), \(T=[\ell]\), \(S=\{u_{1},v_{1},u^{\prime}_{1},v^{\prime}_{1}\}\), and \(q=2\binom{\ell}{2}t^{2}\), we obtain a \(q\)-set \(U_{i}\subseteq D_{i}\setminus(e\cup f)\) for each \(i\in[\ell]\) such that
* the induced \(\ell\)-partite \(3\)-graph of \(\mathcal{H}\) on \(U_{1},\ldots,U_{\ell}\) is complete,
* for every \(v\in\{u_{1},v_{1}\}\) the induced \((\ell-1)\)-partite graph of \(L_{\mathcal{H}}(v)\) on \(\{U_{i}\colon i\in[\ell]\setminus\{i_{1}\}\}\) is complete,
* and for every \(v\in\{u^{\prime}_{1},v^{\prime}_{1}\}\) the induced \((\ell-1)\)-partite graph of \(L_{\mathcal{H}}(v)\) on \(\{U_{i}\colon i\in[\ell]\setminus\{i_{2}\}\}\) is complete.
Choose \(i^{*}\in[\ell]\setminus\{i_{1},i_{2}\}\) and fix a \(3\)-set \(f\subseteq D_{i^{*}}\) such that \(|f\cap U_{i^{*}}|=2\). By symmetry, we may assume that \(\chi(f)\neq\chi(e)\). Let \(\{u_{2},v_{2}\}:=f\cap U_{i^{*}}\). Fix a \(t\)-set \(W_{i}\subseteq U_{i}\) for \(i\in[\ell]\setminus\{i_{1},i^{*}\}\), fix a \(t\)-set \(W_{i_{1}}\subseteq U_{i_{1}}\cup\{u_{1},v_{1}\}\) with \(\{u_{1},v_{1}\}\subseteq W_{i_{1}}\), and fix a \(t\)-set \(W_{i^{*}}\subseteq U_{i^{*}}\) with \(\{u_{2},v_{2}\}\subseteq W_{i^{*}}\). Let \(K\) denote the complete \(\ell\)-partite graph with parts \(W_{1},\ldots,W_{\ell}\). Observe from the choice of \(U_{i}\)'s that every pair in \(K\) is contained in at least \(q=2\binom{\ell}{2}t^{2}\) edges in \(\mathcal{H}\). Since \(\mathcal{H}\) is rainbow, it is easy to greedily extend \(K\) to be a rainbow copy of \(H_{K}^{3}\) and avoid using the color \(\chi(f)\). However, this copy of \(H_{K}^{3}\) together with edges \(e\) and \(f\) is a rainbow copy of \(H_{F}^{3}\), contradicting the rainbow-\(H_{F}^{3}\)-freeness of \(\chi\).
For \(i\in\{0,1,2,3\}\) let
\[\mathcal{B}_{i}:=\left\{e\in\mathcal{B}\colon|e\cap\overline{D}|=i\right\}.\]
**Case 1**: \(F\in\left\{K_{\ell}^{\alpha}[t],\ K_{\ell}^{\beta}[t]\right\}\).
For every triple \(e\in\mathcal{B}_{0}\cup\mathcal{B}_{1}\) we can fix a pair \(e^{\prime}\subseteq e\cap D_{i}\) for some \(i\in[\ell]\). It follows from Claim 3.5 that no two triples will share the same pair and no two pairs lie in the same part. Therefore, \(|\mathcal{B}_{0}|+|\mathcal{B}_{1}|\leq\ell\). Combined with Claim 3.4 and the trivial bound \(|\mathcal{B}_{2}|+|\mathcal{B}_{3}|\leq n|\overline{\mathcal{D}}|^{2}\), we obtain
\[|\mathcal{H}|\leq|\widehat{\mathcal{G}}|+\sum_{i=0}^{3}|\mathcal{ B}|-|\mathcal{M}| \leq t_{3}(n,\ell)+\ell+n|\overline{\mathcal{D}}|^{2}-\delta^{1/3}n^{2}| \overline{\mathcal{D}}|\] \[\leq t_{3}(n,\ell)+\ell-\left(\delta^{2/3}n-\delta^{1/3}n \right)n|\overline{\mathcal{D}}|\leq t_{3}(n,\ell)+\ell,\]
a contradiction.
**Case 2**: \(F=K_{\ell}^{\gamma}[t]\).
Similarly, for every triple \(e\in\mathcal{B}_{0}\cup\mathcal{B}_{1}\) we can fix a pair \(e^{\prime}\subseteq e\cap D_{i}\) for some \(i\in[\ell]\). It follows from Claim 3.5 that no two triples will share the same pair and the number of pairs is at most one. Therefore, \(|\mathcal{B}_{0}|+|\mathcal{B}_{1}|\leq 1\). Similarly, by Claim 3.4 and the trivial bound \(|\mathcal{B}_{2}|+|\mathcal{B}_{3}|\leq n|\overline{\mathcal{D}}|^{2}\), we have
\[|\mathcal{H}|\leq|\widehat{\mathcal{G}}|+\sum_{i=0}^{3}|\mathcal{B}_{i}|-|\mathcal{M}|\leq t_{3}(n,\ell)+1+n|\overline{\mathcal{D}}|^{2}-\delta^{1/3}n^{2}|\overline{\mathcal{D}}|\] \[\leq t_{3}(n,\ell)+1-\left(\delta^{2/3}n-\delta^{1/3}n\right)n|\overline{\mathcal{D}}|\leq t_{3}(n,\ell)+1,\]
a contradiction.
The proof of Theorem 1.5 is similar to that of Theorem 1.4, so we omit the details and only sketch the proof of the most crucial claim.
**Claim 3.6**.: _The set \(\mathcal{B}\) does not contain two edges \(e,e^{\prime}\) such that_
\[\max\left\{|e\cap D_{i}|\colon i\in[\ell]\right\}\geq 2\quad\text{and} \quad\max\left\{|e^{\prime}\cap D_{i}|\colon i\in[\ell]\right\}\geq 2. \tag{10}\]
Proof.: Suppose to the contrary that there exist two distinct edges \(e,e^{\prime}\in\mathcal{B}\) such that (10) holds. Let us assume that \(|e\cap D_{i_{1}}|\geq 2\) and \(|e^{\prime}\cap D_{i_{2}}|\geq 2\), where \(i_{1},i_{2}\in[\ell]\).
If \(F=K_{\ell}^{\alpha}[t]\), then we choose an \(r\)-set \(f\subseteq D_{i_{1}}\cup D_{i_{2}}\) such that \(|f\cap e|=|f\cap e^{\prime}|=1\) and such that \(\min\left\{|f\cap D_{i_{1}}|,\ |f\cap D_{i_{2}}|\right\}\geq 2\). Since \(r\geq 4\), such \(f\) exists.
If \(F=K_{\ell}^{\beta}[t]\), then we choose an \(r\)-set \(f\subseteq D_{i_{1}}\cup D_{i_{2}}\) such that \(|f\cap e|=|f\cap e^{\prime}|=0\) and such that \(\min\left\{|f\cap D_{i_{1}}|,\ |f\cap D_{i_{2}}|\right\}\geq 2\). Since \(r\geq 4\), such \(f\) exists.
The case \(F=K_{\ell}^{\gamma}[t]\) can be handled in the same way as in the proof of Claim 3.5.
The rest of the argument is similar to the proof of Claim 3.5, so we omit the details.
## 4 Concluding remarks
\(\bullet\) Given an \(r\)-graph \(F\) and an integer \(1\leq k<r\), we say an edge \(e\in F\) is \(k\)**-pendant** if \(e\) contains a \(k\)-subset \(e^{\prime}\) such that
\[e^{\prime}\cap f=\emptyset\quad\text{for all}\quad f\in F\setminus\{e\}.\]
For convenience, let \(F_{k-}\) denote the family of \(r\)-graphs that can be obtained from \(F\) by removing one \(k\)-pendant edge, i.e.
\[F_{k-}:=\{F\setminus\{e\}\colon e\in F\text{ is $k$-pendant}\}\,.\]
The argument in the proof of Theorem 1.1 (see Claim 2.2) yields the following result.
**Theorem 4.1**.: _Let \(r>k\geq 1\) be integers and \(F\) be an \(r\)-graph. Suppose that \(\chi\colon K_{n}^{r}\to\mathbb{N}\) is a rainbow-\(F\)-free coloring. Then every rainbow subgraph \(\mathcal{H}\subseteq K_{n}^{r}\) can be made \(F_{k-}\)-free by removing at most \((|F|-1)\binom{n}{k}\) edges. In particular, for all integers \(n\geq r\),_
\[\operatorname{ar}(n,F)\leq\operatorname{ex}(n,F_{k-})+(|F|-1)\binom{n}{k}.\]
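As a quick illustration of the additive term in Theorem 4.1 (this is not part of the proof, and the parameter values below are arbitrary), the following small Python snippet evaluates \((|F|-1)\binom{n}{k}\) for a few sample choices of \(|F|\), \(n\) and \(k\).

```python
# Illustrative only: evaluate the additive term (|F|-1)*binom(n, k) from Theorem 4.1
# for some arbitrary sample parameters (these numbers are not taken from the paper).
from math import comb

def extra_term(num_edges_F: int, n: int, k: int) -> int:
    """The correction term added to ex(n, F_{k-}) in Theorem 4.1."""
    return (num_edges_F - 1) * comb(n, k)

for k in (1, 2, 3):
    print(k, extra_term(num_edges_F=10, n=1000, k=k))
```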
\(\bullet\) The following question seems interesting, and we are uncertain whether it has already been addressed in the literature.
**Problem 4.2**.: _Let \(r\geq 2\) be an integer. Is it true that for every \(\delta>0\) there exists an \(r\)-graph \(F\) such that_
\[\operatorname{ar}(n,F)-\operatorname{ex}(n,F_{-})=\Omega(n^{r-\delta})? \tag{11}\]
_On the other hand, does there exist an \(r\)-graph \(F\) such that (11) holds for all \(\delta>0\)?_
|
2306.03077 | Modified metrics of acoustic black holes: A review | In this brief review, we will address acoustic black holes arising from
quantum field theory in the Lorentz-violating and non-commutative background.
Thus, we consider canonical acoustic black holes with effective metrics for the
purpose of investigating Hawking radiation and entropy. We show that due to the
generalized uncertainty principle and the modified dispersion relation, the
Hawking temperature is regularized, that is, free from the singularity when the
horizon radius goes to zero. In addition, we also find logarithmic corrections
in the leading order for entropy. | M. A. Anacleto, F. A. Brito, E. Passos | 2023-06-05T17:53:13Z | http://arxiv.org/abs/2306.03077v1 | # Modified metrics of acoustic black holes: A review
###### Abstract
In this brief review, we will address acoustic black holes arising from quantum field theory in the Lorentz-violating and non-commutative background. Thus, we consider canonical acoustic black holes with effective metrics for the purpose of investigating Hawking radiation and entropy. We show that due to the generalized uncertainty principle and the modified dispersion relation, the Hawking temperature is regularized, that is, free from the singularity when the horizon radius goes to zero. In addition, we also find logarithmic corrections in the leading order for entropy.
## I Introduction
Gravitational analogue models are topics of great interest and have been widely studied in the literature due to the possibility of detecting Hawking radiation in table-top experiments. In particular, acoustic black holes were proposed by Unruh in 1981 [1; 2] for the purpose of exploring Hawking radiation, as well as investigating other issues relevant to understanding quantum gravity effects. It is well known that an acoustic black hole can be generated when the fluid motion reaches a speed greater than the local speed of sound. These objects can exhibit properties similar to the laws of thermodynamics of gravitational black holes, such as a Hawking-like temperature and entropy (entanglement entropy). Moreover, it has been conjectured that phenomena observed in black holes may also occur in acoustic black holes. Furthermore, with the detection of gravitational waves [3; 4] and the capture of the image of a supermassive black hole [5; 6], a window of possibilities was opened in the physics of black holes and also in analogue models. Acoustic black holes have applications in various branches of physics, namely high energy physics, condensed matter, and quantum physics [7; 8]. On the experimental side, Hawking radiation has been successfully measured in the works reported in [9; 10], and analogous measurements have also been carried out in other branches of physics [11; 12; 13; 14; 15; 16]. In the physics of acoustic black holes, the first experimental measurement of Hawking radiation was achieved in a Bose-Einstein condensate [17].
In a recent paper, acoustic black holes embedded in a curved background were constructed by applying relativistic Gross-Pitaevskii and Yang-Mills theories [18]. In [19], an acoustic black hole of a D3-black brane was proposed. On the other hand, relativistic acoustic black holes in Minkowski spacetime were generated from the Abelian Higgs model [20; 21; 22; 23; 24]. Also, relativistic acoustic black holes have emerged from other physical models [25; 26; 27; 28]. In addition, these objects have been used to analyze various physical phenomena, such as superradiance [29; 30; 31; 32; 33], entropy [34; 35; 36; 37; 38], and quasinormal modes [39; 40; 41; 42; 43; 44], as well as in other models [45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55]. Moreover, in [56], it was reported that there is a thermodynamic-like description for acoustic black holes in two dimensions. In this sense, an analogous form of Bekenstein-Hawking entropy (understood as an entanglement entropy) was addressed in [57] by analyzing the Bose-Einstein condensate system. In addition, the dependence of entropy on the area of the event horizon of the acoustic black hole was explored in [58]. Also, in [59], the entanglement entropy of an acoustic black hole was examined.
In this brief review, we are interested in investigating modified acoustic black holes that have been constructed from field theory by considering the Abelian Higgs model in the Lorentz-violating [21] and noncommutative [22] background. To this end, we will explore canonical acoustic black holes with modified metrics to examine the effect of Lorentz symmetry breaking and noncommutativity on Hawking radiation and entropy. In addition, by applying the generalized uncertainty principle and a modified dispersion relation, we show that the Hawking temperature singularity disappears when the horizon radius vanishes. Besides, we also find logarithmic correction terms for entropy. Recently, the stability of the canonical acoustic black hole in the presence of noncommutative effects and minimum length has been addressed by us in [60]. Thus, it was verified that the non-commutativity and the minimum length act as regulators in the Hawking temperature, that is, the singularity is removed. Also, it was shown that for a certain minimum radius the canonical acoustic black hole presents stability.
This brief review is organized as follows. In Sec. II, we briefly review the steps to find the relativistic acoustic black hole metrics. In Sec. III, we briefly review the steps to find the relativistic acoustic black hole modified metrics. In
Sec. IV, we will focus on canonical acoustic black holes with effective metrics to compute the Hawking temperature and entropy. In Sec. V, we will introduce quantum corrections via the generalized uncertainty principle and the modified dispersion relation in the calculation of the Hawking temperature and entropy. Finally, in Sec. VI we present our final considerations.
## II Acoustic Black Hole
In this section we review the steps to obtain the relativistic acoustic metric from the Lagrangian density of the charged scalar field. Here we will follow the procedure adopted in [20].
### Relativistic Acoustic Metric
In order to determine the relativistic acoustic metric, we start by considering the following Lagrangian density:
\[\mathcal{L}=\partial_{\mu}\phi^{*}\partial^{\mu}\phi+m^{2}|\phi|^{2}-b|\phi|^ {4}. \tag{1}\]
Now, we decompose the scalar field as \(\phi=\sqrt{\rho(x,t)}\exp\left(iS(x,t)\right)\), such that
\[\mathcal{L}=\rho\partial_{\mu}S\partial^{\mu}S+m^{2}\rho-b\rho^{2}+\frac{\rho }{\sqrt{\rho}}(\partial_{\mu}\partial^{\mu})\sqrt{\rho}. \tag{2}\]
Moreover, from the above Lagrangian, we find the equations of motion for \(S\) and \(\rho\) given respectively by
\[\partial_{\mu}\left(\rho\partial^{\mu}S\right)=0, \tag{3}\]
and
\[\frac{1}{\sqrt{\rho}}\partial_{\mu}\partial^{\mu}\sqrt{\rho}+ \partial_{\mu}S\partial^{\mu}S+m^{2}-2b\rho=0, \tag{4}\]
where Eq. (3) is the continuity equation and Eq. (4) is an equation describing a hydrodynamical fluid; the term \(\frac{1}{\sqrt{\rho}}\partial_{\mu}\partial^{\mu}\sqrt{\rho}\), called the quantum potential, can be neglected in the hydrodynamic region.
Now, by performing the following perturbations on equations of motion (3) and (4):
\[\rho=\rho_{0}+\epsilon\rho_{1}+\mathcal{O}(\epsilon^{2}), \tag{5}\] \[S=S_{0}+\epsilon\psi+\mathcal{O}(\epsilon^{2}). \tag{6}\]
We obtain
\[\partial_{\mu}\left(\rho_{1}u_{0}^{\mu}+\rho_{0}\partial^{\mu}\psi\right)=0, \tag{7}\]
and
\[u_{0}^{\mu}\partial_{\mu}\psi-b\rho_{1}=0, \tag{8}\]
where we have defined \(u_{0}^{\mu}=\partial^{\mu}S_{0}\). Hence, solving (8) for \(\rho_{1}\) and substituting into (7), we have
\[\partial_{\mu}\left[u_{0}^{\mu}u_{0}^{\nu}+b\rho_{0}g^{\mu\nu}\right]\partial _{\nu}\psi=0. \tag{9}\]
We can also write the above equation as follows:
\[\partial_{t}\left\{\omega_{0}^{2}\left[-1-\frac{b\rho_{0}}{2\omega _{0}^{2}}\right]\partial_{t}\psi-\omega_{0}^{2}\frac{v_{0}^{i}}{\omega_{0}} \partial_{i}\psi\right\}\] \[+\partial_{i}\left\{-\omega_{0}^{2}\frac{v_{0}^{i}}{\omega_{0}} \partial_{t}\psi+\omega_{0}^{2}\left[-\frac{v_{0}^{i}v_{0}^{j}}{\omega_{0}^{2 }}+\frac{b\rho_{0}}{2\omega_{0}^{2}}\delta^{ij}\right]\partial_{j}\psi\right\} =0, \tag{10}\]
where \(\omega_{0}=-\partial^{t}S_{0}\) and \(v_{0}^{i}=\partial_{i}S_{0}\) (the local velocity field). In addition, we define \(c_{s}^{2}=b\rho_{0}/2\omega_{0}^{2}\) to be the speed of sound and \(v^{i}=v_{0}^{i}/\omega_{0}\). However, the equation (II) becomes
\[\partial_{t}\left\{\frac{b\rho_{0}}{2c_{s}^{2}}\left[\left(-1-c_{s}^{2}\right) \partial_{t}\psi-v^{i}\partial_{i}\psi\right]\right\}+\,\partial_{i}\left\{ \frac{b\rho_{0}}{2c_{s}^{2}}\left[-v^{i}\partial_{t}\psi+\left(-v^{i}v^{j}+c_ {s}^{2}\delta^{ij}\right)\partial_{j}\psi\right]\right\}=0. \tag{11}\]
In this way, the above equation can be written as a Klein-Gordon equation in (3+1) dimensional curved space as follows:
\[\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\right) \psi=0, \tag{12}\]
where
\[\sqrt{-g}g^{\mu\nu}=\frac{b\rho_{0}}{2c_{s}^{2}}\left(\begin{array}{ccc}-1-c_ {s}^{2}&\vdots&-v^{i}\\ \cdots\cdots&\cdot&\cdots\cdots\\ -v^{j}&\vdots&c_{s}^{2}\delta^{ij}-v^{i}v^{j}\end{array}\right). \tag{13}\]
Hence, by determining the inverse of \(g^{\mu\nu}\), we find the relativistic acoustic metric given by
\[g_{\mu\nu}=\frac{b\rho_{0}}{2c_{s}\sqrt{1+c_{s}^{2}-v^{2}}}\left(\begin{array} []{ccc}-c_{s}^{2}+v^{2}&\vdots&-v^{i}\\ \cdots\cdots&\cdot&\cdots\cdots\\ -v^{j}&\vdots&(1+c_{s}^{2})\delta^{ij}\end{array}\right). \tag{14}\]
The metric depends on the density \(\rho_{0}\), the local sound speed in the fluid \(c_{s}\), and the flow velocity \(\vec{v}\). This is the acoustic black hole metric for high speeds \(c_{s}\) and \(\vec{v}\). Note that, in the non-relativistic limit, up to an overall factor, the metric found by Unruh is obtained.
\[g_{\mu\nu}=\frac{b\rho_{0}}{2c_{s}}\left(\begin{array}{ccc}-c_{s}^{2}+v^{2} &\vdots&-v^{i}\\ \cdots\cdots&\cdot&\cdots\cdots\\ -v^{j}&\vdots&\delta^{ij}\end{array}\right). \tag{15}\]
The relativistic acoustic metric (14) has also been obtained from the Abelian Higgs model [20].
### The Dispersion Relation
Here we aim to examine the dispersion relation. Hence, we will adopt the notation written below
\[\psi\sim\text{Re}\left[e^{i\omega t-i\vec{k}\cdot\vec{x}}\right],\qquad\omega =\frac{\partial\psi}{\partial t},\qquad\vec{k}=\nabla\psi. \tag{16}\]
So we can write the Klein-Gordon equation (11) in terms of momentum and frequency as follows:
\[(1+c_{s}^{2})\,\omega^{2}+2(\vec{v}\cdot\vec{k})\,\omega-\left(c_{s}^{2}-v^{ 2}\right)k^{2}=0. \tag{17}\]
Now, by making \(k^{i}=\delta^{i1}\), we have
\[\omega=\frac{-v_{1}k\,\pm c_{s}k\sqrt{1+c_{s}^{2}-v_{1}^{2}}}{(1+c_{s}^{2})}= \frac{-v_{1}k\,\pm c_{s}k\sqrt{1+(c_{s}-v_{1})(c_{s}+v_{1})}}{(1+c_{s}^{2})}, \tag{18}\]
In the limit of small \(v_{1}\), we find the modified dispersion relation
\[\omega\approx E\left(1+\frac{v_{1}}{2}\right), \tag{19}\]
where \(E=c_{s}\,k\) is the linear dispersion relation.
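As a consistency check of the algebra leading to Eq. (18), the short SymPy sketch below (an illustration added here, not part of the original derivation) substitutes the quoted root back into the quadratic dispersion relation (17) with \(k^{i}=\delta^{i1}\) and confirms that it vanishes identically.

```python
import sympy as sp

w, k, cs, v1 = sp.symbols('omega k c_s v_1', positive=True)

# Dispersion relation (17) restricted to k^i = delta^{i1}
disp = (1 + cs**2)*w**2 + 2*v1*k*w - (cs**2 - v1**2)*k**2

# Root quoted in Eq. (18) (upper sign)
root = (-v1*k + cs*k*sp.sqrt(1 + cs**2 - v1**2))/(1 + cs**2)

# Substituting the root back into (17) should give identically zero
print(sp.simplify(sp.expand(disp.subs(w, root))))  # -> 0
```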
## III Modified Acoustic Black Hole
In this section we review the derivation of the relativistic acoustic metric from the Abelian Higgs model in the background violating-Lorentz and noncommutative.
### The Lorentz-Violating Model
At this point, we consider the Abelian Higgs model with Lorentz symmetry breaking that has been introduced as a change in the scalar sector of the Lagrangian [61]. Moreover, the relativistic acoustic metric violating Lorentz has been found in [21]. Then, the corresponding Lagrangian for the abelian Higgs model in the Lorentz-violating background is written as follows:
\[\mathcal{L} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+|D_{\mu}\phi|^{2}+m^{2}|\phi|^{2} -b|\phi|^{4}+k^{\mu\nu}D_{\mu}\phi^{*}D_{\nu}\phi, \tag{20}\]
being \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) the field intensity tensor, \(D_{\mu}\phi=\partial_{\mu}\phi-ieA_{\mu}\phi\) the covariant derivative and \(k^{\mu\nu}\) a constant tensor implementing the Lorentz symmetry breaking, given by [21]
\[k_{\mu\nu}=\left[\begin{array}{cccc}\beta&\alpha&\alpha&\alpha\\ \alpha&\beta&\alpha&\alpha\\ \alpha&\alpha&\beta&\alpha\\ \alpha&\alpha&\alpha&\beta\end{array}\right],\quad(\mu,\nu=0,1,2,3), \tag{21}\]
where \(\alpha\) and \(\beta\) are real parameters.
Next, following the steps taken in the previous section to derive the relativistic acoustic metric from quantum field theory, we consider \(\phi=\sqrt{\rho(x,t)}\exp{(iS(x,t))}\) in the Lagrangian above. Thus, we have
\[\mathcal{L} = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\rho\partial_{\mu}S\partial^{ \mu}S-2e\rho A_{\mu}\partial^{\mu}S+e^{2}\rho A_{\mu}A^{\mu}+m^{2}\rho-b\rho^ {2} \tag{22}\] \[+ k^{\mu\nu}\rho(\partial_{\mu}S\partial_{\nu}S-2eA_{\mu}\partial _{\nu}S+e^{2}A_{\mu}A_{\nu})+\frac{\rho}{\sqrt{\rho}}(\partial_{\mu}\tilde{ \partial}^{\mu})\sqrt{\rho},\]
where \(\tilde{\partial}^{\mu}=\partial^{\mu}+k^{\mu\nu}\partial_{\nu}\). The equations of motion for \(S\) and \(\rho\) are:
\[\partial_{\mu}\left[\rho u^{\mu}+\rho k^{\mu\nu}u_{\nu}\right]=0, \tag{23}\]
and
\[\frac{(\partial_{\mu}\tilde{\partial}^{\mu})\sqrt{\rho}}{\sqrt{ \rho}}+u_{\mu}u^{\mu}+k^{\mu\nu}u_{\mu}u_{\nu}+m^{2}-2b\rho=0, \tag{24}\]
where we have defined \(u^{\mu}=\partial^{\mu}S-eA^{\mu}\). Now, by linearizing the equations above around the background \((\rho_{0},S_{0})\), with
\[\rho=\rho_{0}+\epsilon\rho_{1}+\mathcal{O}(\epsilon^{2}), \tag{25}\] \[S=S_{0}+\epsilon\psi+\mathcal{O}(\epsilon^{2}), \tag{26}\]
and keeping the vector field \(A_{\mu}\) unchanged, we have
\[\partial_{\mu}\left[\rho_{1}\left(u_{0}^{\mu}+k^{\mu\nu}u_{0\nu} \right)+\rho_{0}\left(g^{\mu\nu}+k^{\mu\nu}\right)\partial_{\nu}\psi\right]=0, \tag{27}\]
and
\[\left(u_{0}^{\mu}+k^{\mu\nu}u_{0\nu}\right)\partial_{\mu}\psi-b \rho_{1}=0, \tag{28}\]
by solving (28) for \(\rho_{1}\) and replacing into equation (27), we obtain
\[\partial_{\mu}\left[u_{0}^{\mu}u_{0}^{\nu}+k^{\mu\lambda}u_{0\lambda}u_{0}^{ \nu}+u_{0}^{\mu}k^{\nu\lambda}u_{0\lambda}+b\rho_{0}\left(g^{\mu\nu}+k^{\mu \nu}\right)\right]\partial_{\nu}\psi=0. \tag{29}\]
Hence, we find the equation of motion for a linear acoustic disturbance \(\psi\) given by a Klein-Gordon equation in a curved space
\[\frac{1}{\sqrt{-g}}\partial_{\mu}(\sqrt{-g}g^{\mu\nu}\partial_{\nu})\psi=0, \tag{30}\]
where \(g_{\mu\nu}\) is the relativistic acoustic metrics.
For \(\beta\neq 0\) and \(\alpha=0\), we have [21]
\[g_{\mu\nu}\equiv\frac{b\rho_{0}\tilde{\beta}_{-}^{1/2}}{2c_{s}\sqrt{\mathcal{Q}}} \left[\begin{array}{cccc}-\left(\frac{c_{s}^{2}}{\beta_{+}}-\frac{\tilde{ \beta}_{-}}{\beta_{+}}v^{2}\right)&\vdots&-v^{j}\\ \cdots\cdots\cdots\cdots\cdots&\cdot&\cdots\cdots\cdots\cdots\cdots\cdots \cdots\\ -v^{i}&\vdots&f_{\beta}\delta^{ij}+\frac{\tilde{\beta}_{-}}{\beta_{+}}v^{i}v^{ j}\end{array}\right], \tag{31}\]
where \(\mathcal{Q}=1+\frac{c_{s}^{2}}{\beta_{+}}-\frac{\tilde{\beta}_{-}}{\beta_{+}}v ^{2}\) and \(f_{\beta}=\frac{\tilde{\beta}_{+}}{\beta_{-}}+\frac{c_{s}^{2}}{\beta_{-}}- \frac{\tilde{\beta}_{-}}{\beta_{+}}v^{2}\).
The acoustic line element in the Lorentz-violating background can be written as follows
\[ds^{2} = \frac{b\rho_{0}\tilde{\beta}_{-}^{1/2}}{2c_{s}\sqrt{\mathcal{Q}}} \left[-\left(\frac{c_{s}^{2}}{\tilde{\beta}_{+}}-\frac{\tilde{\beta}_{-}}{ \tilde{\beta}_{+}}v^{2}\right)dt^{2}-2\vec{v}\cdot d\vec{x}dt+\frac{\tilde{ \beta}_{-}}{\tilde{\beta}_{+}}(\vec{v}\cdot d\vec{x})^{2}+f_{\beta}d\vec{x}^{2 }\right]. \tag{32}\]
Now changing the time coordinate as \(d\tau=dt+\frac{\tilde{\beta}_{+}\vec{v}\cdot d\vec{x}}{c_{s}^{2}-\beta_{-}v^ {2}}\), we find the acoustic metric in the stationary form
\[ds^{2} = \frac{b\rho_{0}\tilde{\beta}_{-}^{1/2}}{2c_{s}\sqrt{\mathcal{Q}}} \left[-\left(\frac{c_{s}^{2}}{\tilde{\beta}_{+}}-\frac{\tilde{\beta}_{-}}{ \tilde{\beta}_{+}}v^{2}\right)d\tau^{2}+\mathcal{F}\left(\frac{\tilde{\beta}_ {-}v^{i}v^{j}}{c_{s}^{2}-\tilde{\beta}_{-}v^{2}}+\frac{f_{\beta}}{\mathcal{F} }\delta^{ij}\right)dx^{i}dx^{j}\right]\!\!. \tag{33}\]
where \(\mathcal{F}=\left(\frac{\tilde{\beta}_{+}}{\beta_{-}}+\frac{c_{s}^{2}}{\beta _{+}}-\frac{\tilde{\beta}_{-}}{\beta_{+}}v^{2}\right)\). For \(\tilde{\beta}=1\) we recover the result found in Ref. [20].
Next, for \(\beta=0\) and \(\alpha\neq 0\), we have [21]
\[g_{\mu\nu}\equiv\frac{b\rho_{0}}{2c_{s}\sqrt{f}}\left[\begin{array}{ccc}g_{ tt}&\vdots&g_{tj}\\ \cdots&\cdot&\cdots\\ g_{it}&\vdots&g_{ij}\end{array}\right], \tag{34}\]
where
\[g_{tt} = -[(1+\alpha)c_{s}^{2}-v^{2}+\alpha^{2}(1-v)^{2}], \tag{35}\] \[g_{tj} = -(1-\vec{\alpha}\cdot\vec{v})v^{j},\] (36) \[g_{it} = -(1-\vec{\alpha}\cdot\vec{v})v^{i},\] (37) \[g_{ij} = \left[(1-\vec{\alpha}\cdot\vec{v})^{2}+c_{s}^{2}-v^{2}\right] \delta^{ij}+v^{i}v^{j},\] (38) \[f = (1+\alpha)[(1-\vec{\alpha}\cdot\vec{v})^{2}+c_{s}^{2}]-v^{2}+ \alpha^{2}(1-v)^{2}\left[1+(1-\vec{\alpha}\cdot\vec{v})^{2}c_{s}^{-2}\right]. \tag{39}\]
Thus, the acoustic line element in the Lorentz-violating background can be written as
\[ds^{2} = \frac{b\rho_{0}}{2c_{s}\sqrt{f}}\left[g_{tt}dt^{2}-2(1-\vec{ \alpha}\cdot\vec{v})(\vec{v}\cdot d\vec{x})dt+(\vec{v}\cdot d\vec{x})^{2}+f_{ \alpha}d\vec{x}^{2}\right], \tag{40}\]
where \(f_{\alpha}=(1-\vec{\alpha}\cdot\vec{v})^{2}+c_{s}^{2}-v^{2}\). Now changing the time coordinate as
\[d\tau=dt+\frac{(1-\vec{\alpha}\cdot\vec{v})(\vec{v}\cdot d\vec{x})}{[(1+ \alpha)c_{s}^{2}-v^{2}+\alpha^{2}(1-v)^{2}]}, \tag{41}\]
we find the acoustic metric in the stationary form
\[ds^{2}=\frac{b\rho_{0}}{2c_{s}\sqrt{f}}\left[g_{tt}d\tau^{2}+\Lambda\left( \frac{-v^{i}v^{j}}{g_{tt}}+\frac{f_{\alpha}\delta^{ij}}{\Lambda}\right)dx^{i} dx^{j}\right]\!. \tag{42}\]
where \(\Lambda=(1-\vec{\alpha}\cdot\vec{v})^{2}-g_{tt}\). For \(\alpha=0\), the result found in [20] is recovered.
### Noncommutative Acoustic Black Hole
The metric of a noncommutative canonical acoustic black hole has been found by us in [22]. Here, starting from the noncommutative Abelian Higgs model, we briefly review the steps to generate the relativistic acoustic metric in the
noncommutative background. Thus, the Lagrangian of the Abelian Higgs model in the noncommutative background is given by [62]
\[\hat{\mathcal{L}} = -\frac{\kappa_{+}}{4}F_{\mu\nu}F^{\mu\nu}+\kappa_{-}\left(|D_{\mu} \phi|^{2}+m^{2}|\phi|^{2}-b|\phi|^{4}\right)+\frac{1}{2}\theta^{\alpha\beta}F_{ \alpha\mu}\left[(D_{\beta}\phi)^{\dagger}D^{\mu}\phi+(D^{\mu}\phi)^{\dagger}D_ {\beta}\phi\right], \tag{43}\]
being \(\kappa_{\pm}=1\pm\theta^{\mu\nu}F_{\mu\nu}/2\), \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) the field intensity tensor and \(D_{\mu}\phi=\partial_{\mu}\phi-ieA_{\mu}\phi\) the covariant derivative. The parameter \(\theta^{\alpha\beta}\) is a constant, real-valued antisymmetric \(D\times D\)- matrix in \(D\)-dimensional spacetime with dimensions of length squared.
Now, we use \(\phi=\sqrt{\rho(x,t)}\exp{(iS(x,t))}\) in the above Lagrangian, such that [22].
\[\mathcal{L} = -\frac{\kappa_{+}}{4}F_{\mu\nu}F^{\mu\nu}+\rho\bar{g}^{\mu\nu} \mathcal{D}_{\mu}S\mathcal{D}_{\nu}S+\tilde{\theta}m^{2}\rho-\tilde{\theta}b \rho^{2}+\frac{\rho}{\sqrt{\rho}}\bar{g}^{\mu\nu}\partial_{\mu}\partial_{\nu} \sqrt{\rho}, \tag{44}\]
where \(\mathcal{D}_{\mu}=\partial_{\mu}-eA_{\mu}/S\), \(\bar{g}^{\mu\nu}=\tilde{\theta}g^{\mu\nu}+\Theta^{\mu\nu}\), \(\tilde{\theta}=(1+\vec{\theta}\cdot\vec{B})\), \(\vec{B}=\nabla\times\vec{A}\) and \(\Theta^{\mu\nu}=\theta^{\alpha\mu}F_{\alpha}^{\ \nu}\). In our analysis we consider the case where there is no noncommutativity between space and time, that is \(\theta^{0i}=0\) and use \(\theta^{ij}=\varepsilon^{ijk}\theta^{k}\), \(F^{i0}=E^{i}\) and \(F^{ij}=\varepsilon^{ijk}B^{k}\).
In the sequence we obtain the equations of motion for \(S\) and \(\rho\) as follows:
\[\partial_{\mu}\left[\tilde{\theta}\rho u^{\mu}+\rho\tilde{\Theta}^{\mu\nu}u_ {\nu}\right]=0, \tag{45}\]
and
\[\frac{1}{\sqrt{\rho}}\bar{g}^{\mu\nu}\partial_{\mu}\partial_{\nu}\sqrt{\rho}+ \bar{g}^{\mu\nu}u_{\mu}u_{\nu}+\tilde{\theta}m^{2}-2\tilde{\theta}b\rho=0, \tag{46}\]
where \(\tilde{\Theta}^{\mu\nu}=(\Theta^{\mu\nu}+\Theta^{\nu\mu})/2\). Hence, by linearizing the equations of motion around the background \((\rho_{0},S_{0})\), with \(\rho=\rho_{0}+\rho_{1}\), \(S=S_{0}+\psi\) and keeping the vector potential \(A_{\mu}\) unchanged, such that
\[\partial_{\mu}\left[\rho_{1}\bar{g}^{\mu\nu}u_{0\nu}+\rho_{0}\left(g^{\mu\nu} +\tilde{\Theta}^{\mu\nu}\right)\partial_{\nu}\psi\right]=0, \tag{47}\]
and
\[\left(\tilde{\theta}u_{0}^{\mu}+\tilde{\Theta}^{\mu\nu}u_{0\nu}\right) \partial_{\mu}\psi-b\tilde{\theta}\rho_{1}=0. \tag{48}\]
Then, by manipulating the above equations, we obtain the equation of motion for a linear acoustic disturbance \(\psi\) in the form
\[\frac{1}{\sqrt{-g}}\partial_{\mu}(\sqrt{-g}g^{\mu\nu}\partial_{\nu})\psi=0, \tag{49}\]
where \(g_{\mu\nu}=\frac{b\rho_{0}}{2c_{s}\sqrt{J}}\tilde{g}_{\mu\nu}\) is the relativistic acoustic metric with noncommutative corrections in (3+1) dimensions and with \(\tilde{g}_{\mu\nu}\) given in the form [22]
\[\tilde{g}_{tt} = -[(1-3\vec{\theta}\cdot\vec{B})c_{s}^{2}-(1+3\vec{\theta}\cdot \vec{B})v^{2}+2(\vec{\theta}\cdot\vec{v})(\vec{B}\cdot\vec{v})-(\vec{\theta} \times\vec{E})\cdot\vec{v}], \tag{50}\] \[\tilde{g}_{tj} = -\frac{1}{2}(\vec{\theta}\times\vec{E})^{j}(c_{s}^{2}+1)-\left[2 (1+2\vec{\theta}\cdot\vec{B})-(\vec{\theta}\times\vec{E})\cdot\vec{v}\right] \frac{v^{j}}{2}+\frac{B^{j}}{2}(\vec{\theta}\cdot\vec{v})+\frac{\theta^{j}}{2}( \vec{B}\cdot\vec{v}),\] (51) \[\tilde{g}_{it} = -\frac{1}{2}(\vec{\theta}\times\vec{E})^{i}(c_{s}^{2}+1)-\left[2( 1+2\vec{\theta}\cdot\vec{B})-(\vec{\theta}\times\vec{E})\cdot\vec{v}\right] \frac{v^{i}}{2}+\frac{B^{i}}{2}(\vec{\theta}\cdot\vec{v})+\frac{\theta^{i}}{2}( \vec{B}\cdot\vec{v}),\] (52) \[\tilde{g}_{ij} = [(1+\vec{\theta}\cdot\vec{B})(1+c_{s}^{2})-(1+\vec{\theta}\cdot \vec{B})v^{2}-(\vec{\theta}\times\vec{E})\cdot\vec{v}]\delta^{ij}+(1+\vec{ \theta}\cdot\vec{B})v^{i}v^{j}.\] (53) \[f = [(1-2\vec{\theta}\cdot\vec{B})(1+c_{s}^{2})-(1+4\vec{\theta}\cdot \vec{B})v^{2}]-3(\vec{\theta}\times\vec{E})\cdot\vec{v}+2(\vec{B}\cdot\vec{v})( \vec{\theta}\cdot\vec{v}). \tag{54}\]
Setting \(\theta=0\), the acoustic metric above reduces to the acoustic metric obtained in Ref. [20].
## IV Modified canonical acoustic black hole
In this section, we shall address the issue of the Hawking temperature in the regime of low velocities for the previous cases in further detail. We now consider an incompressible fluid with spherical symmetry. In this case the density \(\rho\) is a position-independent quantity and the continuity equation implies that \(v\sim\frac{1}{r^{2}}\). The sound speed is also a constant. In the following we examine the Hawking radiation and entropy of the usual canonical acoustic black hole, as well as of its counterparts in the Lorentz-violating and noncommutative backgrounds.
### Canonical Acoustic Metric
In this case the line element of the acoustic black hole is given by
\[ds^{2}=-f(v_{r})d\tau^{2}+\frac{c_{s}^{2}}{f(v_{r})}dr^{2}+r^{2}(d\theta^{2}+\sin^ {2}\theta d\phi^{2}), \tag{55}\]
where the metric function, \(f(v_{r})\) takes the form
\[f(v_{r})=c_{s}^{2}-v_{r}^{2}\quad\longrightarrow\quad f(r)=c_{s}^{2}\left(1-\frac{r_{h}^{4}}{r^{4}}\right). \tag{56}\]
Here we have defined \(v_{r}=c_{s}\frac{r_{h}^{2}}{r^{2}}\), with \(r_{h}\) the radius of the event horizon.
In this case we compute the Hawking temperature using the following formula:
\[T_{H}=\frac{f^{\prime}(r_{h})}{4\pi}=\frac{c_{s}^{2}}{\pi r_{h}}. \tag{57}\]
By considering the above result for the Hawking temperature and applying the first law of thermodynamics, we can obtain the entropy (entanglement entropy [38]) of the acoustic black hole as follows
\[S=\int\frac{dE}{T}=\int\frac{dA}{4\pi r_{h}T_{H}}=\frac{A}{4c_{s}^{2}}, \tag{58}\]
where \(A=4\pi r_{h}^{2}\) is the horizon area of the canonical acoustic black hole.
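These two results can be reproduced with a few lines of computer algebra. The following SymPy sketch (added here for illustration; it simply automates Eqs. (56)-(58)) computes \(T_{H}=f^{\prime}(r_{h})/4\pi\) and then integrates \(dA/(4\pi r_{h}T_{H})\) to recover \(S=A/4c_{s}^{2}\).

```python
import sympy as sp

r, rh, cs = sp.symbols('r r_h c_s', positive=True)

# Metric function of the canonical acoustic black hole, Eq. (56)
f = cs**2*(1 - rh**4/r**4)

# Hawking temperature from Eq. (57): T_H = f'(r_h)/(4*pi)
T_H = sp.diff(f, r).subs(r, rh)/(4*sp.pi)
print(sp.simplify(T_H))                 # c_s**2/(pi*r_h)

# Entropy from the first law, Eq. (58), with horizon area A = 4*pi*r_h**2
A = 4*sp.pi*rh**2
S = sp.integrate(sp.diff(A, rh)/(4*sp.pi*rh*T_H), rh)
print(sp.simplify(S - A/(4*cs**2)))     # -> 0, i.e. S = A/(4*c_s**2)
```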
### Canonical Acoustic Metric with Lorentz Violation
In the limit \(c_{s}^{2}\ll 1\) and \(v^{2}\ll 1\), the acoustic metric can be written in a Schwarzschild-like form. Thus, for \(\beta\neq 0\) and \(\alpha=0\), and up to an irrelevant position-independent factor, we have [21]
\[ds^{2} = -f(v_{r})d\tau^{2}+\frac{c_{s}^{2}}{\sqrt{\tilde{\beta}_{-}\tilde{ \beta}_{+}}f(v_{r})}dr^{2}+\sqrt{\frac{\tilde{\beta_{+}}}{\tilde{\beta}_{-}}}r ^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}), \tag{59}\]
where
\[f(v_{r})=\sqrt{\frac{\tilde{\beta_{-}}}{\tilde{\beta}_{+}}}\left[\frac{c_{s}^ {2}-\tilde{\beta}_{-}v_{r}^{2}}{\tilde{\beta}_{+}}\right]\to f(r)=\sqrt{ \frac{\tilde{\beta_{-}}}{\tilde{\beta}_{+}}}\left[\frac{c_{s}^{2}}{\tilde{ \beta}_{+}}\left(1-\tilde{\beta}_{-}\frac{r_{h}^{4}}{r^{4}}\right)\right]. \tag{60}\]
The Hawking temperature is given by
\[T_{H}=\frac{f^{\prime}(r_{h})}{4\pi}=\frac{c_{s}^{2}(1-\beta)^{3/2}}{(1+\beta )^{3/2}\pi r_{h}}=\frac{c_{s}^{2}(1-3\beta)}{\pi r_{h}}. \tag{61}\]
Therefore, the temperature is decreased when we vary the parameter \(\beta\). For \(\beta=0\) the usual result is obtained. Hence, from the above temperature, we have the following result for the entropy of the acoustic black hole in the Lorentz-violating background.
\[S=\frac{(1+3\beta)}{4c_{s}^{2}}A. \tag{62}\]
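The linearisation in \(\beta\) used in Eqs. (61) and (62) can be checked symbolically; the short sketch below (an added illustration) expands the factor \((1-\beta)^{3/2}/(1+\beta)^{3/2}\) and confirms the first-order coefficient \(-3\beta\).

```python
import sympy as sp

beta = sp.symbols('beta', positive=True)

# Prefactor appearing in Eq. (61)
factor = (1 - beta)**sp.Rational(3, 2)/(1 + beta)**sp.Rational(3, 2)

# First-order expansion used in Eqs. (61)-(62)
print(sp.series(factor, beta, 0, 2))   # 1 - 3*beta + O(beta**2)
```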
Now, for \(\beta=0\) and \(\alpha\neq 0\), and for \(\alpha\) sufficiently small, we have up to first order
\[f(v_{r})=\frac{\tilde{\alpha}c_{s}^{2}-v_{r}^{2}}{\sqrt{\tilde{\alpha}(1-2\alpha v_{r})}}, \tag{63}\]
where \(\tilde{\alpha}=1+\alpha\). For \(v_{r}=c_{s}r_{h}^{2}/r^{2}\) with \(c_{s}=1\), the metric function becomes
\[f(r)\simeq\tilde{\alpha}^{-1/2}\left[\tilde{\alpha}-\frac{r_{h}^{4}}{r^{4}} \left(1+\alpha\frac{r_{h}^{2}}{r^{2}}\right)+\alpha\frac{r_{h}^{2}}{r^{2}} \right]. \tag{64}\]
In the present case there is a richer horizon structure, similar to that of charged and rotating black holes. The event horizon of the modified canonical acoustic black hole is obtained from the following equation:
\[\tilde{\alpha}-\frac{r_{h}^{4}}{r^{4}}\left(1+\alpha\frac{r_{h}^{2}}{r^{2}} \right)+\alpha\frac{r_{h}^{2}}{r^{2}}=0, \tag{65}\]
which can also be rewritten in the form
\[r^{6}+\alpha\,r_{h}^{2}\,r^{4}-\tilde{\alpha}^{-1}\,r_{h}^{4}\,r^{2}-\alpha\, r_{h}^{6}=0, \tag{66}\]
we can also write
\[r^{2}(r^{2}-r_{+}^{2})(r^{2}-r_{-}^{2})-\alpha\,r_{h}^{6}=0, \tag{67}\]
where
\[r_{\pm}^{2}=r_{h}^{2}\left(-\frac{\alpha}{2}\pm\frac{1}{\sqrt{ \tilde{\alpha}}}\right). \tag{68}\]
Now, arranging the above equation (67), we have
\[r^{2}=r_{+}^{2}+\frac{\alpha r_{h}^{6}}{r^{2}(r^{2}-r_{-}^{2})}. \tag{69}\]
Therefore, we can find the event horizon by solving the above equation perturbatively. So, up to the first order in \(\alpha\), we obtain
\[\tilde{r}_{+}^{2}\approx r_{+}^{2}+\frac{\alpha r_{h}^{6}}{r_{+}^{2}(r_{+}^{2 }-r_{-}^{2})}=\left(1-\frac{\alpha}{2}\right)r_{h}^{2}+\cdots. \tag{70}\]
Then, we have
\[\tilde{r}_{+}=r_{h}\sqrt{1-\frac{\alpha}{2}}+\cdots. \tag{71}\]
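The first-order horizon shift of Eq. (71) can be verified directly: the SymPy sketch below (added for illustration) substitutes the ansatz \(r=r_{h}\sqrt{1-\alpha/2}\) into the horizon condition (66) and shows that the residual is of order \(\alpha^{2}\).

```python
import sympy as sp

alpha, rh, r = sp.symbols('alpha r_h r', positive=True)

# Horizon condition, Eq. (66), with tilde(alpha) = 1 + alpha
horizon_eq = r**6 + alpha*rh**2*r**4 - rh**4*r**2/(1 + alpha) - alpha*rh**6

# First-order ansatz of Eq. (71)
residual = horizon_eq.subs(r, rh*sp.sqrt(1 - alpha/2))

# Both the zeroth- and first-order terms in alpha cancel
print(sp.series(residual, alpha, 0, 2))   # O(alpha**2)
```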
For the Hawking temperature, we obtain
\[T_{H}=\frac{1}{\pi\tilde{r}_{+}}\left(1+\frac{3\alpha}{2}\right). \tag{72}\]
In terms of \(r_{h}\), we have
\[T_{H}=\left(1+\frac{7\alpha}{4}\right)\frac{1}{(\pi r_{h})}. \tag{73}\]
In this situation the temperature is increased when we vary the parameter \(\alpha\). For \(\alpha=0\) one recovers the usual result.
In this case for entropy, we find
\[S=\left(1-\frac{7\alpha}{4}\right)\frac{A}{4}. \tag{74}\]
### Noncommutative Canonical Acoustic Metric
The noncommutative acoustic metric can be written as a Schwarzschild metric type, up to an irrelevant position-independent factor, in the nonrelativistic limit as follows [22],
\[ds^{2} = -\tilde{\mathcal{F}}(v_{r})d\tau^{2}+\frac{[v_{r}^{2}\Gamma+ \Sigma+\tilde{\mathcal{F}}(v_{r})\Lambda]}{\tilde{\mathcal{F}}(v_{r})}dr^{2} +\frac{r^{2}(d\vartheta^{2}+\sin^{2}\vartheta d\phi^{2})}{\sqrt{f}}, \tag{75}\]
where
\[\tilde{\mathcal{F}}(v_{r}) = \frac{\mathcal{F}(v_{r})}{\sqrt{f(v_{r})}}=\frac{1}{\sqrt{f(v_{r})} }\left[(1-3\vec{\theta}\cdot\vec{B})c_{s}^{2}-(1+3\vec{\theta}\cdot\vec{B})v_{r }^{2}-\theta\mathcal{E}_{r}v_{r}+2(\theta_{r}B_{r}v_{r}^{2})\right], \tag{76}\] \[f(v_{r}) = 1-2\vec{\theta}\cdot\vec{B}-3\theta\mathcal{E}_{r}v_{r},\] (77) \[\Lambda(v_{r}) = 1+\vec{\theta}\cdot\vec{B}-\theta\mathcal{E}_{r}v_{r},\] (78) \[\Gamma(v_{r}) = 1+4\vec{\theta}\cdot\vec{B}-2\theta\mathcal{E}_{r}v_{r},\] (79) \[\Sigma(v_{r}) = \left[\theta\mathcal{E}_{r}-(B_{r}v_{r})\theta_{r}-(\theta_{r}v _{r})B_{r}\right]v_{r}, \tag{80}\]
being \(\theta\mathcal{E}_{r}=\theta(\vec{n}\times\vec{E})_{r}\). Now, by applying the relation \(v_{r}=c_{s}\frac{r_{h}^{2}}{r^{2}}\), where \(r_{h}\) is the radius of the event horizon and making \(c_{s}=1\) and so, the metric function of the noncommutative canonical acoustic black hole becomes
\[\tilde{\mathcal{F}}(r)=\left[1-3\vec{\theta}\cdot\vec{B}-(1+3\vec{\theta} \cdot\vec{B}-2\theta_{r}B_{r})\frac{r_{h}^{4}}{r^{4}}-\theta\mathcal{E}_{r} \frac{r_{h}^{2}}{r^{2}}\right]\left[1-2\vec{\theta}\cdot\vec{B}-3\theta \mathcal{E}_{r}\frac{r_{h}^{2}}{r^{2}}\right]^{-1/2}. \tag{81}\]
Next, we will do our analysis considering the pure magnetic sector first and then we will investigate the pure electric sector.
Hence, for \(\theta_{r}=0\), \(\vec{\theta}\cdot\vec{B}=\theta_{3}B_{3}\neq 0\) and \(\theta\mathcal{E}_{r}=0\) (or \(E=0\)), with small \(\theta_{3}B_{3}\), the Hawking temperature reads
\[T_{H}=\frac{(1+3\theta_{3}B_{3})}{\sqrt{1-2\theta_{3}B_{3}}}\frac{1}{(\pi r_ {h})}=\frac{(1+4\theta_{3}B_{3})}{(\pi r_{h})}. \tag{82}\]
For \(\theta=0\) the usual result is obtained. Here the temperature has its value increased when we vary the parameter \(\theta\).
However, for the temperature in (82) we can find the entropy given by
\[S=\int\frac{dE}{T}=\int\frac{dA}{4\pi r_{h}T_{H}}=\frac{(1-4\theta_{3}B_{3})}{ 4}A, \tag{83}\]
where \(A=4\pi r_{h}^{2}\) is the horizon area of the canonical acoustic black hole.
At this point, we will consider the situation where \(B=0\) and \(\theta\mathcal{E}_{r}\neq 0\). So, from (81), we have
\[\tilde{\mathcal{F}}(r)=\left[1-\frac{r_{h}^{4}}{r^{4}}-\theta \mathcal{E}_{r}\frac{r_{h}^{2}}{r^{2}}\right]\left[1-3\theta\mathcal{E}_{r} \frac{r_{h}^{2}}{r^{2}}\right]^{-1/2}. \tag{84}\]
For this metric the event horizon is obtained by solving the equation below
\[1-\frac{r_{h}^{4}}{r^{4}}-\theta\mathcal{E}_{r}\frac{r_{h}^{2}}{r^{2}}=0, \tag{85}\]
or
\[r^{4}-\theta\mathcal{E}_{r}r_{h}^{2}\,r^{2}-r_{h}^{4}=0. \tag{86}\]
So, solving the above equation, we obtain
\[r_{+}=\left(1+\frac{\theta\mathcal{E}_{r}}{4}\right)r_{h}. \tag{87}\]
For the Hawking temperature, we find
\[T_{H} = \frac{[1-\theta\mathcal{E}_{r}/2]}{\sqrt{1-3\theta\mathcal{E}_{r }}}\frac{1}{\pi r_{+}}=\frac{(1+\theta\mathcal{E}_{r})}{\pi r_{+}}, \tag{88}\] \[= \frac{(1+3\theta\mathcal{E}_{r}/4)}{\pi r_{h}}. \tag{89}\]
We also note that the temperature is increased when we vary the \(\theta\) parameter.
For entropy we have
\[S=\left(1-\theta\mathcal{E}_{r}\right)\frac{A}{4}, \tag{90}\]
where \(A=4\pi r_{+}^{2}\).
## V Quantum-corrected Hawking Temperature and Entropy
In this section, we implement quantum corrections in the Hawking temperature and entropy calculation arising from the generalized uncertainty principle and modified dispersion relations.
### Result using GUP
At this point, we introduce quantum corrections via the generalized uncertainty principle (GUP) to determine the Hawking temperature and entropy of the canonical acoustic black hole in the Lorentz-violating and noncommutative background. So, we will adopt the following GUP [63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74]
\[\Delta x\Delta p\geq\frac{\hbar}{2}\left(1-\frac{\lambda l_{p}}{ \hbar}\Delta p+\frac{\lambda^{2}l_{p}^{2}}{\hbar^{2}}(\Delta p)^{2}\right), \tag{91}\]
where \(\lambda\) is a dimensionless positive parameter and \(l_{p}\) is the Planck length.
In the following, without loss of generality, we will adopt the natural units \(G=c=k_{B}=\hbar=l_{p}=1\). By assuming that \(\Delta p\sim E\) and following the steps performed in [38], we can obtain the following relation for the corrected energy of the black hole:
\[E_{gup}\geq E\left[1-\frac{\lambda}{2(\Delta x)}+\frac{\lambda^{ 2}}{2(\Delta x)^{2}}+\cdots\right]. \tag{92}\]
Thus, applying the tunneling formalism using the Hamilton-Jacobi method, we have the following result for the probability of tunneling with corrected energy \(E_{gup}\) given by
\[\Gamma\simeq\exp[-2\text{Im}(\mathcal{I})]=\exp\left[\frac{-4\pi E _{gup}}{\kappa}\right], \tag{93}\]
where \(\kappa\) is the surface gravity. Comparing with the Boltzmann factor, \(e^{-E/T}\), we obtain the following result for the Hawking temperature with quantum corrections
\[T\leq T_{H}\left[1-\frac{\lambda}{2(\Delta x)}+\frac{\lambda^{2 }}{2(\Delta x)^{2}}+\cdots\right]^{-1}. \tag{94}\]
So, by applying it to temperature (57), we have the following result
\[T=\frac{c_{s}^{2}}{\pi\left[r_{h}-\frac{\lambda}{4}+\frac{\lambda ^{2}}{8r_{h}}+\cdots\right]}. \tag{95}\]
Therefore, when \(r_{h}=0\) the singularity is removed and the temperature is now zero. Next, we analyze the effect of GUP in the Lorentz-violating and noncommutative cases.
For this case we can calculate the entropy which is given by
\[S=\frac{A}{4c_{s}^{2}}-\frac{4\sqrt{\pi}\lambda\sqrt{A}}{4c_{s}^ {2}}+\frac{\pi\lambda^{2}\ln A}{8c_{s}^{2}}+\cdots. \tag{96}\]
So due to the GUP we get a logarithmic correction term for the entropy.
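The structure of Eq. (96) can be made explicit by integrating the inverse of the GUP-corrected temperature (95). The SymPy sketch below is added here only as an illustration; the numerical prefactors depend on the conventions adopted for \(\Delta x\), so only the area, \(\sqrt{A}\) and \(\ln A\) structure should be compared with Eq. (96).

```python
import sympy as sp

rh, lam, cs = sp.symbols('r_h lambda c_s', positive=True)

# GUP-corrected Hawking temperature, Eq. (95)
T = cs**2/(sp.pi*(rh - lam/4 + lam**2/(8*rh)))

# Entropy from S = int dA/(4*pi*r_h*T), with horizon area A = 4*pi*r_h**2
A = 4*sp.pi*rh**2
S = sp.integrate(sp.diff(A, rh)/(4*sp.pi*rh*T), rh)
print(sp.expand(S))
# -> pi*r_h**2/c_s**2 - pi*lambda*r_h/(2*c_s**2) + pi*lambda**2*log(r_h)/(4*c_s**2):
#    an area term, a sqrt(A) term and a logarithmic correction, as in Eq. (96)
```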
#### v.1.1 Lorentz-Violating Case
In the situation where \(\beta\neq 0\) and \(\alpha=0\), the corrected temperature due to GUP is
\[T=T_{H}\left[1-\frac{\lambda}{4r_{h}}+\frac{\lambda^{2}}{8r_{h} ^{2}}+\cdots\right]^{-1}. \tag{97}\]
where
\[T_{H}=\frac{(1-3\beta)}{\pi r_{h}}. \tag{98}\]
Thus, we have
\[T=\frac{(1-3\beta)}{\pi\left[r_{h}-\frac{\lambda}{4}+\frac{\lambda^{2}}{8r_{h}}+ \cdots\right]}. \tag{99}\]
Note that when \(r_{h}\to 0\) the Hawking temperature tends to zero, \(T\to 0\). In the absence of the GUP the temperature, \(T_{H}\), diverges when \(r_{h}=0\). Therefore, we observe that the GUP has the effect of removing the singularity at \(r_{h}=0\) in the Hawking temperature of the acoustic black hole.
Now computing the entropy, we find
\[S=(1+3\beta)\left[\frac{A}{4}-\frac{4\sqrt{\pi}\lambda\sqrt{A}}{4}+\frac{\pi \lambda^{2}\ln A}{8}+\cdots\right]. \tag{100}\]
For \(\beta=0\) and \(\alpha\neq 0\), we have
\[T=\frac{(1+3\alpha/2)}{\pi\left[\tilde{r}_{+}-\frac{\lambda}{4}+\frac{\lambda ^{2}}{8\tilde{r}_{+}}+\cdots\right]}. \tag{101}\]
In terms of \(r_{h}\), we obtain
\[T=\frac{(1+3\alpha/2)}{\pi\left[r_{h}\left(1-\frac{\alpha}{4} \right)-\frac{\lambda}{4}+\frac{\lambda^{2}}{8r_{h}}\left(1+\frac{\alpha}{4} \right)+\cdots\right]}. \tag{102}\]
In this situation, we can also verify the effect of the GUP on the temperature, which goes to zero when \(r_{h}\to 0\) (\(\tilde{r}_{+}\to 0\)). In addition, we note that in both cases the Hawking temperature reaches a maximum value before going to zero, as we can see in Fig. 1. The behavior is therefore analogous to that of the corrected Hawking temperature of the Schwarzschild black hole.
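The position of this maximum follows directly from the corrected temperature; the short SymPy sketch below (added as an illustration, using the form of Eq. (95) with \(c_{s}=1\)) locates the radius at which \(T\) peaks.

```python
import sympy as sp

rh, lam = sp.symbols('r_h lambda', positive=True)

# GUP-corrected temperature of Eq. (95) with c_s = 1
T = 1/(sp.pi*(rh - lam/4 + lam**2/(8*rh)))

# Radius at which the corrected temperature reaches its maximum
print(sp.solve(sp.diff(T, rh), rh))   # [sqrt(2)*lambda/4]
```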
For entropy, we obtain
\[S=\left(1-\frac{3\alpha}{2}\right)\left[\left(1-\frac{\alpha}{4} \right)\frac{A}{4}-\frac{4\sqrt{\pi}\lambda\sqrt{A}}{4}+\left(1+\frac{\alpha} {4}\right)\frac{\pi\lambda^{2}\ln A}{8}+\cdots\right]. \tag{103}\]
Again we find a logarithmic correction term and also the contribution of the \(\alpha\) parameter to the entropy.
Figure 1: The Hawking temperature \(T_{H}\rightarrow\pi\lambda T_{H}\) as a function of \(r_{h}/\lambda\). Note that the temperature \(Th1\) (57) diverges as \(r_{h}/\lambda\to 0\), while the temperatures \(Th2\) (95), \(Th3\) (99) and \(Th4\) (102) reach maximum values and then decrease to zero as \(r_{h}/\lambda\to 0\).
#### v.1.2 Noncommutative Case
For the magnetic sector, the GUP-corrected Hawking temperature is given by
\[T=\frac{(1+4\theta_{3}B_{3})}{\pi\left[r_{h}-\frac{\lambda}{4}+\frac{\lambda^{2} }{8r_{h}}+\cdots\right]}. \tag{104}\]
Note that the GUP acts as a temperature regulator by removing the singularity at \(r_{h}=0\). In addition, the temperature goes through a maximum before going to zero as \(r_{h}\to 0\).
In this case entropy is given by
\[S=(1-4\theta_{3}B_{3})\left[\frac{A}{4}-\frac{4\sqrt{\pi}\lambda\sqrt{A}}{4}+ \frac{\pi\lambda^{2}\ln A}{8}+\cdots\right]. \tag{105}\]
Next, for the electric sector, we find the following GUP-corrected Hawking temperature
\[T=\frac{(1+\theta\mathcal{E}_{r})}{\pi\left[r_{+}-\frac{\lambda}{4}+\frac{ \lambda^{2}}{8r_{+}}+\cdots\right]}. \tag{106}\]
In terms of \(r_{h}\), the temperature becomes
\[T=\frac{(1+\theta\mathcal{E}_{r})}{\pi\left[r_{h}\left(1+\frac{\theta \mathcal{E}_{r}}{4}\right)-\frac{\lambda}{4}+\frac{\lambda^{2}}{8r_{h}}\left( 1-\frac{\theta\mathcal{E}_{r}}{4}\right)+\cdots\right]}. \tag{107}\]
Hence, as verified in the Lorentz-violating case, the corrected temperatures of both the magnetic and electric sectors have the singularity removed when the horizon radius goes to zero. Also, in this case we can observe that the temperature reaches a maximum value and then goes to zero when the horizon radius vanishes.
At this point, when determining the entropy, we have
\[S=(1-\theta\mathcal{E}_{r})\left[\left(1+\frac{\theta\mathcal{E}_{r}}{4} \right)\frac{A}{4}-\frac{4\sqrt{\pi}\lambda\sqrt{A}}{4}+\left(1-\frac{\theta \mathcal{E}_{r}}{4}\right)\frac{\pi\lambda^{2}\ln A}{8}+\cdots\right]. \tag{108}\]
### Result using modified dispersion relation
Near the event horizon the dispersion relation (19) becomes
\[\omega=E\left(1+\frac{a_{0}^{2}}{2r_{h}^{2}}\right), \tag{109}\]
where \(a_{0}\) is a parameter with length dimension. By assuming \(k\sim\Delta k\sim 1/\Delta x=1/r_{h}\), we can write
\[\omega=E\left(1+\frac{a_{0}^{2}\,k^{2}}{2}\right). \tag{110}\]
Thus, in terms of the energy difference, we have
\[\frac{\Delta E}{E}=\frac{\omega-E}{E}=\frac{a_{0}^{2}\,k^{2}}{2}. \tag{111}\]
Next, by using Rayleigh's formula, which relates the phase and group velocities,
\[v_{g}=v_{p}+k\frac{dv_{p}}{dk}, \tag{112}\]
where the phase velocity (\(v_{p}\)) and the group velocity (\(v_{g}\)) are given by
\[v_{p}=\frac{\omega}{k}=1+\frac{a_{0}^{2}\,k^{2}}{2}, \tag{113}\]
\[v_{g}=\frac{d\omega}{dk}=1+\frac{3a_{0}^{2}\,k^{2}}{2}. \tag{114}\]
From these, we find the following expression for the velocity difference:
\[\frac{v_{g}-v_{p}}{v_{p}}=a_{0}^{2}\,k^{2}, \tag{115}\]
which corresponds to the supersonic case (\(v_{g}>v_{p}\)).
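These relations are simple enough to check mechanically. The sketch below (an added illustration) applies Rayleigh's formula (112) to the phase velocity (113) and reproduces Eq. (114) and, at leading order, Eq. (115).

```python
import sympy as sp

k, a0 = sp.symbols('k a_0', positive=True)

vp = 1 + a0**2*k**2/2            # phase velocity, Eq. (113)
vg = vp + k*sp.diff(vp, k)       # Rayleigh's formula, Eq. (112)

print(sp.expand(vg))             # 1 + 3*a_0**2*k**2/2, Eq. (114)
print(sp.expand(vg - vp))        # a_0**2*k**2: the supersonic shift of Eq. (115) at leading order
```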
Furthermore, the Hawking temperature (57) can be corrected by applying the dispersion relation (109), i.e.
\[T_{H}=\frac{c_{s}^{2}}{\pi\left(r_{h}+\frac{a_{0}^{2}}{2r_{h}}\right)}. \tag{116}\]
Note that the singularity is removed when \(r_{h}=0\) and the temperature vanishes. In addition, the temperature reaches a maximum value before going to zero, as we can see in Fig. 2.
Now, by calculating the entropy, we find
\[S=\frac{A}{4c_{s}^{2}}+\frac{2\pi a_{0}^{2}\ln A}{4c_{s}^{2}}. \tag{117}\]
Here a logarithmic correction term arises in entropy on account of the modified dispersion relation.
In order to correct the Hawking temperature and entropy for the Lorentz-violating and non-commutative cases, we will apply the modified dispersion relations obtained in Refs. [21; 22].
#### iv.1.1 Lorentz-Violating Case
In the situation where \(\beta=0\) and \(\alpha\neq 0\), we have the following dispersion relation:
\[\omega=E\left(1+\frac{\alpha}{2}+\frac{\alpha a_{0}^{2}}{\tilde{r}_{+}^{2}} \right). \tag{118}\]
So for temperature (72), we get
\[T=\frac{(1+3\alpha/2)}{\pi\left[\tilde{r}_{+}+\frac{\alpha}{2}+\frac{\alpha a _{0}^{2}}{\tilde{r}_{+}}\right]}. \tag{119}\]
Furthermore, the result shows that the temperature reaches a maximum point and then goes to zero when the horizon radius is zero. Moreover, entropy is given by
\[S=\left(1-\frac{3\alpha}{2}\right)\left[\left(1-\frac{\alpha}{4}\right)\frac{ A}{4}+\frac{2\sqrt{\pi}\alpha\sqrt{A}}{4}+\frac{4\pi\alpha a_{0}^{2}\ln A}{4} \right]. \tag{120}\]
Again due to the contribution of the modified dispersion relation, a logarithmic correction term arises in the entropy.
#### iv.1.2 Noncommutative Case
At this point we consider the dispersion relation for the pure electric sector. So we have
\[\omega=E\left(1+\frac{\theta\mathcal{E}_{1}a_{0}^{2}}{4r_{+}^{2}}\right). \tag{121}\]
For the temperature (88), we find
\[T=\frac{(1+\theta\mathcal{E}_{r})}{\pi\left(r_{+}+\frac{\theta\mathcal{E}_{1} a_{0}^{2}}{4r_{+}}\right)}, \tag{122}\]
which in terms of \(r_{h}\) becomes
\[T=\frac{(1+\theta\mathcal{E}_{r})}{\pi\left[\left(1+\frac{\theta\mathcal{E}_{r}}{ 4}\right)r_{h}+\frac{\theta\mathcal{E}_{1}a_{0}^{2}}{4r_{h}}\right]}. \tag{123}\]
Here, we can see that the temperature goes through a maximum value before going to zero for \(r_{h}=0\).
Hence, the result for entropy is
\[S=(1-\theta\mathcal{E}_{r})\left[\left(1+\frac{\theta\mathcal{E}_{r}}{4} \right)\frac{A}{4}+\frac{\pi\theta\mathcal{E}_{1}a_{0}^{2}\ln A}{4}\right]. \tag{124}\]
In the above equation a logarithmic correction term arises in entropy as a consequence of the noncommutativity effect on the dispersion relation.
## VI Conclusions
In summary, in this work, we have reviewed the steps to generate relativistic acoustic metrics in the Lorentz-violating and noncommutative backgrounds. In particular, we have considered the canonical acoustic metric modified by the contribution of Lorentz-violating and noncommutative terms, in order to examine Hawking radiation and entropy. Moreover, we have verified, in the calculation of the Hawking temperature, that due to the presence of the GUP and the modified dispersion relation, the singularity is removed. In addition, we have shown that in these cases the temperature reaches a maximum value and then vanishes when the horizon radius goes to zero. Furthermore, the entropy has been computed, and we have shown that logarithmic correction terms are generated due to the GUP and also the modified dispersion relation. Therefore, the presented results show a behavior similar to what happens in the case of the Schwarzschild black hole.
###### Acknowledgements.
We would like to thank CNPq, CAPES and CNPq/PRONEX/FAPESQ-PB (Grant nos. 165/2018 and 015/2019), for partial financial support. MAA, FAB and EP acknowledge support from CNPq (Grant nos. 306398/2021-4, 312104/2018-9, 304290/2020-3).
|
2310.11395 | A method for crystallographic mapping of an alpha-beta titanium alloy
with nanometre resolution using scanning precession electron diffraction and
open-source software libraries | An approach for the crystallographic mapping of two-phase alloys on the
nanoscale using a combination of scanned precession electron diffraction and
open-source python libraries is introduced in this paper. This method is
demonstrated using the example of a two-phase alpha / beta titanium alloy. The
data was recorded using a direct electron detector to collect the patterns, and
recently developed algorithms to perform automated indexing and analyse the
crystallography from the results. Very high-quality mapping is achieved at a
3nm step size. The results show the expected Burgers orientation relationships
between the alpha laths and beta matrix, as well as the expected
misorientations between alpha laths. A minor issue was found that one area was
affected by 180° ambiguities in indexing occur due to this area being
aligned too close to a zone axis of the alpha with 2-fold projection symmetry
(not present in 3D) in the Zero Order Laue Zone, and this should be avoided in
data acquisition in the future. Nevertheless, this study demonstrates a good
workflow for the analysis of nanocrystalline two- or multi-phase materials,
which will be of widespread use in analysing two-phase titanium and other
systems and how they evolve as a function of thermomechanical treatments. | Ian MacLaren, Enrique Frutos-Myro, Steven Zeltmann, Colin Ophus | 2023-10-17T16:59:19Z | http://arxiv.org/abs/2310.11395v3 | A method for crystallographic mapping of an alpha-beta titanium alloy with nanometre resolution using scanning precession electron diffraction and open-source software libraries
###### Abstract
An approach for the crystallographic mapping of two-phase alloys on the nanoscale using a combination of scanned precession electron diffraction and open source python libraries is introduced in this paper. This method is demonstrated using the example of a two-phase \(\alpha\) / \(\beta\) titanium alloy. The data was recorded using a direct electron detector to collect the patterns, and recently developed algorithms to then perform automated indexing and to analyse the crystallography from the results. Very high-quality mapping is achieved at a 3 nm step size. The results show the expected Burgers orientation relationships between the \(\alpha\) laths and \(\beta\) matrix, as well as the expected misorientations between \(\alpha\) laths. It is found that 180° ambiguities in indexing occur due to acquisition having been performed too close to a high symmetry zone axis of the beta with 2-fold projection symmetry (not present in 3D) in the Zero Order Laue Zone for some patterns, and that this should be avoided in data acquisition in the future. Nevertheless, this study demonstrates a good workflow for the analysis of nanocrystalline two-phase or multiphase materials, which will be of widespread use in analysing two-phase titanium and other systems and how they evolve as a function of thermomechanical treatments.
## Introduction
Since its introduction in the 1990s[1, 2], EBSD has been hugely important in providing a method for crystallographic orientation mapping of materials. For example, in recent years crystallographic orientations in titanium alloys have been studied using electron backscatter diffraction (EBSD) in the SEM, which is very good at picking up the crystal orientation of the different \(\alpha\) (hexagonal) and \(\beta\) (body-centred cubic) areas in these complex microstructures and allowing detailed study of microscale or submicron crystallography as a result of various processes. Moreover, this has been used to study laths of \(\alpha\) produced by precipitation from \(\beta\) in different two-phase alloys (especially Ti-6Al-4V, but also Ti-6426 and others) and produced under different sequences of thermomechanical treatments. A particular area of interest has been what selection of variants is produced after any particular treatment[3, 4, 5, 6, 7, 8, 9].
However, EBSD does suffer from a lack of spatial resolution, whereby the resolution in the lateral direction may well be rather better (just a few nm) than in the inclined direction (with the sample usually tilted at ~55° to the beam)[10, 11]. As such, reliably resolving finer features of a few tens of nm in size is difficult and orientation-dependent. Such nanostructures are, however, well within the range where scanned electron nanodiffraction can produce distinct diffraction patterns with a resolution down to a small number of
nanometres (the exact spot size depends on probe convergence angle via the Abbe criterion, as normal). In recent years, scanned electron nanodiffraction, especially with the addition of precession (to form scanning precession electron diffraction, SPED), has been widely used for crystal orientation mapping[12], usually using some form of template matching algorithm. In this case, a large databank of possible diffraction patterns for the structure of interest is calculated to cover all crystallographically distinct orientations (with a spacing in orientation space to be determined by the user, using a trade-off between accuracy and calculation time / memory requirements). The experimental patterns are then compared with this databank one-by-one and the best correlation is recorded in each case; these best-fit orientations are then turned into a map of orientations. This has recently been used in the analysis of some of the nanoscale detail of twinning and deformation in Ti-Mo alloys[13, 14] and in pure Ti[15].
In recent years, the related techniques of scanned diffraction (perhaps more focussed on the diffraction patterns, and concerning well-separated diffraction spots from low convergence angle beams) and 4DSTEM (perhaps more focused on images calculated from the diffraction data, from higher convergence angle beams with overlapping diffraction disks) have had a huge boom in application[16]. The simple reason for this is the introduction of fast detectors, especially direct electron detectors, into electron microscopes and their integration into the acquisition control systems, allowing easy and fast acquisition of 4D datasets of a diffraction pattern at every point on a scan of a reasonable number of pixels[17]. In the field of orientation mapping, this has been successfully used with scanning electron nanodiffraction, and has been shown to give noticeable advantages over older indirect detectors in SPED[18]. Alongside this, there has been a significant growth in production of Open Source software for working with 4DSTEM and electron microscopy data[17], with a major advantage that all operations are transparent and can be examined in the source code. Also, these codes are optimised for use with the cleaner data coming from more recent detectors. And when run in a notebook format, the notebook can be archived and made accessible to readers of a published study showing an exact documentation of what steps were applied in processing the dataset, the exact parameter choices and so on (something that is not often done when using GUI driven software). Recently, a python library was introduced for working with 4DSTEM and nanodiffraction data, named _py4DSTEM[19]_. In the case of nanodiffraction, it can reduce spot diffraction patterns to lists of diffraction spot positions (so-called "points lists") and then perform calculations with these (such as strain). This has recently been applied to crystal orientation mapping with encouraging results so far, although the initial examples were on materials containing just one crystalline phase[20]. In this paper, we demonstrate the use of this tool on a two-phase titanium alloy, which is an ideal test case for nanoscale mapping of a relatively complex structure. It is shown that this produces robust and reliable orientation mapping results. Additionally, these results are then analysed crystallographically using another Open Source python library, _orix[21]_, and shown to match the expectations of crystallographic theory.
## Methods
A TIMETAL 550 (3-5% Al, 3-5% Mo, 1.5-2.5% Sn as main alloying elements, balance Ti) sample was prepared for TEM observation using a standard FIB liftout process in a FEI Nova Nanolab 200. Scanning precession electron diffraction was performed using a JEOL ARM200F scanning transmission electron microscope, equipped with a Nanomegas Astar
scanning precession electron diffraction system controlled using TopSpin software. The smallest possible spot size and smallest condenser aperture (10 \(\upmu\)m) were used to bring the probe current low enough to allow unsaturated data to be collected. Data was recorded using a Quantum Detectors MerlinEM direct electron detector, as detailed and used in our previous publications[18, 22]. A precession angle of 0.5° was used and the probe size was estimated at a diameter of 2.5-3 nm based on previous measurements for this condenser aperture; a 3 nm step size was used in data acquisition. The raw data was exported from the _.app5_ file format used by TopSpin to an _hdf5_ file using a suitable function in the _fpd_ python library[23] ([https://fpdpy.gitlab.io/fpd/](https://fpdpy.gitlab.io/fpd/)). Automated crystallographic orientation mapping was performed using the _py4dstem_ python library[19, 20], [https://github.com/py4dstem/py4DSTEM](https://github.com/py4dstem/py4DSTEM), specifically using version 0.13.10, with the default correlation weighting scheme. Mapping of the crystal orientations was performed as two separate runs: firstly fitting to the \(\alpha\) phase, and then to the \(\beta\) phase, using .cif files based on the data of [24] for \(\alpha\)-Ti and [25] for V-stabilised \(\beta\)-Ti (.cif files from the Inorganic Crystal Structure Database, as provided by the Physical Science Data-science Service). Each fit was then exported to a .ang file of Euler angles for each scan position for each phase. The two _.ang_ files were compared with a simple python script that determines the highest correlation index for each scan position and then writes a new .ang file with both phases included, with the phase with the highest correlation index chosen for each scan position (provided in the open data deposit associated with this paper). Analysis of the crystallographic orientation data in the .ang files was performed with the _orix_ python library (version 0.10.2), [https://orix.readthedocs.io/en/stable/](https://orix.readthedocs.io/en/stable/). Raw data and python notebooks are provided in an Open Data deposit associated with this paper showing the exact parameter choices made in each step of the analysis.
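A minimal sketch of the phase-selection step described above is given below. This is not the authors' exact script: the column positions assumed for the correlation index and phase identifier, the absence of header lines, and the file names are illustrative assumptions that would need to be adapted to the actual .ang files.

```python
# Sketch of merging two single-phase ACOM fits by correlation index (assumptions:
# plain-text .ang files without headers, one row per scan position, correlation
# index in column 6 and phase identifier in column 7; file names are hypothetical).
import numpy as np

CI_COL = 6     # assumed column holding the correlation index
PHASE_COL = 7  # assumed column holding the phase label

alpha_fit = np.loadtxt('fit_alpha.ang', comments='#')
beta_fit = np.loadtxt('fit_beta.ang', comments='#')

take_alpha = alpha_fit[:, CI_COL] >= beta_fit[:, CI_COL]     # per-pixel comparison
combined = np.where(take_alpha[:, None], alpha_fit, beta_fit)
combined[:, PHASE_COL] = np.where(take_alpha, 1, 2)          # 1 = alpha, 2 = beta

np.savetxt('combined.ang', combined, fmt='%.5f')
print(f'{take_alpha.sum()} pixels assigned to alpha, {(~take_alpha).sum()} to beta')
```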
## Results
Figure 1 shows an overall view of the sample, and the specific area scanned using SPED. Figure 1a is a synthetic ADF survey image of the sample with the scan area for the higher resolution scan shown on there by a rectangle. The TIMETAL-550 material consists mainly of primary \(\alpha\) with veins of residual \(\beta\) between, which itself then contains some rather fine \(\alpha\) laths (the Nb content of which was also mapped chemically in our previous work[26]) and the chosen scan area covered one of these \(\beta\) veins with \(\alpha\) laths. Figure 1b then shows a synthetic ADF image at higher resolution (pixel size of 3 nm) of this scan area, calculated using the integration of the intensity in the annulus shown overlaid on the average diffraction pattern (of the whole scan) shown in Figure 1c. This clearly shows laths which are a little darker against a slightly brighter background, which all makes sense, as the \(\alpha\) laths are rich in Ti and Al (relatively light elements) and the \(\beta\) between them is enriched in rather higher-\(Z\) Nb, which will scatter more electrons to higher angles. The laths are arranged in a number of orientations to the beam, but the diffraction pattern itself seems to be close to a \(\langle 111\rangle_{\beta}\) direction. Figure 1d is a roughly hexagonal diffraction pattern which would fit expectations for \(\langle 111\rangle_{\beta}\). Figure 1e shows the appearance of additional diffraction spots in one direction in a manner characteristic of a \(\langle 2\overline{110}\rangle_{\alpha}\) direction, where we see \(000n_{\alpha}\) and \(hkin\) spots, where \(n\) is odd, many of which only appear due to double diffraction (although kinematically forbidden), and this pattern can only be from \(\alpha\)-Ti. Figure 1f is also hexagonal but is a little distorted from a regular hexagon in a way that turns out to match \(\alpha\)-Ti better than \(\beta\)-Ti. This matching is shown in figure 2.
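For readers unfamiliar with virtual imaging, the annular integration used to form the synthetic ADF image described above can be written in a few lines of NumPy. The sketch below is illustrative only; the array layout, detector centre and annulus radii are assumptions, not values from this dataset.

```python
# Illustrative virtual ADF calculation: sum each diffraction pattern of a 4D dataset
# over an annular detector mask. Array layout (scan_y, scan_x, det_y, det_x), the
# pattern centre and the radii (in detector pixels) are all assumed values.
import numpy as np

def virtual_adf(data4d, r_inner, r_outer, centre=None):
    sy, sx, dy, dx = data4d.shape
    if centre is None:
        centre = (dy/2, dx/2)
    yy, xx = np.mgrid[0:dy, 0:dx]
    radius = np.hypot(yy - centre[0], xx - centre[1])
    mask = (radius >= r_inner) & (radius < r_outer)
    return data4d[:, :, mask].sum(axis=-1)     # one intensity per scan position

# Example with random counts standing in for the SPED scan
adf_image = virtual_adf(np.random.poisson(1.0, size=(32, 32, 64, 64)),
                        r_inner=15, r_outer=30)
```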
Figure 1: Overview of the area mapped with SPED and the basic data arising from this: a) general large area ADF view of an area containing mostly primary \(\alpha\) grains with a thin strand of mixed \(\alpha\) and \(\beta\) running through the middle and another \(\alpha\) and \(\beta\) area in the lower left that was mapped – the mapping box is shown; b) average diffraction pattern from the whole SPED dataset with the area used for calculating an ADF image highlighted in red; c) calculated virtual ADF image with three colour-coded points shown from whence individual diffraction patterns were extracted for display and diffraction spot detection; d)-f) diffraction patterns with detected diffraction spots circled; d) was subsequently indexed as \(\beta\) and e) and f) were subsequently indexed as \(\alpha\).

Figure 2 shows an example of indexing for the points list derived from the diffraction pattern shown in Figure 1f, showing an overlay of the best fit pattern and the actual detected disks. Figure 2a shows the fit for \(\alpha\)-Ti and Figure 2b shows the fit for \(\beta\)-Ti, both with the correlation indices printed. In this case, the correlation index is significantly better for \(\alpha\)-Ti: although both phases can produce patterns that are qualitatively similar, the best fit \(\beta\)-Ti one is still noticeably distorted from the actual observed diffraction spot positions. Thus, it is clear that choosing the best fit based on correlation index will work well to discriminate the two phases in most cases. After comparison of the two files of fits to \(\alpha\)- and \(\beta\)-Ti, 10817 pixels in the scan fitted better to \(\alpha\) and 8283 to \(\beta\). The scan area was cropped slightly before further analysis, as some very thin material or possible vacuum was found in the bottom left corner of the scan (which gave very ambiguous and low quality results), by removing the leftmost and lowest 20 pixels, giving a final analysed area with 6984 pixels of \(\alpha\) and 6696 of \(\beta\).
Figure 2: Indexing of the pattern (blue circles) shown in Figure 1f with the two phases: a) \(\alpha\)-Ti; b) \(\beta\)-Ti (black crosses). The sizes of the blue circles and black crosses are proportional to diffraction intensities. The correlation index is shown in both cases, and the fit is clearly rather better for the \(\alpha\)-Ti. Note that the diffraction to scan rotation (of 34\({}^{\circ}\) counter clockwise) has been applied to align this with the axes of the images, so the orientations of the patterns differ from those in Figure 1f.

Figure 3 shows plots of crystallographic data calculated from the combined orientation dataset. Figures 3a-f are inverse pole figure (IPF) maps for the two phases. The \(\alpha\) maps in Figure 3a-c show a number of different orientations, but the exact orientations and their relationships to one another are not obvious from simply looking at the IPF maps. Nevertheless, it is clear that fine laths are well detected, down to areas 3 pixels wide, and thus less than 10 nm across. The \(\beta\) maps in Figure 3d-f all show a consistent orientation in this phase, which is perhaps unsurprising, suggesting that this was all a single \(\beta\) crystal prior to the precipitation of the \(\alpha\) laths (additionally, if the \(\beta\) map alone is examined, i.e. indexing everything as if it were \(\beta\) [shown in Figure S1 in the supplemental information], then the \(\beta\) orientation appears consistent across the scan area, also suggesting that all the \(\alpha\) laths originated from a single \(\beta\) grain). To make the relationships between the \(\alpha\) laths and the \(\beta\) matrix clearer, there are two useful tools that can be used. Firstly, cluster analysis[21] was used on the dataset of all \(\alpha\) orientations to determine clusters of similar orientations (within 15\({}^{\circ}\)) by cross-comparing all relative orientations, after application of symmetry rotations to reduce them into the symmetry reduced zone for HCP. (This is a far larger misorientation range than the accuracy of ACOM itself, which should be ~1\({}^{\circ}\), although maybe better with precession[20], and allows for a degree of sample bending or local deviation from any theoretical relationship.) This produced 8 clusters containing 6915 orientations in total (2138, 730, 732, 610, 861, 1638, 153, and 53 in each cluster from 1-8), as well as a very small number of 69 outliers (not plotted). The average misorientation for each cluster could then be calculated, based on a misorientation from a given reference (one of the clusters). Each cluster was then assigned a colour and this colour plotted back into the map shown in Figure 3g, which makes it straightforward to see which laths have the same crystal orientation.
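As an illustration of how such orientation clusters can be found, the sketch below applies density-based clustering with a 15° threshold to a precomputed matrix of symmetry-reduced misorientation angles. The matrix itself (calculated with _orix_ in this work), the file name, and the `min_samples` value are assumptions, and scikit-learn's DBSCAN is used here rather than the exact routine of [21].

```python
import numpy as np
from sklearn.cluster import DBSCAN

# D[i, j]: symmetry-reduced misorientation angle (in degrees) between alpha
# orientations i and j, assumed to have been computed beforehand (e.g. with orix).
D = np.load("alpha_misorientation_degrees.npy")  # hypothetical precomputed matrix

clustering = DBSCAN(eps=15.0,           # 15 degree threshold, as used in the text
                    min_samples=20,     # assumed value; tune to control outliers
                    metric="precomputed").fit(D)

labels = clustering.labels_             # -1 marks outliers, 0, 1, ... the clusters
for k in sorted(set(labels)):
    print(f"cluster {k}: {np.sum(labels == k)} orientations")
```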
Secondly, plotting the orientation data in pole figures is very helpful and this is done in Figure 3h and 3i. This shows that the \([0001]_{\alpha}\) poles of the laths are all aligned along one of the \(\langle 110\rangle_{\beta}\) poles, exactly as would be expected from the Burgers transformation (\([11\overline{2}0]_{\alpha}\parallel\langle 111\rangle_{\beta}\), \((0001)_{\alpha}\parallel\{110\}_{\beta}\)). Moreover, many \(\langle 2\overline{1}\overline{1}0\rangle_{\alpha}\) directions are shown to align close to \(\langle 111\rangle_{\beta}\) directions, or be a few degrees off, in Figure 3i (much as expected, e.g. in Figure 4 in Wang _et al._[4]). Fairly obviously, not all 12 possible variants of the Burgers orientation relationship occur in this area. It should be noted that clusters 1 and 2, 3 and 4, and 5 and 6 all seem to occur in the same laths and seem to be related to each other. 3 and 4 have **c**-axes on opposite sides of the same stereogram with **a**-axes in the same places, and this just suggests a small tilt of the same orientation, possibly resulting from sample bending. 6 only occurs in one lath at the bottom and is mostly supplanted by 5, and all in laths with similar orientation. The points on the pole figures coincide and again this seems like slightly tilted versions of the same orientation. The only laths where there seems to be some confusion in orientation are those labelled 1/2 (gold/orange), where both orientations are apparently present. The pole figure shows that these have two \([0001]_{\alpha}\) directions symmetric about the centre, suggesting an ambiguity in assessing where this **c**-direction points. Looking at Figure 1f reveals why this is, since the pattern is fairly close to having 2-fold symmetry (along a direction that does not have true 2-fold symmetry, just in the ZOLZ[27]), which leads to this ambiguity. As such, it is clear in hindsight that mapping orientations close to a high symmetry direction was a mistake, and tilting off-axis a little would have broken this symmetry and may have led to less ambiguous results. Nevertheless, we can thus identify what appear to be 5 distinct lath orientations.
Nevertheless, there is yet more information in the data if one considers the relative orientations between the different variants, which are listed in Table 1 (calculated using the average orientation for each cluster). These have been normalised to one of the indices to allow comparison with the calculated possible relative orientations between different \(\alpha\) laths resulting from the Burgers transformation, as presented by Wang _et al._[4]. These show that all the different clusters are related by relative orientations very close to these ideal orientation relationships.
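For reference, the core of such a calculation — extracting an axis–angle pair from two orientations given as rotation matrices — can be sketched as follows; the reduction into the hexagonal fundamental zone and the conversion to four-index \([uvtw]\) notation (handled by _orix_ in this work) are omitted, and the example Euler angles are arbitrary.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def axis_angle(g1, g2):
    """Axis-angle pair of the misorientation taking orientation g1 to g2.

    g1, g2 are 3x3 rotation matrices; crystal symmetry operators are not applied here.
    """
    dg = g2 @ g1.T                                    # misorientation matrix
    cos_theta = np.clip((np.trace(dg) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.degrees(np.arccos(cos_theta))
    # rotation axis from the antisymmetric part of dg (valid away from 0 and 180 degrees)
    axis = np.array([dg[2, 1] - dg[1, 2],
                     dg[0, 2] - dg[2, 0],
                     dg[1, 0] - dg[0, 1]])
    norm = np.linalg.norm(axis)
    if norm > 0:
        axis = axis / norm
    return theta, axis

# Two illustrative orientations defined by (hypothetical) Euler angles
g1 = R.from_euler("zxz", [10, 20, 30], degrees=True).as_matrix()
g2 = R.from_euler("zxz", [70, 20, 30], degrees=True).as_matrix()
print(axis_angle(g1, g2))
```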
Figure 3: Mapping of alpha and beta orientations in the scanned area as shown in Figure 1c: a)-c) \(\alpha\)-Ti inverse pole figures for the x, y and z directions, with the colour key at the side; d)-f) \(\beta\)-Ti inverse pole figures for the x, y and z directions, with the colour key at the side; g) map of the distribution of the 8 distinct identified orientation clusters in the same colours as the pole figures; h) pole figure overlaying the \([0001]_{\alpha}\) directions (coloured hexagons) and \(\langle 110\rangle_{\beta}\) directions (empty diamonds) with a colour key at the side; i) pole figure overlaying the \(\langle 2\overline{1}\overline{1}0\rangle_{\alpha}\) directions (coloured diamonds) and \(\langle 111\rangle_{\beta}\) directions (empty triangles) with a colour key at the side.
Orientations 1, 4/5 and 6/7 all have **c**-axes in the plane of the image, roughly at 60\({}^{\circ}\) to each other, and are related by roughly 60\({}^{\circ}\) rotations about something close to a \(\langle 2\overline{1}\overline{1}0\rangle_{\alpha}\) direction (the one pointing vertically in this dataset, at the centre of the pole figure in Figure 3i) (type 2 orientation relationship of [4]). Moreover, the fact that there is a recurrence of three variants all related by rotations of 60\({}^{\circ}\) about one common **a**-direction in a so-called "triangle" configuration is something previously noted with coarser laths mapped with EBSD, and well explained by [4] by consideration of shape strains for different combinations of laths.
Orientations 2 and 3, as said above, are the results of an ambiguous indexing and one of these two must be correct, but it is hard to be certain which from the data. Nevertheless, either gives a rotation from orientation 1 (red in Figure 3g) of something close to the predicted 63.26\({}^{\circ}\) about a \(\langle 10\overline{5}\overline{5}\overline{3}\rangle_{\alpha}\) direction (type 4 of [4]). There seems no ambiguity about 8, which is just a single, small lath in the lower left, but that, too, corresponds to something close to this misorientation. This should be a frequently observed lath misorientation according to that work.
| Orientation | \(u\) | \(v\) | \(t\) | \(w\) | \(\theta\) (°) |
|---|---|---|---|---|---|
| 1 | 0 | 0 | 0 | 0 | 0 |
| 2 | -4.952 | -5.048 | 10.000 | -2.524 | 66.3 |
| 3 | -4.830 | 10.000 | -5.170 | 2.647 | 66.6 |
| 4 | -2.016 | 1.000 | 1.016 | -0.080 | 60.3 |
| 5 | -2.386 | 1.000 | 1.386 | -0.101 | 60.8 |
| 6 | -2.371 | 1.371 | 1.000 | -0.069 | 59.3 |
| 7 | -2.152 | 1.152 | 1.000 | -0.025 | 58.9 |
| 8 | -4.878 | 10.000 | -5.122 | -3.320 | 64.4 |

Table 1: Axis-angle pairs (\([uvtw]/\theta\)) for the average orientations of each orientation cluster characterising their misorientation from orientation 1. All have been normalised to the smallest \(u\), \(v\), or \(t\) index for comparison with the calculations of [4].

Overall, this demonstrates that this is a useful workflow for performing automated crystallographic orientation mapping (ACOM) using scanning precession electron diffraction data in a complex two-phase microstructure, and for performing quantitative crystallographic analysis on the results. Moreover, this works well at a mapping step size of 3 nm and reveals fine laths below 10 nm wide at their thinnest points, far beyond the spatial resolution limitations of EBSD. As such, this provides a valuable complement to EBSD (which will continue to be important for large sample areas) for studying finer laths and other nano-resolved crystallography. The fact that the orientation relationships here are expected to conform to a well-known one helps in assessing whether the ACOM is working correctly and producing plausible results. It would appear that going too close to a major zone axis in either phase could lead to 180\({}^{\circ}\) ambiguities in indexing, and that the best solution would be to tilt a little away from any zone axis for data collection in future. It is also anticipated that going to shorter camera lengths and thus collecting higher diffracted angles would give more chances to break any such ambiguities, provided there is sufficient signal to noise ratio at the higher angles to still detect spots reliably. In that respect, the use of a direct electron counting detector, as in this work, is important, since we showed previously that the noise floor in older CCDs tends to obscure higher angle information [18]. In terms of the method, it would be possible to include comparison of the correlation indices and the writing of a single _.ang_ file from _py4DSTEM_ in the future. It may even be possible to treat cases where overlapped patterns are found as a linear combination of the two phases and then show the proportions in each pixel, but those are innovations that can be explored in the future.
In the immediate future, this method will now be applied to studying variant selection in the processing of two-phase titanium alloys, and it is anticipated that more results will be published in due course, in microstructures rather more complex than the one presented in this publication, although all the data acquisition, indexing, analysis and presentation methods developed here will be equally useful in that more complex situation.
## Conclusions
Scanning precession electron diffraction has been used for automated crystal orientation mapping in a TIMETAL-550 alloy containing veins of \(\beta\) between large primary \(\alpha\) grains, these veins then containing smaller laths of secondary or tertiary \(\alpha\). All data processing, both the orientation determination and then the representation and quantitative analysis, was performed with Open-Source python libraries. Highly reliable indexing of both phases was achieved (excepting a few minor ambiguities), allowing the construction of good quality orientation maps for both phases and the mapping of various \(\alpha\) laths that have formed within a single \(\beta\) grain. The orientations could be clustered into groups of similar orientations, allowing classification into just eight orientation clusters. Further analysis showed that three pairs of clusters corresponded to similar orientations, meaning that just five distinct crystal orientations for laths were present in the area analysed. Further analysis shows that all these orientations approximately follow the Burgers orientation relationship to the \(\beta\) and show relative orientations to each other close to expected ones. Three of the clusters follow expectations for a triangle of orientations related by ~60\({}^{\circ}\) rotations about a common \([11\overline{2}0]_{\alpha}\) axis, as has frequently been observed in other work, which would correspond to a group of orientations with a low total shape strain. This work demonstrates a good analysis pathway for automated crystal orientation mapping of two-phase materials using transparent, Open-Source tools, and this will be applicable for more complex microstructures in titanium or other two-phase or multi-phase alloys.
## Acknowledgements
We wish to acknowledge the use of the EPSRC funded Physical Sciences Data-science Service hosted by the University of Southampton and STFC under grant number EP/S020357/1 to allow access to the ICSD and the download of CIF files for appropriate structures. EFM is grateful to the EPSRC and TIMET for an industrial PhD studentship (EPSRC funding under EP/R512266/1). Helpful discussions and code upgrades with _orix_ from Dr Patrick Harrison (SIMaP laboratory, Grenoble, France) and Mr Hakon Wiik Anes (NTNU, Norway) are gratefully acknowledged. Work at the Molecular Foundry was supported by the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
## Data Deposit
The raw data for the maps presented herein together with the Jupyter notebooks used for the processing is available for download at [https://doi.org/10.5525/gla.researchdata.1514](https://doi.org/10.5525/gla.researchdata.1514).
## References
* [1] Wright, S. I. & Adams, B. L. (1992) Automatic analysis of electron backscatter diffraction patterns. _Metall. Trans. A, 23_, 759-767.
* [2] Adams, B. L., Wright, S. I. & Kunze, K. (1993) Orientation Imaging - the Emergence of a New Microscopy. _Metall. Trans. A, 24_, 819-831.
* [3] Gey, N. & Humbert, M. (2002) Characterization of the variant selection occurring during the \(\alpha\)\(\rightarrow\)\(\beta\)\(\rightarrow\)\(\alpha\) phase transformations of a cold rolled titanium sheet. _Acta Mater., 50_, 277-287.
* [4] Wang, S. C., Aindow, M. & Starink, M. J. (2003) Effect of self-accommodation on \(\alpha\)/\(\alpha\) boundary populations in pure titanium. _Acta Mater., 51_, 2485-2503.
* [5] Bhattacharyya, D., Viswanathan, G. B., Denkenberger, R., Furrer, D. & Fraser, H. L. (2003) The role of crystallographic and geometrical relationships between \(\alpha\) and \(\beta\) phases in an \(\alpha\)/\(\beta\) titanium alloy. _Acta Mater., 51_, 4679-4691.
* [6] Beladi, H., Chao, Q. & Rohrer, G. S. (2014) Variant selection and intervariant crystallographic planes distribution in martensite in a Ti-6Al-4V alloy. _Acta Mater., 80_, 478-489.
* [7] Balachandran, S., Kashiwar, A., Choudhury, A., Banerjee, D., Shi, R. & Wang, Y. (2016) On variant distribution and coarsening behavior of the \(\alpha\) phase in a metastable \(\beta\) titanium alloy. _Acta Mater., 106_, 374-387.
* [8] Zhang, Y. J., Miyamoto, G., Shinbo, K. & Furuhara, T. (2017) Quantitative measurements of phase equilibria at migrating alpha/gamma interface and dispersion of VC interphase precipitates: Evaluation of driving force for interphase precipitation. _Acta Mater., 128_, 166-175.
* [9] Chen, Y., Kou, H., Cheng, L., Hua, K., Sun, L., Lu, Y. & Bouzy, E. (2019) Crystallography of phase transformation during quenching from \(\beta\) phase field of a V-rich TiAl alloy. _J. Mater. Sci., 54_, 1844-1856.
* [10] Farooq, M. U., Villaurrutia, R., MacLaren, I., Burnett, T. L., Comyn, T. P., Bell, A. J., Kungl, H. & Hoffmann, M. J. (2008) Electron backscatter diffraction mapping of herringbone domain structures in tetragonal piezoelectrics. _J. Appl. Phys., 104_.
* [11] Farooq, M. U., Villaurrutia, R., MacLaren, I., Kungl, H., Hoffmann, M. J., Fundenberger, J. J. & Bouzy, E. (2008) Using EBSD and TEM-Kikuchi patterns to study local crystallography at the domain boundaries of lead zirconate titanate. _J. Microsc.-Oxf., 230_, 445-454.
* [12] Rauch, E. F., Portillo, J., Nicolopoulos, S., Bultreys, D., Rouvimov, S. & Moeck, P. (2010) Automated nanocrystal orientation and phase mapping in the transmission electron microscope on the basis of precession electron diffraction. _Z. Krist.-Cryst. Mater., 225_, 103-109.
* [13] Lee, H. J., Kim, J. H., Park, C. H., Hong, J.-K., Yeom, J.-T., Lee, T. & Lee, S. W. (2023) Twinning-induced Plasticity Mechanism of \(\alpha\)"-martensitic Titanium Alloy. _Acta Mater., 248_, 118763.
* [14] – 12 wt % Mo alloy. _Acta Mater., 235_, 118088.
* [15] Ma, X., Zhao, D., Yadav, S., Sagapuram, D. & Xie, K. Y. (2020) Grain-subdivision-dominated microstructure evolution in shear bands at high rates. _Materials Research Letters, 8_, 328-334.
* [16] Ophus, C. (2019) Four-Dimensional Scanning Transmission Electron Microscopy (4D-STEM): From Scanning Nanodiffraction to Ptychography and Beyond. _Microsc. Microanal., 25_, 563-582.
* [17] MacLaren, I., Macgregor, T. A., Allen, C. S. & Kirkland, A. I. (2020) Detectors--The ongoing revolution in scanning transmission electron microscopy and why this important to material characterization. _APL Mater., 8_, 110901.
* [18] MacLaren, I., Frutos-Myro, E., McGrouther, D., McFadzean, S., Weiss, J. K., Cosart, D., Portillo, J., Robins, A., Nicolopoulos, S., Nebot del Busto, E. & Skogeby, R. (2020) A Comparison of a Direct Electron Detector and a High-Speed Video Camera for a Scanning Precession Electron Diffraction Phase and Orientation Mapping. _Microsc. Microanal._, 1110-1116.
* [19] Savitzky, B. H., Zeltmann, S. E., Hughes, L. A., Brown, H. G., Zhao, S., Pelz, P. M., Pekin, T. C., Barnard, E. S., Donohue, J., Rangel DaCosta, L., Kennedy, E., Xie, Y., Janish, M. T., Schneider, M. M., Herring, P., Gopal, C., Anapolsky, A., Dhall, R., Bustillo, K. C., Ercius, P., Scott, M. C., Ciston, J., Minor, A. M. & Ophus, C. (2021) py4DSTEM: A Software Package for Four-Dimensional Scanning Transmission Electron Microscopy Data Analysis. _Microsc. Microanal., 27_, 712-743.
* [20] Ophus, C., Zeltmann, S. E., Bruefach, A., Rakowski, A., Savitzky, B. H., Minor, A. M. & Scott, M. C. (2022) Automated Crystal Orientation Mapping in py4DSTEM using Sparse Correlation Matching. _Microsc. Microanal., 28_, 390-403.
* [21] Johnstone, D. N., Martineau, B. H., Crout, P., Midgley, P. A. & Eggeman, A. S. (2020) Density-based clustering of crystal (mis)orientations and the orix Python library. _J Appl Crystallogr, 53_, 1293-1298.
* [22] McCartan, S. J., Calisir, I., Paterson, G. W., Webster, R. W. H., Macgregor, T. A., Hall, D. A. & MacLaren, I. (2021) Correlative chemical and structural nanocharacterization of a pseudo-binary 0.75Bi(FeO.97Ti0.03)03-0.25BaTiO3 ceramic. _J. Am. Ceram. Soc., 104_, 2388-2397.
* [23] Paterson, G. W., Webster, R. W. H., Ross, A., Paton, K. A., Macgregor, T. A., McGrouther, D., MacLaren, I. & Nord, M. (2020) Fast Pixelated Detectors in Scanning Transmission Electron Microscopy. Part II: Post-Acquisition Data Processing, Visualization, and Structural Characterization. _Microsc. Microanal., 26_, 944-963.
* [24] Sabeena, M., Murugesan, S., Anees, P., Mohandas, E. & Vijayalakshmi, M. (2017) Crystal structure and bonding characteristics of transformation products of bcc \(\beta\) in Ti-Mo alloys. _J Alloy Compd, 705_, 769-781.
* [25] Suwarno, S., Solberg, J. K., Maehlen, J. P., Krogh, B. & Yartys, V. A. (2012) Influence of Cr on the hydrogen storage properties of Ti-rich Ti-V-Cr alloys. _International Journal of Hydrogen Energy, 37_, 7624-7628.
* [26] Performing EELS at higher energy losses at both 80 and 200 kV. In: _Advances in Imaging and Electron Physics_ (eds. P. W. Hawkes & M. Hytch). Elsevier.
* [27] Buxton, B. F., Eades, J. A., Steeds, J. W. & Rackham, G. M. (1976) Symmetry of electron-diffraction zone axis patterns. _Philos. Trans. R. Soc. A-Math. Phys. Eng. Sci., 281_, 171-194.
A method for crystallographic mapping of an alpha-beta titanium alloy with nanometre resolution using scanning precession electron diffraction and open-source software libraries - Supplemental Information
Ian MacLaren1, Enrique Frutos-Myro1,2, Steven Zeltmann3,4, Colin Ophus3
2305.03527 | ResQNets: A Residual Approach for Mitigating Barren Plateaus in Quantum
Neural Networks | The barren plateau problem in quantum neural networks (QNNs) is a significant
challenge that hinders the practical success of QNNs. In this paper, we
introduce residual quantum neural networks (ResQNets) as a solution to address
this problem. ResQNets are inspired by classical residual neural networks and
involve splitting the conventional QNN architecture into multiple quantum
nodes, each containing its own parameterized quantum circuit, and introducing
residual connections between these nodes. Our study demonstrates the efficacy
of ResQNets by comparing their performance with that of conventional QNNs and
plain quantum neural networks (PlainQNets) through multiple training
experiments and analyzing the cost function landscapes. Our results show that
the incorporation of residual connections results in improved training
performance. Therefore, we conclude that ResQNets offer a promising solution to
overcome the barren plateau problem in QNNs and provide a potential direction
for future research in the field of quantum machine learning. | Muhammad Kashif, Saif Al-kuwari | 2023-05-05T13:33:43Z | http://arxiv.org/abs/2305.03527v1 | # ResQNets: A Residual Approach for Mitigating Barren Plateaus in Quantum Neural Networks
###### Abstract
The barren plateau problem in quantum neural networks (QNNs) is a significant challenge that hinders the practical success of QNNs. In this paper, we introduce residual quantum neural networks (ResQNets) as a solution to address this problem. ResQNets are inspired by classical residual neural networks and involve splitting the conventional QNN architecture into multiple quantum nodes, each containing its own parameterized quantum circuit, and introducing residual connections between these nodes. Our study demonstrates the efficacy of ResQNets by comparing their performance with that of conventional QNNs and plain quantum neural networks (PlainQNets) through multiple training experiments and analyzing the cost function landscapes. Our results show that the incorporation of residual connections results in improved training performance. Therefore, we conclude that ResQNets offer a promising solution to overcome the barren plateau problem in QNNs and provide a potential direction for future research in the field of quantum machine learning.
## 1 Introduction
The Noisy Intermediate-Scale Quantum (NISQ) devices are a new generation of quantum computers capable of executing quantum algorithms. However, NISQ devices still suffer from significant errors and limitations in terms of the number of qubits and coherence time [1]. Despite these limitations, NISQ devices are an important stepping stone towards developing fault-tolerant quantum computers, as they provide a platform for exploring and evaluating basic quantum algorithms and applications [2]. Research in the NISQ era is focused on developing algorithms and techniques that are resilient to noise and errors, and can run effectively on NISQ devices [3]. This includes algorithms for quantum error correction [4], quantum optimization [5], and quantum machine learning (QML)[6].
QML is an interdisciplinary field that combines the concepts and techniques from quantum computing and machine learning (ML). It aims to leverage the unique properties of quantum systems, such as superposition, entanglement, and interference, to develop new algorithms and approaches for solving complex machine learning problems [7]. QML is increasingly becoming an exciting application in the NISQ era [2]. The anticipation here is that the quantum models (by exploiting the exponentially large Hilbert space) would achieve a computational advantage over their classical counterparts [8, 9], particularly for quantum datasets [10, 11, 12]. With continued advancements in quantum hardware [13], development of new quantum algorithms [14], quantum error correction and fault tolerance [15], the future of QML is bright, and it is likely to play a significant role in the field of machine learning. A wide range of ML algorithms are being explored in the quantum realm, including
quantum neural networks (QNNs) [16], quantum support vector machines [17, 18], quantum principal component analysis [19], and quantum reinforcement learning [20]. These approaches were shown to be effective in various domains, such as image classification [21], natural language processing [22], and recommendation systems [23].
QNNs are a promising area of research that aims to combine the power of quantum computing and neural networks to solve complex computational problems [24, 25]. Unlike classical neural networks, QNNs use quantum-inspired representations and operations for encoding and processing data [26, 27, 28, 29]. This allows for the exploration of exponential solution space and the exploitation of quantum parallelism, potentially leading to faster and more accurate results [7]. QNNs can be considered as a subclass of variational quantum algorithms, which aim to optimize parameters (\(\theta\)) of a parameterized quantum circuit (PQC)\({}^{1}\) \(U(\theta)\) to minimize the cost function \(\mathcal{C}\). A PQC utilizes tunable parameters to optimize quantum algorithms through classical computation. One example of a QNN architecture is the quantum Boltzmann machine [30, 31], which uses quantum circuits to model complex probability distributions and perform unsupervised learning tasks. In addition to unsupervised learning, QNNs have shown potential in various applications such as quantum feature detection [18], quantum data compression and denoising [32, 33], and quantum reinforcement learning [34]. QNNs can also be used for quantum-enhanced image recognition [6, 35] and quantum molecular simulations [36].
Footnote 1: we will use the terms “PQC” and “quantum layers” interchangeably
However, despite their potential, QNNs are still in the early stages of development and face several technical and practical challenges. In particular, training and optimizing the parameters in QNNs pose significant challenges. To address these challenges, the research community has been developing the quantum landscape theory [37] that explores the properties of cost function landscapes in QML systems. Consequently, interesting results have been obtained in the study of QNN's training landscapes, including the occurrence of barren plateaus (BP) [38], the presence of sub-optimal local minima [39], and the impact of noise on cost function landscapes [40, 41, 42, 43]. These findings provide important insights into the properties of QNNs and their training dynamics, and can inform the development of new algorithms and strategies for training and optimizing QNNs.
In particular, the BP problem refers to a phenomenon in which the circuit's expressiveness, as measured by its ability to approximate a target unitary operation, is severely limited as the number of qubits in the circuit increases [38], which is mainly due to vanishing gradients in the parameter space. The phenomenon of BP in QNNs is a significant challenge that impedes the advancement and widespread implementation of QNNs. To mitigate the BP, various strategies have been proposed, including the use of clever parameter initialization techniques [44], pre-training [45], examination of the dependence on the cost function [46, 47], implementation of layer-wise training of QNNs [48], and initializing parameters drawn from the beta distribution [49]. These solutions aim to overcome the limitations posed by the BP in QNNs and facilitate the full realization of their potential. However, it is important to note that the solution that works best for one QNN architecture may not work for another, as the BP problem can be highly dependent on the specific problem being solved and the quantum architecture being used.
**Contribution.** In this paper, we propose a novel solution for mitigating the issue of barren plateaus (BP) in quantum neural networks (QNNs). Our approach is based on the concept of residual neural networks, which were previously introduced as a means to overcome the vanishing gradient problem in classical neural networks. For this, we present residual quantum neural networks (ResQNets) by incorporating residual connections between two quantum layers of varying depths. Our findings indicate that ResQNets greatly enhance the training of QNNs compared to plain QNNs (PlainQNets). To validate our proposed ResQNets, we perform comparisons of their cost function landscapes and training performance with that of PlainQNets. Our experimental results demonstrate that the residual connections in ResQNets effectively mitigate the adverse effects of BP and result in improved overall training performance.
**Organization.** The rest of the paper is organized as follows: Section 2 provides an overview of both classical and quantum residual neural networks and motivates their application. Section 3 discusses parameterized quantum circuits and elaborates on how multiple PQCs can be cascaded. This section also introduces the residual approach in cascaded PQCs. The methodology we adopt in this paper while conducting the various experiments is provided in Section 4. Section 5 presents the results we obtained on both the simulation environment and real quantum devices. Finally, the paper concludes in Section 6 with a few concluding remarks and pointers to possible extensions of this work.
## 2 Residual Neural Networks
Residual Neural Networks (ResNets) are a type of deep neural network architecture that aims to improve the training process by addressing the vanishing gradient problem. The basic idea behind ResNets is to introduce residual connections between layers in the network, allowing for easier optimization as the network gets deeper. The residual connections allow the network to learn residual mapping rather than trying to fit the target function directly. This helps prevent the vanishing gradient problem, where the gradients in the backpropagation process become very small, making it difficult to update the parameters effectively. ResNets were first introduced in [50], where the authors showed that ResNets outperformed traditional deep neural networks on benchmark image recognition tasks and demonstrated that ResNets could accommodate significantly deeper architectures than previous networks without sacrificing accuracy.
The residual connections in ResNets have been shown to be effective for training very deep neural networks, with hundreds or even thousands of layers. This has drastically improved the performance in several computer vision and natural language processing tasks. A typical structure of a residual block in ResNets is depicted in Figure 1(a).
Given an input feature map \(x\), the basic building block of a ResNet can be defined as:
\[H(x)=F(x,W_{i})+x\]
where \(H(x)\) is the output of the block, \(F\) is a non-linear function represented by a series of neuron and activation layers with parameters \(W_{i}\), and \(x\) is the input feature map that is added back to the output (the residual connection). The model is trained to learn the function \(F\) such that it approximates the residual mapping \(y-x\), where \(y\) is the desired output. By introducing residual connections, ResNets can address the vanishing gradient problem in deep neural networks, allowing for deeper architectures to be trained effectively.
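As a concrete, minimal illustration of this building block, the snippet below implements a small fully connected residual block in PyTorch; the layer widths and the choice of activation are arbitrary and purely illustrative.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A minimal fully connected residual block computing H(x) = F(x) + x."""

    def __init__(self, width):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(width, width),
            nn.ReLU(),
            nn.Linear(width, width),
        )

    def forward(self, x):
        # The skip connection adds the input back onto the learned mapping F(x)
        return self.f(x) + x

x = torch.randn(8, 16)           # a batch of 8 feature vectors of width 16
y = ResidualBlock(16)(x)         # same shape as x; F is trained to fit the residual y - x
```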
In this paper, we introduce the quantum counterpart of ResNets, namely the residual quantum neural network (ResQNet), a QNN architecture combining the principles of classical ResNets with QNNs. The basic idea is to add a residual connection between the output of one layer of quantum operations and the input of the next layer. This helps to mitigate the vanishing gradient problem, a.k.a. BP, which is a major challenge in QNNs and arises as the number of qubits in the system increases. Figure 1(b) illustrates how ResQNets work compared to ResNets.

Figure 1: Residual block structure

In ResQNets, the residual connection is represented mathematically as:

\[\psi_{\rm out}(\theta)=\psi(\theta)+U(\theta)\psi(\theta)\]
where \(\psi(\theta)\) is the input to the quantum circuit, \(U(\theta)\) is the unitary operation defined by the PQC, and \(\psi_{\rm out}(\theta)\) is the output.
## 3 Parameterized Quantum Circuits
QNN is a type of Parameterized Quantum Circuit (PQC), which is a quantum circuit that has tunable parameters that can be optimized to perform specific tasks. In a QNN, the parameters are typically optimized using classical optimization algorithms to learn a target function or perform a specific task. The PQC architecture of a QNN allows for the representation and manipulation of quantum data in a manner that can be used for various applications, such as QML and quantum control. The mathematical derivation of PQC involves the representation of quantum states and gates as matrices and the composition of these matrices to form the overall unitary operator for the circuit.
A quantum state can be represented by a column vector in a Hilbert space, where the elements of the vector are complex numbers that satisfy the normalization constraint:
\[\left|\psi\right\rangle=\begin{bmatrix}\alpha\\ \beta\end{bmatrix},\quad\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1\]
A quantum gate is represented by a unitary matrix, which preserves the norm of the vector, i.e., the inner product of the transformed vector with itself is equal to the inner product of the original vector with itself:
\[U^{\dagger}U=UU^{\dagger}=I\]
where \(U^{\dagger}\) is the conjugate transpose of \(U\) and \(I\) is the identity matrix. A PQC can be modeled as a sequence of gates, each represented by a unitary matrix based on classical parameters. The overall unitary operator of the circuit can be obtained by composing the matrices of the individual gates in the correct order:
\[U_{\rm circuit}=U_{n}(\theta_{n})\cdots U_{2}(\theta_{2})U_{1}(\theta_{1})\]
where \(U_{i}(\theta_{i})\) is the unitary matrix representing the \(i\)-th gate and \(\theta_{i}\) is a classical parameter.
The final quantum state after applying the PQC to an initial state can be obtained by matrix-vector multiplication:
\[\left|\psi_{\rm final}\right\rangle=U_{\rm circuit}\left|\psi_{\rm initial}\right\rangle\]
The parameters \(\theta_{1},\ldots,\theta_{n}\) can be optimized using classical optimization algorithms to achieve a desired quantum state or to maximize an objective function such as the expected value of a measurement outcome. The optimization problem can be written as:
\[\theta^{*}=\arg\max_{\theta}\left|\left\langle\psi_{\rm desired}\right|U_{ \rm circuit}(\theta)\left|\psi_{\rm initial}\right\rangle\right|^{2}\]
Solving this optimization problem provides the optimal set of parameters \(\theta^{*}\) that produce the desired outcome.
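As a toy illustration of this optimization loop, the sketch below builds a single-qubit PQC from RX and RY rotations in plain numpy and maximises the overlap with an arbitrarily chosen target state using finite-difference gradient ascent; the gate sequence, target state, and optimizer settings are illustrative assumptions rather than the scheme used later in the paper.

```python
import numpy as np

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2), np.cos(t / 2)]])

psi_initial = np.array([1.0, 0.0])           # |0>
psi_desired = np.array([0.0, 1.0])           # |1>, an arbitrary target state

def objective(theta):
    u_circuit = ry(theta[1]) @ rx(theta[0])  # U_circuit = U_2(theta_2) U_1(theta_1)
    psi_final = u_circuit @ psi_initial
    return np.abs(np.vdot(psi_desired, psi_final)) ** 2

# Crude finite-difference gradient ascent over the two classical parameters
theta, eps, lr = np.random.uniform(0, np.pi, 2), 1e-4, 0.2
for _ in range(200):
    grad = np.array([(objective(theta + eps * e) - objective(theta - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
    theta += lr * grad

print(objective(theta))                      # approaches 1 as U(theta)|0> approaches |1>
```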
### Cascading PQCs
In the proposed ResQNets, we encapsulate PQC/QNNs into a quantum node (QN), and arrange multiple QNs in a series, such that the output from one QN serves as the input for the next. This structure enables us to introduce the residual learning approach in a manner that allows the PQCs to work together to achieve the desired outcome. The process of cascading PQCs involves feeding the
output of each PQC into the input of the next, creating a layered structure where each layer represents a single PQC. In this case, each PQC can build upon the outputs of the previous ones, leading to a more complex and sophisticated computation. To ensure that the overall computation remains stable, the residual learning approach is employed, where the output of each PQC is combined with the input of the next in a specified manner.
We now present the mathematical formulation for connecting multiple PQCs in sequence. We will refer to each PQC as \(U_{i}\) where \(i\) denotes the QN it is encapsulated in.
#### 3.1.1 2-Cascaded PQC
Consider two PQCs denoted as \(U_{1}(\theta_{1})\) and \(U_{2}(\theta_{2})\), where \(\theta_{1}\) and \(\theta_{2}\) are classical parameters. The first PQC \(U_{1}(\theta_{1})\) is applied to an initial quantum state \(\ket{\psi_{\text{initial}}}\) to obtain an intermediate quantum state \(\ket{\psi_{\text{intermediate}}}\):
\[\ket{\psi_{\text{intermediate}}}=U_{1}(\theta_{1})\ket{\psi_{\text{initial}}}\]
The second PQC \(U_{2}(\theta_{2})\) is then applied to the intermediate state \(\ket{\psi_{\text{intermediate}}}\) to obtain the final quantum state \(\ket{\psi_{\text{final}}}\):
\[\ket{\psi_{\text{final}}}=U_{2}(\theta_{2})\ket{\psi_{\text{intermediate}}}\]
The overall unitary operator of the two cascaded PQCs can be obtained by composing the matrices of the individual PQCs in the correct order:
\[U_{\text{circuit}}=U_{2}(\theta_{2})U_{1}(\theta_{1})\]
The final quantum state after applying the two cascaded PQCs to an initial state can be obtained by matrix-vector multiplication:
\[\ket{\psi_{\text{final}}}=U_{\text{circuit}}\ket{\psi_{\text{initial}}}\]
The parameters \(\theta_{1}\) and \(\theta_{2}\) can be optimized using classical optimization algorithms to achieve a desired quantum state or to maximize an objective function such as the expected value of a measurement outcome. The optimization problem can be written as:
\[\theta_{1}^{*},\theta_{2}^{*}=\arg\max_{\theta_{1},\theta_{2}}\left|\left\langle\psi_{\text{desired}}\right|U_{\text{circuit}}(\theta_{1},\theta_{2})\left|\psi_{\text{initial}}\right\rangle\right|^{2}=\arg\max_{\theta_{1},\theta_{2}}\left|\left\langle\psi_{\text{desired}}\right|U_{2}(\theta_{2})U_{1}(\theta_{1})\left|\psi_{\text{initial}}\right\rangle\right|^{2}\]
Solving this optimization problem returns the optimal set of parameters \((\theta_{1},\theta_{2})\) that produce the desired outcome.
#### 3.1.2 \(n\)-Cascaded PQCs
Similarly, for \(n\) cascaded PQCs, where each PQC takes the output of the previous one as its input, the intermediate states can be described as follows:
\[\ket{\psi_{\text{intermediate},i}}=U_{i}(\theta_{i})\ket{\psi_{\text{intermediate },i-1}}\]
where \(i=1,2,\cdots,n\) and \(\ket{\psi_{\text{intermediate},0}}=\ket{\psi_{\text{initial}}}\). The overall unitary operator of the \(n\) cascaded PQCs can be obtained by composing the matrices of the individual PQCs in the correct order:
\[U_{\text{circuit}}=U_{n}(\theta_{n})\cdots U_{2}(\theta_{2})U_{1}(\theta_{1})\]
The final quantum state after applying the \(n\) cascaded PQCs to an initial state can be obtained by matrix-vector multiplication:
\[\left|\psi_{\text{final}}\right\rangle=U_{\text{circuit}}\left|\psi_{\text{ initial}}\right\rangle\]
The parameters \(\theta_{1},\theta_{2},\cdots,\theta_{n}\) can be optimized using classical optimization algorithms to achieve a desired quantum state or to maximize an objective function such as the expected value of a measurement outcome. The optimization problem can be written as:
\[\theta_{1},\theta_{2},\cdots,\theta_{n} =\arg\max_{\theta_{1},\theta_{2},\cdots,\theta_{n}}\left|\left\langle \psi_{\text{desired}}\right|U_{\text{circuit}}(\theta_{1},\theta_{2},\cdots, \theta_{n})\left|\psi_{\text{initial}}\right\rangle\right|^{2}\] \[=\arg\max_{\theta_{1},\theta_{2},\cdots,\theta_{n}}\left|\left\langle \psi_{\text{desired}}\right|U_{n}(\theta_{n})\cdots U_{2}(\theta_{2})U_{1}( \theta_{1})\left|\psi_{\text{initial}}\right\rangle\right|^{2}\]
Solving this optimization problem returns the optimal set of parameters \((\theta_{1},\theta_{2},\cdots,\theta_{n})\) that produce the desired outcome.
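The ordering of this composition is easy to get wrong in code, so the following sketch spells it out for a toy case in which every \(U_{i}(\theta_{i})\) is a single-qubit RY rotation (an illustrative stand-in for an arbitrary PQC).

```python
import numpy as np
from functools import reduce

def pqc(theta):
    """Stand-in for U_i(theta_i): here simply a single-qubit RY rotation."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2), np.cos(theta / 2)]])

thetas = [0.3, 1.1, 0.7]                              # theta_1, theta_2, theta_3
unitaries = [pqc(t) for t in thetas]

# U_circuit = U_n ... U_2 U_1: left-multiply so that U_1 acts on the state first
u_circuit = reduce(lambda acc, u: u @ acc, unitaries, np.eye(2))

psi_initial = np.array([1.0, 0.0])
psi_final = u_circuit @ psi_initial                   # |psi_final> = U_circuit |psi_initial>
```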
### Residual PQCs
We now introduce residual blocks in the cascaded PQCs encapsulated in QNs which we call ResQNets. In ResQNets, the output of the previous PQC is added to its input and fed as an input to the next PQC. The residual block is inserted to facilitate efficient information flow and improved performance. The primary objective of incorporating residual blocks in QNNs here is to overcome the difficulties associated with BP and thereby improve the learning process. Furthermore, the proposed method aims to harness the strengths of both residual learning and quantum computing to tackle complex problems more effectively.
To mathematically formulate our proposed ResQNets, we start by considering the case of two PQCs, and extend the approach to the general case of cascading \(n\) PQCs with \(n\) residual blocks. We will refer to each PQC as \(U_{i}\) where \(i\) denotes the QN it is encapsulated in.
#### 3.2.1 1-Residual Block
ResQNet with a single residual block contains a maximum of two PQCs of arbitrary depth enclosed in two separate QNs. The first QN serves as a residual block whose input is added to its output before passing it as input to the PQC in the next QN. For the mathematical formulation of such a setting, consider two PQCs, denoted as \(U_{1}(\theta_{1})\) and \(U_{2}(\theta_{2})\), where \(\theta_{1}\) and \(\theta_{2}\) are classical parameters. The first PQC \(U_{1}(\theta_{1})\) is applied to an initial quantum state \(\left|\psi_{\text{initial}}\right\rangle\) to obtain an intermediate quantum state \(\left|\psi_{\text{intermediate}}\right\rangle\):
\[\left|\psi_{\text{intermediate}}\right\rangle=U_{1}(\theta_{1})\left|\psi_{ \text{initial}}\right\rangle\]
In this case, the input of the second PQC \(U_{2}(\theta_{2})\) is not just the intermediate state \(\left|\psi_{\text{intermediate}}\right\rangle\), but the sum of the initial state \(\left|\psi_{\text{initial}}\right\rangle\) and the intermediate state \(\left|\psi_{\text{intermediate}}\right\rangle\):
\[\left|\psi_{\text{input}}\right\rangle=\left|\psi_{\text{initial}}\right\rangle +\left|\psi_{\text{intermediate}}\right\rangle\]
The second PQC \(U_{2}(\theta_{2})\) is then applied to the input state \(\left|\psi_{\text{input}}\right\rangle\) to obtain the final quantum state \(\left|\psi_{\text{final}}\right\rangle\):
\[\left|\psi_{\text{final}}\right\rangle=U_{2}(\theta_{2})\left|\psi_{\text{ input}}\right\rangle\]
The overall unitary operator of the two cascaded PQCs can be obtained by composing the matrices of the individual PQCs in the correct order:
\[U_{\text{circuit}}=U_{2}(\theta_{2})U_{1}(\theta_{1})\]
The final quantum state, after applying the two cascaded PQCs to an initial state, can be obtained by matrix-vector multiplication:
\[\left|\psi_{\text{final}}\right\rangle=U_{\text{circuit}}\left|\psi_{\text{ initial}}\right\rangle\]
\[\left|\psi_{\text{final}}\right\rangle=U_{\text{circuit}}\left(\left|\psi_{ \text{initial}}\right\rangle+\left|\psi_{\text{intermediate}}\right\rangle\right)\]
\[\left|\psi_{\text{final}}\right\rangle=U_{2}(\theta_{2})\left(\left|\psi_{ \text{initial}}\right\rangle+U_{1}(\theta_{1})\left|\psi_{\text{initial}}\right\rangle)\]
The parameters \(\theta_{1}\) and \(\theta_{2}\) can be optimized using classical optimization algorithms to achieve a desired quantum state or to maximize an objective function such as the expected value of a measurement outcome.
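A minimal numerical sketch of this single residual block is given below (plain numpy, single qubit, arbitrary RX gates standing in for \(U_{1}\) and \(U_{2}\)). Note that the sum \(\left|\psi_{\text{initial}}\right\rangle+\left|\psi_{\text{intermediate}}\right\rangle\) is not normalised; the explicit renormalisation step in the sketch is an assumption added so that the result remains a valid quantum state.

```python
import numpy as np

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

psi_initial = np.array([1.0, 0.0])                    # |0>
u1, u2 = rx(0.4), rx(1.3)                             # U1(theta1), U2(theta2), illustrative gates

psi_intermediate = u1 @ psi_initial                   # U1 |psi_initial>
psi_input = psi_initial + psi_intermediate            # residual connection: add input to output
psi_input = psi_input / np.linalg.norm(psi_input)     # renormalise (assumed, see note above)

psi_final = u2 @ psi_input                            # second QN applied to the residual sum
print(np.abs(psi_final) ** 2)                         # measurement probabilities
```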
#### 3.2.2 2-Residual blocks
In ResQNets with two residual blocks, up to three PQCs can be incorporated within three QNs. There are three potential configurations for the residual blocks in this setup:
1. utilizing only the first QN as a residual block,
2. combining the first two QNs to form a single residual block,
3. utilizing both the first and second QNs individually as separate residual blocks.
For our mathematical formulation, only the third configuration will be considered since it is the general setting for the case of two residual blocks; other configurations effectively contain a single residual block, which has already been mathematically derived in section 3.2.1. However, we will conduct experiments examining all three configurations to determine which configuration performs the best.
Let \(U_{1}(\theta_{1})\), \(U_{2}(\theta_{2})\), and \(U_{3}(\theta_{3})\) be PQCs enclosed in three QNs, where \(\theta_{1}\), \(\theta_{2}\), and \(\theta_{3}\) are classical parameters. The first PQC \(U_{1}(\theta_{1})\) takes an initial quantum state \(\left|\psi_{\text{initial}}\right\rangle\) as its input and produces an intermediate quantum state \(\left|\psi_{\text{intermediate}}\right\rangle\):
\[\left|\psi_{\text{intermediate}}\right\rangle=U_{1}(\theta_{1})\left|\psi_{ \text{initial}}\right\rangle\]
The second PQC \(U_{2}(\theta_{2})\) takes the sum of the initial state \(\left|\psi_{\text{initial}}\right\rangle\) and the intermediate state \(\left|\psi_{\text{intermediate}}\right\rangle\) as its input and produces another intermediate quantum state \(\left|\psi_{\text{intermediate}}^{\prime}\right\rangle\):
\[\left|\psi_{\text{input}}\right\rangle=\left|\psi_{\text{initial}}\right\rangle +\left|\psi_{\text{intermediate}}\right\rangle\]
\[\left|\psi_{\text{intermediate}}^{\prime}\right\rangle=U_{2}(\theta_{2}) \left|\psi_{\text{input}}\right\rangle\]
Finally, the third PQC \(U_{3}(\theta_{3})\) takes the sum of the input \(\left|\psi_{\text{input}}\right\rangle\) and the intermediate state \(\left|\psi_{\text{intermediate}}^{\prime}\right\rangle\) as its input and produces the final quantum state \(\left|\psi_{\text{final}}\right\rangle\):
\[\left|\psi_{\text{input}}^{\prime}\right\rangle=\left|\psi_{\text{input}} \right\rangle+\left|\psi_{\text{intermediate}}^{\prime}\right\rangle\]
\[\left|\psi_{\text{final}}\right\rangle=U_{3}(\theta_{3})\left|\psi_{\text{ input}}^{\prime}\right\rangle\]
The overall unitary operator of the three cascaded PQCs can be obtained by composing the matrices of the individual PQCs in the correct order:
\[U_{\text{circuit}}=U_{3}(\theta_{3})U_{2}(\theta_{2})U_{1}(\theta_{1})\]
The final quantum state after applying the three cascaded PQCs to an initial state can be obtained by matrix-vector multiplication:
\[\left|\psi_{\text{final}}\right\rangle=U_{\text{circuit}}\left|\psi_{\text{ initial}}\right\rangle=U_{3}(\theta_{3})U_{2}(\theta_{2})U_{1}(\theta_{1})\left|\psi_{ \text{initial}}\right\rangle\]
#### 3.2.3 \(n\) Residual Blocks
In the case of \(n\) PQCs enclosed within \(n\) QNs, there are multiple potential configurations for the residual blocks. The mathematical formulation considered here assumes that each of the \(n\) QNs is used as a separate residual block. However, the formulation can be adapted to account for alternative configurations of residual blocks, as needed. For \(n\) PQCs, the ResQNet can be represented as:
\[\left|\psi_{\text{intermediate}}^{(1)}\right\rangle=U_{1}(\theta_{1}) \left|\psi_{\text{initial}}\right\rangle\] \[\left|\psi_{\text{input}}^{(1)}\right\rangle=\left|\psi_{\text{ initial}}\right\rangle+\left|\psi_{\text{intermediate}}^{(1)}\right\rangle\] \[\left|\psi_{\text{intermediate}}^{(2)}\right\rangle=U_{2}(\theta_ {2})\left|\psi_{\text{input}}^{(1)}\right\rangle\] \[\left|\psi_{\text{input}}^{(2)}\right\rangle=\left|\psi_{\text{ input}}^{(1)}\right\rangle+\left|\psi_{\text{intermediate}}^{(2)}\right\rangle\] \[\vdots\]
\[\left|\psi_{\text{intermediate}}^{(n-1)}\right\rangle=U_{n-1}(\theta_{n-1}) \left|\psi_{\text{input}}^{(n-2)}\right\rangle\] \[\left|\psi_{\text{input}}^{(n-1)}\right\rangle=\left|\psi_{\text {input}}^{(n-2)}\right\rangle+\left|\psi_{\text{intermediate}}^{(n-1)}\right\rangle\] \[\left|\psi_{\text{final}}\right\rangle=U_{n}(\theta_{n})\left| \psi_{\text{input}}^{(n-1)}\right\rangle\]
And the overall unitary operator of the \(n\) cascaded PQCs is:
\[U_{\text{circuit}}=U_{n}(\theta_{n})U_{n-1}(\theta_{n-1})\cdots U_{2}(\theta _{2})U_{1}(\theta_{1})\]
The equation can be written in a summation form as follows:
\[\left|\psi_{\text{final}}\right\rangle=U_{n}(\theta_{n})\left(\left|\psi_{ \text{initial}}\right\rangle+\sum_{k=1}^{n-1}U_{k}(\theta_{k})\left|\psi_{ \text{input}}^{(k-1)}\right\rangle\right)\]
where \(\left|\psi_{\text{input}}^{(k-1)}\right\rangle=\left|\psi_{\text{input}}^{( k-2)}\right\rangle+\left|\psi_{\text{intermediate}}^{(k-1)}\right\rangle\) and \(\left|\psi_{\text{intermediate}}^{(k)}\right\rangle=U_{k}(\theta_{k})\left| \psi_{\text{input}}^{(k-1)}\right\rangle\).
Given a set of \(n\) PQCs, \(U_{1}(\theta_{1}),U_{2}(\theta_{2}),\ldots,U_{n}(\theta_{n})\) and an initial quantum state \(\left|\psi_{\text{initial}}\right\rangle\), the objective is to find the set of classical parameters \(\boldsymbol{\theta}=\theta_{1},\theta_{2},\ldots,\theta_{n}\) that maximizes (or minimizes) some cost function \(C(\boldsymbol{\theta})\) associated with the final quantum state \(\left|\psi_{\text{final}}\right\rangle\) produced by the cascaded PQCs. The optimization problem can be formulated as:
\[\boldsymbol{\theta}^{\star}=\arg\max_{\boldsymbol{\theta}}C(\boldsymbol{ \theta})\]
or
\[\boldsymbol{\theta}^{\star}=\arg\min_{\boldsymbol{\theta}}C(\boldsymbol{ \theta})\]
where \(\boldsymbol{\theta}^{\star}\) represents the optimal set of classical parameters that maximizes (or minimizes) the cost function. Note that the cost function \(C(\boldsymbol{\theta})\) can be defined based on the desired behavior of the quantum circuit and can be calculated from the final quantum state \(\left|\psi_{\text{final}}\right\rangle\).
## 4 Methodology
In classical NNs, residual neural networks (ResNets) were proposed to overcome the problem of vanishing gradients and were very useful for enabling deep learning in classical machine learning. In this paper, we propose Residual Quantum Neural Networks (ResQNets) to enable deep learning in QNNs by mitigating the effect of BP as a function of the number of layers.
The conventional approach to constructing QNNs contains an arbitrarily deep PQC, which takes some input and yields some output. Such an architecture typically has a single QN, as depicted in Figure 2(a). In this paper, we refer to this traditional QNN architecture as "Simple PlainQNet".

To construct our proposed ResQNets, we need to further split the traditional QNN architecture into two QNs, where every QN contains arbitrarily deep quantum layers. Since our proposed ResQNets contain at least two QNs and the traditional way of constructing QNNs contains a single QN, we construct a slightly modified version of simple PlainQNet, which we call "PlainQNet", which includes two or more QNs, with each QN containing PQCs of arbitrary depth, as shown in Figure 2(b). In PlainQNets, the output of the previous QN is fed to the next QN. The purpose of constructing PlainQNet is to have a fair comparison with our proposed ResQNets, because ResQNets need two or more QNs to work. An example of a ResQNet architecture with two QNs is shown in Figure 2(c). The PlainQNet architecture is similar to a general QNN split into two QNs, whereas in the case of ResQNet, the first QN serves as the residual block, i.e., the input of the first QN is added to its output and then passed as input to the second QN.

It should be noted that ResQNets can comprise multiple QNs with various arrangements of residual blocks. For instance, the ResQNet from Figure 2(c) can be extended to have three QNs, in which case three potential configurations can be employed. These include having the first and second QNs acting as individual residual blocks, combining the first and second QNs to serve as a single residual block, and only the first QN functioning as the residual block. We consider all three of these configurations for the case of three QNs.
Figure 2: QNN architecture used in this paper (a) Simple PlainQNet (b) PlainQNet and (c) ResQNet
### Quantum Layers Design
For the design of quantum layers, we use a periodic structure containing two single-qubit unitaries (\(RX\) and \(RY\)) per qubit. These unitaries are randomly initialized in the range \([0,\pi]\). Furthermore, a two-qubit gate, i.e., \(CNOT\)-gate is used to entangle qubits, and every qubit is entangled with its neighboring qubit. Figure 3 shows the example design of the quantum layers we used (5 qubits). All the QNs in our experiments have the same quantum layers design.
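A sketch of one repetition of this layer is given below; PennyLane is assumed here purely for illustration (the software stack is not stated in the text), and the same structure reappears in the cost-function sketch later in this section.

```python
import pennylane as qml
import numpy as np

n_qubits = 5

def layer(params):
    """One repetition of the layer in Figure 3: RX and RY on every qubit,
    followed by CNOTs entangling each qubit with its neighbour."""
    for i in range(n_qubits):
        qml.RX(params[i, 0], wires=i)
        qml.RY(params[i, 1], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])

# Parameters are initialised randomly in [0, pi], as described in the text
params = np.random.uniform(0, np.pi, size=(n_qubits, 2))
```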
### Depth of Quantum Layers
The impact of the quantum layer depth in examining the existence of BP in the cost function landscape of a QNN is significant. Effective depth (the longest path within the quantum circuit until the measurement) is crucial in this regard. For convenience, we introduce two depth parameters: layer depth (\(D_{L}\)) and effective depth (\(D_{E}\)). The layer depth \(D_{L}\) refers to the combined number of repetitions of the quantum layer illustrated in Figure 3 in both QNs, while the effective depth \(D_{E}\) represents the overall depth. For our quantum layers design, the following equation can be used to calculate the effective depth.
\[\textit{Total Effective Depth}=D_{E}=4\times D_{L}+k \tag{1}\]
where \(k=2,3,4,5\ldots\) for \(5,6,7,8\ldots\) qubits, respectively. Since the quantum layers are split into two separate QNs, and the depth per QN can be crucial to achieving better performance, it is important to calculate \(D_{E}\) of each QN individually and then add them to obtain the final \(D_{E}\). Failing to calculate the depth in each QN separately gives an incorrect result, because applying Equation 1 to the total \(D_{L}\) does not, in general, equal the sum of the effective depths of the individual QNs, i.e., \(D_{E}/QN1+D_{E}/QN2\neq 4\times D_{L}+k\). For example, with \(D_{L}=2\), the total effective depth would be 10 without considering the splitting into two QNs. However, if \(D_{L}\) is split into two QNs with \(D_{L}/QN=1\), the effective depth would be 12. A modified version of Equation 1, given below, should be used to calculate the \(D_{E}\) per QN.
\[\textit{Effective Depth per QN}=D_{E}/QN=4\times D_{L}/QN+k \tag{2}\]
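A tiny helper makes this bookkeeping explicit; it assumes \(k=n_{\text{qubits}}-3\), which reproduces the values listed above (\(k=2,3,4,\ldots\) for \(5,6,7,\ldots\) qubits), and takes the 5-qubit value \(k=2\) for the worked example in the previous paragraph.

```python
def effective_depth(d_l, n_qubits):
    """Equation 1 (and per-QN Equation 2): D_E = 4 * D_L + k, with k = n_qubits - 3."""
    return 4 * d_l + (n_qubits - 3)

# Worked example from the text (5 qubits, D_L = 2): unsplit vs split into two QNs
print(effective_depth(2, 5))                              # 10
print(effective_depth(1, 5) + effective_depth(1, 5))      # 12

# 6-qubit case used later: D_L = 6 split as (5, 1) across two QNs
print(effective_depth(5, 6) + effective_depth(1, 6))      # 30
```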
### Depth Distribution per QN
As previously discussed, ResQNets and PlainQNets consist of multiple QNs, which results in different depth splits for a given depth of quantum layers. According to the definition of BP, the gradient vanishes as a function of the number of qubits; hence, we fix the depth of quantum layers to \(D_{L}=6\), and only vary the number of qubits. Table 1 summarizes the different depth per QN combinations for \(D_{L}=6\), and all these depth combinations are tested for different numbers of qubits. Column 3 of Table 1 represents the depth split in the form of ordered pairs (we refer to this form in the rest of the paper whenever we discuss depth split per QN). For instance, \((1,5)\) denotes \(D_{L}=1\) in the first QN and \(D_{L}=5\) in the second QN. The depth per QN combination can be extended to more than two QNs in a similar manner.

Figure 3: Quantum Layers Design
### Cost Function Definition
For training our proposed ResQNet, we consider a simple example of learning the identity gate. In such a scenario, a natural cost function is 1 minus the probability of measuring the all-zero state, which can be described by the following equation.
\[C=\left\langle\psi(\theta)\right|(I-\left|0\right\rangle\left\langle 0\right|) \left|\psi(\theta)\right\rangle=1-p_{\left|0\right\rangle}\]
We consider the global cost function setting, i.e., we measure all the qubits in the network. Therefore, the above cost function definition will be applied across all the qubits according to the following equation.
\[C=\left\langle\psi(\theta)\right|(I-\left|00\ldots 0\right\rangle\left\langle 0 0\ldots 0\right|)\left|\psi(\theta)\right\rangle=1-p_{\left|00\ldots 0\right\rangle} \tag{3}\]
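As a sketch of how this global cost can be evaluated, the snippet below measures all qubits of a small placeholder circuit (of our own choosing, assuming PennyLane) and returns \(1-p_{\left|00\ldots 0\right\rangle}\); it is not the exact training pipeline used in the paper.

```python
import pennylane as qml
import numpy as np

n_qubits = 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def output_probs(theta):
    # placeholder parameterised circuit standing in for the chained QNs
    for q in range(n_qubits):
        qml.RX(theta[q], wires=q)
        qml.RY(theta[q], wires=q)
    return qml.probs(wires=range(n_qubits))

def global_cost(theta):
    probs = output_probs(theta)
    return 1.0 - probs[0]   # probs[0] is p(|00...0>), so C = 1 - p(|00...0>)

theta = np.random.uniform(0, np.pi, n_qubits)
print(global_cost(theta))
```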
For cost function optimization, we use the Adam optimizer (with a step size of 0.1), a gradient-based optimization method. Adam updates the parameters of a model iteratively based on the gradient of the loss function with respect to the parameters, using an exponentially decaying average of the first and second moments of the gradients to adapt the learning rate for each parameter. Let \(g_{t}\) be the gradient of the loss function with respect to the parameters at iteration \(t\). The first moment, \(m_{t}\), and the second moment, \(v_{t}\), are computed as follows:
\[m_{t} =\beta_{1}m_{t-1}+(1-\beta_{1})g_{t}\] \[v_{t} =\beta_{2}v_{t-1}+(1-\beta_{2})g_{t}^{2}\]
where \(\beta_{1}\) and \(\beta_{2}\) are the decay rates for the first and second moments, respectively. The bias-corrected first moment and second moment are then computed as:
\[\hat{m}_{t} =\frac{m_{t}}{1-\beta_{1}^{t}}\] \[\hat{v}_{t} =\frac{v_{t}}{1-\beta_{2}^{t}}\]
Finally, the parameters are updated using the following equation:
\[\theta_{t+1}=\theta_{t}-\frac{\alpha}{\sqrt{\hat{v}_{t}}+\epsilon}\hat{m}_{t}\]
where \(\alpha\) is the learning rate and \(\epsilon\) is a small constant to prevent division by zero.
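A minimal NumPy sketch of this update rule follows, applied to a toy quadratic objective of our own choosing; \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\) are assumed defaults, while the 0.1 step size matches the text.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update, following the equations above."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)          # bias-corrected second moment
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# toy usage: minimise f(theta) = ||theta||^2, whose gradient is 2 * theta
theta = np.array([1.0, -2.0])
m = v = np.zeros_like(theta)
for t in range(1, 101):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)  # close to the minimiser [0, 0]
```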
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(D_{L}\) in QN-1 & \(D_{L}\) in QN-2 & in-text representation \\ \hline
1 & 5 & (1,5) \\ \hline
5 & 1 & (5,1) \\ \hline
2 & 4 & (2,4) \\ \hline
4 & 2 & (4,2) \\ \hline
3 & 3 & (3,3) \\ \hline \end{tabular}
\end{table}
Table 1: Depth combinations per QN
## 5 Results and Discussion
In order to investigate the issue of BP in both PlainQNets and ResQNets, we maintain a constant depth of quantum layers, \(D_{L}=6\), which comprises \(100\) quantum gates and \(60\) parameters. The quantum layer depth distribution is varied among the combinations discussed in Table 1. The \(D_{E}\) per QN can then be calculated using Equation 2. The performance of both networks is evaluated by comparing their cost function landscapes and training results for the problem specified in Equation 3.
### PlainQNet and Simple PlainQNet
In this paper, we use a minimum of two QNs in constructing ResQNets, while the traditional approach in developing QNNs utilizes a single QN (referred to as "simple PlainQNets" in this paper). To ensure a fair comparison between the performance of QNNs with no residual and ResQNets, we also modify simple PlainQNets with two QNs (referred to as "PlainQNets" in this paper). A preliminary comparison between the performance of these two types of PlainQNets is conducted to verify that the use of two QNs in PlainQNets leads to better or equivalent performance compared to the use of a single QN in simple PlainQNets.
The simple PlainQNets and PlainQNets are compared for \(6\)-qubit and \(7\)-qubit quantum layers with a constant depth of \(D_{L}=6\). In the case of PlainQNets, the depth distribution per QN can vary, but we use the depth combinations of \((5,1)\) and \((4,2)\), where the first entry represents the depth of the first QN and the second entry represents the depth of the second QN, as shown in Table 1. We choose deeper quantum layers on the first QN and relatively shallow depth on the second QN primarily because such a configuration of depths per QN leads to a better performance, which will be discussed in more detail in the subsequent sections. For \(6\)-qubit quantum layers, the effective depth (\(D_{E}\)) for PlainQNets for both depth combinations mentioned above is \(30\) (as defined in Equation 2). The closest possible \(D_{E}\) for simple PlainQNets using the quantum layers considered in this paper (shown in Figure 3) is \(31\) with an overall \(D_{L}\) of \(7\) (as defined in Equation 1), which was used in the comparison. Similarly, for \(7\)-qubit quantum layers, the \(D_{E}\) for PlainQNets is \(32\) for both depth combinations per QN. The closest \(D_{E}\) in the case of simple PlainQNets is obtained for \(D_{L}=7\).
Both PlainQNets and simple PlainQNets are then trained for the problem defined in Equation 3.
Figure 4: Cost vs. iterations of PlainQNets and simple PlainQNets (a) for \(6\) qubits (b) for \(7\) qubits. The parentheses denote the \(D_{L}\) per QN.
The training results are displayed in Figure 4. It can be observed that for 6-qubit layers, both PlainQNets and simple PlainQNets exhibit comparable performance. However, when the number of qubits increases to 7, the performance of simple PlainQNets decreases significantly due to BP, while that of PlainQNets improves. Based on these observations, we can infer that it is appropriate to compare the performance of PlainQNets with that of our proposed ResQNets. Hence, for the remainder of the paper, we will compare the performance of PlainQNets, which are QNNs containing two (or more) QNs, with that of ResQNets.
### ResQNet with shallow width quantum layers
In this section, we perform a comparative analysis of the incidence of BP in both PlainQNets and ResQNets. Both PlainQNets and ResQNets consist of two QNs, with a maximum of one residual block in the case of ResQNets. To facilitate a fair comparison, we consider shallow depth quantum layers with \(D_{L}=6\) and incrementally vary the number of qubits from 6 to 10.
#### 5.2.1 6-Qubit Circuit
In this setting, we experiment with a total of 6 qubits. The cost function landscapes for both PlainQNet and ResQNet were analyzed and compared, as shown in Figure 5. The results demonstrate that, for almost all the depth combinations, a significant portion of the cost function landscape of the PlainQNet is flat and has only a narrow region containing the global minimum. On the other hand, the cost function landscapes of ResQNets are less flat and have a wider region containing the global minimum, which makes ResQNet more suitable for optimization.
The training of PlainQNets and ResQNets was performed for the problem defined in Equation 3. The results of the training are depicted in Figure 6. When the depth of the second QN is equal to or greater than the depth of the first QN, it was observed that the PlainQNets do not undergo successful training. This can be attributed to the flat cost function landscape, i.e., the BP, as depicted in Figure 5.
Figure 5: Cost function landscapes of PlainQNet (upper panel) and ResQNet (lower panel) for 6 Qubits. The parentheses denote the \(D_{L}\) per QN.
For similar depth distributions per QN (depth in second QN \(\geq\) depth in first QN), the ResQNets were observed to effectively undergo training. However, they struggled to reach an optimal solution due to the presence of multiple local minima in their cost function landscape. In instances where the depth of the first QN is greater than that of the second QN, both PlainQNets and ResQNets underwent successful training, but ResQNets outperformed PlainQNets.
#### 5.2.2 8-Qubit Circuit
We now conduct experiments on both PlainQNets and ResQNets with 8-qubit layers, and examine the cost function landscapes of both PlainQNets and our proposed ResQNets. The overall layer depth is set to 6, and all depth combinations are analyzed. The results presented in Figure 7 reveal that approximately 90% of the cost function landscape for PlainQNets remains flat irrespective of the depth distribution per QN, making them unsuitable for optimization. In contrast, the cost function landscapes of ResQNets remain non-flat for all the depth combinations, and are thus more favorable for optimization.
We conduct training experiments for both PlainQNets and ResQNets with 8 qubit quantum layers to solve the problem defined in Equation 3. The training results are presented in Figure 8, which show that as we increase the number of qubits from 6 to 8, the PlainQNets get trapped in the flat cost function landscape (i.e., BP), for all the depth combinations per QN and fail to train effectively for the specified problem.
On the other hand, the ResQNets demonstrate successful training across all the depth combinations, surpassing the performance of PlainQNets. Notice that ResQNets exhibit superior learning outcomes when the depth of the first QN is much greater than that of the second QN (\(D_{E}\) in QN1 \(\gg\) \(D_{E}\) in QN2), such as in the case of \((5,1)\). This is because in such scenarios the cost function landscape has fewer and wider regions leading to the global minimum. Conversely, when the depth of the second QN is equal to or greater than that of the first QN, the cost function landscape is characterized by multiple local minima, making it less suitable for optimization as the optimizer becomes trapped in local minima. This phenomenon can be attributed to the presence of residual blocks in ResQNets. In the case of two QNs, a residual connection is introduced only after the first block. This helps in mitigating the issue of BP. However, if the second QN is deep enough, it can still result in BP. In such scenarios, the cost function landscape still contains multiple local minima and fewer paths to reach the global minimum, which makes the optimization process more prone to becoming stuck in a local minimum.
Figure 6: Cost vs. iterations of (a) PlainQNets and (b) ResQNets for 6 qubits. The parentheses denote the \(D_{L}\) per QN.
Despite this, ResQNets still demonstrate superior training performance compared to PlainQNets.
#### 5.2.3 10-Qubit Circuit
To further expand our study, we increased the number of qubits to 10 and performed the same experiments as with quantum layers of 6 and 8 qubits. The cost function landscapes were then analyzed for both PlainQNets and ResQNets, as shown in Figure 9.
Figure 8: Cost vs. iterations of (a) PlainQNets and (b) ResQNets for 8 qubits. The parentheses denote the \(D_{L}\) per QN.
Figure 7: Cost function landscapes of PlainQNet (upper panel) and ResQNet (lower panel) for 8 Qubits. The parentheses denote the \(D_{L}\) per QN.
Similar to the case of 8-qubit layers, a substantial portion of the cost function landscape of PlainQNets was found to be flat, indicating the presence of BP and making it unsuitable for optimization. Conversely, the cost function landscape of ResQNets remained more favorable for optimization, as it was characterized by multiple paths leading to the global minimum, thus avoiding the occurrence of BP.
We subsequently trained the 10-qubit quantum layers to address the problem defined in Equation 3. The results of these experiments are depicted in Figure 10. Our analysis indicates that PlainQNets did not exhibit successful training outcomes for nearly all depth combinations, with the exception of \((4,2)\), which showed considerable performance improvement. When we examined its cost function landscape in Figure 9, we observed that there are one or two narrow regions that contain the solution and may be found by the optimizer. However, these narrow regions are unlikely to be encountered, and thus the performance, despite being optimal in this instance, is not considered suitable for general optimization problems. Therefore, it can still be concluded that the PlainQNets are severely affected by the problem of BP. On the other hand, ResQNets effectively overcame the issue of BP and exhibited successful training outcomes for all depth combinations. Our observations for 10-qubit quantum layers align with our previous findings for 6- and 8-qubit layers in that ResQNets are more effective when the depth after the residual connection is smaller. This suggests that a shallower depth of quantum layers after the residual connection in ResQNets is more favorable for optimization and mitigating BP.
Figure 9: Cost function landscapes of PlainQNet (upper panel) and ResQNet (lower panel) for 10 Qubits. The parentheses denote the \(D_{L}\) per QN.
Our results conclusively demonstrate that PlainQNets are heavily impacted by the issue of BP as the number of qubits increases, which significantly hinders their performance and ability to optimize the cost function. The previous results have demonstrated the advantage of our proposed ResQNets over PlainQNets in mitigating the phenomenon of BP. Therefore, in the next section, we will conduct experiments solely with ResQNets.
### ResQNets with wider quantum layers
To analyze the scalability of ResQNets for larger quantum circuits, we consider quantum layers with a larger number of qubits, i.e., 15 and 20. The depth of the quantum layers, \(D_{L}\), is kept constant at 6. Since the cost function landscapes have a direct impact on the training results, as shown in Section 5.2, we only present the training results for the 15- and 20-qubit quantum layers.
#### 5.3.1 15-Qubit Circuit
We train the 15-qubit quantum layers to optimize the problem defined in Equation 3. The training results are shown in Figure 11(a). It can be observed that the ResQNets are effectively trained. Additionally, analogous to the case of shallow-width quantum layers, the performance is substantially better when the depth in the first QN (before the residual point) is greater than that in the second QN.
#### 5.3.2 20-Qubit Circuit
We now train the ResQNets with 20-qubit layers for the problem defined in Equation 3, with a total layer depth of \(D_{L}=6\). It can be observed that even with 20-qubit layers, the ResQNets are effectively trained, as shown in Figure 11(b). Furthermore, similar to the previously shown results, the ResQNets for 20-qubit layers also perform significantly better when the depth after the residual point (second QN) is smaller than the depth before the residual point (first QN).
Figure 10: Cost vs. iterations of (a) PlainQNets and (b) ResQNets for 10 qubits. The parentheses denote the \(D_{L}\) per QN.
From the results in Figure 11, it is evident that the ResQNets are capable of working with wider quantum layers. The results demonstrate that analogous to the case of shallow-width quantum layers, the training performance is better with the optimal results being achieved for a larger depth in the first QN and a smaller depth in the second QN.
It should be noted that our experiments are limited by the memory constraints of our local computer and we cannot go beyond 20 qubits. However, based on our findings, we believe that the proposed ResQNets would still train effectively even beyond 20 qubits.
### ResQNets with 3-QN
From the analysis presented in previous sections, it can be observed that the ResQNets consisting of two QNs with a maximum of one residual block can effectively address the problem of BP and significantly improve the training performance of QNNs. In this section, we show that increasing the number of QNs in ResQNets can enhance the performance of ResQNets even further. As discussed in Section 4, for three QNs we can have multiple configurations of residual blocks. We consider all of these configurations for our experiments with 20-qubit quantum layers and a fixed quantum layer depth of \(D_{L}=6\).
The results of the experiments conducted in this section will provide valuable insights into the optimal configuration of residual blocks for ResQNets with three or more QNs.
The cost function landscapes of various residual block configurations in ResQNets with three QNs were analyzed, as presented in Figure 12. The results indicate that the placement of residual blocks has a significant impact on the performance of ResQNets. When the residual block is added after every QN, the cost function landscape quickly flattens irrespective of the depth per QN, suggesting that this configuration leads to performance equivalent to or worse than that of PlainQNets and is therefore not suitable for optimization.
On the other hand, when the residual block is added after two QNs, the cost function landscape shows multiple and wider regions containing the global minimum, which makes this configuration more suitable for optimization. Moreover, this configuration exhibits a consistent cost function landscape regardless of the depth per QN combination, implying that this particular residual block arrangement is more robust to BP and supports a wide range of depths and QN combinations.
For the case of adding the residual only after the first QN, with two QNs after the residual block, the results show that the cost function landscape is better than the case of adding the residual block after every QN, but not as good as the case where there is a gap of two QNs while adding the residual.
Figure 11: Cost vs. iterations of ResQNets for (a) 15 qubits and (b) 20 qubits. The parentheses denote the \(D_{L}\) per QN.
Figure 12: Cost function landscapes of ResQNets for 20 Qubits and 3-QNs. Residual after every QN (Top panel), Residual after two QNs (middle panel) and residual only after the first QN (bottom panel). The parentheses denote the \(D_{L}\) per QN and the comma denotes the residual point.
We then trained ResQNets with three QNs for all the configurations while varying the depth for each QN combination on the problem defined in Equation 3. The training results are shown in Figure 13. These results align with the behavior of the cost function landscape, where the residual block configuration skipping two QNs outperforms other configurations. It can be observed that the residual block configuration after every QN does not train at all, while the residual block configuration after the first QN does converge for all the depth per QN combinations, but with significantly slower convergence compared to the residual block configuration after two QNs.
### 3-QN vs. 2-QN ResQNet
In this section, we compare the performance of ResQNets with 2 and 3-QNs to demonstrate the impact of increasing the number of QNs. The analysis was conducted for 20 qubit layers considering the best-performing depth combinations for both 2 and 3-QNs.
For 2-QNs, the results from Figure 11(b) indicate that the depth combinations of (5,1) and (4,2) performed better than other depth combinations. On the other hand, for three QNs, the results from Figures 13(b) and 13(c) show that the depth combinations of \((4\:1,1)\) and \((4,\:1\:1)\) outperformed other depth combinations. A closer examination of the best-performing depth combinations reveals that the \(D_{L}\) before and after the residual block for the depth per QN combination of \((5,1)\) in the 2-QN ResQNet is equivalent to the depth per QN combination of \((4\:1,1)\) for the 3-QN ResQNet. Similarly, the combination \((4,2)\) in the 2-QN ResQNet is equivalent to \((4,\:1\:1)\) in the 3-QN ResQNet. Despite these similarities, as demonstrated in Figure 14, the ResQNets with 3-QNs exhibit superior performance, as they converge to the optimal solution more efficiently compared to the ResQNets with 2-QNs.
Figure 13: Training results of ResQNets with three QNs with 20 qubit layers. (a) Residual after every QN (b) Residual after two QNs and (c) Residual after the first QN. The parentheses denote the \(D_{L}\) per QN and the comma denotes the residual point.
### Real Quantum Device
The results presented so far were obtained by running ResQNets and PlainQNets on a simulation platform. In this section, we carry out some experiments on real quantum devices. In particular, we trained both ResQNets and PlainQNets with 2-QNs on a 5-qubit quantum layer for 20 epochs using one of IBM's quantum devices, namely \(ibmq\_lima\). The quantum layer depth was fixed to \(D_{L}=6\), with \(D_{L}=5\) in the first QN and \(D_{L}=1\) in the second QN. This depth combination was chosen considering all the results discussed previously. We note that due to the limited number of publicly available quantum devices, the queue times for executing the jobs are considerably long. Therefore, to minimize the training time, we reduced the number of epochs for real-device training: we trained both PlainQNets and ResQNets for only 20 epochs on the real device instead of 100 epochs as in the case of simulation. The training results are illustrated in Figure 15.
Figure 14: Training comparison of 2-QN and 3-QN ResQNets for 20 qubit layers. The parentheses denote the \(D_{L}\) per QN and the comma denotes the residual point.
Figure 15: Training comparison of ResQNets and PlainQNets on (a) a real quantum device and (b) a simulator. The values in parentheses denote the depth per QN.
The results presented in Figure 15(a) reveal that ResQNets train successfully on a real device, whereas PlainQNets do not. The same trend is observed when both networks are executed on the simulator, as depicted in Figure 15(b). However, when both PlainQNets and ResQNets are trained on a real device, a slight fluctuation is observed while approaching the optimal solution due to hardware noise, as compared to the simulation results. Despite the presence of noise, the rate of decrease in the loss value for ResQNets is almost identical in both the simulation and the real experiments. According to [40], hardware noise can potentially cause BP. However, our results demonstrate that our proposed ResQNets are somewhat resilient against hardware noise, as they achieve performance similar to that of the simulator (though with some fluctuations).
## 6 Conclusion
The problem of barren plateaus (BP) in quantum neural networks (QNNs) is a critical hurdle on the road to the practical realization of QNNs. There have been several attempts to resolve this issue, but the impact of BP can still vary greatly depending on the application and the architecture of quantum layers. Thus, it is essential to have multiple solutions for BP to cover a wide range of problems.
In this paper, we propose residual quantum neural networks (ResQNets) to address the issue of BP in QNNs. Our approach is inspired by classical residual neural networks (ResNets), which were introduced to overcome the vanishing gradients problem in classical neural networks.
In traditional QNNs, a single parameterized quantum circuit (PQC) with arbitrary depth is included within a single quantum node (QN). To create ResQNets, we split the conventional QNN architecture into multiple QNs, each containing its own PQC with varying depths. Splitting the QNNs allows us to introduce the residual connections between the QNs, forming our proposed ResQNets. In simple QNNs without residual connections (referred to as PlainQNets), the output from the previous QN serves as the input to the next. On the other hand, in ResQNets, one or multiple QNs can serve as residual blocks, with the output from a previous residual block being added to its input before it is passed on to the next QN.
In our study, we first demonstrate the efficacy of the proposed splitting of the conventional QNN architecture into multiple QNs (PlainQNets) by comparing their performance to that of conventional QNNs (simple PlainQNets). The comparison results indicated that the PlainQNets have better or equivalent performance to that of conventional QNNs. Subsequently, we compare the performance of PlainQNets with that of our proposed ResQNets through several training experiments. Our analysis of the cost function landscapes for quantum layers of increasing qubits shows that incorporating residual connections results in improved training performance.
Based on our findings, we conclude that the proposed ResQNets provide a promising solution for overcoming the problem of BP in QNNs and offer a potential direction for further research in the field of quantum machine learning. |
2307.02040 | VertiBench: Advancing Feature Distribution Diversity in Vertical
Federated Learning Benchmarks | Vertical Federated Learning (VFL) is a crucial paradigm for training machine
learning models on feature-partitioned, distributed data. However, due to
privacy restrictions, few public real-world VFL datasets exist for algorithm
evaluation, and these represent a limited array of feature distributions.
Existing benchmarks often resort to synthetic datasets, derived from arbitrary
feature splits from a global set, which only capture a subset of feature
distributions, leading to inadequate algorithm performance assessment. This
paper addresses these shortcomings by introducing two key factors affecting VFL
performance - feature importance and feature correlation - and proposing
associated evaluation metrics and dataset splitting methods. Additionally, we
introduce a real VFL dataset to address the deficit in image-image VFL
scenarios. Our comprehensive evaluation of cutting-edge VFL algorithms provides
valuable insights for future research in the field. | Zhaomin Wu, Junyi Hou, Bingsheng He | 2023-07-05T05:55:08Z | http://arxiv.org/abs/2307.02040v3 | # VertiBench: Advancing Feature Distribution Diversity in Vertical Federated Learning Benchmarks
###### Abstract
Vertical Federated Learning (VFL) is a crucial paradigm for training machine learning models on feature-partitioned, distributed data. However, due to privacy restrictions, few public real-world VFL datasets exist for algorithm evaluation, and these represent a limited array of feature distributions. Existing benchmarks often resort to synthetic datasets, derived from arbitrary feature splits from a global set, which only capture a subset of feature distributions, leading to inadequate algorithm performance assessment. This paper addresses these shortcomings by introducing two key factors affecting VFL performance - feature importance and feature correlation - and proposing associated evaluation metrics and dataset splitting methods. Additionally, we introduce a real VFL dataset to address the deficit in image-image VFL scenarios. Our comprehensive evaluation of cutting-edge VFL algorithms provides valuable insights for future research in the field.
## 1 Introduction
The increasing demand for ample, high-quality data for training advanced machine learning models is evident, particularly in the context of large language models [1]. However, real data, often sensitive and distributed across multiple parties, presents a challenge, especially in the face of strict privacy regulations like the GDPR [2]. As a result, federated learning [3] has been highlighted as a promising approach to train machine learning models on distributed data while ensuring privacy. In this study, we consider a broad definition of federated learning [4], encompassing all privacy-preserving collaborative learning paradigms, including assisted learning [5; 6] and split learning [7; 8]. Given the emerging variety of federated learning approaches [4; 9; 10], the importance of comprehensive benchmarks for evaluating these new algorithms is underscored.
The landscape of federated learning benchmarks, featuring contributions such as FedScale [11], MNIST [12], FedEval [13], and NIID-Bench [14], predominantly caters to horizontal federated learning (HFL), wherein each party possesses a subset of instances. In comparison, vertical federated learning (VFL) - where each party holds a subset of features - is notably under-addressed.
The development of real-world VFL benchmarks faces two main hurdles. First, privacy concerns inherent to federated learning inhibit the public sharing of distributed data. Second, the current limited pool of actual VFL datasets, such as those in the OARF benchmark [15], NUS-WIDE [16], and Vehicle [17], may not sufficiently represent the broad range of possible VFL scenarios (Figure 2(c); Section 5). This scarcity and underrepresentation highlight the urgent need for synthetic VFL benchmarks that can facilitate a comprehensive evaluation of VFL algorithms across diverse scenarios.
Existing efforts to construct synthetic VFL benchmarks have struggled to represent the diversity of real-world scenarios. Benchmarks such as OARF [15], FedML [18], and LEAF [19] fabricate vertically partitioned data by randomly assigning an equal number of features to synthetic parties.
Some studies [5; 20; 21] resort to a simplistic approach, manually dividing features without offering a substantial rationale for their choice. Furthermore, these existing benchmarks [15; 18; 19] do not offer a meaningful comparison of cutting-edge VFL algorithms. Hence, it becomes crucial to critically examine the key factors influencing VFL algorithm performance, and thoughtfully design synthetic VFL benchmarks that reflect these considerations.
The task of creating a systematic synthetic VFL benchmark hinges on identifying key factors influencing VFL algorithm performance. Current synthetic benchmarks for non-i.i.d. HFL like NIID-Bench [14] are inapplicable to VFL due to their assumptions about feature space and instance equality. In particular, HFL benchmarks assume that all parties operate within the same feature space, a presumption that doesn't conform to VFL's distributed feature scenario. Moreover, the instance equality presupposed by NIID-Bench during allocation doesn't apply when dealing with features of varied importance, underscoring the unique challenges in the analysis of synthetic VFL benchmarks.
Given these limitations, our study conducts a systematic analysis to identify feature importance and correlation as two crucial determinants of VFL performance. Accordingly, we propose _VertiBench_, a comprehensive VFL benchmark which introduces novel feature-splitting methods for synthetic dataset generation and a new real-world VFL dataset. Our key contributions include: (1) We develop new feature-splitting methods that generate synthetic datasets based on feature importance and correlation, covering a diverse range of VFL scenarios. (2) We introduce a real-world VFL dataset, filling a noted gap in image-image VFL scenarios. (3) We devise methods to quantify the importance and correlation of real-world VFL datasets, allowing them to align with our synthetic datasets. (4) We conduct rigorous benchmarks of advanced VFL algorithms across diverse scenarios, thereby offering valuable insights for future research. For example, we demonstrate the scalability of certain VFL algorithms, challenging prior assumptions about VFL scaling difficulties [15], and emphasize the importance of communication efficiency in VFL, especially for imbalanced distributed datasets.
## 2 Related Work
**Vertical federated learning datasets.** The scarcity and limited range of real-world VFL datasets [15; 16; 17] in benchmarks and studies [18; 19] underscore the need for synthetic VFL datasets capable of depicting a broader spectrum of scenarios. Given VFL's focus on data privacy, obtaining such real datasets is challenging. Synthetic benchmarks [18; 19] and VFL study datasets [5; 7; 22] commonly rely on unexplained random or manual feature splitting, which often represents scenarios of balanced feature importance and high inter-party correlation (Figure 1, 2c). This situation indicates a pressing demand for systematic methods to generate synthetic VFL datasets that accurately reflect a diverse set of scenarios, fostering comprehensive evaluation of VFL algorithms.
**Feature importance.** The Shapley value [23; 24; 25], used to assess party contributions in federated learning [26; 27; 28], has significant computational costs, rendering it unsuitable for guiding feature splitting. Certain methodologies [14; 29] utilize a Dirichlet distribution for global dataset random split, creating imbalanced federated learning datasets. However, they do not consider the partitioning of features of varying importance.
**Feature correlation.** The task of efficiently gauging correlation among two groups of features is challenging despite well-studied individual feature correlation [30; 31]. The Shapley-Taylor index, proposed for evaluating correlation between feature sets [32], is computationally intensive (NP-hard), and unsuitable for high-dimensional datasets. The determinant of the correlation matrix [33] efficiently estimates inter-party correlation but is over-sensitive to linearly correlated features, impeding its use in feature partitioning. A more refined metric - the multi-way correlation coefficient (mcor) [34], addresses this, but like the determinant, it struggles with unequal feature numbers across parties, a typical VFL scenario, due to the assumption of a square correlation matrix.
## 3 VFL Algorithms
This section critically reviews current VFL algorithms, with a focus on accuracy, efficiency, and communication size. VertiBench concentrates on standard supervised learning tasks such as classification and regression within synchronized parties, summarized in Table 1. Notably, this benchmark excludes studies exploring different VFL aspects such as privacy [35], fairness [36], data pricing [37], asynchronization [20; 38; 39], latency [40], and other tasks like unsupervised learning [41], matrix
factorization [42], multi-task learning [8], and coreset construction [43]. While most VFL algorithms presume accurate inter-party data linking, we adopt this approach in VertiBench, despite recent contrary findings [44; 45] that this assumption may not be true. We refer to parties with and without labels as _primary_ and _secondary parties_ respectively.
The existing methods can be bifurcated into two categories: _ensemble-based_ and _split-based_. The distinguishing factor lies in the independent prediction capability of each party. Ensemble-based methods involve parties each maintaining a full model for local feature prediction, with collaborative ensemble methods during training, while split-based methods require each party to hold a partial model forming different inference stages of the full model. Consequently, split-based partial models cannot perform independent inference. For split-based models, our focus is on advanced models such as neural networks (NNs) and gradient boosting decision trees (GBDTs) [56], though VertiBench can accommodate various models [57; 58]. Split-NN-based models are trained by transferring representations and gradients, while split-GBDT-models are trained by transferring gradients and histograms. A more detailed comparison of ensemble-based and split-based algorithms is provided in Appendix B.
## 4 Synthetic VFL Datasets
### Factors that affect VFL performance
Suppose there are \(K\) parties. Denote the data on party \(P_{k}\) as a random vector \(\mathbf{X}_{k}\) (\(1\leq k\leq K\)). Denote the label as a random variable \(y\). A supervised learning algorithm maximizes the likelihood function where hypothesis \(h\) represents models and parameters, i.e., \(L(y|\mathbf{X}_{K},...,\mathbf{X}_{1};h)\).
These supervised learning algorithms estimate the following probability mass function. The proof of Proposition 1 is provided in Appendix A.
**Proposition 1**.: _The probability mass function can be written as_
\[\log\mathcal{P}(y|\mathbf{X}_{K},...,\mathbf{X}_{1})=\sum_{k=1}^{K}\log\frac{ \mathcal{P}(y|\mathbf{X}_{k},...,\mathbf{X}_{1})}{\mathcal{P}(y|\mathbf{X}_{ k-1},...,\mathbf{X}_{1})}+\log\mathcal{P}(y) \tag{1}\]
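The identity follows from a telescoping decomposition (the full proof is in Appendix A). Adopting the convention that conditioning on an empty set of parties gives \(\mathcal{P}(y)\),

\[\mathcal{P}(y|\mathbf{X}_{K},...,\mathbf{X}_{1})=\mathcal{P}(y)\prod_{k=1}^{K}\frac{\mathcal{P}(y|\mathbf{X}_{k},...,\mathbf{X}_{1})}{\mathcal{P}(y|\mathbf{X}_{k-1},...,\mathbf{X}_{1})},\]

since consecutive numerators and denominators cancel; taking logarithms yields Equation 1.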
In VFL, \(\mathcal{P}(y)\) is the same for all the parties. The skewness among \(K\) parties is determined by \(K\) ratios of distributions. Interestingly, this ratio quantifies the divergence between two marginal probability distributions of \(y\) - one inclusive of \(\mathbf{X}_{k}\) and the other exclusive of \(\mathbf{X}_{k}\). Essentially, the ratio estimates the impact on the global distribution when the features of a single party are excluded. This can be interpreted as the **importance** of a given party.
It is important to note that Proposition 1 is applicable regardless of the order of \(\mathbf{X}_{1},\ldots,\mathbf{X}_{k}\). For a more precise evaluation of each party's importance, especially considering the independence among
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Category** & **Model1** & **Algorithm** & **Contribution2** & **Code3** & **Data4** & **Split5** \\ \hline \multirow{2}{*}{**Ensemble-based**} & \multirow{2}{*}{Any} & AL [6] & Accuracy & [46] & Syn & Manual \\ & & GAL [5] & Accuracy & [47] & Syn & Manual \\ \hline \multirow{6}{*}{**Split-based**} & \multirow{4}{*}{NN} & SplitNN [7] & Accuracy & [48] & Syn & N/A \\ & & C-VFL [21] & Communication & [49] & Syn & Manual \\ & & BlindFL [50] & Efficiency & N/A & Syn & Manual \\ \cline{2-6} & \multirow{4}{*}{GBDT} & SecureBoost [22] & Accuracy & [51] & Syn & Manual \\ & & Pivot [52] & Accuracy & [53] & Syn & Manual \\ \cline{1-1} & & FedTree [29] & Accuracy, Efficiency & [54] & Syn & Random \\ \cline{1-1} & & VF2Boost [55] & Efficiency & N/A & Syn & Manual \\ \hline \hline \end{tabular}
* 1 NN - neural network; GBDT - gradient boosting decision trees; Any - any model.
* 2 Privacy is not evaluated in VertiBench, and therefore omitted, despite being a contribution of some studies.
* 3 N/A - code not publicly accessible due to commercial issues.
* 4 Syn - synthetic datasets partitioned from global datasets.
* 5 Manual - features manually split without specific reasons; Random - features randomly split without explanation; N/A - no VFL experiments conducted.
\end{table}
Table 1: Summary of existing VFL algorithms
features, Shapley value has proven to be a useful measure. It has been employed to estimate the importance of each party in vertical federated learning scenarios [26; 28].
In another aspect, the ratio \(\frac{\mathcal{P}(y|\mathbf{X}_{k},...,\mathbf{X}_{1})}{\mathcal{P}(y|\mathbf{X}_{k-1},...,\mathbf{X}_{1})}\) is determined by the **correlation** between \(\mathbf{X}_{k}\) and \(\mathbf{X}_{1},\dots,\mathbf{X}_{k-1}\). In other words, the global distribution is affected by the feature correlation between different parties.
In summary, we highlight feature importance and correlation as two crucial factors that could potentially influence the performance of VFL algorithms. We treat importance and correlation as independent variables affecting \(\frac{\mathcal{P}(y|\mathbf{X}_{k},...,\mathbf{X}_{1})}{\mathcal{P}(y|\mathbf{X}_{k-1},...,\mathbf{X}_{1})}\) in our analysis, despite a potential innate correlation between the two. The subsequent sections introduce our approach to generating synthetic datasets based on these two factors.
### Feature Importance
In light of the computational expense incurred by the Shapley value method, an alternative and more efficient strategy is necessary to perform feature splits based on importance. With all parties exhibiting symmetry in the context of \(\mathbf{X}\), varying the importance among parties essentially translates to varying the variance of the importance among them. Assuming each party \(P_{i}\) possesses an importance factor \(\alpha_{i}>0\), we propose the implementation of the Dirichlet distribution parameterized by \(\{\alpha_{i}\}_{i=1}^{K}\) for feature splitting. This approach ensures two beneficial properties post-split: (1) a larger \(\alpha_{i}\) guarantees a higher expected importance for \(P_{i}\), and (2) a smaller \(\|\{\alpha_{i}\}_{i=1}^{K}\|_{2}\) assures a greater variance in the importance among parties.
More specifically, we propose a feature splitting method based on feature importance. After initializing local datasets for each party, a series of probabilities \(p_{1},\dots,p_{K}\) s.t. \(\sum_{i=1}^{K}p_{i}=1\) is sampled from a Dirichlet distribution \(\text{Dir}(\alpha_{1},\dots,\alpha_{K})\). Each feature is randomly allocated to a party \(P_{k}\), selected based on the probabilities \(p_{k}\). To accommodate algorithms that fail when faced with empty features, we can ensure each party is initially provided with a random feature before the algorithm is set in motion. Detailed formalization of this algorithm can be found in Appendix C.
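The procedure is formalized in Appendix C; a minimal NumPy sketch of the splitting step just described (function and variable names are ours) could look as follows.

```python
import numpy as np

def split_by_importance(X, alphas, seed=0):
    """Assign the columns of X to len(alphas) parties.

    One probability vector p ~ Dir(alphas) is drawn; each feature is then given
    to party k with probability p[k].  Each party first receives one random
    feature so that no local dataset is empty.
    """
    rng = np.random.default_rng(seed)
    K, m = len(alphas), X.shape[1]
    perm = rng.permutation(m)
    owner = np.empty(m, dtype=int)
    owner[perm[:K]] = np.arange(K)                   # one random feature per party
    p = rng.dirichlet(alphas)
    owner[perm[K:]] = rng.choice(K, size=m - K, p=p)  # remaining features
    return [X[:, owner == k] for k in range(K)]

X = np.random.rand(1000, 50)
parties = split_by_importance(X, alphas=[0.1, 0.1, 0.1, 0.1])
print([P.shape[1] for P in parties])  # small alphas -> very uneven feature counts
```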
**Theorem 1**.: _Consider a feature index set \(\mathcal{A}=\{1,2,...,m\}\) and a characteristic function \(v:2^{\mathcal{A}}\rightarrow\mathbb{R}\) such that \(v(\emptyset)=0\). Let \(\phi_{j}(v)\) denote the importance of the \(j\)-th feature on \(v\) such that \(\sum_{j=1}^{m}\phi(j)=v(\mathcal{A})\). Assume that the indices in \(\mathcal{A}\) are randomly distributed to \(K\) parties with probabilities \(r_{1},...,r_{K}\) where \(\sum_{i=1}^{K}r_{i}=1\). Given budgets \(b_{i}=r_{i}v(\mathcal{A})\), let \(Z_{i}\) be the sum of feature importance for party \(i\). Then, we have \(\forall i\in[1,K],\mathbf{E}[Z_{i}]=b_{i}\) and \(\mathbf{E}[Z_{i}]\propto r_{i}\)._
The proof of Theorem 1 is provided in Appendix A. The metric of importance, \(\phi_{j}(v)\), comprises the Shapley value and the recently proposed Shapley-CMI [28]. We assert in Theorem 1 that the expected cumulative importance of each party is proportional to the ratio generated by the Dirichlet distribution. The inherent properties of the Dirichlet distribution ensure that: (1) a larger value of \(\alpha_{i}\) leads to a higher expected value of \(r_{i}\), and (2) a smaller value of \(\|\{\alpha_{i}\}_{i=1}^{K}\|_{2}\) results in a larger variance in \(r_{i}\). Hence, the proposed method naturally aligns with the requirements for feature importance.
### Feature Correlation
In the initial stages of our investigation into feature-split methods based on correlation, we first look at the evaluation of feature correlation. Building upon established methods that utilize a metric grounded in correlation matrices [33; 34], we propose a novel metric to examine the correlation when the parties involved possess unequal numbers of features. Our approach hinges on the use of the standard variance of the singular values of the correlation matrix. This serves as an efficient measure of the overall correlation between two parties. Since the feature-wise correlation is an orthogonal research area, we selected Spearman rank correlation [59] due to its capability to handle non-linear correlation.
To elaborate further, we denote the column-wise correlation matrix between two matrices, \(\mathbf{X}_{i}\) and \(\mathbf{X}_{j}\), as \(\text{cor}(\mathbf{X}_{i},\mathbf{X}_{j})\). As a result, we formally define the correlation between two entities, \(\mathbf{X}_{i}\in\mathbb{R}^{n\times m_{i}}\)
and \(\mathbf{X}_{j}\in\mathbb{R}^{n\times m_{j}}\), in terms of their respective parties as Equation 2.
\[\text{Pcor}(\mathbf{X}_{i},\mathbf{X}_{j}):=\frac{1}{d}\sqrt{\sum_{i=1}^{d}{( \sigma_{i}(\text{cor}(\mathbf{X}_{i},\mathbf{X}_{j}))-\overline{\sigma})^{2}}}, \quad d=\min(m_{i},m_{j}) \tag{2}\]
In this equation, \(\sigma_{i}(\cdot)\) denotes the \(i\)-th singular value of a matrix, while \(\overline{\sigma}\) stands for their mean value. Based on Pcor, the overall inter-party correlation Icor among \(K\) parties is defined as the mean pairwise party-wise correlation, as shown in Equation 3.
\[\text{Icor}(\mathbf{X}_{1},\dots,\mathbf{X}_{K}):=\frac{1}{K(K-1)}\sum_{i=1}^{ K}\sum_{j=1,j\neq i}^{K}\text{Pcor}(\mathbf{X}_{i},\mathbf{X}_{j}) \tag{3}\]
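A NumPy/SciPy sketch of Equations 2 and 3 follows, using Spearman rank correlation as stated above and plain SVD (i.e., the small-dimensional case); the function names are ours.

```python
import numpy as np
from scipy.stats import spearmanr

def pcor(Xi, Xj):
    """Party-wise correlation (Eq. 2): spread of the singular values of the
    Spearman cross-correlation matrix between the columns of Xi and Xj."""
    mi, mj = Xi.shape[1], Xj.shape[1]
    rho, _ = spearmanr(np.hstack([Xi, Xj]))  # (mi+mj) x (mi+mj) rank correlation
    cross = rho[:mi, mi:]                    # the block cor(Xi, Xj)
    s = np.linalg.svd(cross, compute_uv=False)
    d = min(mi, mj)
    return np.sqrt(np.sum((s - s.mean()) ** 2)) / d

def icor(parties):
    """Inter-party correlation (Eq. 3): mean pairwise Pcor over ordered pairs."""
    K = len(parties)
    total = sum(pcor(parties[i], parties[j])
                for i in range(K) for j in range(K) if i != j)
    return total / (K * (K - 1))

parties = [np.random.rand(200, 4), np.random.rand(200, 6), np.random.rand(200, 3)]
print(icor(parties))
```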
This correlation-based feature-split algorithm, as depicted in Algorithm 1, is meticulously designed to allocate features across multiple parties while taking into account the correlations inherent among the features. The algorithm's operation is premised on the provision of a defined number of features for each party, represented as \(m_{1},\dots,m_{K}\). Commencing with the initialization of a column permutation matrix, denoted as \(\mathbf{P}\), to an identity matrix (line 1), the algorithm proceeds to define a score function, \(f(\mathbf{P};\mathbf{X})\), which represents the overall correlation Icor after the features have undergone permutation by \(\mathbf{P}\) (line 2). Subsequently, the algorithm determines the lower and upper bounds of the score function (lines 3-4). This forms the basis for calculating the target correlation \(f^{*}(\mathbf{X};\beta)\), which is a linear interpolation between the lower and upper bounds controlled by the correlation index \(\beta\) (line 5). Next, the algorithm locates the optimal permutation matrix \(\mathbf{P}^{*}\) by solving a permutation-based optimization problem. Notably, we employ the Biased Random-Key Genetic Algorithm (BRKGA) [60] for this purpose. The final step of the algorithm splits the features according to the derived optimal permutation and the pre-set number of features for each party (lines 6-7).
```
Input: Global dataset \(\mathbf{X}\in\mathbb{R}^{n\times m}\), correlation index \(\beta\), number of features \(m_{1},\dots,m_{K}\)
Output: Local datasets \(\mathbf{X}_{1},\dots,\mathbf{X}_{K}\)
\(\mathbf{P}\leftarrow\mathbf{I}\);  /* Initiate permutation matrix */
\(f(\mathbf{P};\mathbf{X}):=\text{Icor}(\mathbf{X}_{1}^{P},\dots,\mathbf{X}_{K}^{P})\) s.t. \(\mathbf{X}_{1}^{P},\dots,\mathbf{X}_{K}^{P}\leftarrow\) split features of \(\mathbf{X}\mathbf{P}\) by \(m_{1},\dots,m_{K}\);
\(f_{min}(\mathbf{X})=\min_{\mathbf{P}}f(\mathbf{P};\mathbf{X})\);  /* Calculate lower bound */
\(f_{max}(\mathbf{X})=\max_{\mathbf{P}}f(\mathbf{P};\mathbf{X})\);  /* Calculate upper bound */
\(f^{*}(\mathbf{X};\beta)\leftarrow(1-\beta)f_{min}(\mathbf{X})+\beta f_{max}(\mathbf{X})\);  /* Calculate target correlation */
\(\mathbf{P}^{*}\leftarrow\arg\min_{\mathbf{P}}|f(\mathbf{P};\mathbf{X})-f^{*}(\mathbf{X};\beta)|\);  /* Find the permutation matrix */
\(\mathbf{X}_{1},\dots,\mathbf{X}_{K}\leftarrow\) split features of \(\mathbf{X}\mathbf{P}^{*}\) by \(m_{1},\dots,m_{K}\);
return \(\mathbf{X}_{1},\dots,\mathbf{X}_{K}\)
```
**Algorithm 1** Feature Splitting by Correlation
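For illustration only, the self-contained sketch below mimics the structure of Algorithm 1 with two simplifications that are ours: random search over column permutations replaces BRKGA, and Pearson correlation (via `np.corrcoef`) replaces the Spearman-based Icor purely for brevity.

```python
import numpy as np

def simple_icor(parts):
    """Cheap stand-in for Icor: mean spread of singular values of the pairwise
    cross-correlation blocks (Pearson here; the paper uses Spearman)."""
    K, vals = len(parts), []
    for i in range(K):
        for j in range(K):
            if i == j:
                continue
            mi = parts[i].shape[1]
            rho = np.corrcoef(np.hstack([parts[i], parts[j]]), rowvar=False)
            cross = rho[:mi, mi:]
            s = np.linalg.svd(cross, compute_uv=False)
            d = min(parts[i].shape[1], parts[j].shape[1])
            vals.append(np.sqrt(np.sum((s - s.mean()) ** 2)) / d)
    return float(np.mean(vals))

def split_by_correlation(X, beta, sizes, n_trials=300, seed=0):
    rng = np.random.default_rng(seed)
    bounds = np.cumsum(sizes)[:-1]
    perms = [rng.permutation(X.shape[1]) for _ in range(n_trials)]
    scores = np.array([simple_icor(np.split(X[:, p], bounds, axis=1)) for p in perms])
    f_min, f_max = scores.min(), scores.max()        # crude bound estimates
    target = (1 - beta) * f_min + beta * f_max       # interpolated Icor target
    best = perms[int(np.argmin(np.abs(scores - target)))]
    return np.split(X[:, best], bounds, axis=1)

X = np.random.rand(500, 12)
parts = split_by_correlation(X, beta=0.3, sizes=[4, 4, 4])
print([p.shape for p in parts])
```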
Owing to the fact that the optimization approach requires many invocations of Icor, it is important that this process is conducted with the highest degree of efficiency. For datasets of smaller dimensions, singular values can be directly computed utilizing Singular Value Decomposition (SVD) [61]. However, in the case of high-dimensional datasets, we resort to employing Truncated SVD [62] to estimate the largest top-\(d_{t}\) singular values, with the remaining singular values assumed as zero prior to calculating the standard variance. It is worth noting that we make use of GPU acceleration to expedite the computation of Icor, thereby ensuring that the optimization procedure is as swift and efficient as possible. Our experiments, as presented in Appendix D, validate that both split methods can complete within a reasonable time.
**Empirical Validation.** We conduct extensive experiments to rigorously evaluate the practical performance of our proposed correlation evaluation metric and the correlation-based feature-split algorithm; the details are in Appendix D. Briefly, for the correlation evaluation metric Icor, we observe that Pcor mirrors the behavior of mcor [34] in assessing inner-party correlation and displays a similar trend to mcor for inter-party correlation evaluation. Moreover, we split features of synthetic datasets of different \(\beta\) values using Algorithm 1, contrasting it with a random split. The absolute correlation matrix visualized in Figure 1 suggests that as \(\beta\) increases, so does inter-party correlation. In contrast, random feature splitting does not effectively portray scenarios with low inter-party correlation.
## 5 Real-world VFL Datasets
Real-world VFL datasets, though highly desirable, are limited in scope and type, often encompassing tabular-tabular data, as in Vehicle [17], Movielens [15], Songs [15], and tabular-image data, as in NUS-WIDE [16]. Notably missing are image-image datasets. Addressing this, we introduce a real-world VFL dataset, _Satellite_, adapted from [63], containing 62,832 images across 16 parties, simulating a practical VFL scenario of collaborative location identification via multiple satellites. Further details on Satellite's construction are in Appendix E.
An in-depth analysis of the Satellite dataset, using our proposed metrics and the visualization of the absolute correlation matrix (Figure 2(b)), reveals low inter-party correlation (Icor) and high inner-party correlation, similar to NUS-WIDE (Figure 2(a)). These results underscore the fact that random feature splits, which usually result in larger \(\beta\) values (Figures 1(d) and 2(c)), may not truly represent real-world scenarios, reinforcing the need for systematic VFL dataset generation methods.
**Estimating \(\alpha\) and \(\beta\) for real VFL datasets.** In order to align real datasets with the synthetic ones generated by VertiBench, we put forward methods to estimate \(\alpha\) and \(\beta\) for real VFL datasets.
To calculate \(\alpha\), we determine the significance of each party by adding up the Shapley value of its features. We do this efficiently by estimating Shapley values on a select subset. These Shapley values are then normalized and treated as Dirichlet parameters \(\alpha_{i}\) for each party \(P_{i}\), in line with Theorem 1. To approximate the scale of the Dirichlet parameters and align them with the generation of synthetic datasets, we find a symmetric Dirichlet distribution \(\text{Dir}(\overline{\alpha})\) that has the same variance as \(\text{Dir}(\alpha_{1},\dots,\alpha_{K})\), as given in Proposition 2. This value of \(\overline{\alpha}\) reflects the variance in feature importance across parties. The proof is provided in Appendix A.
**Proposition 2**.: _Given a Dirichlet distribution \(\text{Dir}(\alpha_{1},\dots,\alpha_{K})\) with mean variance \(\sigma\), a symmetric Dirichlet distribution \(\text{Dir}(\overline{\alpha})\) has the same mean variance \(\sigma\) if \(\overline{\alpha}=\frac{K-1-K^{2}\sigma}{K^{3}\sigma}\)._
To estimate \(\beta\), we start by computing the potential minimum and maximum values of Icor by shuffling the features among parties, denoted as \(\text{Icor}_{\text{min}}\), \(\text{Icor}_{\text{max}}\). Next, we estimate the Icor of the actual dataset, \(\text{Icor}_{\text{real}}\), and derive the \(\beta\) value using \(\beta=\min\left\{\max\left\{\frac{\text{Icor}_{\text{real}}-\text{Icor}_{ \text{min}}}{\text{Icor}_{\text{max}}-\text{Icor}_{\text{min}}},0\right\},1\right\}\). It is important to note that in real-world scenarios, \(\text{Icor}_{\text{real}}\) might fall slightly outside the range of \(\text{Icor}_{\text{min}},\text{Icor}_{\text{max}}\) due to the constraints of optimization algorithms. To rectify this, we clip the estimated \(\beta\) to ensure \(\beta\in[0,1]\).
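A small sketch of both estimators follows. The exact normalization scale of the per-party Shapley values is not pinned down above, so scaling them to sum to one before treating them as Dirichlet parameters is our assumption.

```python
import numpy as np

def estimate_alpha_bar(party_importance):
    """Proposition 2: symmetric Dirichlet parameter with the same mean variance
    as Dir(alpha_1, ..., alpha_K), where alpha_i are normalised per-party
    Shapley values (normalisation scale assumed)."""
    a = np.asarray(party_importance, dtype=float)
    a = a / a.sum()
    K, a0 = len(a), a.sum()
    var = a * (a0 - a) / (a0 ** 2 * (a0 + 1))   # component variances of Dir(a)
    sigma = var.mean()                          # mean variance
    return (K - 1 - K ** 2 * sigma) / (K ** 3 * sigma)

def estimate_beta(icor_real, icor_min, icor_max):
    """Interpolated position of Icor_real between the estimated bounds, clipped
    to [0, 1] to absorb optimisation slack."""
    beta = (icor_real - icor_min) / (icor_max - icor_min)
    return float(np.clip(beta, 0.0, 1.0))

print(estimate_alpha_bar([0.7, 0.1, 0.1, 0.1]))
print(estimate_beta(0.12, 0.05, 0.30))
```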
Using the estimated \(\alpha\) and \(\beta\), we display the importance and correlation of existing real datasets within the VertiBench-supported range in Figure 2(c). We note that real datasets represent a limited set of VFL scenarios with a large \(\alpha\) and small \(\beta\), indicating a high degree of feature imbalance and low inter-party correlation. Further, conducting random feature splits, as is common in existing VFL experiments, results in a distinct extreme characterized by high values of both \(\alpha\) and \(\beta\). This observation underscores the importance of VertiBench, which can generate a broad range of VFL scenarios for robust evaluation of VFL algorithms.
## 6 Experiment
This section comprehensively benchmarks cutting-edge VFL algorithms. The experimental settings are delineated in Section 6.1, with results for VFL accuracy and communication efficiency presented in Sections 6.2 and 6.3, respectively.
Figure 1: Absolute correlation matrix of the global dataset with party boundaries indicated by red lines. Icor means inter-party correlation. (a),(b),(c) - correlation-based split; (d) random split.
Additional evaluations, including scalability, training time, and performance on real datasets, are discussed in Appendix G. Each experiment elucidates results and provides relevant insights, highlighting (1) the performance-communication tradeoff of NN-based and boosting-based methods, (2) the necessity for advanced communication-efficient algorithms for imbalanced distributed datasets, and (3) the scalability potential of VFL algorithms.
### Experimental Settings
This subsection includes the datasets, evaluated algorithms, and training methodology. Detailed dataset specifications, environments, and hyperparameter settings can be found in Appendix F.
**Datasets.** Our experimental design incorporates seven public datasets, namely covtype [64], msd [65], gisette [66], realsim [67], epsilon [68], letter [69], and radar [70], detailed in Appendix F. The msd dataset is used for regression tasks, while the others cater to classification tasks. Each dataset is partitioned into 80% training and 20% testing instances. The datasets' features are distributed among multiple parties (typically four), split based on feature importance (\(\alpha\)) or correlation (\(\beta\)). In the correlation-based split, each party is assigned an equal number of dataset features.
**Algorithms.** We assess extensive code-available VFL algorithms in our experiments, including split-NN-based (SplitNN [7], C-VFL [21]), split-GBDT-based (FedTree [29], SecureBoost [22], Pivot [52]), and ensemble-based (GAL [5]) algorithms. AL [6] is excluded due to its inferiority to GAL [5]. For fairness, experiments are conducted without encryption or noise addition. In light of the reported minor variations in accuracy and communication (w/o encryption) among split-GBDT-based methods like FedTree, SecureBoost, and Pivot due to precision issues [22; 52], we have elected to use FedTree as a representative in our evaluation of their performance and communication costs.
**Training.** For classification tasks, we use accuracy as the evaluation metric, while regression tasks are evaluated using the Root Mean Square Error (RMSE). To ensure the reliability of our results, we conduct five runs for each algorithm, using seeds ranging from 0 to 4 to randomly split the datasets for each run, and then compute their mean metrics and standard deviation. Detailed hyper-parameter settings for each algorithms are provided in Appendix F.
### VFL Accuracy
In this subsection, we study the performance of VFL algorithms by varying the data split parameters, \(\alpha\) and \(\beta\), and assessing the resulting impact on the accuracy. Our analysis includes a range of algorithm types, namely split-NN-based, split-GBDT-based, and ensemble-based methods. The performance is detailed in Table 2. From our exploration, we can draw three key observations.
**The influence of split parameters \(\alpha\) and \(\beta\) on VFL performance varies significantly with the choice of algorithm and dataset.** The performance of certain algorithms, such as SplitNN and FedTree, remains relatively consistent across different \(\alpha\) and \(\beta\) values. For others, notably C-VFL, these parameter changes can cause substantial variations in performance. For instance, on the epsilon dataset, C-VFL's accuracy fluctuates by up to 12% and 10% when \(\alpha\) and \(\beta\) are adjusted from 0.1 to 100 and from 0 to 1.0, respectively. Despite the potentially significant influence of the \(\alpha\) and \(\beta\) parameters, their effect on accuracy seems to be contingent upon specific dataset-algorithm combinations.
Figure 2: (a), (b) Matrix of feature absolute correlation with party boundaries marked in red; (c) scope coverage of real and randomly split datasets in terms of feature importance and correlation.
This underlines the importance of extensive evaluations across a broader spectrum of \(\alpha\) and \(\beta\) values, a critical step towards illustrating the robustness of VFL algorithms.
**SplitNN often leads in accuracy across most datasets; however, the performance of split-GBDT-based and ensemble-based methods can vary significantly depending on the dataset.** As anticipated, given its iterative transmission of substantial representations and gradients, SplitNN often outperforms other methods across a majority of datasets. Comparatively, the performance of FedTree and GAL is dataset-dependent. FedTree is well-suited to high-dimensional, smaller datasets like gisette, but struggles with larger datasets like epsilon and covtype. GAL, on the other hand, performs admirably with binary classification and regression tasks, though its performance drops significantly as the number of classes increases, as observed on the covtype and letter dataset.
**The compression of SplitNN-based methods, particularly when employed on imbalanced partitioned datasets, can significantly impact accuracy.** While C-VFL's model structure is akin to SplitNN, the incorporation of compression results in C-VFL having the lowest accuracy among all the tested baselines. This is particularly pronounced in cases of imbalanced importance distribution, i.e., smaller \(\alpha\). For example, when \(\alpha=0.1\), C-VFL's performance on the letter and epsilon datasets is barely superior to random guessing. This highlights a pressing need for further exploration and development of compression methods suited for biased partition scenarios.
### Communication Efficiency
In this subsection, we evaluate VFL algorithms' communication efficiency by analyzing their total communication size within 50 fixed epochs, as shown in Figure 3. Additional communication details, such as the maximum incoming and outgoing communication, are provided in Appendix G.1. Given that FedTree, Pivot, and SecureBoost incur comparable communication costs when excluding encryption overhead, we will utilize FedTree as a representative for the other two for simplicity. Upon examining the figure, two main observations can be drawn.
Table 2: Performance of SplitNN, GAL, FedTree, and C-VFL (reported as mean ± standard deviation) under importance-based splits with \(\alpha \in \{0.1, 1, 10, 100\}\) and correlation-based splits with \(\beta \in \{0, 0.3, 0.6, 1\}\) on the covtype, msd, gisette, realsim, epsilon, and letter datasets.
**Gradient-boosting algorithms, including GAL and FedTree, generally exhibit smaller communication sizes compared to neural-network-based algorithms like SplitNN and C-VFL,** with the exception of the letter dataset with 26 classes. C-VFL's compression techniques, though limiting its communication size, cannot match the efficiency of GAL and FedTree, even with a significant accuracy trade-off. The higher communication cost in neural networks is due to frequent transmission of gradients and representations, a factor that boosts SplitNN's optimal accuracy.
Additionally, **the efficiency of FedTree and GAL is contingent on the global dataset's size**. The primary distinction between FedTree and GAL lies in the type of information received on the primary party side. GAL collates prediction results from all secondary parties, with the size being proportional to the number of instances. Conversely, FedTree gathers the histogram from all secondary parties, with the size being proportional to the number of features. Consequently, GAL incurs lower communication costs than FedTree on high-dimensional datasets, such as gisette and realsim, while maintaining comparable communication costs on other datasets.
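A back-of-the-envelope estimate makes this dependence on dataset shape visible: per training round, a GAL-style party transmits one prediction per instance, while a FedTree-style party transmits per-feature histograms, so the relative cost flips between tall and wide datasets. The byte sizes, bin count, and dataset shapes below are illustrative assumptions rather than measured values from the benchmark.

```python
def gal_round_bytes(n_instances: int, bytes_per_value: int = 4) -> int:
    # Each secondary party sends its predictions for every training instance.
    return n_instances * bytes_per_value

def fedtree_round_bytes(n_features: int, n_bins: int = 32, bytes_per_value: int = 4) -> int:
    # Each secondary party sends a gradient histogram (n_bins entries) per feature.
    return n_features * n_bins * bytes_per_value

# Hypothetical shapes: a wide dataset (few rows, many features) vs. a tall one.
datasets = {"wide (gisette-like)": (6_000, 5_000), "tall (covtype-like)": (580_000, 54)}
for name, (n_rows, n_cols) in datasets.items():
    print(f"{name}: GAL ~{gal_round_bytes(n_rows) / 1e6:.2f} MB/round, "
          f"FedTree ~{fedtree_round_bytes(n_cols) / 1e6:.2f} MB/round")
```

On the wide dataset the instance-proportional cost of GAL is far smaller than the feature-proportional cost of FedTree, and the ordering reverses on the tall dataset, which is consistent with the trend described above.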
## 7 Conclusion
In this study, we introduce VertiBench, a versatile benchmarking framework for Vertical Federated Learning (VFL). VertiBench facilitates the synthetic generation of diverse VFL datasets from a single global set, thereby enabling a comprehensive performance assessment of VFL algorithms across a wide spectrum of application domains. Our empirical results reveal potential significant variations in algorithm performance under different data partition scenarios, underscoring the importance of our benchmark. Additionally, we contribute a new real-world VFL dataset, addressing a deficit in image-image VFL datasets. This study highlights the necessity of examining VFL algorithms under diverse data distribution conditions, providing a crucial trajectory for future research.
|
2301.10912 | Designing for Cognitive Diversity: Improving the GitHub Experience for
Newcomers | Social coding platforms such as GitHub have become defacto environments for
collaborative programming and open source. When these platforms do not support
specific cognitive styles, they create barriers to programming for some
populations. Research shows that the cognitive styles typically favored by
women are often unsupported, creating barriers to entry for woman newcomers. In
this paper, we use the GenderMag method to evaluate GitHub to find cognitive
style-specific inclusivity bugs. We redesigned the "buggy" GitHub features
through a web browser plugin, which we evaluated through a between-subjects
experiment (n=75). Our results indicate that the changes to the interface
improve users' performance and self-efficacy, mainly for individuals with
cognitive styles more common to women. Our results can inspire designers of
social coding platforms and software engineering tools to produce more
inclusive development environments. | Italo Santos, João Felipe Pimentel, Igor Wiese, Igor Steinmacher, Anita Sarma, Marco A. Gerosa | 2023-01-26T02:55:30Z | http://arxiv.org/abs/2301.10912v2 | # Designing for Cognitive Diversity: Improving the GitHub Experience for Newcomers
###### Abstract
Social coding platforms such as GitHub have become _defacto_ environments for collaborative programming and open source. When these platforms do not support specific cognitive styles, they create barriers to programming for some populations. Research shows that the cognitive styles typically favored by women are often unsupported, creating barriers to entry for woman newcomers. In this paper, we use the GenderMag method to evaluate GitHub to find cognitive style-specific inclusivity bugs. We redesigned the "buggy" GitHub features through a web browser plugin, which we evaluated through a between-subjects experiment (n=75). Our results indicate that the changes to the interface improve users' performance and self-efficacy, mainly for individuals with cognitive styles more common to women. Our results can inspire designers of social coding platforms and software engineering tools to produce more inclusive development environments.
_General Abstract_-Diversity is an important aspect of society. One form of diversity is cognitive diversity--differences in cognitive styles, which helps generate a diversity of thoughts. Unfortunately, software tools often do not support different cognitive styles (e.g., learning styles), disproportionately impacting those whose styles are not supported. These individuals pay a cognitive "tax" each time they use the tools. In this work, we found "inclusivity bugs" in GitHub, a social coding platform. We then redesigned these buggy features and evaluated them with users. Our results show that the redesign makes it easier for the group of individuals whose cognitive styles were unsupported in the original design, with the percentage of completed tasks rising from 67% to 95% for this group.
_keywords: open source, diversity and inclusion, human factors, cognitive styles, human-computer interaction._
## I Introduction
Open Source Software (OSS) projects play an important role in improving inclusion in workforce development, where contributors join projects to learn new skills [1], showcase their skills [2], or improve their career paths [3]. Successful participation in OSS projects also helps newcomers gain visibility among their peers [4, 5], benefits society by developing a product used by many users [6], and improves their chances of achieving professional success [7, 5].
However, newcomers to OSS face several challenges [8], and these challenges differently affect underrepresented populations, including those whose cognitive styles are ill-supported by the project's information landscape [9, 10]. The consequences of these challenges to underrepresented populations may include a steeper learning curve, lack of community support, and obstacles to figuring out how to start contributing, all of which add to the diversity imbalance in OSS [11]. Social diversity has been shown to positively affect productivity, teamwork, and quality of contributions [12, 13]. On the other hand, low diversity has unfortunate effects: (i) OSS projects miss out on the benefits of a more expansive set of contributors and the diversity of thought that these potential contributors could bring; (ii) minorities miss out on the learning and experience opportunities that OSS projects provide; and (iii) minorities miss out on job opportunities when recruiters use OSS contributions to make hiring decisions [14, 15]. Although the lack of diversity in OSS has been well-documented for years, there is limited progress in closing this gap [11, 16, 17].
Past work [9, 10] has shown that the way information is provided in OSS projects (e.g., documentation, issue description) benefits certain cognitive styles (e.g., those who learn by tinkering) over others (e.g., process-oriented learners). The information architecture of OSS project pages (e.g., project description pages and descriptions of issues in the issue tracker) usually appeal to those who have high self-efficacy and are motivated by individual pursuits such as intellectual stimulation, competition, and learning technology for fun. According to Burnett et al. [18], these pursuits cater to characteristics associated with men, which can neglect women and other contributors who may have different motivations and personal characteristics (see also [19, 20]). This lack of support for diverse user characteristics leads to inclusivity bugs [21, 22]--software behaviors that disproportionately disadvantage a particular group of users of that software.
In our study, we investigate inclusivity bugs in the GitHub platform that affect newcomers to this platform. Inclusivity bugs in the platform can have far-reaching impacts on thousands of OSS projects (as of today, more than 200 Million repositories are hosted on GitHub). The following research questions guided our investigation:
**RQ1.** What inclusivity bugs does GitHub pose for newcomers trying to make their first contribution?

**RQ2.** How does fixing these inclusivity bugs affect the performance and self-efficacy of newcomers with different cognitive styles?
We analyzed four tasks newcomers often perform to make their first pull request on GitHub and found inclusivity bugs in all of them. We redesigned the impacted interface to address the identified bugs and implemented a browser plugin to change the platform interface based on our redesign (we do not have access to change GitHub itself). We evaluated the original and the redesigned interface through a between-subject user study with 75 participants.
Our main goal is to mitigate cognitive barriers newcomers face due to inclusivity bugs. As we show in this paper, GitHub, a platform newcomers use to contribute to OSS, creates barriers for users with different characteristics, disproportionately impacting those from underrepresented groups. These barriers may discourage newcomers and add to the existing diversity gaps, as these tools and infrastructure are the main channels through which OSS newcomers interact with the community. This paper provides insights into how newcomers' performance can be improved when their cognitive styles are supported. Providing adequate support for diverse cognitive styles can help improve the overall community diversity.
## II Related Work
This section discusses work related to newcomers' onboarding in OSS, diversity and bias in OSS, and cognitive styles.
**Newcomer's Onboarding:** Previous work has investigated OSS contribution challenges [8, 23, 24, 25, 26]. Steinmacher et al. [8] conducted a mixed-method study and identified 58 barriers faced by newcomers. Researchers have also investigated specific types of challenges. For example, toxic environments have been studied in the literature [27, 28, 29], which evidenced situations in which OSS project members were unfriendly, unhelpful, or elitist [30]. Jensen et al. [25] analyzed the speed at which emails sent by newcomers are answered, the role played by gender or nationality in the kinds of answers newcomers receive, and the reception newcomers face. A better understanding of the barriers enables communities and researchers to design and produce tools and conceive strategies to better support newcomers [31]. Our work complements existing literature by focusing on making social coding platforms more inclusive by supporting the onboarding of newcomers with different cognitive styles.
**Diversity/Bias in OSS:** Low diversity in OSS is a concern raised by different studies in the literature when considering gender [11, 22, 27, 32], language [30], and location [30]. Past work has shown that diverse teams are more productive [13]. However, minorities face challenges in becoming a part of an OSS community [11]. Most OSS communities function as meritocracies [33], in which minorities report experiencing "imposter syndrome" [13]. These competitive settings have been known to discourage minorities such as women in OSS [34, 35]. Participant observation of OSS contributors found that "men monopolize code authorship and simultaneously de-legitimize the kinds of social ties necessary to build mechanisms for women's inclusion" [36]. Generally, cultures that describe themselves as meritocracies tend to be male-dominated ones that women experience as unfriendly [37]. In our work, we aim to reduce the bias found in social coding platforms used by a wide range of users to support them regarding their different cognitive styles to interact with OSS projects.
**Cognitive styles:** Research has shown that developers have different cognitive styles [38] and motivation [1], and that cognition plays an essential role in software engineering activities [39]. For example, more women are task-oriented, whereas more men are motivated to learn a new technology for fun [9, 10, 40]. These differences in cognitive styles may negatively impact how women and men contribute to OSS, and it mainly happens when OSS projects and the underlying infrastructure support certain cognitive styles (e.g., selective information processing or learning by tinkering) and impede others (e.g., comprehensive information processing or process-oriented learning). Our work considers a variety of cognitive styles to propose changes to GitHub to support diverse newcomers.
## III Research Method
We followed a three-step method, as illustrated in Figure 1: (1) we conducted a GenderMag analysis, which has been extensively used to detect gender biases in commercial and OSS products [10, 41, 42, 43, 44]; (2) we proposed fixes to the GitHub-related inclusivity bugs and developed a browser plugin to implement these changes in the GitHub interface; and (3) we conducted an experiment to compare the original GitHub interface with the interface enriched by the plugin.
### _Step 1 - Identifying GitHub Inclusivity Bugs_
To identify inclusivity bugs, we used GenderMag [38], a systematic inspection method tool builders can use to evaluate their software for inclusivity bugs. GenderMag is based on research showing that individual differences in cognitive styles (referred to as facets) cluster by gender. The method encapsulates these facets into three personas: Abi, Pat, and Tim. Abi and Tim occupy opposite ends of the facet-value spectrum, with the Abi persona aligned with facet values that women tend to favor and Tim embodying facet values typically favored by men. The Pat persona includes a mix of these facet values.
The five facets that GenderMag uses are: (i) **Motivation:** Abis are motivated to use technology for what they can accomplish with it, whereas Tims are often motivated by their enjoyment of technology per se [18, 45, 46]; (ii)
Figure 1: Research method overview.
**Information processing styles:** Abis process new information comprehensively--gathering fairly complete information before proceeding--but Tims use selective information processing--following the first promising information, then backtracking if needed [47, 48]; (iii) **Computer self-efficacy:** relates to a person's confidence about succeeding at a specific task, which influences their use of cognitive strategies, persistence, and strategies for coping with obstacles. Abis have lower computer self-efficacy as compared to their peers; (iv) **Risk aversion:** Abis are risk-averse when trying out new features as compared to Tims [49, 50], which impacts their decisions about which feature sets to use; and (v) **Learning style (by process vs. by tinkering):** Abis prefer process-oriented learning, whereas Tims like to playfully experiment ("tinker") with software features new to them [18, 51, 52]. Each cognitive style has advantages, but either is at a disadvantage when not supported by the software [53].
GenderMag is used by evaluation teams to walk through a use case in the project they are evaluating using Abi, Pat, or Tim personas. At each step of the walkthrough, the team writes down the answers to three questions:
* **SubgoalQ:** Will Abi have formed this subgoal as a step to their overall goal? (Yes/no/maybe, why, facets involved).
* _ActionQ1:_ Will Abi know what to do at this step? (Yes/no/maybe, why, facets involved).
* _ActionQ2:_ If Abi does the right thing, will s/he know s/he did the right thing and is making progress toward their goal? (Yes/no/maybe, why, facets involved).
When the answer to any of these questions is negative, the team identifies a potential bug; if the "why" relates to a particular cognitive style, this shows a disproportionate effect on people who have that cognitive style--i.e., an _inclusivity bug_. Thus, a team's answers to these questions become their inclusivity bug report, which they can process and prioritize the same way they would with any other type of bug report.
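As a rough illustration of how walkthrough answers turn into an inclusivity bug report, the sketch below records one answer per question and flags any step answered negatively together with the facets it implicates. The field names and data structure are our own assumptions for illustration, not an artifact prescribed by the GenderMag method.

```python
from dataclasses import dataclass, field

@dataclass
class WalkthroughStep:
    """One SubgoalQ/ActionQ answer from a GenderMag walkthrough (illustrative)."""
    use_case: str
    question: str          # e.g. "SubgoalQ", "ActionQ1", "ActionQ2"
    answer: str            # "yes", "no", or "maybe"
    why: str
    facets: list = field(default_factory=list)

def inclusivity_bugs(steps):
    # A negative or uncertain answer tied to specific facets becomes a bug-report entry.
    return [s for s in steps if s.answer in ("no", "maybe") and s.facets]

steps = [
    WalkthroughStep("UC#1 edit a file", "ActionQ1", "no",
                    "Abi cannot tell from this page that the README can be edited",
                    ["self-efficacy", "process-oriented learning"]),
    WalkthroughStep("UC#1 edit a file", "ActionQ2", "yes", "Edit view confirms progress"),
]
for bug in inclusivity_bugs(steps):
    print(f"[{bug.use_case}] {bug.question}: {bug.why} (facets: {', '.join(bug.facets)})")
```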
We selected the Abi and Tim personas [54] as they represent opposite ends of the GenderMag facet ranges. We customized the persona's profile to represent our target users: newcomers looking to make their first contribution using GitHub and have never performed a pull request (PR) before. We identified four use cases (i.e., edit a file, submit a pull request, fork repository, upload a new file) as described in Table I.
Given these personas and use cases, 6 members of our research group conducted the GenderMag walkthroughs on GitHub-hosted projects using the procedures defined by Burnett et al. [54]. The group had prior training and experience in conducting GenderMag analysis. As a first step, the group identified the subgoals and actions for each use case. We then performed the GenderMag evaluations for each use case by first using the Abi persona and then another set of evaluations with the Tim persona. We identified 12 inclusivity bugs in different parts of the GitHub interface.
### _Step 2 - Fixing GitHub Inclusivity Bugs_
We redesigned the GitHub interface to support the GenderMag facets that were previously unsupported and caused the inclusivity bugs we identified in Step 1. As stated by Guizani et al. [22], the outcomes of GenderMag analysis point not only to inclusivity bugs but also to why the bugs might arise and what specific problem-solving facet(s) are implicated.
As an example of redesign, for UC#1, we identified an issue related to Abi's process-oriented learning style and self-efficacy facets that would affect her ability to edit a file in an OSS project. The redesign focused on Abi's process-oriented learning facet to give explicit guidance on submitting a pull request by leveraging the design principle of "visibility." We did so by: (1) presenting the README file information to users more explicitly through a new tab called home (Figure 2), which highlights the importance of the README file, and (2) including a tooltip to explain that the user can edit the file: _To edit this file, go to the "code" tab above, and select the file you want to edit._ Our proposed solution also addressed Abi's self-efficacy facet by showing that she is on the right track to completing the subgoal (#1.1, make changes to README).
Once our research team agreed with the redesign solutions proposed for each issue identified in Step 1, we started the development of a plugin to change GitHub's interface. The plugin was developed as a Chrome extension to change the original GitHub interface. The plugin is developed in JavaScript and uses the GitHub API to collect data about a user in JSON format. It is available on GitHub1 for anyone interested in using it and making contributions, as well as in the supplementary material2.
Footnote 1: [https://github.com/NAU-OSL/ResearchPlugin](https://github.com/NAU-OSL/ResearchPlugin)
Footnote 2: [https://figshare.com/s/4e7724bde0b1d47ccaeb](https://figshare.com/s/4e7724bde0b1d47ccaeb)
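The plugin itself is written in JavaScript, but the kind of GitHub REST call it relies on to collect user data as JSON can be sketched in a few lines. The endpoint below is GitHub's public users endpoint; the particular fields extracted are an illustrative assumption about what a newcomer-oriented plugin might surface, not a description of the plugin's actual code.

```python
import requests

def fetch_github_user(login: str) -> dict:
    """Fetch public profile data for a GitHub user as JSON (illustrative sketch)."""
    resp = requests.get(
        f"https://api.github.com/users/{login}",
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # Keep only a few fields a newcomer-support plugin might display.
    return {k: data.get(k) for k in ("login", "name", "public_repos", "created_at")}

if __name__ == "__main__":
    print(fetch_github_user("octocat"))
```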
### _Step 3 - Assessing the Inclusivity Bug Fixes_
Finally, we conducted an experiment to evaluate how the modified interface changed the user experience for Abis. Even though inclusivity bugs can be fixed in multiple ways, we
Fig. 2: GitHub interface modified by the developed plugin.
expected that we would reduce an eventual performance gap between Abis and Tims who use the modified interface, since the modifications were supported by the analytic/theory-based method.
We follow the guidelines provided in [55] to report our experiment. We conducted an experiment to _analyze_ how the proposed plugin supports newcomers with different cognitive styles. We compared users using GitHub's original version to users using the GitHub plugin, _for the purpose of_ evaluating, _with respect to their_ effectiveness in completing the use cases, _from the point of view of the_ researchers, _in the context of_ the GitHub environment when a newcomer attempts to make their first contribution.
The participants interacted with a copy of a community-based OSS project named JabRef3. Participants completed the four use cases used for finding the bugs (Table I):
Footnote 3: [https://github.com/JabRef/jabref](https://github.com/JabRef/jabref)
* UC#1 - Submit a pull request: The newcomer needs to edit a file in the project and submit the changes via a pull request (PR);
* UC#2 - View changed files: We asked participants to analyze an open pull request and find which files were changed when this pull request was created;
* UC#3 - Request help to solve the PR: The participant needs to find an experienced project contributor and invite them to work together to solve the pull request; and
* UC#4 - Upload a file: The participant should try to upload a new file to the repository.
We conducted a pilot study with five researchers outside our group to collect feedback about the instruments (questionnaires and use case definitions) and study design. The pilot study helped to improve our instruments. We used an iterative process to apply the necessary changes after each pilot session. This resulted in more detailed scripts and documentation about the use cases. We ran new pilot sessions until we reached a consensus that the instruments were reliable enough to start the actual study. A replication package with this material is available online (see the previous subsection). The replication package also includes the developed GitHub plugin and installation instructions.
We recruited 75 undergraduate students from diverse STEM majors from 5 distinct universities in the US and Brazil. The majority of participants were pursuing Computer Science majors. Our recruiting criteria were students who knew how to program but had never opened a pull request on GitHub, so previous experiences with the interface would not bias them. We opted to recruit undergraduate students for our study because the literature mentions that educators have been using OSS to train students, and these students are potential OSS project contributors [56]. We asked the students if they had previous experience with GitHub and OSS. Some of them responded that they had used GitHub once (Plugin = 10 and Control = 7), but when we asked what they had used GitHub for, they said that they had just created an account but never contributed to any project, so they fit our criteria (never opened a pull request). We also asked about their experience with OSS, and a few participants answered that they had some experience (Plugin = 4 and Control = 3). When we asked what kind of experience they had, they informed us that they had studied OSS concepts in previous courses in college.
We used a between-subject design to balance participants in the original version (Control group) and GitHub plugin version (Plugin group) by GenderMag facets [57, 44]. We used GenderMag's questionnaire to assess participants' facets with 9-point Likert items [58]. We ended up with different numbers of participants between the two treatments, as some of the participants were no-shows (Table II).
Unfortunately, we had a small sample of women participants (18 vs. 57) due to the gender distribution of students in the classes we recruited from. We attempted to balance the participants in each treatment based on their cognitive facets, achieving an almost equal distribution of Abis (37) and Tims (38) across the treatment groups. Table II presents the participants' characteristics in each group.
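A minimal sketch of how such facet-based balancing might be implemented is shown below: each participant's 9-point Likert answers are averaged into a facet score, the score is thresholded into an Abi-like or Tim-like label, and participants are then alternately assigned to the Control and Plugin groups within each label. The averaging rule, the midpoint threshold, and the alternating assignment are our assumptions; the study does not specify these implementation details.

```python
from statistics import mean

def persona_label(likert_answers, midpoint: float = 5.0) -> str:
    """Collapse 9-point Likert facet items into a coarse Abi/Tim label (assumed rule)."""
    return "Abi" if mean(likert_answers) < midpoint else "Tim"

def balanced_assignment(participants):
    """Alternate treatment assignment within each persona label to keep groups balanced."""
    groups = {"Control": [], "Plugin": []}
    counters = {"Abi": 0, "Tim": 0}
    for pid, answers in participants:
        label = persona_label(answers)
        treatment = "Control" if counters[label] % 2 == 0 else "Plugin"
        counters[label] += 1
        groups[treatment].append((pid, label))
    return groups

# Hypothetical participants: (id, facet questionnaire answers on a 1-9 scale).
sample = [("P1", [2, 3, 4, 3, 2]), ("P2", [8, 7, 9, 6, 8]),
          ("P3", [3, 4, 2, 5, 3]), ("P4", [7, 8, 6, 9, 7])]
print(balanced_assignment(sample))
```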
In the beginning, we conducted each user session one participant at a time with a facilitator and an observer. The participants were asked to perform the four use cases described in Table I. We collected audio recordings and observation notes from the sessions and qualitatively analyzed participants' data. We conducted those individual sessions with 50% of our participants. Then we decided to optimize the data collection by conducting the experiment with students from two classes where we provided an online questionnaire with all the instructions they had to follow to participate in the experiment. A researcher was present the whole time to assist the students in case they needed help or had any questions.
We performed a quantitative analysis by collecting the percentage of use cases completed by participants in each group and applied a self-efficacy survey to measure newcomers' confidence in using GitHub.
We also administered a questionnaire in which participants provided their self-perception about their ability to complete use cases using GitHub, i.e., self-efficacy to complete specific tasks. The questionnaire was based on the work of Bandura [59] and had 5 items. Participants answered those questions before and after the experiment using a 5-point Likert scale ranging from strongly disagree to strongly agree (with a neutral option). The goal was to capture the students' self-perceived efficacy about the use case before and after they attempted executing it. The items were prefixed with "I am confident that I can:" followed by: (i)...use GitHub to contribute to projects; (ii)...open a pull request using the GitHub web interface; (iii)...change a file and submit the changes to the project using GitHub; (iv)...find someone to
help me using the GitHub web interface; and (v)...submit a new file to a project using GitHub.
In addition to the quantitative analysis, we qualitatively analyzed participants' comments to the open questions of the survey following open coding procedures [60]. We asked participants after each use case to explain any difficulties they experienced in accomplishing the task and what in the interface helped them. Our goals were to understand (i) students' difficulties in using the original and the modified interfaces; and (ii) what in the interfaces helped students the most to complete each use case. The analysis was performed by two authors and validated by a third author. The analysis took around one month.
For our study, we considered the following variables: (i) the **dependent variables** comprise the successful completion of each use case by the participants (Y/N), and (ii) the **independent variables** are the use of the Plugin and the GenderMag facets (whether the participant is Tim- or Abi-like).
## IV Results
### _Discovering and Fixing inclusivity bugs on GitHub_
We answer RQ1 based on the results of the GenderMag evaluation of the GitHub interface, which uncovered 12 inclusivity bugs. Table III summarizes the inclusivity bugs, associated GenderMag facets, and how we fixed them. The fixes leveraged the design principles of visibility and feedback, along with the tenet of clarity of instructions and reduction of information load where appropriate. The specific UI design changes were inspired by successful fixes to inclusivity bugs as compiled in the GenderMag design catalog4. The parts of the GitHub interface where these bugs were found can be accessed in the supplementary material5.
Footnote 4: [https://gendermag.org/dc/](https://gendermag.org/dc/)
Footnote 5: [https://figshare.com/s/4e7724bdc0b1d47ecaeb](https://figshare.com/s/4e7724bdc0b1d47ecaeb)
In UC#1 - Submit pull request, we investigated the GitHub interface that an average user interacts with to edit a file and open a pull request. This use case involved five inclusivity bugs. Among the reported bugs, we found Abi would have difficulty understanding the workflow (what to do next) after the file was edited (Bug #4). Abis are comprehensive information processors and process-oriented learners; in this interface, they would not have all the information needed to complete the task and are unlikely to tinker to figure out how to complete it. To address this bug, we proposed: (1) a progress bar indicating the steps of the workflow (improved feedback), allowing Abis to know upfront the process needed to complete the use case; and (2) a tooltip (improved visibility) to explain what happens when the file is edited, providing additional information if Abis need it (Figure 3). Instructions as a tooltip reduce clutter and do not disadvantage tinkerers like Tim.
UC#2, View changed files, included one inclusivity bug (Bug #6), where Abi has difficulty understanding what to do next after opening the pull request. In GitHub, after a user opens a pull request, they are directed to a different page, which does not inform what can be done on that page. On reaching this page, Abis, who are process-oriented learners with lower self-efficacy, would be lost, not knowing what to do next. They would not know if they were progressing towards their goal and would be unlikely to tinker around to figure out how to close the pull request. Our solution adds a tooltip to the navbar that describes some actions that can be made on the pull request page (improved visibility) (Figure 4).
In UC#3 - Request help to solve the PR, we found 2 inclusivity bugs that could affect users' performance with Abi's cognitive style. The pull request interface is not straightforward. Once the user opens the pull request, it is not clear that it is possible to mention someone in the comment box to ask for help. This lack of information affects users with Abi's facets of learning by process and computer self-efficacy. To address this bug, we included a tooltip in the @ symbol icon to display "_Use @ to mention a contributor to help_," as illustrated in Figure 5.
Moreover, after the mention is made, the GitHub interface does not give any feedback about what happens next, affecting comprehensive information processors such as Abi. This can impair Abis' ability to continue with the pull request given their lower self-efficacy, where such users are likely to blame themselves and quit. Even if they asked for help, they would be unsure if the mentioned developer would receive a notification to help them. To fix this bug, we proposed the addition of a confirmation message (improved feedback) to the top of the page informing that: _The mentioned user will receive a notification and may help you to work on the pull request_, as illustrated in Figure 6.
In UC#4 - Upload a file, to upload an image to an OSS project, the user needs to have push access to it. For this use case, we found 4 inclusivity bugs. The major bug is related to the second subgoal: it is not possible to upload a file because the newcomer has neither a fork of the repository nor push access to the original repository. The interface only presents the message that the user needs to have push access to the repository but gives no direction on how to obtain it. This bug impacts Abi's facets of comprehensive information processing style, risk averseness, and task-oriented motivations. We proposed the following fixes to address this bug: we changed the message to give better feedback informing the user that it is necessary to fork the repository, and made the fork button green to highlight that it is enabled on the page. The new message states _In order to upload files, click the fork button in the upper right_ (see Figure 7).
Figure 4: UC#2 / Bugfix #6 - plugin interface: inclusion of tooltip to guide users.
Figure 5: UC#3 / Bugfix #7 - plugin interface: inclusion of tooltip to guide users.
Figure 3: UC#1 / Bugfix #2 - plugin interface: inclusion of progress bar and tooltip.
Figure 6: UC#3 / Bugfix #8 - plugin interface: inclusion of confirmation message to provide feedback to users.
### _Effects of removing GitHub inclusivity bugs_
_Impact on completion rates._ In RQ2, we investigate how our redesign in Step 1 impacted Abis and Tims. Table IV presents the number of participants who correctly completed the tasks, comparing the treatment groups and the different persona facets. We evaluated the effectiveness of both groups in completing the tasks using the _Chi-Square test_ to assess the independence of the two categorical variables [61] (see Table V).
For UC#1, there are no statistically significant differences between Abis and Tims in either treatment group (Control vs. Plugin). All participants in both treatments had high success rates. Tims had 100% completion rates in both treatments. Abis in the Plugin group performed better than those in the Control group (94.7% vs. 83.3%), but the difference is not statistically significant. This reflects that UC#1 was a simple enough use case, with the majority of Abis able to overcome the inclusivity bugs (Bug #1 to Bug #5) to complete the task. Recall that inclusivity bugs need not be showstoppers, but they add an additional cognitive tax every time a user faces them.
For UC#2 and UC#3 in the Control group, Abis performed worse than Tims by about 33%, with the difference being statistically significant (p-value \(<0.05\)). However, there is no difference when we compare the Abis and Tims in the Plugin group. Both Abis and Tims have a 100% completion rate for UC#2 and 95% for UC#3. This suggests that our redesign helped Abis overcome barriers to completing these tasks.
All participants struggled to complete UC#4 in the Control group; Abis' completion rate was 27.7% as compared to Tims' 33.3%. The redesign helped both Abis and Tims, with Abis' improvements at 61.7% and Tims' at 66.6%. The improvements in the Plugin group compared with the Control group were statistically significant (p-value \(<\) 0.001). This result highlights that designing an interface to improve the experience of one underserved population can help make the software better for the larger population.
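For reference, the contingency-table comparisons described above can be reproduced with a standard chi-square test of independence. The counts below are hypothetical stand-ins for one persona-by-completion table (the actual counts are in Table IV), so only the procedure, not the numbers, should be read from this sketch.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = persona (Abi, Tim), columns = (completed, failed)
# for one use case within a single treatment group.
table = [[12, 6],   # Abi-like participants
         [17, 1]]   # Tim-like participants

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")
# A p-value below 0.05 would indicate that task completion is not independent of persona.
```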
_Impact on self-efficacy._ Figure 8 presents the results of the self-efficacy questionnaire that participants filled out at the beginning ('pre') and end ('post') of the study, disaggregated by treatment groups ('Control' and 'Plugin') and per persona ('Tim' and 'Abi').
At the beginning of the experiment ('pre'), Abis (2.9) had lower self-efficacy as compared to Tims (3.6). After performing the experiment tasks ('post'), both types of participants gained confidence. Given that these participants had never interacted with GitHub to submit a pull request before, it is expected that they were not confident in completing the tasks at the start of the experiment. But, after completing use cases (UC#1 to UC#3), their self-efficacy improved. It is heartening to note that the failure to complete UC#4, the last use case, did not dampen those participants' starting self-efficacy.
The improvement in participants' self-efficacy was larger ('pre' vs. 'post') in the Plugin group for both Abis and Tims. We calculated the Wilcoxon signed-rank test, a frequently used nonparametric test for paired data (e.g., pre- and post-treatment measurements) [62], which indicates that the difference in improvement ('pre' vs. 'post') between the Control and Plugin groups is significant for both types of participants; the improvement for Abis has p-value \(=0.005\), and for Tims p-value \(<0.001\). We calculated Cliff's delta effect size measure [63] to quantify the magnitude of these differences within the Plugin group. The effect size of the 'pre' vs. 'post' improvement is large for Tims (delta = 0.682), as well as for Abis (delta = 0.542).

Fig. 7: UC#4 / Bugfix #9 - plugin interface: change of message and color of the fork button.

Fig. 8: Self-efficacy results.
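To make the pre/post analysis concrete, the sketch below applies the Wilcoxon signed-rank test to paired self-efficacy scores and computes Cliff's delta directly from pairwise comparisons between the two samples. The example scores are invented for illustration; only the procedure mirrors the analysis reported above.

```python
from scipy.stats import wilcoxon

def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) - P(x < y) over all pairs drawn from the two samples."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Hypothetical per-participant mean self-efficacy (1-5 Likert) before and after the tasks.
pre  = [2.4, 3.0, 2.8, 2.2, 3.4, 2.6, 3.0, 2.8]
post = [3.6, 4.0, 3.4, 3.0, 4.2, 3.8, 3.6, 3.2]

stat, p_value = wilcoxon(pre, post)          # paired, two-sided by default
print(f"Wilcoxon W = {stat:.1f}, p = {p_value:.4f}")
print(f"Cliff's delta (post vs. pre) = {cliffs_delta(post, pre):.3f}")
```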
_Impact of the proposed interface on participant experiences._ The questionnaire that participants filled out after every task (Section III-C) confirms that the control group participants faced more challenges. In the following, we discuss participants' reflections on the difficulties they faced when performing the experiment tasks and on how the Plugin design helped them.
In UC#1, the main challenge Tims faced in the Control group was the difficulty of finding the editor and the README files (Bug #1 of Table III). P44 mentioned "_Finding the README file, definitely, because I didn't know where to look for all these files, I didn't think it would be like in the middle of those files._" The plugin solved this problem by presenting the README file information to users more explicitly (improving visibility) through a new tab called home. None of the participants in the Plugin group mentioned finding the README to be a problem.
The Abis in the Plugin group mentioned that the improved visibility of features in the redesigned interface (button colors, tooltips) helped them complete UC#1. The tooltips allowed comprehensive information processors to gather the necessary information before starting the task. It also improved their self-efficacy by letting participants know they were on the right path. Indeed, P29 mentioned that "_the tooltip guides me into the execution of the task_".
The redesigned interface, however, did not help Abi-like participants in figuring out how to edit the file and save it (Bug #4). P1 said, "_Starting the Edit process was really hard. And once you have a little computer knowledge and you actually get into the Edit tab, you can look at the various files you want to edit and then go through the process_". This comment highlights Abi's risk-averseness when having to use new features.
In UC#2, a difficulty that both Abi- and Tim-like participants faced in the Control group was finding the changed files in the pull request interface (Bug #6). Tims and Abis in the Plugin group mentioned that the changed files in the navigation menu (improved feedback) and the tooltips (improved visibility) helped them to complete the task. This is an example of how a solution designed to help one class of users (Abis) helps a broader population (also Tims).
The main difficulty in UC#3 was finding out how to request help. Some participants reported that their first idea was directly contacting the experienced user. P43 mentioned: "_I thought there would be a way that I could just like leave them a personal message and ask for help rather than posting. It [comment in the interface] looks like a public comment._" Other participants tried to contact the user directly by going to their GitHub profile page and looking for a direct message option, which GitHub does not offer. A majority did not realize they could use the '@' button in the panel to direct their comments to a specific contributor.
Participants in the Plugin group used the tooltip associated with the mention icon ('@') to figure out this feature. The improved visibility of the '@' icon helps process-oriented learners, who would be hesitant to tinker around the interface to find and use the '@' button. With this fix, an Abi participant mentioned that the task was intuitive (P40): "_Once I recognized that I needed to do this task as well, it was pretty intuitive._".
In UC#4, participants in the Control group faced more difficulty figuring out how to obtain push access: one Abi participant and ten Tims mentioned having that difficulty. Only three of these ten participants overcame this challenge and successfully completed the task. None of the participants mentioned this challenge in the Plugin group. Abis in the Plugin group mentioned that the improved visibility afforded by the green fork button and the feedback message was helpful. P26 said: "_interface messages when trying to upload the file helps a lot_". Tims in the Plugin group said the same, exemplified by P5: "_So when I went back, I saw that the fork was highlighted in like the same green color. (...) It really just puts me back in the right direction_".
## V Discussion
In our study, we investigated to what extent the GitHub interface embedded inclusivity bugs and how these inclusivity bugs impacted users' performance with different cognitive styles (i.e., Abis and Tims). After applying the GenderMag method on GitHub, we found 12 inclusivity bugs that affect the Abi persona.
_Alignment with past research._ Our findings are similar to those of past GenderMag research identifying inclusivity bugs in OSS projects. Padala et al. [9] found Information Processing, Self-efficacy, and Learning Style facets favored by Abi to be the most frequent facets that were not supported by OSS projects, and the lack of support of these facets was instrumental in causing the top reported barriers to contributing. More specifically, they found that: (1) comprehensive information processors would feel disoriented because of insufficient upfront information provided in the project README. In our study, Abi-like participants also reported feeling lost in the Control group; (2) participants with lower computer self-efficacy were worried about completing the task and described a lack of knowledge of the technologies as a reason for it. These findings also appear in our results--participants in the Control group felt scared by the GitHub interface; (3) process-oriented learners were hampered by a lack of clear instructions on how to contribute. We observed that Abi-like participants in the Control group also got stuck completing some of the tasks because of a lack of instructions on how to use many of the GitHub features.
Fixing these inclusivity bugs not only helps Abi-like users, whose facets were used to redesign the software, but can also make the software better for the larger population. Vorvoreanu et al. [44] found that a redesign of their software to fix the inclusivity bugs found via GenderMag helped women (who had twice the failure rate of men in the 'pre-fix' version) do better, removing the gender gap in the 'post-fix' version. Moreover, both men and women participants had fewer failures. We found similar results, where redesigning the GitHub interface to accommodate Abi-like users also helped the Tim-like participants in our study (66.67% improvement among Tims in UC#4).
_Cognitive diversity bugs can become gender-bias bugs._ Past research using GenderMag has shown that the inclusivity bugs created when Abis' cognitive styles are unsupported also become gender-bias bugs because individual differences in how people problem solve cluster by gender [51, 46, 18, 38]. In our data set, we see that the distribution of Tim facets aligned more closely with the distribution of men: 63% of the men had a majority of Tim facets rather than a majority of Abi facets. In our study, perhaps due to the small sample size and our recruitment pool, we had an equal distribution of Abis and Tims among the women participants.
_The need to make OSS tools and technology inclusive._ OSS has a severe gender diversity imbalance, with the percentage of women ranging around 10%. One of the challenges women face is a lack of sense of belonging, which may make them less inclined to share their opinions with the rest of the team. We noticed such reticence among our women participants in opining about their difficulties. In contrast, the men (the majority comprised of Tim) in the study felt more empowered to talk about the challenges they faced and suggest how they would improve the GitHub interface. One reason for this difference in behavior can be because women tend to have lower computer self-efficacy than men within their peer sets [9]. This can affect their behavior with technology [64, 46, 19, 65], indicating that women feel less comfortable sharing their opinions and are inclined to think that it is their fault for not being able to use a certain technology. By making the OSS tools and technology more inclusive, we can break the barriers that Abi-like users, typically women, face when using the tools and technology, which can add to their feelings of not belonging [66] and impostor syndrome [11].
Making GitHub inclusive of varied cognitive skills is important for OSS to attract newcomers. Making GitHub inclusive will remove additional barriers newcomers face when their cognitive styles are not supported by the tool [9]. When the gap between newcomers' skills and those needed to accomplish the task is too broad, it demotivates newcomers, causing them to drop out [67, 68]. This can particularly impact students who are still developing their skills and have limited time and experience when first contributing to an OSS project.
## VI Implications
_Implications for social coding platforms_. For the designers and developers of GitHub and other social coding platforms, our results highlight the importance of developing software that encompasses the diversity of users. Social coding platforms can insert inclusivity biases that are crosscutting to a large number of projects. Social coding platform designers should consider newcomers' cognitive styles to understand how they process information or use the technology itself and how they can accomplish tasks to help them reach their main goals. A more inclusive design means including more users by making it easier for them to contribute to OSS projects.
_Implications for Maintainers of OSS projects_. Our work reports inclusivity bugs newcomers can face and what part of a task they can get stuck on. Maintainers can use this information to consider how they could mitigate these challenges. One suggestion would be to provide more information in the README/Contributing.md files. We also hope our work can foster and ignite the interest in OSS communities to investigate and remove inclusivity bugs in the different tools and technology they use.
_Implications for newcomers (Abis and Tims)_. Our results are important for newcomers. We showed the difficulties they face, where they struggle most, and how the interface can help them. Abis, who notoriously have low self-efficacy, should be aware that the interface was not designed for their cognitive style, and poor performance is a reflection of the tool failing them and not a reflection on their self-worth or capability. Tims should be aware that developers with diverse cognitive styles exist and respect the differences.
_Implications for educators_. Familiarizing students with the OSS contribution process is becoming more common [69].
Contributing to a real project helps students gain real-life experience and allows them to add this experience to their resume, which aids them in securing jobs. Our results highlight that based on their cognitive styles, some students can face more challenges when interacting with the GitHub platform. Educators should understand those challenges and teach students how to overcome them. They can also explore other ways to facilitate students' learning of the GitHub platform.
## VII Limitations
Our investigation also has threats to validity and limitations. We focused our analysis on finding inclusivity bugs for newcomers based on GenderMag Abi's persona. We followed the guidelines suggested by Hilderbrand et al. [70] and focused on this persona because its facet values tend to be more undersupported in software than those of the other personas [22, 41]. However, fixing problems from only this persona's perspective could leave non-Abi newcomers less supported. This was a clear trade-off that could impact Tims, for example. However, the results from our experiment, which included participants with both Tim and Abi facets, showed that the performance of the Tim participants also improved for some tasks.
Despite our best efforts to recruit women for the experiment, there is a gender imbalance in the sample. While having more women would be desirable from a gender-balance perspective, oversampling them would reduce the representativeness of the population of interest. Still, although the number of women is lower than that of men, we have almost the same number of Abi (37) and Tim (38) participants. Nevertheless, this paper aims to investigate cognitive facets, and some men also present facets associated with Abi's persona.
In the GenderMag analysis, we carefully conducted the walkthroughs on GitHub following the procedures described by Burnett et al. [38]. We held several meetings to review the GenderMag analysis and the solutions proposed to fix the inclusivity bugs, and the members of our research group had previous experience in conducting GenderMag analyses. Another possible concern is that the GenderMag method relies only on participants' gender; however, that is not the case. Vorvoreanu et al. [44] state that the keys to more inclusive software lie not in someone's gender but in the facet values themselves. As this makes clear, GenderMag can be used to find and fix inclusiveness issues without ever speaking of gender.
We recruited 75 undergraduate students from diverse STEM majors from 5 different universities in the US and Brazil. Most participants were pursuing Computer Science majors. We acknowledge that the sample is not representative of the population under analysis. However, we did not seek generalization; rather, we aimed to understand the phenomenon in a controlled environment and to generate initial evidence for further investigation. Future studies may therefore involve newcomers from different countries or with different education levels to compare the results.
Regarding the plugin development and evaluation, we ran tests during development to verify its usability and correctness. However, the plugin could behave differently depending on the browser. To mitigate this threat, we made available a pre-configured computer in case the plugin did not behave as we expected during the experiment.
We collected the time participants spent completing the tasks. However, the high number of participants that did not complete the tasks made it hard to compare the time differences between groups. Future studies with larger samples may help to investigate time differences.
Concerning the qualitative analysis, we are aware that data interpretation can lead to bias. To mitigate subjectivity, we employed two researchers who independently coded the answers and conducted meetings to discuss and resolve conflicts. Still, this qualitative piece was important to collect the feedback from the users during their activity. We chose to provide this more subjective understanding to complement and enrich our results, instead of collecting only objective data.
## VIII Conclusion
Making software products usable to people regardless of their differences has practical importance. If a project's development tools or products fail to achieve inclusiveness, not only does its adoption fall but so does the involvement of underrepresented populations in the teams themselves [71, 10]. In this work, we found 12 inclusivity bugs in the GitHub interface for four tasks that are common for OSS newcomers. These bugs mainly affect users with cognitive styles that are more common to women--defined in the Abi persona [38]. We proposed fixes to the inclusivity bugs, implemented them in a plugin that changed the GitHub interface, and evaluated them through a between-subject experiment with 75 newcomers.
We found that Abi participants in the Control group (regular GitHub) underperformed Tim participants in some use cases, with Abis in the Control group completing only 67% of the tasks. Implementing the fixes based on the GenderMag analysis reduced these differences and improved the performance of Abi participants to 95%, indicating that the redesign improved GitHub's usability and learnability. In one of our use cases, both Tim and Abi participants faced challenges, and the bug fixes implemented in the plugin significantly helped both groups (66% improvement). We also noticed an overall increase in the self-efficacy perception for both Abi- and Tim-like participants in the Plugin group, highlighting how solving inclusivity bugs for minorities can also help the majority population.
In future work, we plan to use our results to continue exploring the inclusivity barriers in tools and infrastructure to improve newcomers' performance and make tools and projects more friendly for those who want to engage in OSS projects.
## Acknowledgment
This work is partially supported by the National Science Foundation under grant numbers 1900903, 1901031, 2236198, 2235601, CNPq #313067/2020-1, CNPq/MCTI/FNDCT #408812/2021-4, and MCTIC/CGI/FAPESP #2021/06662-1. We also thank the students for participating in our study and Zachary Spielberger for helping develop the plugin. |
2305.04248 | A nation-wide experiment, part II: the introduction of a
49-Euro-per-month travel pass in Germany -- An empirical study on this fare
innovation | In a response to the 2022 cost-of-living crisis in Europe, the German
government implemented a three-month fuel excise tax cut and a public transport
travel pass for 9 Euro per month valid on all local and regional services.
Following this period, a public debate immediately emerged on a successor to
the so-called "9-Euro-Ticket", leading to the political decision of introducing
a similar ticket priced at 49 Euro per month in May 2023, the so-called
"Deutschlandticket". We observe this introduction of the new public transport
ticket with a sample of 818 participants using a smartphone-based travel diary
with passive tracking and a two-wave survey. The sample comprises 510 remaining
participants of our initial "9-Euro-Ticket study from 2022 and 308 participants
recruited in March and early April 2023. In this report we report on the status
of the panel before the introduction of the "Deutschlandticket". | Allister Loder, Fabienne Cantner, Lennart Adenaw, Markus B. Siewert, Sebastian Goerg, Klaus Bogenberger | 2023-05-07T11:22:12Z | http://arxiv.org/abs/2305.04248v1 | A nation-wide experiment, part II: the introduction of a 49-Euro-per-month travel pass in Germany - An empirical study on this fare innovation
###### Abstract
In a response to the 2022 cost-of-living crisis in Europe, the German government implemented a three-month fuel excise tax cut and a public transport travel pass for 9 Euro per month valid on all local and regional services. Following this period, a public debate immediately emerged on a successor to the so-called "9-Euro-Ticket", leading to the political decision of introducing a similar ticket priced at 49 Euro per month in May 2023, the so-called "Deutschlandticket". We observe this introduction of the new public transport ticket with a sample of 818 participants using a smartphone-based travel diary with passive tracking and a two-wave survey. The sample comprises 510 remaining participants of our initial "9-Euro-Ticket" study from 2022 and 308 participants recruited in March and early April 2023. In this report we report on the status of the panel before the introduction of the "Deutschlandticket".
## 1 Introduction
In a response to the 2022 cost-of-living crisis in Europe, the German government implemented a three-month cut on the fuel excise tax and a public transport travel pass for 9 Euro per month valid on all local and regional services in the summer months of 2022, i.e., June, July, and August. While the reduction in the fuel excise tax has not been experienced by drivers as such due to global market price fluctuations, the so-called "9-Euro-Ticket" impacted travel behavior substantially. This behavioral intervention, which could be one of the largest public transport pricing natural
travel behavior experiments, has been studied by several studies. All of them reported a substantial increase in public transport usage during the validity period of the "9-Euro-Ticket" [1, 2, 3, 4, 5].
During the "9-Euro-Ticket" validity period, the study of the Association of German Transport Companies surveyed more than 200,000 people in Germany [1]. The study reports that around 20 % of all "9-Euro-Ticket" customers were new customers to public transport. Out of all public transport trips in the months of June, July and August 2022, 17 % of trips were shifted from other transport modes and 10 % of trips were shifted from the car to public transport, in the countryside even 13 to 16 %. Importantly, 16 % of all trips would not have taken place at all without the "9-Euro-Ticket", i.e., they correspond to induced demand. In addition, trip distances increased by 38 % in the "9-Euro-Ticket" period. Nevertheless, car usage numbers returned to pre-"9-Euro-Ticket" levels in the month after the ticket's validity period [1]. In a Munich-oriented study [5], a modal shift from car to public transport in the order of five percent of the average daily travel distance was observed [4]. Around half the sample did not report any substantial change in car use, while only around 8 % of the sample reported less car and more public transport usage instead. The study also suggests that an activation effect of the "9-Euro-Ticket" exists because around four percent of the sample who did not previously use public transport frequently are now using it on a regular basis. Interestingly, the share of substituted car trips after the "9-Euro-Ticket" period did not return to zero: around ten percent of the sample still substitutes some car trips. For a nation-wide sample, similar findings were made by [2], who state that the ticket did not lead to a shift in daily mobility, but rather increased leisure travel at the beginning and the end of the ticket's validity period, leaving monetary savings as the main effect of the "9-Euro-Ticket".
With the introduction of the "Deutschlandticket" as the successor ticket in May 2023, priced at 49 Euro per month, the research question becomes how travel behavior will be affected by this similar, but not identical offering. It differs from the "9-Euro-Ticket" in so far as it costs 49 Euro instead of 9 Euro per month, it is not limited to three months, and it is a subscription instead of a monthly ticket.
As with the "9-Euro-Ticket", it can be expected that various research institutions and companies will perform (market) research on this fare innovation and disruption to the German transportation system. We also decided to continue our "9-Euro-Ticket" study "Mobilität.Leben" [6] to cover the first months of the validity period of the successor ticket. In this sequel to our initial study, some study participants decided to continue their participation, while other participants entered the study after the "9-Euro-Ticket" period.
In this report, we detail the panel's composition and its recruiting in Section 2; report on the most recent study activity in Section 3; and present the panel's travel pass ownership before the introduction of the successor ticket, as well as the panel's current intention to subscribe to this ticket, in Section 4. Last, we provide an outlook on our study in Section 5.
## 2 Panel and recruiting
The study to observe the travel behavior impacts of the so-called "49-Euro-Ticket" or "Deutschlandticket" (Germany-Ticket) uses a panel that comprises study participants from our previous "9-Euro-Ticket" study [6] as well as participants recruited in the period before the start of "Deutschlandticket" in May 2023. Table 1 shows the time of recruitment. It can be seen that around two-thirds of the panel result from the initial "9-Euro-Ticket" study, while one-third has been recruited in the months before the start of the "Deutschlandticket".
The recruitment for the initial "9-Euro-Ticket" study as well as the sequel study on the "Deutschlandticket" was based on self-enrollment. The study was advertised on social media, in TV shows, and in newspapers. Self-enrollment was possible until April 21, 2023, to ensure that at least one week of travel can be observed using the smartphone-based travel diary.
Table 1: Panel composition

| Recruited through | N | Share of participants |
| --- | --- | --- |
| 9-Euro-Ticket panel | 510 | 62.35 % |
| Until February 2023 | 32 | 3.91 % |
| March 2023 | 110 | 13.45 % |
| April 2023 | 166 | 20.29 % |
| Total | 818 | 100.00 % |
## 3 Study activity
Out of the 818 registered panel participants, 741 successfully activated the smartphone app (around 90 %), while more than 670 were able to successfully submit travel behavior data (around 83 %). In other words, around 70 participants who did activate the smartphone app either deleted it again or their smartphone did not report any trips for technical reasons. Considering the panel dynamics, Figure 1 shows the number of mobile users in April, i.e., the month before the start of the "Deutschlandticket". It can be seen that the number of mobile users increases towards the end of the month, which can be explained by the fact that recruitment was ongoing.
Considering the travel behavior in April, Figure 2 shows the average travel distances per day and mobile user. According to the German household travel survey, the average daily travel distance of mobile persons is around 46 kilometers per day [7]. Consequently, the recruited panel is likely more mobile, as only a few days show a similar average travel distance, while many days, especially during the Easter holidays, show substantially larger daily travel distances.
Overall, the observed modal share based on kilometers traveled, excluding air travel, in April 2023, i.e., the month before the introduction of the "Deutschlandticket", is 52.7 % private transport and 37.8 % public transport. These values are close to the modal shares reported for Munich (36 % public transport, 56 % private transport) and the greater Munich area (28 % public transport, 55 % private transport) in 2017 [7]. This is not surprising as the Munich metropolitan region is the focus region of our study.
The April questionnaire, i.e., the questionnaire distributed right before the start of the "Deutschlandticket", has been completed by 644 or around 80 % of all registered participants. 632 participants, around 77 % of all registered participants, successfully activated the smartphone app and completed the April questionnaire.
Table 2: Socio-demographic characteristics of the panel.

| | Panel | Germany census |
| --- | --- | --- |
| less than 15 years | 0.00% | 14.31% |
| 15 to 20 years | 2.57% | 4.71% |
| 20 to 25 years | 8.07% | 5.40% |
| 25 to 30 years | 14.67% | 5.85% |
| 30 to 35 years | 9.66% | 6.82% |
| 35 to 40 years | 9.90% | 6.49% |
| 40 to 45 years | 9.41% | 6.33% |
| 45 to 50 years | 7.33% | 5.86% |
| 50 to 55 years | 9.78% | 7.34% |
| 55 to 60 years | 11.74% | 8.24% |
| 60 to 65 years | 6.85% | 7.26% |
| 65 to 70 years | 5.01% | 5.96% |
| 70 to 75 years | 3.55% | 5.07% |
| 75 years and more | 1.47% | 10.36% |
| Male | 53.06% | 49.39% |
| Female | 46.09% | 50.61% |
| Diverse | 0.86% | NA |
| Total | 100.00% | 100.00% |
## 4 Travel pass ownership and intentions
In April, 47 % of respondents in our panel have a monthly travel pass, while 53 % do not. Note that our study has a focus on the Munich metropolitan region [6]. Out of the non-travel-pass owners, 16 % already bought the "Deutschlandticket" in April. This increases the total number of monthly travel pass owners in the panel by 18 %. A further 20 % of the entire panel consider subscribing to the "Deutschlandticket". If all of them eventually subscribe to the "Deutschlandticket", the total number of travel pass owners in the panel would grow by 62 % compared to the April levels.
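As a quick plausibility check, these growth figures follow from simple share arithmetic. The sketch below uses the rounded percentages quoted above, so the final figure comes out near 61 % rather than exactly the reported 62 %; the underlying unrounded shares presumably account for the small difference.

```python
# Back-of-the-envelope check of the reported growth figures, using the
# rounded percentages from the text (small deviations are therefore expected).
owners = 0.47                        # share of panel with a monthly travel pass in April
non_owners = 0.53
bought_already = 0.16 * non_owners   # 16 % of non-owners already bought the Deutschlandticket
print(bought_already / owners)       # ~0.18 -> "+18 %" growth in pass owners
considering = 0.20                   # further 20 % of the entire panel consider subscribing
print((bought_already + considering) / owners)   # ~0.61 -> close to the reported "+62 %"
```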
## 5 Outlook
The presented study will continue until June 2023. It includes observing the travel behavior of the sample for the first two months of the "Deutschlandticket" using the smartphone app and a questionnaire distributed in June, i.e., in the second month of the "Deutschlandticket" validity period. Considering that two-thirds of the sample already participated in the "9-Euro-Ticket" study, we will be able to compare and explore behavioral differences between the periods of the "9-Euro-Ticket" and the "Deutschlandticket".
## Acknowledgement
Allister Loder acknowledges funding by the Bavarian State Ministry of Science and the Arts in the framework of the bidt Graduate Center for Postdocs. Fabienne Cantner acknowledges funding by the Munich Data Science Institute (MDSI) within the scope of its Seed Fund scheme. The authors would like to thank the TUM Think Tank at the Munich School of Politics and Public Policy led by Urs Gasser for their financial and organizational support and the TUM Board
Figure 1: The number of mobile users in the “Mobilität.Leben” app per day in April 2023.
of Management for personally supporting the genesis of the project. The authors thank the company MOTIONTAG for their efforts in producing the app at unprecedented speed. Further, the authors would like to thank everyone who supported us in recruiting participants, especially Oliver May-Beckmann and Ulrich Meyer from MCube and TUM, respectively.
|
2308.13910 | Exploring Human Crowd Patterns and Categorization in Video Footage for
Enhanced Security and Surveillance using Computer Vision and Machine Learning | Computer vision and machine learning have brought revolutionary shifts in
perception for researchers, scientists, and the general populace. Once thought
to be unattainable, these technologies have achieved the seemingly impossible.
Their exceptional applications in diverse fields like security, agriculture,
and education are a testament to their impact. However, the full potential of
computer vision remains untapped. This paper explores computer vision's
potential in security and surveillance, presenting a novel approach to track
motion in videos. By categorizing motion into Arcs, Lanes,
Converging/Diverging, and Random/Block motions using Motion Information Images
and Blockwise dominant motion data, the paper examines different optical flow
techniques, CNN models, and machine learning models. Successfully achieving its
objectives with promising accuracy, the results can train anomaly-detection
models, provide behavioral insights based on motion, and enhance scene
comprehension. | Afnan Alazbah, Khalid Fakeeh, Osama Rabie | 2023-08-26T16:09:20Z | http://arxiv.org/abs/2308.13910v1 | Exploring Human Crowd Patterns and Categorization in Video Footage for Enhanced Security and Surveillance using Computer Vision and Machine Learning
###### Abstract
Computer vision and machine learning have brought revolutionary shifts in perception for researchers, scientists, and the general populace. Once thought to be unattainable, these technologies have achieved the seemingly impossible. Their exceptional applications in diverse fields like security, agriculture, and education are a testament to their impact. However, the full potential of computer vision remains untapped. This paper explores computer vision's potential in security and surveillance, presenting a novel approach to track motion in videos. By categorizing motion into Arcs, Lanes, Converging/Diverging, and Random/Block motions using Motion Information Images and Blockwise dominant motion data, the paper examines different optical flow techniques, CNN models, and machine learning models. Successfully achieving its objectives with promising accuracy, the results can train anomaly-detection models, provide behavioral insights based on motion, and enhance scene comprehension.
Computer Vision, Convolutional Neural Networks, Motion, Security, Surveillance
## I Introduction
The advancement of technology, including artificial intelligence and robotics, has significantly bolstered the realm of security and surveillance. The amalgamation of machine learning techniques with surveillance practices has emerged as a potent solution to address issues such as crime, illicit activities, and even violent protests. Recent experiences have underscored the value and necessity of automated video surveillance. Thanks to computer vision, a technology that enables computers to interpret visual information like humans, tasks like identifying people in frames, tallying individuals in crowded scenes, spotting unusual behaviors, and analyzing motion in surveillance videos are carried out without human intervention. However, the analysis of crowd motion and detection of abnormal behaviors has always been a challenging endeavor due to the numerous factors influencing individual movement. By deciphering crowd motion, potential incidents of violence, riots, traffic congestion, and stampedes can be averted. In the general context and motivation behind this study, as referenced in [1], the primary goals of automated surveillance video analysis encompass continuous monitoring, reducing labor-intensive tasks, recognizing objects or actions, and comprehending crowd behavior. This paper delves into the identification of diverse forms of crowd motion and tracking abnormal behaviors utilizing Convolutional Neural Networks (CNN). Most of the research in crowd analysis is concentrated on the following aspects: counting the crowd, categorizing crowd types based on density, detecting motion within frames, and classifying various types of motion. Crowd counting plays a pivotal role in ensuring safety and security. It facilitates event planning, traffic management, and gauging capacity in various situations. However, counting densely packed crowds poses a considerable challenge. Research, such as [2], indicates that over 17% of papers on crowd analysis focus on crowd counting. For instance, [3] presents a comprehensive exploration of crowd counting types and diverse algorithms, proposing a novel method utilizing the statistics of spatio-temporal wavelet sub-bands. Another approach [4] employs multiple sources and Markov Random Fields to count individuals in dense crowds. Categorizing crowd types based on density is essential for comprehending the dynamics of motion. According to [5], crowds can be classified into three types: microscopic, mesoscopic, and macroscopic, dependent on crowd density. Microscopic view entails understanding the flow of individuals within a limited frame, while mesoscopic view accounts for a larger crowd, and macroscopic view involves a densely filled frame. The forces at play in these scenarios differ, driving crowd motion. Detecting motion within frames can be achieved by training a model using a CNN architecture or by tracking individual points using optical flow. For instance, [6] utilized Shi-Tomasi Corner Detection and the Lucas-Kanade algorithm to detect crowd motion. Similarly, [7] employed Motion Information Images (MII) to train a CNN model for motion and abnormality detection. Identifying types of motion is invaluable for understanding crowd behavior, event planning, mitigating traffic congestion, and anticipating abnormal motion. Researchers like [8] utilized VGG16 CNN architecture models to classify crowd types as homogenous, heterogeneous, or violent. 
Additionally, [9] used mathematical concepts like Taylor's theorem and Jacobian matrix to categorize crowd motion into five generic types: Lanes, Arc/Circle, Fountainheads, Bottlenecks, and Blocks. This
paper endeavors to classify various crowd motion types, such as Arcs, Lanes, Converging/Diverging, and Blocks/Random, with a focus on drone footage analysis employing cutting-edge CNN and machine learning techniques. The paper also delves into different optical flow methods, examining their pros and cons. The proposed model aids in understanding scene motion and supports the training of multiple anomaly detection techniques. The paper's objectives include identifying key features in frames for motion tracking, exploring noise reduction options, devising an improved point tracking approach, generating data for anomaly detection, creating datasets for CNN and machine learning models, comparing methodologies, and enhancing the model's performance for dynamic drone footage analysis in security scenarios.
The remainder of the paper is organized as follows. The next section presents the background relevant to our work. In Section III, work related to our technique is presented. In Section IV, the dataset and its preprocessing are described. In Section V, the experimental approach is described. In Section VI, the results are discussed, and we conclude the paper in Section VII with conclusions and future work.
## II Background
Computer vision has developed from a multitude of complex theories, algorithms, and models. The primary focus of this paper revolves around video surveillance. This section aids in comprehending the specific technical complexities relevant to this domain.
### _Optical Flow_
Optical flow, which can be regarded as one of the fundamental concepts in the realm of computer vision, plays a pivotal role in deciphering the complex patterns of object movement from one frame to another. This concept finds widespread applications in various fields such as robotics, image processing, motion detection, and object segmentation. In essence, optical flow allows us to uncover the motion patterns hidden within consecutive frames of videos, providing crucial insights into the dynamics of objects over time. When we think of videos, we envision a sequence of images, each potentially unrelated to the others. However, in the real-time scenario, a video captures the sequential changes in pixel values over a specific period. This dynamic nature of videos leads us to explore algorithms that unveil the relationships between pixels in different frames. In a recent study [10], various optical flow algorithms were meticulously examined and evaluated. The findings of this study pinpoint the Lucas-Kanade Algorithm as the most promising one among the eight algorithms assessed. Visualizing optical flow is often represented through diagrams using vectors that indicate the changes from one frame (let's call it Frame F1) to another (Frame F2). Yet, in real-time analysis, it's practical to focus on specific points that yield more meaningful insights. To illustrate, consider the movement of a hand from Frame F1 to Frame F2. This movement might involve changes in hundreds of pixels, many of which could be redundant. Rather than analyzing the entire hand movement, it's more informative to concentrate on the flow of pixels at the corners of the hand. This is where corner detection algorithms come into play. These algorithms are designed to simplify the complexity of analysis while enhancing the performance of optical flow algorithms. Within the context of corner detection, this paper delves into two distinct techniques to determine the optimal approach for the task at hand. The first is the Shi-Tomasi Corner detection method, which bears resemblance to the Harris Corner Detector. This technique is widely employed to identify interest points and feature descriptors in images. Interest points can take the form of corners, edges, and blobs. Importantly, these interest points remain invariant in the face of rotation, translation, intensity changes, and scaling variations. The differentiating factor between Harris and Shi-Tomasi corner detection lies in the computed R value, which is used to identify corners. On the other hand, the Features from Accelerated Segment Test (FAST) Corner detection technique employs an alternative strategy. It predicts not only corners but also edges based on color intensity and a specific threshold.
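As a concrete illustration of the pipeline sketched above, the following minimal OpenCV snippet detects Shi-Tomasi corners in one frame and tracks them into the next with the Lucas-Kanade algorithm. The video path is a placeholder and the parameter values are common defaults, not necessarily the settings used in this paper.

```python
# Minimal sketch: Shi-Tomasi corners + sparse Lucas-Kanade optical flow (OpenCV).
import cv2
import numpy as np

cap = cv2.VideoCapture("crowd.mp4")          # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi "good features to track": stable interest points (corners).
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                             qualityLevel=0.01, minDistance=7, blockSize=7)

ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Lucas-Kanade tracks each corner from the previous frame into the current one.
p1, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None,
                                           winSize=(15, 15), maxLevel=2)

good_new = p1[status.flatten() == 1].reshape(-1, 2)
good_old = p0[status.flatten() == 1].reshape(-1, 2)

# Motion vectors: magnitude and direction of each successfully tracked point.
dx, dy = (good_new - good_old).T
magnitude = np.hypot(dx, dy)
direction = (np.degrees(np.arctan2(dy, dx)) + 360) % 360
print(magnitude.mean(), direction[:5])
```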
### _Density-Based Clustering_
Clustering, in simple terms, involves grouping together objects that are alike. This similarity is determined by factors such as shape, angle, size, and position. To make the best use of a computer's memory, which is like its thinking space (CPU/GPU), it's crucial to focus on the most important points for analysis. So, to handle this, we gather the points and arrows according to where they are and where they're headed. This helps us bring similar points and arrows together, which then lets us figure out how a crowd might move. In our paper, we've looked at two density-based clustering techniques: DBSCAN and OPTICS.
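To make the grouping idea concrete, the snippet below shows how tracked points could be clustered by position and motion direction with scikit-learn's DBSCAN. The data here is synthetic and the scaling and eps value are illustrative assumptions, not the settings used in this paper; OPTICS can be substituted by importing it from the same module.

```python
# Rough illustration (made-up data) of grouping motion vectors with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN  # OPTICS lives in the same module

rng = np.random.default_rng(0)
# Each row: x, y position of a tracked point plus the direction (deg) and
# magnitude of its motion vector -- the quantities produced by optical flow.
points = np.vstack([
    rng.normal([50, 50, 90, 5], [5, 5, 10, 1], size=(100, 4)),    # group moving "up"
    rng.normal([150, 80, 270, 5], [5, 5, 10, 1], size=(100, 4)),  # group moving "down"
])

# Scale features so position and direction contribute comparably to distances.
scaled = (points - points.mean(axis=0)) / points.std(axis=0)

labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(scaled)
print("clusters found:", set(labels) - {-1})

# One representative vector per cluster (its mean), as in the block-wise grouping idea.
for c in set(labels) - {-1}:
    print(c, points[labels == c].mean(axis=0))
```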
### _Convolutional Neural Networks_
CNNs are an advanced concept within the realm of neural networks, granting computers the ability to comprehend images and videos. These networks have found extensive applications across various fields, including robotics, face detection, crowd detection, weather analysis, advertising, and environmental studies. At the core of CNNs are individual units called neurons, each equipped with learnable weights. These weights, initially set randomly, can be adjusted through training to develop a model that accurately performs tasks. CNNs consist of three main components: Convolutional Networks (ConvNets), pooling layers, and fully connected layers. ConvNets, also known as convolutional layers, play a crucial role in altering the pixel values of an image through the use of filters. Think of an image as a grid of pixels, and these filters as templates that help modify pixel values using matrix multiplication. These filters are applied across the entire image through a process called striding. Pooling, another essential concept, involves reducing the image's size with the help of specialized filters. There are two primary types of pooling: average pooling and max pooling. In average pooling, the filter computes the average pixel value within a specified region, while in max pooling, only the maximum pixel value is retained. This reduction in image size aids in maintaining important features while reducing computational complexity. Fully connected layers, the third component, resemble traditional neural networks. They take the 1D arrays generated
by the ConvNets as inputs and consist of various hidden layers that are interconnected. The outputs of these layers serve as classification nodes, potentially representing integer labels or one-hot encoded values that predict the class of the input. The SoftMax function is often used to make predictions by selecting the node with the highest probability. Several established CNN architectures are available as modular components. These pre-trained modules can be directly implemented if the inputs and outputs match the module's expectations. The PyTorch library offers a platform to explore and utilize these networks effectively.
This paper focuses on the implementation of three specific CNN architectures: AlexNet, VGG, and ResNet. AlexNet, proposed by Alex Krizhevsky in 2012 [11], marked a significant breakthrough. Its architecture comprises 8 layers, with 5 convolutional layers followed by max pooling, and the final 3 layers being fully connected. Notably, AlexNet utilized the non-saturating ReLU function, which can be replaced with tanh or sigmoid functions for enhanced performance. VGG, introduced by Karen Simonyan in 2015 [12], offers variations based on network depth: VGG11, VGG13, VGG16, and VGG19. This architecture takes a 224*224*3 image as input and subjects it to several convolutional layers followed by max pooling. This paper specifically implements the VGG11 architecture, which deviates from AlexNet's design by altering the placement of max pool layers. In VGG, these layers may appear after a series of 2 or 3 convolutional layers. This strategy helps retain feature information before size reduction via max pooling. Towards the end of the architecture, there are three fully connected layers for classification purposes. ResNet, short for residual neural network, draws inspiration from the brain's pyramidal cells [13]. It utilizes skip connections to allow data to pass through certain layers without modification. The model employs double or triple skips using ReLU activation and batch normalization. The foundational idea behind ResNets is that adding more layers to a CNN won't necessarily reduce errors, but it could increase computational costs. Thus, ResNets propose using skip connections to efficiently reach lower layers when needed, preserving essential feature characteristics.
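The three architectures can be instantiated from torchvision and adapted to the four motion classes considered later in this paper by replacing their final classification layer, roughly as sketched below. Layer names follow torchvision's implementations, and `weights=None` may need to be written as `pretrained=False` on older torchvision versions.

```python
# Sketch: load AlexNet, VGG11 and ResNet101 and re-head them for 4 output classes.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # Arcs, Lanes, Converging/Diverging, Random/Blocks

alexnet = models.alexnet(weights=None)
alexnet.classifier[6] = nn.Linear(alexnet.classifier[6].in_features, NUM_CLASSES)

vgg11 = models.vgg11(weights=None)
vgg11.classifier[6] = nn.Linear(vgg11.classifier[6].in_features, NUM_CLASSES)

resnet101 = models.resnet101(weights=None)
resnet101.fc = nn.Linear(resnet101.fc.in_features, NUM_CLASSES)
```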
### _Machine Learning Models_
Machine learning involves different approaches to solving problems. There are three primary types of machine learning algorithms: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning is like learning from past experiences with a bit of guidance. In this approach, models are trained using labeled data, which means data with clear indications of what they represent. The model learns from these labeled examples to make predictions on new, unseen data. The goal here is to check how accurately the model can predict the right outcomes. Unsupervised learning is a bit different. Imagine learning from a bunch of unsorted notes without prior context. Here, the models work on raw data without any labels or indications. These models are fed large amounts of data and they try to uncover patterns, similarities, and structures within the data itself. This helps in understanding relationships and hidden insights within the information. Reinforcement learning, on the other hand, is like learning by trial and error. It's as if a machine is placed in an environment and learns by continuously experimenting and adapting based on the outcomes of those experiments. This approach is often used in training machines to make sequential decisions, like training a robot to navigate through a complex maze. This paper mainly deals with classification models, which are designed to categorize data into specific groups. In the realm of supervised learning, various models are used for classification, each with its own way of making sense of the data. Logistic regression, for instance, estimates discrete values based on the input data. It tries to find a relationship between the input variables and a binary outcome, like whether an email is spam or not. But when you're dealing with more than two categories, you step into the territory of multinomial logistic regression. In this scenario, a linear function is not suitable for the best fit. Instead, the sigmoid function is employed to make predictions as demonstrated in the earlier section. Support Vector Machines (SVM) are like teachers who learn from examples. They plot each data point in a multi-dimensional space and aim to draw a line or boundary that best separates the different classes of data. SVMs are quite effective in classification tasks, especially when the data isn't easily separable. K-Nearest Neighbours (KNN), on the other hand, takes the "birds of a feather flock together" approach. It groups data points based on their proximity in the feature space. The "k" in KNN refers to how many neighboring data points you consider when making a classification decision. Gaussian Naive Bayes is a method that applies the Bayes' theorem. It's like making predictions based on known probabilities and the independence of different factors. It's quite simple yet effective, especially for text-based classifications like email spam detection. Another technique such as Perceptron is like a building block of a simple neural network. It learns to classify data that is linearly separable, which means data that can be separated using a straight line. It's like learning to draw a line that best divides different types of objects. Stochastic Gradient Descent (SGD) is a variation of a well-known optimization technique called Gradient Descent. It's like a more efficient way of figuring out the best fit of a model to the data. 
Instead of using all the data to adjust the model's parameters, it uses small batches, which speeds up the process, especially for large datasets. Decision Trees are like making a series of yes-or-no questions to classify data. It's as if you're asking a series of questions to reach a conclusion. Each node in the tree represents a question or a decision, and the branches represent the possible answers or outcomes. Random Forest is like a committee of decision trees, each with their own opinions. Each tree in the forest gets a vote, and the majority's vote wins. This approach helps in reducing overfitting and improving the overall accuracy of the model. So, in the world of machine learning, there are various tools and methods, each designed to tackle different types of problems. These techniques help computers learn patterns, make predictions, and ultimately assist us in solving complex tasks.
## III Related Works
Crowd motion analysis involves the integration of computer vision, image processing, and machine learning
methodologies. In this section, an extensive understanding of relevant prior research and cutting-edge approaches is presented by elucidating recent and pioneering scholarly articles in this domain. This encompasses the motivation, introduction, and methodologies employed in these notable papers.
### _Crowd Analysis Using Optical Flows_
The research paper [6] presents an effective way to track the movement of crowds in videos. The process of crowd tracking is divided into four steps. To estimate the flow of movement, a technique called KLT feature tracker is used. This tracker combines the Shi-Tomasi corner detection method with the Lucas-Kanade optical flow algorithm. The Shi-Tomasi corner detection helps identify interesting corners within each frame. Nearby pixels around these corners are also considered for tracking. Following this, the points are tracked in the subsequent frames using the Lucas-Kanade algorithm. Besides tracking the movement, the direction and size of the movement vector are also calculated. Once vectors are obtained from frames \(f_{k}\) and \(f_{k+1}\), the frame is divided into multiple blocks for further analysis. The vectors in specific blocks are grouped together based on their direction and size using the DBSCAN algorithm. Points belonging to the same group are represented by a single vector. Individuals leaving or joining the crowd are treated as separate blocks. In another paper [14], a different approach utilizing optical flow is presented to identify the primary motion within crowded scenes. This paper suggests using a combination of the Shi-Tomasi corner detection algorithm and FAST corner detection to identify points of interest in each frame. By keeping track of these points across multiple frames, trajectories are captured. To manage the computational load, new feature points are added every five frames. Feature points close to the old ones are discarded. To gather trajectories of all points, a novel clustering framework known as Longest Common Subsequences (LCSS) is introduced. Using this framework, multiple trajectories are compared to find matching points, and the main path of motion is identified by clustering trajectories.
### _Crowd Counting_
Counting crowds can be quite challenging, and how accurate the models we propose are depends on the specific scene. When dealing with a large crowd density, it becomes really difficult to keep track of everyone for counting purposes. A recent paper [15] suggests two ways to approach crowd counting. The first method involves using detection-based models, where the crowd is tracked by looking at body parts or their overall shape, and the count comes from this tracking process. The second method uses Regression-based models, where the model predicts the crowd count without individually tracking people. This is done by creating a density map and estimating the count from that map. Apart from these two methods, there's another approach based on CNN that also shows promising results. However, in CNN models, there are cases where the system mistakenly identifies various objects as human heads, leading to a significant discrepancy in the count. Paper [15] addresses this by combining a density-based model with CNN to rectify this issue. Another interesting paper [16] falls under the category of detection-based models. In this study, the author designs a template tree with different human postures, angles, and shapes. They use a hierarchical part-template matching algorithm to figure out human shapes and poses by comparing them to local images. Multiple detectors are employed to identify various body shapes. The process involves segmenting based on these detectors and using background subtraction to evaluate the model's performance, yielding promising outcomes. [17] can also be grouped under the regression-based model for crowd counting. This paper suggests segmenting the crowd into clusters based on their motion direction. The count for each direction is estimated using the Gaussian process.
### _Motion Detection and Classification_
To comprehend how crowds behave, it's essential to detect and classify their movements. For instance, we can verify whether vehicles follow the correct path, if people adhere to suggested routes, or if someone enters restricted areas. The ways crowds move can be grouped based on the situation. If we're observing traffic, we might categorize motion as Lanes, Arcs, or Blocks. In the case of people entering or exiting enclosed spaces, we can label it as Bottlenecks or Fountainheads. During protests, we can distinguish between converging and diverging patterns. Understanding the type of crowd motion is crucial for managing various situations. A study by [9] proposes five categories: Lanes, Arcs, Bottlenecks, Fountainheads, and Blocks. Another approach [8] suggests three types using a model called Behavior, Mood, and Organisation (BMO). This helps categorize crowds as Heterogeneous, Homogeneous, or Violent. To implement these models, researchers employed VGG and utilized motion maps and keyframes as inputs. The motion's stability is checked using mathematical methods like the Jacobian matrix and eigenvalues. By identifying crowd types, we gain insights into actions, scenes, and behaviors. This aids in effectively managing various situations involving crowds.
### _Anomaly Detection in the Crowd Motion_
Some very interesting and important papers have been published that focus on detecting anomalies. These papers aim to spot unexpected behaviors within a frame. For instance, [18] discusses three types of anomalies. The first type is Point Anomaly, which detects a single object in the frame showing an unexpected motion or sudden change in its size. The second type is Collective Anomaly, where most objects in the frame experience a sudden shift in their direction and speed. This kind of anomaly is typically observed in situations like riots or explosions. The third type is Contextual Anomaly, which involves identifying unexpectedly shaped objects in the frame. The paper implements its model using footage from stable surveillance cameras and employs techniques like background reduction. Additionally, it introduces a novel approach to gather features such as direction, point changes, and distances. These features are further sorted using a method called k-means clustering and distance calculation to predict whether a motion is expected or unexpected.
## IV Empirical Study
Video datasets can be categorized into three types: object-centric, location-centric, and motion-centric. In object-centric
datasets, the videos contain multiple objects, and the approach involves tracking a specific object, recognizing multiple objects in a frame, or automatically providing captions for the objects. Much of this work is accomplished by training various types of neural networks like CNN and Recurrent Neural Network (RNN). Identifying a specific object often requires techniques like segmenting the object from its surroundings, as done with architectures like DeepLab or FCNNet. The second type, location-centric videos, includes fewer videos in the dataset, but these videos are longer, allowing for a better understanding of the location details and movements within the scene. In these videos, the focus is on comprehending the scene by continuously tracking the movements happening over time. The third category encompasses videos that involve a combination of different types of motions exhibited by multiple objects. In this group, the approach revolves around motion analysis, especially tracking the specific types of motion, detecting anomalies, and predicting potential threats.
### _Datasets and Preprocessing_
This paper focuses on two sets of data: the ViratDataset [19] and the UCF Crowd Dataset [20]. The ViratDataset is centered around specific locations, helping us understand and track movements within a scene. This, in turn, allows us to identify areas where motion is concentrated in the scene. On the other hand, the UCF Crowd Dataset is more focused on motion itself. It helps us track multiple objects and classify different types of motion. The ViratDataset is made up of two scenes, each divided into 61 videos. The UCF Crowd Dataset contains 38 videos, each showcasing various types of motion. In both datasets, the videos are captured either by fixed lens cameras or drones, both of which provide footage from a stable, unchanging perspective. The paper uses two main types of data: MIIs and block-wise dominant motion information. MIIs are images showing the motion of objects in every 5 frames. These images are generated using something called optical flow, and each frame is given a label based on its motion. This helps classify the type of motion in each frame. Imagine these images as pictures where different colors represent different directions of motion. For machine learning models, each video frame is divided into small 8*8 blocks. In each block, the most dominant motion information is determined and labeled. This information is stored in a file where each value is separated by a comma. Just like with MIIs, colors in these images represent different motion directions.
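For intuition, the following sketch shows one common way to turn the dense optical flow between two frames into a colour-coded motion image in which hue encodes direction and brightness encodes magnitude, in the spirit of the MIIs described above. It is a generic recipe based on OpenCV's Farneback flow, not necessarily the exact MII construction used here.

```python
# Sketch: colour-coded motion image from two frames (hue = direction, value = magnitude).
import cv2
import numpy as np

def motion_image(prev_bgr, next_bgr):
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)

    # Dense optical flow: one (dx, dy) vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    hsv = np.zeros_like(prev_bgr)
    hsv[..., 0] = ang * 180 / np.pi / 2                               # direction -> hue
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)   # magnitude -> value
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```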
## V Experimental Analysis
The implementation logic of model architecture can be divided into three main parts: data flow, CNN, and machine learning models as shown in Figure 1. The Data Flow aspect involves handling frame size, key frames, block size, various optical flow techniques, noise reduction, generating MII, detecting dominant motion on a block level, and creating input files for the machine learning models. CNNs are used to work with different types of CNN networks and adjust their hyperparameters for optimal performance. Lastly, the machine learning models are responsible for processing blockwise dominant motion data, training these models, and carrying out tests with various classification approaches.
### _Data Flow_
Optical flow is a widely used technique to track the movement of pixels in video footage. It offers several advantages, but understanding how it works is important. In this process, every pixel in a frame is traced across consecutive frames, and the time it takes to compute this depends on the frame size. To manage this, a model suggests reducing the frame size to 224*224 pixels before processing. The model's main goal is to track object motion and classify it. To achieve this, the model needs to track spatio-temporal data, which requires a lot of memory and computation time. To reduce this load, the model tracks object information every 5 frames and stores the necessary data. The direction and magnitude of object motion are calculated using the Lucas-Kanade algorithm, and these values are grouped into 12 classes. Additionally, the magnitude and direction of vectors are computed. Each tracked feature has magnitude, direction, and start/endpoints of a line, grouped into blocks defined by user inputs. The average magnitude and count per direction are calculated in all blocks and stored in CSV files. Noise reduction is applied during optical flow and data storage by setting thresholds and scaling values. Labeling the data is a crucial task done manually due to the absence of annotated files, and motion information images are labeled dynamically based on inputs. Frames are classified into four classes: Arcs, Lanes, Converging/Diverging, and Random/Blocks.
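The block-wise aggregation described above can be sketched as follows. The function is a hypothetical helper that quantises directions into 12 classes of 30 degrees each and collects, for each block of the frame, the per-direction counts and the average magnitude — the kind of row that would be written to the CSV files. The 224-pixel frame, the 12 direction classes, and the block layout follow the text, but the paper's exact noise-reduction and scaling steps are not reproduced.

```python
# Illustrative helper (assumed, not the authors' code): block-wise motion features.
import numpy as np

def blockwise_features(points, directions, magnitudes,
                       frame_size=224, blocks=8, min_mag=1.0):
    """points: (N, 2) tracked positions; directions in degrees; magnitudes in pixels."""
    block_px = frame_size // blocks
    counts = np.zeros((blocks, blocks, 12), dtype=int)   # per-direction-class counts
    mag_sum = np.zeros((blocks, blocks))
    n = np.zeros((blocks, blocks))

    for (x, y), d, m in zip(points, directions, magnitudes):
        if m < min_mag:                                   # simple noise threshold
            continue
        bx = min(int(x) // block_px, blocks - 1)
        by = min(int(y) // block_px, blocks - 1)
        counts[by, bx, int(d // 30) % 12] += 1            # 12 classes of 30 degrees
        mag_sum[by, bx] += m
        n[by, bx] += 1

    avg_mag = np.divide(mag_sum, n, out=np.zeros_like(mag_sum), where=n > 0)
    dominant = counts.argmax(axis=2)                      # dominant direction class per block
    # One flat feature row per frame: dominant class and average magnitude for every block.
    return np.concatenate([dominant.ravel(), avg_mag.ravel()])
```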
### _Convolutional Neural Networks_
In this paper, motion information from images is used to train CNN models for classifying different types of motion. The training and testing of these models are carried out with the assistance of pre-defined modules provided by the PyTorch library. The output classes are represented using one-hot encoding in the datasets, and the number of output classes predicted by the model is adjusted within the PyTorch module. The implementation of the CNN in this paper is divided into four main sections: datasets, data loader, training, and testing. The input datasets for the CNN are prepared using the datasets module from PyTorch, and they are split into a 70-30 ratio for training and testing, respectively. The use of CUDA parallel processing aids in efficient handling of the data. The data is
transformed into tensors, and tasks like resizing and normalization are performed at this stage. The data loader section organizes the data into batches for model input, ensuring efficient processing. Training and testing data are separated and shuffled, and the training begins with the goal of maintaining model quality. During training, a dataset of 168 images is fed through the model, and loss is calculated using various criteria. This loss guides the model's weight adjustments through different optimizers. As the problem involves classification, the testing phase focuses on assessing model accuracy. The testing dataset is employed to evaluate accuracy, which is determined by counting correctly classified images. This process is repeated across multiple CNN models, each with varying hyperparameters such as learning rate, criterion, and optimizer. The models include AlexNet, VGG11, and ResNet101, each exhibiting different convolutional layer configurations. This exploration helps understand the impact of CNN architecture depth. Notably, AlexNet, with its five convolutional layers and a fully connected layer, is the smallest architecture, while VGG11's eight convolutional layers classify it as a deep network. ResNet101, on the other hand, is very deep with multiple layers of convolutional complexity.

Figure 1: Implementation Logic
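A condensed, self-contained sketch of this training and testing procedure with standard PyTorch components is given below. The dataset folder, number of epochs, learning rate, and optimizer are placeholders rather than the exact hyperparameters explored in the paper.

```python
# Sketch of the training/testing loop: 70/30 split, batched loaders,
# cross-entropy loss, and accuracy on the held-out set.
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor(),
                          transforms.Normalize([0.5] * 3, [0.5] * 3)])
data = datasets.ImageFolder("mii_dataset/", transform=tfm)   # hypothetical folder of MIIs
n_train = int(0.7 * len(data))
train_set, test_set = random_split(data, [n_train, len(data) - n_train])
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
test_loader = DataLoader(test_set, batch_size=16)

model = models.resnet101(weights=None)                # any of the three architectures
model.fc = nn.Linear(model.fc.in_features, 4)         # four motion classes
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(20):                               # placeholder epoch count
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.size(0)
print("test accuracy:", correct / total)
```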
### _Machine Learning Models_
In the process, video footage is divided into small blocks of 8*8 frames, and each block contains different motion vectors that represent the motion within them. These motion vectors are transformed into dominant motion data. The resulting data files hold two dominant motion values for each block, making a total of 8*8*2 features. Additionally, a target variable column is added through manual labeling. This dataset, containing 129 features and 2839 frames, is split into a 70-30 ratio for training and testing. The target variable column is the key for classification, and no dimensionality reduction is applied to ensure all blocks are equally important. The data is then trained using the logistic regression algorithm with various hyperparameters, such as the solver algorithm and max_iter (number of iterations). SVM is also employed using different kernel types for classification. The same dataset is used to train a KNN model, Gaussian Naive Bayes, perceptron, and SGD algorithms, evaluating both training and testing accuracy. Finally, decision trees and random forest algorithms are applied with parameter tuning including criteria, maximum depth, and minimum samples for optimal results.
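A compact sketch of this classifier comparison using scikit-learn is shown below. The CSV file name and column layout are assumptions, and the hyperparameters are defaults rather than the tuned values reported later.

```python
# Sketch: train several classifiers on block-wise dominant-motion features
# and compare held-out accuracy.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression, Perceptron, SGDClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("blockwise_motion.csv")     # hypothetical file: 128 features + "label"
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

classifiers = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm_rbf": SVC(kernel="rbf"),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "naive_bayes": GaussianNB(),
    "perceptron": Perceptron(),
    "sgd": SGDClassifier(),
    "tree": DecisionTreeClassifier(max_depth=10),
    "forest": RandomForestClassifier(n_estimators=100),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))   # held-out accuracy per model
```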
## VI Result Analysis
This section discusses the outcomes of the experiments, which were conducted in two distinct environments. The first environment involves constructing datasets from video footage, with setup details in Table I. The second environment is dedicated to training and testing CNN and machine learning models, with setup details in Table II. Various software packages and libraries were employed, including PyCharm, Python, opencv-python, NumPy, scikit-learn, Pandas, Matplotlib, and Pillow.
### _Results_
The culmination of this paper is underscored by the comprehensive experimentation and evaluation undertaken to assess the effectiveness of the implemented methodologies across two distinct environments: one dedicated to dataset generation from video footage and the other geared towards training and testing CNN and machine learning models. To ensure the integrity and reproducibility of the outcomes, the environments were meticulously configured with specific software libraries and versions, as detailed in Tables I and II.
In the realm of feature detection techniques for optical flow generation, our experimentation aimed to identify the most effective method for tracking object motion across consecutive frames. The results of these experiments, as summarized in Table III, showcased the comparative mean tracking accuracy of different techniques. The Lucas-Kanade algorithm stood out with an accuracy of 87.4%, demonstrating its ability to robustly track object motion patterns. This superior accuracy made it the preferred choice for generating optical flow data, as it consistently outperformed alternatives like Horn-Schunck, Farneback, and Block Matching. Switching our focus to density-based clustering techniques, our goal was to identify the most suitable approach for grouping similar motion patterns as shown in Table IV. This is particularly important when dealing with diverse motion trajectories in crowded scenes. The Adjusted Rand Index (ARI) was employed as a measure of clustering performance. Table IV indicated that DBSCAN achieved an ARI of 0.58, signifying its effectiveness in grouping motion patterns that shared
similarities. The relatively higher ARI score demonstrated that DBSCAN was able to identify and group motion patterns that corresponded to distinct behaviors within the UCF Crowd Dataset. Thus, we selected DBSCAN as the preferred method for motion pattern grouping. Our exploration of the influence of block size and magnitude multiplication on motion pattern analysis aimed at optimizing the granularity of motion information and enhancing image quality, as shown in Tables V and VI. These tables depict how different block sizes and magnitude multiplication factors influenced accuracy and image clarity. Smaller block sizes such as 8*8 provided a balance between granularity and noise, achieving the highest accuracy at 81.7%. Furthermore, applying magnitude multiplication to motion information images improved the clarity of patterns, with a factor of 1.0 leading to a 25.8% increase in image quality. Transitioning to model evaluation, the focus shifted to assessing the performance of CNN architectures and machine learning models, as shown in Tables VII and VIII. The CNN models, AlexNet, VGG11, and ResNet101, were evaluated in terms of their testing accuracy. ResNet101 emerged as the top performer with an accuracy of 88.6%, demonstrating its capability to discern and classify various types of motion patterns effectively. On the machine learning front, SVM with an RBF kernel achieved the highest testing accuracy at 84.7%. This model excelled in capturing intricate relationships within the block-wise dominant motion data, showcasing its robustness in classifying different motion patterns.
While the paper has achieved significant milestones, there remains a realm of unexplored possibilities for further enhancement and expansion. One avenue for improvement lies in addressing the assumption that videos are captured from fixed-lens cameras or drones with a fixed point of view. The paper can be extended to accommodate dynamic camera angles and movements, which would increase the model's adaptability to real-world scenarios with varying perspectives. To tackle the challenge of over-capturing crowds in sparsely populated scenes during feature detection, an innovative approach could be the incorporation of object-detection techniques prior to feature detection. By focusing exclusively on human detection, the paper could enhance the accuracy of motion information capture while eliminating unnecessary noise from the background. Moreover, the utilization of MII presents an exciting opportunity. Training a separate model for anomaly detection on MII images could significantly expand the paper's scope, enabling the identification of irregularities not detectable by the existing methodologies.
## VIII Declarations
### **Funding:** No funds, grants, or other support was received.
### **Conflict of Interest:** The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
### **Data Availability:** Data will be made available on reasonable request.
### **Code Availability:** Code will be made available on reasonable request.
|
2307.04425 | Identification of Hemorrhage and Infarct Lesions on Brain CT Images
using Deep Learning | Head Non-contrast computed tomography (NCCT) scan remain the preferred
primary imaging modality due to their widespread availability and speed.
However, the current standard for manual annotations of abnormal brain tissue
on head NCCT scans involves significant disadvantages like lack of cutoff
standardization and degeneration identification. The recent advancement of deep
learning-based computer-aided diagnostic (CAD) models in the multidisciplinary
domain has created vast opportunities in neurological medical imaging.
Significant literature has been published earlier in the automated
identification of brain tissue on different imaging modalities. However,
determining Intracranial hemorrhage (ICH) and infarct can be challenging due to
image texture, volume size, and scan quality variability. This retrospective
validation study evaluated a DL-based algorithm identifying ICH and infarct
from head-NCCT scans. The head-NCCT scans dataset was collected consecutively
from multiple diagnostic imaging centers across India. The study exhibits the
potential and limitations of such DL-based software for introduction in routine
workflow in extensive healthcare facilities. | Arunkumar Govindarajan, Arjun Agarwal, Subhankar Chattoraj, Dennis Robert, Satish Golla, Ujjwal Upadhyay, Swetha Tanamala, Aarthi Govindarajan | 2023-07-10T09:00:12Z | http://arxiv.org/abs/2307.04425v1 | # Identification of Hemorrhage and Infarct Lesions on Brain CT Images using Deep Learning
###### Abstract
Head Non-contrast computed tomography (NCCT) scan remain the preferred primary imaging modality due to their widespread availability and speed. However, the current standard for manual annotations of abnormal brain tissue on head-NCCT scans involves significant disadvantages like _lack of cutoff standardization_ and _degeneration identification_. The recent advancement of deep learning-based computer-aided diagnostic (CAD) models in the multidisciplinary domain has created vast opportunities in neurological medical imaging. Significant literature has been published earlier in the automated identification of brain tissue on different imaging modalities. However, determining Intracranial hemorrhage (ICH) and infarct can be challenging due to image texture, volume size, and scan quality variability. This retrospective validation study evaluated a DL-based algorithm identifying Intracranial hemorrhage (ICH) and infarct from head-NCCT scans. The head-NCCT scans dataset was collected consecutively from multiple diagnostic imaging centers across India. The study exhibits the potential and limitations of such DL-based software for introduction in routine workflow in extensive healthcare facilities.
**Keywords:** Computer-aided-diagnostic solution, Non-contrast CT, Intracranial hemorrhage, Infarcts, Deep learning, Clinical evaluation
## 1 Introduction
In cognitive neuroscience, neuropsychological investigation of stroke patients is widely utilized in advancing our knowledge of brain functions. Considerable insight into the relation of brain function to its anatomy has been gained via correlation analysis between physical brain damage and impaired behavior [1][2][3]. Strokes can be broadly classified into two types: 1) _Intracranial hemorrhage (ICH)_: the rupture of a blood vessel within the brain, which causes bleeding. The common factors related to the cause of ICH are advanced age, heavy alcohol usage, and high blood pressure (hypertension) [4]. As per some recent studies, although ICH accounts for 10-15% of all stroke-related deaths, mortality and morbidity have not changed over the last thirty years, particularly in developing countries [5]. 2) _Ischemic stroke or infarct_: the interruption of blood flow due to a blood clot. Infarct is generally caused by the buildup of plaques (atherosclerosis) over time in the arteries. Globally, over 13.7 million individuals have a stroke each year, of which approximately 70%, i.e., 9.5 million, are infarcts [6]. Presently, mapping of the stroke lesion is regularly done using computed tomography (CT) and magnetic resonance imaging (MRI). MR (T1-weighted and T2-weighted) anatomical images are acquired as a part of routine practice for stroke patients. In patients with suspected stroke and negative CT scans, MRI can also be performed. After the first few hours of onset, ischemic stroke can be identified using MRI. Additionally, the differentiation between irreparably damaged brain tissue and tissue at risk due to infarction can be made using MRI. However, CT is the preferred imaging modality over MRI in acute stroke care units and clinical trials due to the reduced exclusion criteria compared to MRI, as well as affordability, speed, and accessibility [7]. In CT, hemorrhage is perceived as a bright (hyper-dense) region exhibiting sharp contrast, and infarct as a dark (hypo-dense) region, depending on the time elapsed since onset.
Manual annotation of abnormal brain tissue by trained neuroradiologists is currently the standard method for lesion identification [8]. However, manual annotation of abnormal brain tissue has many disadvantages [9]: 1) _Lack of cutoff standardization:_ there is no standard protocol for an explicit cutoff, particularly around the ventricles, to differentiate lesioned and non-lesioned tissues; as a result, this approach produces large variability and lacks reproducibility across operators. 2) _Degeneration identification:_ the stroke-induced degeneration occurring outside the lesion in chronic stroke patients is not captured in the standard manual annotation process, even though this degeneration has a significant clinical impact on patients. The recent advancement of deep learning-based computer-aided diagnostic (CAD) models in medical imaging and signal processing can significantly assist in overcoming the existing challenges [10][11][12][13][14]. In addition, manual editing combined with an automated detection solution for hypo- or hyper-dense regions, which remains under operator supervision, can assist in overcoming the present challenges [15]. More recently, a study using large CT datasets to remove the inter-subject variability in brain lesion characterization using an automated approach was proposed [16]. Several state-of-the-art algorithms have been proposed for lesion segmentation in MR images over the past few years, but very few have been developed to address stroke lesions on CT scans. Most of the earlier work published to validate
automated solutions was directed toward identifying ICH. As ICH appears bright in CT scans, developing an automated solution based on supervised or unsupervised learning algorithms, or on morphological features extracted from labeled images, to differentiate between true lesioned and non-lesioned tissues is less challenging [17][18]. Infarct identification, on the other hand, is a less popular problem statement among researchers compared to ICH detection due to its challenging nature. To address this issue, a rule-based approach based on seeded region-growing algorithms was very recently proposed, extracting hand-crafted features such as relative position with respect to an axis of symmetry, texture, and brightness [19]. However, the primary disadvantage of this study is that seeded region-growing algorithms may not be able to define the boundaries of the stroke region distinctly.
In this study, we have evaluated an Artificial Intelligence (AI) based automated CAD algorithm based on deep learning, capable of identifying ICH and infarct on Head Non-contrast Computed Tomography (Head-NCCT) scans. The solution has been previously validated for detecting ICH on Head-NCCT scan images [14]. The Institutional Review Board (IRB) has approved this retrospective study. We demonstrated the effectiveness and validity of the automated CAD solution in detecting ICH and infarct and in quantifying infarct on Head-NCCT scans. The proposed validation will provide a rapid and efficient tool for both research and clinical application and will assist in the broader adoption of automated CAD solutions at extensive clinical facilities.
## 2 Material and Methods
The study was a HIPAA-compliant retrospective study with Institutional Review Board (IRB) approval from the Royal Pune Independent Ethics Committee (RPIEC) (IRB No. RPIEC240123). Informed consent was obtained from all participants. All methods were carried out in accordance with relevant guidelines and regulations.
The primary objective was to evaluate the commercially available deep learning-based algorithm qER (Qure.ai Technologies, Mumbai, India) in terms of the Area Under the Receiver Operating Characteristic Curve (AUC) in triaging Head-NCCT scans for the detection and quantification of infarcts. It was estimated that a minimum sample of 418 Head-NCCT scans (167 scans with radiologist-confirmed infarcts, 251 scans without infarcts, 2:3 ratio) would provide a minimum of 80% power to estimate an anticipated AUC of 80% with 7% precision, assuming a Type I error rate of 5% [20][21]. The Head-NCCT scans and their signed-off original radiological reports, performed from 01-September-2021 to 31-August-2022, were acquired from diagnostic imaging centers across India. A total of 1878 Head-NCCT scans were collected. The original radiological reports of these scans were subjected to a manual review by a clinical data abstractor to classify the scans into infarct and non-infarct reported scans based on the original radiological report. A stratified random sample of 500 Head-NCCT scans, stratified by the presence and absence of infarct (based on the original radiological reports), was then selected for independent ground truthing by a radiologist with more than fourteen years of experience. The inclusion criteria were Head-NCCT scans with a soft reconstruction kernel covering the complete brain and slice thickness \(\leq 6mm\). The exclusion criteria were Head-NCCT scans with obvious
postoperative defects or from patients who had previously undergone brain surgery, Head-NCCT scans with artifacts such as burr holes, shunts, or clips, Head-NCCT scans containing metal artifacts or excessive motion artifacts, and Head-NCCT scans containing missing or improperly ordered slices. The ground truther radiologist had access to the original head NCCT scan image but was blinded to the original radiology report. The ground truther reviewed all the Head-NCCT scans and provided segmentation boundaries for infarcts and intracranial hemorrhages. The ground truther radiologist also provided a binary response for the presence or absence of cranial fracture, midline shift, and mass effect. The ground truth output, not the original radiological report, was the reference standard for all downstream statistical analyses.
The sensitivity and specificity were estimated based on a default device threshold (available from the manufacturer based on internal testing), and the optimum threshold was determined based on Youden's index. The 95% confidence intervals for sensitivity and specificity are reported based on the exact method [22]. The AUC and its 95% confidence interval (CI) were estimated based on the empirical method and the DeLong methodology, respectively [23]. The segmentation provided by the ground truther radiologist was utilized for the quantification analysis of the error in the infarct volume predicted by the DL-based algorithm. Absolute errors in infarct volume estimation in milliliters (mL) and summary statistics of the absolute errors are reported. The statistical analyses were performed using RStudio (RStudio version 2022.07.1, R version 4.2.1) and Python version 3.9.7.
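As an illustration of this analysis pipeline, the threshold-based metrics can be computed along the following lines. This is a minimal sketch, assuming the algorithm's per-scan confidence scores and the binary ground-truth labels are available as NumPy arrays; the file names and function names are illustrative and are not part of the evaluated software, and the DeLong CI for the AUC is omitted.

```python
import numpy as np
from scipy.stats import beta
from sklearn.metrics import roc_auc_score, roc_curve

def exact_ci(successes, total, alpha=0.05):
    """Clopper-Pearson ("exact") confidence interval for a proportion (95% by default)."""
    lo = beta.ppf(alpha / 2, successes, total - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, total - successes) if successes < total else 1.0
    return lo, hi

def sn_sp_at(scores, labels, threshold):
    """Sensitivity and specificity (with exact CIs) at a given operating threshold."""
    pred = scores >= threshold
    tp, fn = np.sum(pred & (labels == 1)), np.sum(~pred & (labels == 1))
    tn, fp = np.sum(~pred & (labels == 0)), np.sum(pred & (labels == 0))
    sn, sp = tp / (tp + fn), tn / (tn + fp)
    return (sn, exact_ci(tp, tp + fn)), (sp, exact_ci(tn, tn + fp))

# scores: per-scan algorithm confidence; labels: 1 = abnormality present (ground truth)
scores = np.load("infarct_scores.npy")    # hypothetical file names
labels = np.load("infarct_labels.npy")

auc = roc_auc_score(labels, scores)       # empirical AUC
fpr, tpr, thr = roc_curve(labels, scores)
optimum = thr[np.argmax(tpr - fpr)]       # Youden's index J = SN + SP - 1
print(auc, sn_sp_at(scores, labels, optimum))
```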
## 3 Experimental Results
### Identification of ICH and Infarct
The ground truthing was completed for 428 Head-NCCT scans, while 22 scans were excluded based on the inclusion and exclusion criteria mentioned in Section 2. A total of 187
Table 1: Distribution of Head-NCCT scan images used for the analysis, stratified by findings as determined by ground truth.

| Infarct: N (%) | Other subgroups | n# | Non-infarct: N (%) | Other subgroups | n# |
| --- | --- | --- | --- | --- | --- |
| **187 (43.7%)** | No other target abnormality* | 170 | **241 (56.3%)** | No other target abnormality* | 212 |
| | ICH | 7 | | ICH | 14 |
| | Cranial fracture | 5 | | Cranial fracture | 18 |
| | Midline shift | 2 | | Midline shift | 2 |
| | Mass effect | 7 | | Mass effect | 8 |

\* Target abnormalities were custom defined and consisted of infarct, intracranial hemorrhage, cranial fracture, midline shift, and mass effect.
\# Denotes the absolute number of scans in the subgroups; the numbers will not add up to N because one scan may contain multiple target abnormalities.
Head-NCCT scans confirmed (based on ground truth) the presence of infarcts, while 241 Head-NCCT scans confirmed the absence of any infarcts. This distribution of scans with and without infarcts met the minimum sample size requirements described earlier in Section 2. In addition, 21 scans with intracranial hemorrhages (ICH) and 23 scans with cranial fractures were present in the sample. A total of 212 (49.5%) of the 428 Head-NCCT scans did not contain any infarcts, intracranial hemorrhages, cranial fracture, midline shift, or mass effect. The distribution of the Head-NCCT scans is shown in Table 1.
It can be observed from Table 2 that the DL-based algorithm achieved an AUC of 86.8% (95% CI: 83.4 - 90.2) in detecting scans with the presence of infarcts, while the sensitivity and specificity were estimated to be 66.8% (95% CI: 59.6 - 73.5) and 86.7% (95% CI: 81.8 - 90.7), respectively, at the default threshold. The optimum operating threshold was determined using Youden's index. At this optimum threshold, it was observed that the sensitivity of the DL-based algorithm improved to 80.2% (95% CI: 73.8 - 85.7) without a substantial reduction in specificity, 80.1% (95% CI: 74.5 - 84.9). For ICH, an AUC of 94.8% (95% CI: 87.4 - 100) was achieved. There was no change in sensitivity between the default and optimum thresholds, while the specificity increased by 3% at the optimum threshold. For cranial fracture, comparing the default and optimum thresholds, an enhancement in sensitivity of 15.8% was observed, while the specificity decreased by 2.7%. In Fig. 1, the AUC-ROC plots for cranial fracture, ICH, and infarct are given.
### Quantification of Infarct Volume
The DL-based algorithm for identifying infarcts produces the infarct volume in mL. A total of 150 true positive scans, for which both the DL-based algorithm's predicted volume and the ground truth volume were available, were used for this analysis. The reference standard was the radiologist annotation done for each Head-NCCT scan image.
Table 2: Performance evaluation of the DL-based algorithm at the default and optimum thresholds. AUC: area under the curve; SN: sensitivity; SP: specificity; TP: true positive; TN: true negative; AP: all positive (AP = TP + FN); AN: all negative (AN = TN + FP). Default threshold: a threshold determined from internal testing results during model training.

| Target abnormality | AUC (95% CI) | SN at default threshold (95% CI) (TP/AP) | SP at default threshold (95% CI) (TN/AN) | SN at optimum threshold (95% CI) (TP/AP) | SP at optimum threshold (95% CI) (TN/AN) |
| --- | --- | --- | --- | --- | --- |
| Infarct | 86.8% (83.4 - 90.2) | 66.8% (59.6 - 73.5) (125/187) | **86.7%** (81.8 - 90.7) (209/241) | **80.2%** (73.8 - 85.7) (150/187) | 80.1% (74.5 - 84.9) (193/241) |
| Intracranial hemorrhage | 94.8% (87.4 - 100) | 90.5% (69.6 - 98.8) (19/21) | 93.1% (90.2 - 95.4) (379/407) | **90.5%** (69.6 - 98.8) (19/21) | **96.1%** (93.7 - 97.7) (391/407) |
| Cranial fracture | 95.2% (90.1 - 100) | 78.3% (56.3 - 92.5) (18/23) | **96.8%** (94.6 - 98.3) (392/405) | **91.3%** (72.0 - 98.9) (21/23) | 94.1% (91.3 - 96.2) (381/405) |
The mean absolute error (MAE) was 4.7 mL over all scans. Based on the ground truth volume, the scans were further divided into two categories: scans with 0 - 5 mL and with \(>\) 5 mL infarct volume, respectively. It can be observed from Table 3 that the MAE for the 0 - 5 mL and \(>\) 5 mL scans was found to be 3.2 mL and 8.557 mL, respectively. From the scatter plot of infarct volumes in Fig. 2 (1), it can be observed that, with an increase in infarct volume, there is a positive correlation between the DL-based algorithm volume and the ground-truth annotated volume. The Bland-Altman plot showing good agreement between the ground truther annotation and the volume predicted by the DL-based algorithm is shown in Fig. 2 (2). A minimal computational sketch of this volume-error analysis is given after the table and figures below.
Table 3: Performance evaluation of the DL-based algorithm in quantifying infarct (absolute errors in mL); MAE: mean absolute error; STD: standard deviation.

| Infarct volume | NI | MAE | Median | STD | 10th percentile | 90th percentile |
| --- | --- | --- | --- | --- | --- | --- |
| 0 - 5 mL | 108 | 3.200 | 0.158 | 1.121 | 0.032 | 2.746 |
| > 5 mL | 42 | 8.557 | 25.514 | 34.751 | 5.657 | 80.236 |
| Overall | 150 | 4.700 | 0.600 | 24.397 | 0.042 | 42.080 |
Figure 1: AUC-ROC plot: (i) Cranial fracture, (ii) ICH, and (iii) Infarct.
Figure 2: 1) Scatter plot for infarct volume quantification with volume distribution of 0-5 mL and \(>\) 5 mL. 2) Bland-Altman plot for infarct volume quantification
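The volume-error summary and the Bland-Altman quantities described above can be reproduced with a short script. This is a minimal sketch assuming the predicted and ground-truth infarct volumes (in mL) are available as paired arrays; the array and file names are hypothetical.

```python
import numpy as np

def volume_error_summary(pred_ml, gt_ml):
    """Absolute-error summary statistics for predicted vs. ground-truth volumes (mL)."""
    err = np.abs(pred_ml - gt_ml)
    return {
        "N": err.size,
        "MAE": err.mean(),
        "Median": np.median(err),
        "STD": err.std(ddof=1),
        "P10": np.percentile(err, 10),
        "P90": np.percentile(err, 90),
    }

pred = np.load("predicted_volumes.npy")   # hypothetical inputs: DL-predicted volumes
gt = np.load("ground_truth_volumes.npy")  # radiologist-annotated volumes

# Stratify by ground-truth volume, as in Table 3
small = gt <= 5.0
print(volume_error_summary(pred[small], gt[small]))    # 0 - 5 mL
print(volume_error_summary(pred[~small], gt[~small]))  # > 5 mL
print(volume_error_summary(pred, gt))                  # overall

# Bland-Altman quantities: bias and 95% limits of agreement of the paired differences
diff = pred - gt
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f} mL, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}] mL")
```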
### Visual Explanations of DL-based Algorithm
The experimental findings show that the evaluated DL-based algorithm achieved superior performance, as represented in Tables 2 and 3. In most DL-based models, the rationale behind the prediction is not revealed explicitly. Since these DL black-box models cannot be decomposed into intuitive and comprehensible modules, they are hard to interpret. Consequently, end-users develop skepticism and find the models difficult to trust. The emergence of explainable artificial intelligence (XAI) is an essential aspect of model transparency and the social right to an explanation of DL inferences [24],[25]. XAI encompasses a better understanding of incoherent output, isolates failure modes, and builds trust in intelligent systems for effective incorporation into our everyday lives [26]. The evaluated DL-based algorithm outputs a boundary around the infarcts, which reveals the rationale behind its superior performance. In Fig. 3 it can be observed that, for both small and large infarct volumes on Head-NCCT scans, the model-predicted boundary clearly overlaps with the ground truther boundary.
Figure 3: Two representative head NCCT scan images with infarct. (a) The head NCCT scan with the presence of infarct, (b) the ground truther annotation around the region of infarct (red boundary), (c) DL-based algorithm volume quantification (blue boundary) overlapping with the ground truther annotation (red boundary).
## 4 Discussion
This retrospective study evaluated a deep learning algorithm for detecting infarcts in Head-NCCT scans. The algorithm had a good AUC of about 86% in detecting infarcts. After adjusting the threshold, a balanced sensitivity of 80.2% and specificity of 80.1% were estimated for detecting infarcts. The algorithm's sensitivity in detecting infarcts in scans with no other target abnormalities was found to be 80% (136 correctly detected out of 170), which did not differ from the overall sensitivity at the optimum threshold. This indicates the robustness of the DL-based algorithm in identifying infarcts, with a negligible drop in sensitivity in the presence of other abnormalities. Additionally, it is to be noted that the sensitivity of Head-NCCT scans in detecting infarcts is generally considered low, especially in the case of hyperacute and acute ischemic strokes. In one study, the sensitivity of detecting acute ischemic stroke on head NCCT scans ranged from 57% to 71%, with considerable inter-reader variability [27][28]. Additionally, we evaluated the performance in detecting ICH and cranial fracture, and both had excellent AUCs. However, the interpretation is limited by the low sample sizes for these two abnormalities. Our results also show that threshold adjustments might be needed before using such algorithms routinely for clinical decision support.
Deep learning and big data models are often called "black boxes" and represent substantial obstacles to introducing intuitive and comprehensible modules into actual clinical practice; these models are challenging to interpret. However, the DL-based method validated in this study provides a post-hoc attention tool for the clinician to identify the lesion visually. In addition, the DL-based algorithm validated in this study encompasses a better understanding of incoherent output, isolates failure modes, and builds trust in intelligent systems for effective incorporation into routine clinical practice. Moreover, the proposed validation of the DL-based algorithm will be beneficial in resource-constrained areas with a limited number of radiologists or with access only to teleradiology facilities.
Our study has limitations. First, the differentiation of infarcts into acute and chronic was not analyzed. Second, the ground truthing for the head NCCT scan images with the presence of infarcts was done by a single radiologist. Third, there were not enough scans with ICH and cranial fracture to estimate performance metrics with sufficient precision.
## 5 Conclusion
The present study evaluated a DL-based algorithm to determine the presence and absence of ICH and infarcts on Head-NCCT scans. The DL-based algorithm demonstrated a high detection performance in identifying infarcts, ICH, and cranial fracture. Additionally, the DL-based algorithm exhibited a positive correlation between its predicted volume and the ground-truth annotated volume. The study demonstrated the performance of ICH detection and of infarct detection and quantification, indicating the feasibility of introducing such DL algorithms into routine workflow in extensive healthcare facilities.
## 6 Data Availability
The datasets used or analyzed during the current study are available from the corresponding author on reasonable request.
|
2306.13270 | Spintronic reservoir computing without driving current or magnetic field | Recent studies have shown that nonlinear magnetization dynamics excited in
nanostructured ferromagnets are applicable to brain-inspired computing such as
physical reservoir computing. The previous works have utilized the
magnetization dynamics driven by electric current and/or magnetic field. This
work proposes a method to apply the magnetization dynamics driven by voltage
control of magnetic anisotropy to physical reservoir computing, which will be
preferable from the viewpoint of low-power consumption. The computational
capabilities of benchmark tasks in single MTJ are evaluated by numerical
simulation of the magnetization dynamics and found to be comparable to those of
echo-state networks with more than 10 nodes. | Tomohiro Taniguchi, Amon Ogihara, Yasuhiro Utsumi, Sumito Tsunegi | 2023-06-23T02:50:21Z | http://arxiv.org/abs/2306.13270v1 | # Spintronic reservoir computing without driving current or magnetic field
###### Abstract
Recent studies have shown that nonlinear magnetization dynamics excited in nanostructured ferromagnets are applicable to brain-inspired computing such as physical reservoir computing. The previous works have utilized magnetization dynamics driven by electric current and/or magnetic field. This work proposes a method to apply the magnetization dynamics driven by voltage control of magnetic anisotropy to physical reservoir computing, which will be preferable from the viewpoint of low-power consumption. The computational capabilities of benchmark tasks in a single magnetic tunnel junction (MTJ) are evaluated by numerical simulation of the magnetization dynamics and found to be comparable to those of echo-state networks with more than 10 nodes.
The recent development of neuromorphic computing with spintronics devices [1, 2, 3, 4], such as pattern recognition and associative memory, has provided a bridge between condensed matter physics, nonlinear science, and information science, and has become of great interest from both fundamental and practical viewpoints. In particular, the application of nonlinear magnetization dynamics in ferromagnets to physical reservoir computing [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19] is an exciting topic [1, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. Physical reservoir computing is a kind of recurrent neural network, which has recurrent interactions among a large number of neurons in an artificial neural network and, for example, recognizes a time sequence of input data, such as human voice and movies, from the dynamical response of nonlinear physical systems [19]. In reservoir computing, only the weights between the neurons and the output are trained, whereas the weights among the neurons are randomly given and fixed; therefore, a low computational cost of training is expected. It has been shown that several kinds of physical systems, such as optical circuits [10], soft matter [12], quantum matter [15], fluids [18], and spintronics devices, can be used as reservoirs for information processing [19].
In physical reservoir computing with spintronics devices, nonlinear magnetization dynamics has been excited in nanostructured ferromagnets by applying electric current and/or magnetic field. For example, spin-transfer effect [31, 32] has been frequently used to excite an auto-oscillation of the magnetization in magnetic tunnel junctions (MTJs) [1, 20, 21, 22, 24, 25, 26, 28, 29, 30], where the spin angular momentum from conducting electrons carrying electric current is transferred to ferromagnet and excites magnetization dynamics. It is, however, preferable to excite magnetization dynamics without driving current and magnetic field from the viewpoints of low-power consumption and simple implementation.
In this work, we propose that physical reservoir computing can be performed by magnetization dynamics induced by voltage control of magnetic anisotropy in solid devices [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50]. The voltage control of magnetic anisotropy is a fascinating technology as a low-power information-writing scheme in magnetoresistive random access memory, instead of using the spin-transfer torque effect. The application of an electric voltage to a metallic ferromagnet/insulator interface modifies the electron states near the interface [34, 36, 37] and/or induces a magnetic moment [46], and changes the magnetic anisotropy. The magnetization in the ferromagnetic metal changes its direction to minimize the magnetic anisotropy energy. Therefore, the voltage application can cause relaxation dynamics of the magnetization in the ferromagnet. In the practical application of nonvolatile random access memory, an external magnetic field is necessary to achieve a deterministic magnetization switching guaranteeing reliable writing [40, 41, 42, 43]. On the other hand, we notice that magnetization switching, as well as a magnetic field, is not a necessary condition for physical reservoir computing. Accordingly, the voltage control of magnetic anisotropy can be used to realize physical reservoir computing by spintronics devices without driving current or magnetic field. Here, we perform numerical simulation of the Landau-Lifshitz-Gilbert (LLG) equation and find that the computational capabilities of benchmark tasks in a single spintronics device are comparable to those of echo-state networks with more than 10 nodes.
### Model
#### LLG equation
The system under investigation is a cylinder-shaped MTJ schematically shown in Fig. 1(a), where the \(z\) axis is perpendicular to the film plane. The MTJ consists of ferromagnetic free layer, MgO insulator, and ferromagnetic reference layer. The ferromagnetic free layer has the perpendicular magnetic anisotropy, where the magnetic energy density is given by
\[\epsilon=\sum_{i=x,y,z}2\pi M^{2}N_{i}m_{i}^{2}+K_{1}\left(1-m_{z}^{2}\right)+K _{2}\left(1-m_{z}^{2}\right)^{2}. \tag{1}\]
The first term on the right-hand side in Eq. (1) represents the shape magnetic anisotropy energy density with the saturation magnetization \(M\) and the demagnetization coefficients \(N_{i}\). Since we assume the cylinder shape, \(N_{x}=N_{y}\). The unit vector pointing in the magnetization direction of the free layer is denoted as \(\mathbf{m}=(m_{x},m_{y},m_{z})\). The second and third terms are the first and second order magnetic anisotropy energy densities with the coefficients \(K_{1}\) and \(K_{2}\). Note that the energy density relates to the magnetic field inside the free layer as
\[\mathbf{H}=\left[H_{\text{K1}}+H_{\text{K2}}\left(1-m_{z}^{2}\right)\right]m_{ z}\mathbf{e}_{z}, \tag{2}\]
where \(H_{\text{K1}}=(2K_{1}/M)-4\pi M(N_{z}-N_{x})\) and \(H_{\text{K2}}=4K_{2}/M\); see also Methods. The magnetization in the reference layer points to the \(z\) direction, and therefore, \(m_{z}\) is experimentally measured through tunnel magnetoresistance effect.
The first order magnetic anisotropy energy coefficient \(K_{1}\) consists of the bulk and interfacial contributions, \(K_{\text{v}}\) and \(K_{\text{i}}\), and the voltage-controlled magnetic anisotropy effect described as \(K_{1}d=K_{\text{v}}d+K_{\text{i}}-\eta\mathscr{E}\). The thickness of the ferromagnetic free layer is \(d\), whereas \(\mathscr{E}=V/d_{\text{i}}\) is the electric field with the voltage \(V\) and the thickness of the insulator \(d_{\text{i}}\). In typical MTJs consisting of a CoFeB free layer and an MgO insulator, \(K_{\text{i}}\) dominates in \(K_{1}\), where \(K_{\text{i}}\) increases as the composition of Fe increases [51, 52, 53]. It can reach on the order of 1.0 mJ/m\({}^{2}\) at maximum, which in terms of magnetic field, \(2K_{\text{i}}/(Md)\), is typically on the order of 1 T. Note that the magnitude of the shape magnetic anisotropy field \(-4\pi M(N_{z}-N_{x})\) is also on the order of 1 T, where a typical value of the saturation magnetization in CoFeB, i.e., \(M\) of about 1000 emu/c.c., is assumed. As a result of the competition between them, the ferromagnetic free layer in the absence of the voltage application can be either in-plane or perpendicular-to-plane magnetized [51, 52, 53]. The voltage control of magnetic anisotropy also modifies the magnetic anisotropy field \(H_{\text{K1}}\) through the modification of the electron occupation states near the ferromagnetic interface [34, 36, 37] and/or the generation of the magnetic dipole moment [46]. The coefficient of the voltage-controlled magnetic anisotropy effect, \(\eta\), has recently reached about 300 fJ/(Vm) in experiments [45, 50], whereas the thickness of the insulator is about 2.5 nm. A typical value of the applied voltage is about 0.5 V at maximum [48]. Thus, the tunable range of the magnetic anisotropy by the voltage application in terms of the magnetic field, \((2|\eta|V)/(Mdd_{\text{i}})\), is about 1.0 kOe, where we assume that \(M=1000\) emu/c.c., \(d=1\) nm, \(d_{\text{i}}=2.5\) nm, and \(|\eta|=250\) fJ/(V m). Note that the sign of the voltage-controlled magnetic anisotropy effect depends on that of the voltage. Summarizing these contributions, \(H_{\text{K1}}\) in the presence of the voltage can also be either positive or negative, depending on the materials and their compositions, as well as the magnitude and sign of the applied voltage. For example, Ref. [38] uses an in-plane magnetized ferromagnet, i.e., \(H_{\text{K1}}<0\) for \(V=0\). The voltage control of magnetic anisotropy in Ref. [38] enhances the perpendicular anisotropy \(K_{1}\) and makes \(H_{\text{K1}}\) positive at nonzero \(V\). On the other hand, perpendicularly magnetized free
Figure 1: (a) Schematic illustration of an MTJ. The unit vector pointing to the magnetization direction in the ferromagnetic free layer is \(\mathbf{m}\). The \(z\) axis is perpendicular to the film plane. (b) An example of the time evolutions of \(m_{x}\) (red), \(m_{y}\) (blue), and \(m_{z}\) (black). (c) Trajectory of the relaxation dynamics on a sphere. In (b) and (c), the first order magnetic anisotropy field \(H_{\text{K1}}\) is changed from \(-0.1H_{\text{K2}}\) to \(-0.9H_{\text{K2}}\) by the voltage application. The red circle and blue triangle in (c) represent the initial and final states of the dynamics.
layers where \(H_{\text{K1}}>0\) for \(V=0\) have been used in Ref. [43]. Contrary to \(H_{\text{K1}}\), the dependence of \(H_{\text{K2}}\propto K_{2}\) on the applied voltage is still unclear, where Ref. [47] reports that \(H_{\text{K2}}\) is approximately independent of the voltage while Ref. [48] observes the voltage dependence of \(H_{\text{K2}}\). Throughout this paper, for simplicity, we assume that only \(H_{\text{K1}}\) depends on the voltage. As mentioned in the following, we performed numerical simulation by changing the value of \(H_{\text{K1}}\). It means that we do not specify the size (the thickness and cross-section area) of MTJ explicitly because \(H_{\text{K1}}\) includes the information of the shape of MTJ through the demagnetization coefficients \(N_{i}\). It is, however, useful to mention that macrospin model has been proven to work well to describe the magnetization dynamics for MTJ whose typical size is 1-2 nm in thickness and the diameter less than 200 nm [40, 42, 49].
In typical experiments on voltage control of magnetic anisotropy, a relatively thick (typically 1.5-2.5 nm) MgO barrier is used as an insulator [42, 43, 49], compared with MTJs manipulated by spin-transfer torque, where the thickness of the barrier is about 1.0 nm [54]. As a result, the resistance of MTJs used for experiments on voltage control of magnetic anisotropy, on the order of 10-100 k\(\Omega\), is two or three orders of magnitude larger than that of MTJs used for spin-transfer torque experiments. On the other hand, the maximum voltage used in both experiments is almost identical. Accordingly, the current flowing in MTJs used for experiments on voltage control of magnetic anisotropy is two or three orders of magnitude smaller than that used for spin-transfer torque experiments (see also Methods). In this sense, the driving force of the magnetization dynamics is the voltage-controlled magnetic anisotropy effect, although the current cannot be completely zero in experiments. As mentioned in Methods, a typical value of the current \(I\) flowing in the MTJ is on the order of 1 \(\mu\)A, while the current used in physical reservoir computing utilizing spin-transfer torque is on the order of 1 mA [29]. On the other hand, the magnitude of the voltage \(V\) applied to the MTJ is nearly the same for experiments on both the voltage control of magnetic anisotropy and spin-transfer effects. Accordingly, using the voltage control of magnetic anisotropy effect could reduce energy consumption by three orders of magnitude.
The magnetization in equilibrium points to the direction at which the energy density is minimized. For example, when \(H_{\text{K1}}>(<)0\) and \(H_{\text{K2}}=0\), the energy is minimized when the magnetization is parallel (perpendicular) to the \(z\) axis. Another example is studied in Ref. [55], where, if \(H_{\text{K1}}<0\) and \(|H_{\text{K1}}|<H_{\text{K2}}\), the energy density \(\epsilon\) is minimized when \(m_{z}=\pm\sqrt{1-(|H_{\text{K1}}|/H_{\text{K2}})}\). When the voltage is applied to the MTJ and the minimum energy state is changed as a result, the magnetization relaxes to the state. The relaxation dynamics is described by the LLG equation,
\[\frac{d\mathbf{m}}{dt}=-\gamma\mathbf{m}\times\mathbf{H}+\alpha\mathbf{m} \times\frac{d\mathbf{m}}{dt}, \tag{3}\]
where \(\gamma\) and \(\alpha\) are the gyromagnetic ratio and the Gilbert damping constant, respectively. Note that the macrospin model works well to describe the magnetization dynamics driven by the voltage application [40, 42, 44]. The values of the parameters used in the following are derived from typical experiments [35, 38, 39, 40, 41, 42, 43, 44, 47]. The gyromagnetic ratio and the Gilbert damping constant are \(\gamma=1.764\times 10^{7}\) rad/(Oe s) and \(\alpha=0.01\). The second order magnetic anisotropy field \(H_{\text{K2}}\) is 500 Oe.
Let us show an example of the magnetization dynamics driven by the voltage control of magnetic anisotropy. We firstly set \(H_{\text{K1}}\) to be \(H_{\text{K1}}^{(0)}=-0.1H_{\text{K2}}=-50\) Oe and solve the LLG equation with an arbitrary initial condition. The magnetization saturates to a certain point where \(m_{z}\) saturates to \(m_{z}\to m_{z}^{(0)}\simeq 0.95\). We use this state as a new initial state and solve the LLG equation by changing \(H_{\text{K1}}\) to \(H_{\text{K1}}^{(1)}=-0.9H_{\text{K2}}=-450\) Oe. Then, the magnetization starts to move to a new stable state where \(m_{z}\) saturates to \(m_{z}\to m_{z}^{(1)}\simeq 0.32\). Figures 1(b) and 1(c) show time evolution of \(\mathbf{m}\) and its spatial orbit from the initial state of \(m_{z}^{(0)}\) to the final state \(m_{z}^{(1)}\). We confirm that the initial and final states are those expected from the minimum energy state mentioned above, i.e., \(m_{z}^{(0)}=\sqrt{1-|H_{\text{K1}}^{(0)}|/H_{\text{K2}}}=\sqrt{1-0.1}\simeq 0.95\) and \(m_{z}^{(1)}=\sqrt{1-|H_{\text{K1}}^{(1)}|/H_{\text{K2}}}=\sqrt{1-0.9}\simeq 0.32\). We emphasize that \(m_{z}\) monotonically changes with respect to the change of \(H_{\text{K1}}\). Since the value of \(H_{\text{K1}}\) can be manipulated by the voltage application, the time evolution of \(m_{z}\) can be used to identify the value of the applied voltage. The estimation of the input data, which is the sequence of the applied voltage in the present case, from the dynamical response of physical system is the aim of physical reservoir computing. Therefore, the magnetization dynamics driven by the voltage control of magnetic anisotropy is applicable to physical reservoir computing. In the following, we evaluate its computational ability.
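To illustrate how such relaxation curves can be generated, the LLG equation (3) with the field of Eq. (2) can be integrated numerically. The following is a minimal Python sketch using the fourth-order Runge-Kutta scheme with a 1 ps step (as stated in Methods) and the parameter values quoted above; the function names and the total integration time are choices made for this example only.

```python
import numpy as np

GAMMA = 1.764e7   # gyromagnetic ratio [rad/(Oe s)]
ALPHA = 0.01      # Gilbert damping constant
HK2 = 500.0       # second-order anisotropy field [Oe]

def field(m, hk1):
    """Effective field of Eq. (2): H = [H_K1 + H_K2 (1 - m_z^2)] m_z e_z."""
    return np.array([0.0, 0.0, (hk1 + HK2 * (1.0 - m[2] ** 2)) * m[2]])

def llg_rhs(m, hk1):
    """Explicit LLG: dm/dt = -(gamma/(1+alpha^2)) m x H - (alpha*gamma/(1+alpha^2)) m x (m x H)."""
    h = field(m, hk1)
    mxh = np.cross(m, h)
    pref = GAMMA / (1.0 + ALPHA ** 2)
    return -pref * mxh - ALPHA * pref * np.cross(m, mxh)

def run(m0, hk1, t_end, dt=1e-12):
    """Fourth-order Runge-Kutta integration; returns the m_z trace."""
    m, mz = np.array(m0, dtype=float), []
    for _ in range(int(t_end / dt)):
        k1 = llg_rhs(m, hk1)
        k2 = llg_rhs(m + 0.5 * dt * k1, hk1)
        k3 = llg_rhs(m + 0.5 * dt * k2, hk1)
        k4 = llg_rhs(m + dt * k3, hk1)
        m = m + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        m /= np.linalg.norm(m)   # keep |m| = 1
        mz.append(m[2])
    return np.array(mz)

# Start from the equilibrium of H_K1 = -0.1 H_K2 and switch to H_K1 = -0.9 H_K2, as in Fig. 1
m0 = [np.sqrt(0.1), 0.0, np.sqrt(0.9)]   # m_z = sqrt(1 - 0.1) ~ 0.95
mz = run(m0, -0.9 * HK2, t_end=3e-7)
print(mz[-1])                            # approaches sqrt(1 - 0.9) ~ 0.32
```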
## Results
### Memory capacity
The ability of a physical system for reservoir computing has been quantified by the memory capacity [15, 18, 20, 25, 28, 30]. The memory capacity corresponds to the number of past data a physical reservoir can store. For example, let us imagine injecting a random binary input \(b=0\) or 1 into the reservoir, as done in experiments [21, 25]. The input data are often injected as pulses with a pulse width of \(t_{\text{p}}\), i.e., the value of \(b\) is constant during time \(t_{\text{p}}\). Therefore, it is convenient to add a suffix \(k=1,2,\cdots\) to \(b\) as \(b_{k}\) to distinguish the order of the input data. We also introduce an integer \(D=0,1,2,\cdots\), called delay, characterizing the order of the
past input data. In this case,
\[y_{k,D}^{\text{STM}}=b_{k-D}, \tag{4}\]
are called target data of short-term memory (STM) task. We predict the value of the target data from the output of the reservoir and evaluate the reproducibility. The predicted data are called system output. The reproducibility is quantified by the correlation coefficient between the target data and system output. Roughly speaking, if the reservoir can reproduce the past data up to \(D\), the STM capacity is defined as \(D\). There is another kind of memory capacity, called parity-check (PC) capacity, where the target data are defined as
\[y_{k,D}^{\text{PC}}=\sum_{\ell=0}^{D}b_{k-D+\ell}\ \left(\text{mod2}\right). \tag{5}\]
According to their definitions, the STM and PC capacities quantify the number of the target data the reservoir can store, where the target data are defined as linear and nonlinear transformations of the input data, respectively. A large memory capacity means that reservoir can store, recognize, and/or predict large data. See also Methods for the detail of the evaluation method of these capacities.
In the present system, the random binary inputs are injected as voltage pulses, which change the first order magnetic anisotropy field \(H_{\text{K1}}\) as
\[H_{\text{K1}}=H_{\text{K1}}^{(0)}\left(1-b_{k}\right)+H_{\text{K1}}^{(1)}b_{k}. \tag{6}\]
Accordingly, when the input is \(b_{k}=0\) (1), the value of \(H_{\text{K1}}\) is \(H_{\text{K1}}^{(0)}\) [\(H_{\text{K1}}^{(1)}\)]. In the following, we fix \(H_{\text{K1}}^{(0)}=-50\) Oe, i.e., \(H_{\text{K1}}^{(0)}/H_{\text{K2}}=-0.1\), whereas \(H_{\text{K1}}^{(1)}\) varies in the range of \(-450\leq H_{\text{K1}}^{(1)}\leq-100\) Oe, i.e., \(-0.9\leq H_{\text{K1}}^{(1)}/H_{\text{K2}}\leq-0.2\). Figures 2(a) and 2(b) show the STM and PC capacities as a function of \(H_{\text{K1}}^{(1)}\) and the pulse width of the input data. The highest value of the STM capacity, 3.29, is found at the conditions of \(H_{\text{K1}}^{(1)}=-430\) Oe and \(t_{\text{p}}=69\) ns, as shown in Fig. 2(c). On the other hand, the highest value of the PC capacity, 3.40, is found at the conditions of \(H_{\text{K1}}^{(1)}=-445\) Oe and \(t_{\text{p}}=43\) ns, as shown in Fig. 2(d).
### NARMA task
Another benchmark task to quantify the computational ability of physical system to reservoir computing is nonlinear autoregressive moving average (NARMA) task [12, 15, 18, 30, 56]. NARMA task is a function-approximation task to reproduce a nonlinear function defined from input data by using output data in recurrent neural networks. The task is classified as NARMA\(D\) with \(D=2,5,10\) and so on, where \(D\) represents the delay included in the nonlinear function. In other words, the target data of NARMA\(D\) task consist of data defined until \(D\) times before the present data. For example, in NARMA2 task, the system is aimed to reproduce the target data,
\[y_{k+1}^{\text{NARMA2}}=0.4y_{k}^{\text{NARMA2}}+0.4y_{k}^{\text{NARMA2}}y_{k- 1}^{\text{NARMA2}}+0.1z_{k}^{3}+0.1, \tag{7}\]
from output data, where \(z_{k}=0.2r_{k}\) is defined from uniform random input data \(r_{k}\) at a discrete time \(k\); see Methods. The computational ability of NARMA task is evaluated from normalized mean-square error (NMSE) defined as
\[\text{NMSE}=\frac{\sum_{k}\left(y_{k}^{\text{NARMA2}}-v_{k}^{\text{NARMA2}} \right)^{2}}{\sum_{k}\left(y_{k}^{\text{NARMA2}}\right)^{2}}, \tag{8}\]
where \(v_{k}^{\text{NARMA2}}\) is the data reproduced from the output data (see also Methods). A low NMSE corresponds to high reproducibility of the target data. Figure 3(a) shows an example of the target data (red line), \(y_{k}^{\text{NARMA2}}\), and the system output (blue dots). By evaluating the difference between the target data and the system output as such, the NMSE is obtained as shown in Fig. 3(b). The NMSE is on the order of \(10^{-6}-10^{-5}\) and is minimized to be \(8.43\times 10^{-6}\) at \(t_{\text{p}}=16\) ns and \(H_{\text{K1}}^{(1)}=-325\) Oe; see Fig. 3(c).
## Discussion
We have developed a theoretical analysis of the magnetization dynamics in nanostructured ferromagnetic multilayers driven by the voltage control of magnetic anisotropy, and showed that the dynamics are applicable to physical reservoir computing through the evaluations of the memory capacity and the NMSE of the NARMA task. Neither electric current nor external magnetic field is introduced in the computation, contrary to the previous works focusing on the application to nonvolatile memory, because magnetization switching is unnecessary. This fact will be preferable for reducing power consumption in reservoir computing.
Figure 3: (a) Examples of the target data (red line) and the system output (blue dots) of NARMA2 task, where \(t_{\text{p}}=16\) ns and \(H_{\text{K1}}^{(1)}=-325\) Oe. (b) Dependence of the NMSE of NARMA2 task on the pulse width and the first order magnetic anisotropy field. The lowest value of the NMSE is indicated by the red triangle in (c).
Figure 2: Dependences of (a) STM (linear) and (b) PC (nonlinear) capacities on the pulse width and the first order magnetic anisotropy field. Their highest values are indicated by the red triangles in (c) and (d).
Figures 2(a) and 2(b) show that the memory capacity increases as the difference between \(H_{\text{K1}}^{(0)}\) and \(H_{\text{K1}}^{(1)}\) increases. This is because, when the difference between \(H_{\text{K1}}^{(0)}\) and \(H_{\text{K1}}^{(1)}\) is large, the range of the dynamical response of \(m_{z}\) also becomes large, which makes it easy to identify the input data from the change of \(m_{z}\). For a similar reason, the memory capacity increases with the increase of the pulse width. When the pulse width is relatively long, the change of \(m_{z}\) during a pulse injection becomes large, which again makes it easy to identify the input data. However, when the pulse width is sufficiently long, \(m_{z}\) finally saturates to a stable state and becomes approximately constant, as implied from Fig. 1(b). When \(m_{z}\) becomes constant, it becomes impossible to estimate the past input from the present output. Therefore, the memory capacity does not increase monotonically with the pulse width. As written above, the STM and PC capacities are maximized at pulse widths of 69 and 43 ns, respectively. A similar trend is found in the NARMA2 task, where low NMSEs are achieved in a relatively large \(H_{\text{K1}}^{(1)}\) region. Note that the memory capacity at the maximum was found to be about 3, which is comparable to the computational ability of an echo-state network with approximately 10 nodes [20, 28]. The value is also comparable to or larger than that obtained by other single spintronics reservoirs without additional circuits [20, 21, 29], driven by electric current and/or magnetic field. This might be due to a matching between the relaxation time of the output signal and the pulse width. Another possible reason is a large change in the dynamical amplitude, compared with an oscillator system [29]. The NMSE of the NARMA2 task, minimized to be on the order of \(10^{-6}\), is also comparable to or lower than that found in a soft robot [12] and in an echo-state network with more than 10 nodes [18]. These results indicate the potential applicability of an MTJ driven by the voltage-controlled magnetic anisotropy effect to physical reservoir computing.
An empirical rule shared among the research community is that the computational ability of physical reservoir computing is maximized at the edge of chaos [13, 57, 30]. Simultaneously, the existence of chaos might spoil the reproducibility of the computation due to the sensitivity to initial states. Note that chaos is prohibited in the present system when random inputs are absent. This is because the magnetization dynamics are described by two variables, \(\theta=\cos^{-1}m_{z}\) and \(\varphi=\tan^{-1}(m_{y}/m_{x})\), whereas the Poincaré-Bendixson theorem states that chaos does not appear in a two-dimensional system. When the random inputs are injected into the MTJ, the system becomes nonautonomous due to the presence of time-dependent torque. In this case, the dimension of the phase space becomes three, and the possibility to induce chaos becomes finite. For example, Ref. [30] reported the appearance of chaos in a spin-torque oscillator due to the injection of random input current. However, we should notice that the presence of a time-dependent input does not necessarily guarantee the presence of chaos. The identification of chaos is done by, for example, evaluating the Lyapunov exponent. The Lyapunov exponent quantifies the time evolution of an infinitesimal difference given at the initial state. A positive Lyapunov exponent implies the presence of chaos. On the other hand, when the Lyapunov exponent is negative, the dynamics saturate to fixed points. When the Lyapunov exponent is zero, the dynamics are periodic. The dynamics with negative or zero Lyapunov exponent are classified as ordered dynamics. Since the LLG equation describes the relaxation dynamics to stable states, one might consider that the largest Lyapunov exponent of an MTJ in the absence of random inputs is negative. However, notice that the axial symmetry of the present system allows the magnetization to rotate around the \(z\) axis without energy injection. In fact, the energy density, as well as the equation of motion for \(m_{z}\), depends on \(m_{z}\) only, as explained in Methods; in other words, the equation of motion for \(\theta\) is independent of \(\varphi\). As a result, an infinitesimal difference given to the phase \(\varphi\) is not shortened by the LLG equation. Therefore, the largest Lyapunov exponent in the absence of the random input is zero. The fact that the equation of motion for \(\theta\) depends on \(\theta\) only also implies the absence of homoclinic bifurcations, as well as chaos, even when the pulse data, independent of \(\theta\) and \(\varphi\), are injected; in fact, the numerically evaluated Lyapunov exponent was zero, as explained in Methods. The absence of chaos indicates the reproducibility of the computation in the present reservoir.
In summary, we performed numerical experiments on the magnetization dynamics in an MTJ driven by the voltage control of magnetic anisotropy. When a voltage pulse is injected into the MTJ, the magnetization changes its direction to minimize the magnetic anisotropy energy. The time evolution of the relaxation dynamics reflects the value of the input voltage and can therefore be used to reproduce the time sequence of the input data. We evaluated the computing abilities, such as the memory capacity and the error in the reproducibility, for common benchmark tasks, and showed that even a single MTJ can show high computing performance comparable to an echo-state network consisting of more than 10 nodes. Since neither electric current nor external magnetic field is necessary, the proposal here will be of interest for energy-saving computing technologies.
## Methods
### Definition of magnetic field and relaxation time
The magnetic field \(\mathbf{H}\) relates to the energy density \(\varepsilon\) as \(\mathbf{H}=-\partial\varepsilon/\partial(M\mathbf{m})\), and therefore, is obtained from Eq. (1) as
\[\mathbf{H}=\begin{pmatrix}-4\pi MN_{x}m_{x}\\ -4\pi MN_{y}m_{y}\\ \left[(2K_{1}/M)-4\pi MN_{z}\right]m_{z}+\left(4K_{2}/M\right)\left(1-m_{z}^ {2}\right)m_{z}\end{pmatrix}. \tag{9}\]
We should note that the magnetization dynamics described by the LLG equation is unchanged by adding a term proportional to \(\mathbf{m}\) to \(\mathbf{H}\) because the LLG equation conserves the magnitude of \(\mathbf{m}\). Adding a term as such corresponds to shifting the origin of the energy density \(\varepsilon\) by a constant. In the present case, we added a term \(4\pi MN_{x}\mathbf{m}\) to \(\mathbf{H}\) and obtained Eq. (2), where we should remind that \(N_{x}=N_{y}\) because we assume a cylinder shaped MTJ. The added term to \(\mathbf{H}\) shifts the origin of the energy density \(\varepsilon\) by the constant \(-2\pi M^{2}N_{x}\mathbf{m}^{2}=-2\pi M^{2}N_{x}\) and makes it depend on \(m_{z}\) only.
The LLG equation in the present system can be integrated as
\[t=\frac{1+\alpha^{2}}{\alpha\gamma(H_{\text{K1}}+H_{\text{K2}})}\left[\log \left(\frac{\cos\theta_{\text{f}}}{\cos\theta_{\text{i}}}\right)-\frac{H_{ \text{K1}}+H_{\text{K2}}}{H_{\text{K1}}}\log\left(\frac{\sin\theta_{\text{f}} }{\sin\theta_{\text{i}}}\right)+\frac{H_{\text{K2}}}{2H_{\text{K1}}}\log\left( \frac{H_{\text{K1}}+H_{\text{K2}}\sin^{2}\theta_{\text{f}}}{H_{\text{K1}}+H_{ \text{K2}}\sin^{2}\theta_{\text{i}}}\right)\right], \tag{10}\]
where \(\theta_{\text{i}}\) and \(\theta_{\text{f}}\) are the initial and final values of \(\theta=\cos^{-1}m_{z}\). Equation (10) provides the relaxation time from \(\theta=\theta_{\text{i}}\) to \(\theta=\theta_{\text{f}}\). Note that the relaxation time is scaled by \(\alpha\gamma H_{\text{K1}}/(1+\alpha^{2})\) and \(H_{\text{K2}}/H_{\text{K1}}\), which can be manipulated by the voltage control of magnetic anisotropy. We also note that Eq. (10) has logarithmic divergence due to asymptotic behavior in the relaxation dynamics.
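For reference, Eq. (10) can be evaluated directly to estimate how long a relaxation takes; the following is a minimal sketch, where the example angles are chosen arbitrarily for illustration and are not values taken from the text.

```python
import numpy as np

GAMMA, ALPHA = 1.764e7, 0.01   # gyromagnetic ratio [rad/(Oe s)], Gilbert damping

def relaxation_time(theta_i, theta_f, hk1, hk2):
    """Relaxation time of Eq. (10) between the polar angles theta_i and theta_f [rad]."""
    pref = (1.0 + ALPHA ** 2) / (ALPHA * GAMMA * (hk1 + hk2))
    term1 = np.log(np.cos(theta_f) / np.cos(theta_i))
    term2 = -(hk1 + hk2) / hk1 * np.log(np.sin(theta_f) / np.sin(theta_i))
    term3 = hk2 / (2.0 * hk1) * np.log((hk1 + hk2 * np.sin(theta_f) ** 2)
                                       / (hk1 + hk2 * np.sin(theta_i) ** 2))
    return pref * (term1 + term2 + term3)

# Example: relaxation under H_K1 = -450 Oe and H_K2 = 500 Oe, from a state slightly off
# the old equilibrium (m_z = 0.94) to a state close to the new one (m_z = 0.35);
# the printed value is on the order of 100 ns for these parameters.
t = relaxation_time(np.arccos(0.94), np.arccos(0.35), hk1=-450.0, hk2=500.0)
print(f"{t * 1e9:.0f} ns")
```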
### Role of spin-transfer torque
We neglected spin-transfer torque in the main text because the current magnitude in typical MTJ used for voltage control of magnetic anisotropy effect is usually small. For example, when using typical values [47, 49] for the voltage (0.4 V), resistance (60 k\(\Omega\)), and cross-section being \(\pi\times 60^{2}\) nm\({}^{2}\), the value of the current density is about 0.06 MA/cm\({}^{2}\) (6.7 \(\mu\)A in terms of current). Such a value is sufficiently small compared with that used in spin-transfer torque switching experiments [54]. To verify the argument, we perform numerical simulations, where spin-transfer torque, \(-H_{\text{s}}\mathbf{m}\times(\mathbf{p}\times\mathbf{m})\), is added to the right-hand side of Eq. (3). We fix the values of \(H_{\text{K2}}=500\) Oe and \(H_{\text{K1}}=-0.1H_{\text{K2}}=-50\) Oe. The unit vector \(\mathbf{p}\) along the direction of the magnetization in the reference layer points to the positive \(z\) direction. Spin polarization \(P\) in the spin-transfer torque strength, \(H_{\text{s}}=\hbar Pj/(2eMd)\), is assumed to be 0.5. Figure 4(a) shows time evolution of \(\mathbf{m}\) for the current density \(j\) of 0.06 MA/cm\({}^{2}\). Although the magnetization slightly moves from the initial (stable) state due to spin-transfer torque, the change of the magnetization direction is small compared with that shown in Fig. 1(b). Therefore, we do not consider that spin-transfer torque plays a major role in physical reservoir computing, although current cannot be completely zero in experiments.
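The current-density estimate quoted above follows from elementary arithmetic; the short sketch below only restates the numbers given in the text.

```python
import math

V, R = 0.4, 60e3                        # applied voltage [V], junction resistance [ohm]
radius = 60e-9                          # junction radius [m]
area_cm2 = math.pi * radius ** 2 * 1e4  # cross-section [cm^2]

I = V / R                               # current [A]
j = I / area_cm2                        # current density [A/cm^2]
print(f"I = {I * 1e6:.1f} uA, j = {j / 1e6:.2f} MA/cm^2")   # ~6.7 uA, ~0.06 MA/cm^2
```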
For comprehensiveness, however, we also show the magnetization dynamics when the current density \(j\) is increased by one order of magnitude. Figure 4(b) shows the dynamics for \(j=0.6\) MA/cm\({}^{2}\), where the magnetization switching by spin-transfer torque is observed. We note that the current density is sufficiently small compared with that used in typical MTJs in nonvolatile memory [54]. Nevertheless, the switching is observed because of a small value of the magnetic anisotropy field in the present system. We assume that \(H_{\rm K2}\) is finite and \(|H_{\rm K1}|<H_{\rm K2}\) to make a tilted state of the magnetization [\(m_{z}=\pm\sqrt{1-(|H_{\rm K1}|/H_{\rm K2})}\)] stable due to the following reason. Recall that there are other stable states, such as \(m_{z}=\pm 1\) for \(H_{\rm K1}>0\) and \(m_{z}=0\) for \(H_{\rm K1}<0\), when \(H_{\rm K2}=0\). Note that these states (\(m_{z}=\pm 1\) or \(m_{z}=0\)) are always local extrema of the energy landscape. Accordingly, once the magnetization saturates to these states, it cannot change its direction even if another input is injected. This conclusion can be understood in a different way, where the relaxation time given by Eq. (10) shows a divergence when \(\theta_{\text{i}}=0\) (\(m_{z}=+1\)),
Figure 4: Examples of the time evolutions of \(m_{x}\) (red), \(m_{y}\) (blue), and \(m_{z}\) (black) in the presence of spin-transfer torque, where the current density is (a) 0.06 MA/cm\({}^{2}\) and (b) 0.6 MA/cm\({}^{2}\).
\(\pi\) (\(m_{z}=-1\)), or \(\pi/2\) (\(m_{z}=0\)) is substituted. On the other hand, for a finite \(H_{\rm K2}\), the magnetization can move from the state \(m_{z}=\pm\sqrt{1-(|H_{\rm K1}|/H_{\rm K2})}\) when an input signal changes the value of \(H_{\rm K1}\) and makes the state no longer an extremum. We note that the assumption \(|H_{\rm K1}|<H_{\rm K2}\) restricts the magnitude of the magnetic field. In fact, the magnitude of \({\bf H}\) is small due to a small value of \(H_{\rm K2}=500\) Oe found in experiments [47, 48] and the restriction of \(|H_{\rm K1}|<H_{\rm K2}\). Since a critical current density destabilizing the magnetization by spin-transfer effect is proportional to the magnitude of the magnetic field, a small \({\bf H}\) implies that a small current mentioned above might induce a large-amplitude magnetization dynamics.
In summary, the magnitude of the current density is sufficiently small, and the magnetization dynamics are mainly driven by voltage control of magnetic anisotropy effect. The condition to stabilize a tilted state, however, might make the magnitude of the magnetic field, as well as the critical current density of spin-transfer torque switching, small. Thus, even a small current may cause nonnegligible dynamics. Simultaneously, however, it is practically difficult to increase the current magnitude by one order, and therefore, in the present study, we still consider that voltage control of magnetic anisotropy effect is the main driving force of the magnetization dynamics.
### Evaluation method of memory capacity
The memory capacity corresponds to the number of data which can be reproduced from the output data, as mentioned in the main text. The evaluation of the memory capacity consists of two processes. During the first process called training (or learning), weights are determined to reproduce the target data from the output data. In the second process, the reproducibility of the target data defined from other input data is evaluated.
Let us first describe the training process. We inject the random binary input \(b_{k}=0\) or \(1\) into MTJ as voltage pulse. The number of the random input is \(N\). The input \(b_{k}\) is converted to the first order magnetic anisotropy field through the
Figure 5: (a) An example of the time evolution of \(m_{z}\) (black) in the presence of several binary pulses (red). The dotted lines distinguish the input pulse. The pulse width and the first order magnetic anisotropy field are 69 ns and -430 Oe, respectively, where the STM capacity is maximized. (b) An example of \(m_{z}\) in the presence of a random input. The dots in the inset shows the definition of the nodes \(u_{k,i}\) from \(m_{z}\) during a part of an input pulse. The node number is \(N_{\rm node}=250\). (c) Examples of the target data \(y^{\prime}_{n,D}\) (red line) and the system output \(v^{\prime}_{n,D}\) (blue dots) of STM task with \(D=1\). (d) Dependence of \([{\rm Cor}(D)]^{2}\) on the delay \(D\) for STM task. The node number is \(N_{\rm node}=250\). The inset shows the dependence of the STM capacity on the node number.
voltage control of magnetic anisotropy, which is described by Eq. (6). We choose \(m_{z}\) as output data, which can be measured experimentally through magnetoresistance effect. Figure 5(a) shows an example of the time evolution of \(m_{z}\) in the presence of several random binary inputs, where the values of the parameters are those at the maximum STM capacity conditions, i.e., the pulse width and the first order magnetic anisotropy field are \(t_{\rm p}=69\) ns and \(H_{\rm K1}^{(1)}=-430\) Oe. As can be seen, the injection of the random input drives the dynamics of \(m_{z}\).
The dynamical response \(m_{z}(t)\), during the presence of the \(k\)th input \(b_{k}\), is divided into nodes, where the number of nodes is \(N_{\rm node}\). We denote the \(i(=1,2,\cdots,N_{\rm node})\)th output with respect to the \(k\)th input as \(u_{k,i}=m_{z}(t_{0}+(k-1)t_{\rm p}+i(t_{\rm p}/N_{\rm node}))\), where \(t_{0}\) is time for washout. The output \(u_{k,i}\) is regarded as the status of the \(i\)th neuron at a discrete time \(k\). Figure 5(b) shows an example of the time evolution of \(m_{z}\) with respect to an input pulse, whereas the dots in the inset of the figure are the nodes \(u_{k,i}\) defined from \(m_{z}\). The method to define such virtual neurons is called time-multiplexing method [15, 20, 21]. We also introduce bias term \(u_{k,N_{\rm node}+1}=1\). In the training process, we introduce weight \(w_{D,i}\) and evaluate its value to minimize the error,
\[\sum_{k=1}^{N}\left(\sum_{i=1}^{N_{\rm node}+1}w_{D,i}u_{k,i}-y_{k,D}\right)^{ 2}, \tag{11}\]
where, \(y_{k,D}\) are the target data defined by Eqs. (4) and (5). For simplicity, we omit the superscripts such as "STM" and "PC" in the target data because the difference in the evaluation method of the STM and PC capacities is merely due to the definition of the target data. In the following, we add superscripts or subscripts, such as "STM" and "PC", when distinguishing quantities related to their capacities are necessary. The weight should be introduced for each target data. According to the above statement, we denote the weight to evaluate the STM (PC) capacity as \(w_{D,i}^{\rm STM(PC)}\), when necessary. Also, we note that the weights are different for each delay \(D\).
Once the weights are determined, we inject other random binary inputs \(b_{k}^{\prime}\) into the reservoir, where the number of the input data is \(N^{\prime}\). Note that \(N^{\prime}\) is not necessarily the same as \(N\). Here, we use the prime symbol to distinguish the input data from those used in training. Similarly, we denote the output and target data with respect to \(b_{k}^{\prime}\) as \(u_{n,i}^{\prime}\) and \(y_{n,D}^{\prime}\), respectively, where \(n=1,2,\cdots,N^{\prime}\). From the output data \(u_{n,i}^{\prime}\) and the weight \(w_{D,i}\), we define the system output \(v_{n,D}^{\prime}\) as
\[v_{n,D}^{\prime}=\sum_{i=1}^{N_{\rm node}+1}w_{D,i}u_{n,i}^{\prime}. \tag{12}\]
Figure 5(c) shows an example of the comparison between the target data \(y_{n,D}^{\prime}\) (red line) and the system output \(v_{n,D}^{\prime}\) (blue dots) of STM task with \(D=1\). It is shown that the system output well reproduces the target data. The reproducibility of the target data is quantified from the correlation coefficient \(\text{Cor}(D)\) between \(y_{n,D}^{\prime}\) and \(v_{n,D}^{\prime}\) defined as
\[\text{Cor}(D)\equiv\frac{\sum_{n=1}^{N^{\prime}}\left(y_{n,D}^{\prime}-\langle y _{n,D}^{\prime}\rangle\right)\left(v_{n,D}^{\prime}-\langle v_{n,D}^{\prime} \rangle\right)}{\sqrt{\sum_{n=1}^{N^{\prime}}\left(y_{n,D}^{\prime}-\langle y _{n,D}^{\prime}\rangle\right)^{2}\sum_{n=1}^{N^{\prime}}\left(v_{n,D}^{\prime }-\langle v_{n,D}^{\prime}\rangle\right)^{2}}}, \tag{13}\]
where \(\langle\cdots\rangle\) represents the averaged value. Note that the correlation coefficients are defined for each delay \(D\). We also note that the correlation coefficients are defined for each kind of capacity, as in the case of the weights and target data. In general, \([\text{Cor}(D)]^{2}\leq 1\), where \([\text{Cor}(D)]^{2}=1\) holds only when the system output completely reproduces the target data. Figure 5(d) shows an example of the dependence of \([\text{Cor}(D)]^{2}\) for the STM task on the delay \(D\). The results imply that the reservoir reproduces the target data well up to \(D=3\), whereas the reproducibility drastically decreases as the delay \(D\) increases. The STM and PC capacities, \(C_{\text{STM}}\) and \(C_{\text{PC}}\), are defined as
\[C=\sum_{D=1}^{D_{\text{max}}}\left[\text{Cor}(D)\right]^{2}. \tag{14}\]
Note that the definition of the memory capacity obeys, for example, Refs. [18, 20, 21, 25], where the memory capacity in Eq. (14) is defined by the correlation coefficients starting from \(D=1\). In some papers such as Refs. [15, 30], however, the square of the correlation coefficient at \(D=0\) is added to the right-hand side of Eq. (14).
In the present study, we introduce \(N_{\rm node}=250\) nodes and use \(N=1000\) and \(N^{\prime}=1000\) random binary pulses for training of the weights and evaluation of the memory capacity, respectively. The number of nodes is chosen so that the value of the capacity saturates as the number of nodes increases; see the inset of Fig. 5(d). We also use 300 random binary pulses before the training and between training and evaluation for washout. The maximum delay \(D_{\text{max}}\) is 20. Note that the value of each node should be sampled within a few hundred picoseconds: specifically, in the example shown in Fig. 2(c), it is necessary to sample data within \(t_{\rm p}/N_{\rm node}=69\,{\rm ns}/250\simeq 276\) ps. We emphasize that it is experimentally possible to sample data within such a short time. For example, in Ref. [21], \(t_{\rm p}=20\) ns and \(N_{\rm node}=200\) were used, where the sampling step is 100 ps.
#### NARMA task
The evaluation procedure of the NMSE in the NARMA task is similar to that of the memory capacity. The binary input data, \(b_{k}=0\) or 1, used in the evaluation of the memory capacity are replaced by uniform random numbers \(r_{k}\) in \((0,1)\). The variable \(z_{k}\) in Eq. (7) is generally defined as \(z_{k}=\mu+\sigma r_{k}\)[30], where the parameters \(\mu\) and \(\sigma\) are determined so that \(z_{k}\) lies in \((0,0.2)\)[15]. As in the case of the evaluation of the memory capacity, the evaluation of the NMSE consists of two procedures. The first procedure is the training, where the weights are determined to reproduce the target data from the output data \(u_{k,i}\). Secondly, we evaluate the reproducibility of another set of the target data from the system output \(v_{n}^{\text{NARMA2}}\) defined from the weights and the output data. Then, the NMSE can be evaluated. Note that some papers [31, 32, 30] define the NMSE in a slightly different way, where \(\sum_{n=1}^{N^{\prime}}\left(y_{n}^{\text{NARMA2}}\right)^{2}\) in the denominator of Eq. (8) is replaced by \(\sum_{n=1}^{N^{\prime}}\left(y_{n}^{\text{NARMA2}}-\overline{y}^{\text{NARMA2}}\right)^{2}\), where \(\overline{y}^{\text{NARMA2}}\) is the average of the target data \(y_{n}^{\text{NARMA2}}\). In this work, we use the definition given by Eq. (8), which is used, for example, in Refs. [12, 15, 18].
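The sketch below outlines this procedure. Since Eq. (7) is not reproduced here, the recurrence coefficients are the commonly used NARMA2 form and should be taken as an assumption, as should the specific choice \(\mu=0\) and \(\sigma=0.2\).

```python
import numpy as np

def narma2_target(z):
    """Assumed NARMA2 recurrence standing in for Eq. (7) (coefficients are the standard choice)."""
    y = np.zeros_like(z)
    for k in range(1, len(z) - 1):
        y[k + 1] = 0.4 * y[k] + 0.4 * y[k] * y[k - 1] + 0.6 * z[k] ** 3 + 0.1
    return y

def nmse(v, y):
    """NMSE normalized by the sum of squared target data, as in Eq. (8)."""
    return np.sum((v - y) ** 2) / np.sum(y ** 2)

rng = np.random.default_rng(1)
r = rng.uniform(0.0, 1.0, size=1000)   # uniform random inputs r_k in (0, 1)
z = 0.0 + 0.2 * r                      # z_k = mu + sigma * r_k chosen to lie in (0, 0.2)
y = narma2_target(z)                   # target data for the NARMA2 task
```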
### Evaluation of Lyapunov exponent
We evaluated the conditional Lyapunov exponent as follows [58]. The LLG equation was solved by the fourth-order Runge-Kutta method with a time increment of \(\Delta t=1\) ps. We added perturbations \(\delta\theta\) and \(\delta\phi\) with \(\epsilon=\sqrt{\delta\theta^{2}+\delta\phi^{2}}=10^{-5}\) to \(\theta(t)\) and \(\varphi(t)\) at time \(t\). Let us denote the perturbed \(\theta(t)\) and \(\varphi(t)\) as \(\theta^{\prime}(t)\) and \(\varphi^{\prime}(t)\), respectively. Solving the LLG equation from time \(t\) to \(t+\Delta t\), the time evolution of the perturbation is obtained as \(\epsilon^{\prime}(t)=\sqrt{[\theta^{\prime}(t+\Delta t)-\theta(t+\Delta t)]^{2}+[\varphi^{\prime}(t+\Delta t)-\varphi(t+\Delta t)]^{2}}\). A temporal Lyapunov exponent is obtained as \(\lambda(t)=(1/\Delta t)\log[\epsilon^{\prime}(t)/\epsilon]\). Repeating the procedure, the temporal Lyapunov exponent is averaged as \(\lambda(\mathcal{N})=(1/\mathcal{N})\sum_{i=1}^{\mathcal{N}}\lambda(t_{i})=[1/(\mathcal{N}\Delta t)]\sum_{i=1}^{\mathcal{N}}\log\{\epsilon^{\prime}[t_{0}+(i-1)\Delta t]/\epsilon\}\), where \(t_{0}\) is the time at which the first random input is injected, whereas \(\mathcal{N}\) is the number of averaging steps. The Lyapunov exponent is given by \(\lambda=\lim_{\mathcal{N}\rightarrow\infty}\lambda(\mathcal{N})\). In the present study, we used the same time range as that used in the evaluations of the memory capacity and the NMSE and added uniform random inputs. Hence, notice that \(\mathcal{N}=\mathcal{M}t_{\text{p}}/\Delta t\) depends on the pulse width, where \(\mathcal{M}\) is the total number of the random inputs including washout, training, and evaluation. We confirmed that \(\lambda(\mathcal{N})\) monotonically saturates to zero; at least, \(|\lambda(\mathcal{N})|\) is one or two orders of magnitude smaller than \(1/t_{\text{p}}\). Thus, the expansion time of the perturbation, \(1/\lambda(\mathcal{N})\), is much longer than the injection interval of the input signal. Considering these facts, we concluded that the largest Lyapunov exponent can be regarded as zero, and therefore, chaos is absent. Note that the absence of chaos in the present system relates to the fact that the free layer is axially symmetric and the applied voltage modifies the perpendicular anisotropy only. When there are factors breaking the symmetry, such as spin-transfer torque with an in-plane spin polarization, chaos will appear [30].
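The perturbation-growth procedure can be outlined as follows; `llg_step`, which should advance \((\theta,\varphi)\) by one Runge-Kutta step of the LLG equation, is left as a placeholder, and the equal split of the initial perturbation between the two angles is an arbitrary choice.

```python
import numpy as np

def conditional_lyapunov(llg_step, theta0, phi0, n_steps, dt=1e-12, eps=1e-5):
    """Average lambda(t) = (1/dt) * log[eps'(t)/eps] along the driven trajectory."""
    theta, phi = theta0, phi0
    log_growth = []
    for _ in range(n_steps):
        # perturb the current state by a small amount eps (split equally between the angles)
        theta_p = theta + eps / np.sqrt(2.0)
        phi_p = phi + eps / np.sqrt(2.0)
        theta_n, phi_n = llg_step(theta, phi, dt)        # unperturbed trajectory
        theta_pn, phi_pn = llg_step(theta_p, phi_p, dt)  # perturbed trajectory
        eps_new = np.hypot(theta_pn - theta_n, phi_pn - phi_n)
        log_growth.append(np.log(eps_new / eps))
        theta, phi = theta_n, phi_n                      # continue from the unperturbed state
    return np.mean(log_growth) / dt
```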
|
2306.03063 | A GALEX view of the DA White Dwarf Population | We present a detailed model atmosphere analysis of 14001 DA white dwarfs from
the Montreal White Dwarf Database with ultraviolet photometry from the GALEX
mission. We use the 100 pc sample, where the extinction is negligible, to
demonstrate that there are no major systematic differences between the best-fit
parameters derived from optical only data and the optical + UV photometry.
GALEX FUV and NUV data improve the statistical errors in the model fits,
especially for the hotter white dwarfs with spectral energy distributions that
peak in the UV. Fitting the UV to optical spectral energy distributions also
reveals UV-excess or UV-deficit objects. We use two different methods to
identify outliers in our model fits. Known outliers include objects with
unusual atmospheric compositions, strongly magnetic white dwarfs, and binary
white dwarfs, including double degenerates and white dwarf + main-sequence
systems. We present a list of 89 newly identified outliers based on GALEX UV
data; follow-up observations of these objects will be required to constrain
their nature. Several current and upcoming large scale spectroscopic surveys
are targeting $>10^5$ white dwarfs. In addition, the ULTRASAT mission is
planning an all-sky survey in the NUV band. A combination of the UV data from
GALEX and ULTRASAT and optical data on these large samples of spectroscopically
confirmed DA white dwarfs will provide an excellent opportunity to identify
unusual white dwarfs in the solar neighborhood. | Renae E. Wall, Mukremin Kilic, P. Bergeron, Nathan D. Leiphart | 2023-06-05T17:35:18Z | http://arxiv.org/abs/2306.03063v1 | # A GALEX view of the DA White Dwarf Population
###### Abstract
We present a detailed model atmosphere analysis of 14001 DA white dwarfs from the Montreal White Dwarf Database with ultraviolet photometry from the GALEX mission. We use the 100 pc sample, where the extinction is negligible, to demonstrate that there are no major systematic differences between the best-fit parameters derived from optical only data and the optical + UV photometry. GALEX FUV and NUV data improve the statistical errors in the model fits, especially for the hotter white dwarfs with spectral energy distributions that peak in the UV. Fitting the UV to optical spectral energy distributions also reveals UV-excess or UV-deficit objects. We use two different methods to identify outliers in our model fits. Known outliers include objects with unusual atmospheric compositions, strongly magnetic white dwarfs, and binary white dwarfs, including double degenerates and white dwarf + main-sequence systems. We present a list of 89 newly identified outliers based on GALEX UV data; follow-up observations of these objects will be required to constrain their nature. Several current and upcoming large scale spectroscopic surveys are targeting \(>10^{5}\) white dwarfs. In addition, the ULTRASAT mission is planning an all-sky survey in the NUV band. A combination of the UV data from GALEX and ULTRASAT and optical data on these large samples of spectroscopically confirmed DA white dwarfs will provide an excellent opportunity to identify unusual white dwarfs in the solar neighborhood.
keywords: ultraviolet: stars -- stars: evolution -- stars: atmospheres -- white dwarfs
## 1 Introduction
The Galaxy Evolution Explorer (GALEX) is the first space-based mission to attempt an all-sky imaging survey in the ultraviolet (UV, Martin et al., 2005). In the ten years that it was operational, GALEX surveyed 26000 square degrees of the sky as part of the all-sky imaging survey in two bandpasses: Far Ultraviolet (FUV) and Near Ultraviolet (NUV), with central wavelengths of 1528 and 2271 A, respectively (Morrissey et al., 2005). Although its primary goal was to study star formation and galaxy evolution, the depth (\(m_{\rm AB}\approx 20.5\) mag) and the large sky coverage of the all-sky imaging survey provide an excellent opportunity to study UV-bright objects like hot white dwarfs.
Prior to Gaia, the majority of the white dwarfs in the solar neighborhood were identified through Sloan Digital Sky Survey spectroscopy, which specifically targeted hot and blue white dwarfs as flux standards (e.g., Kleinman et al., 2013). Many of the SDSS white dwarfs have spectral energy distributions that peak in the UV. Hence, GALEX FUV and NUV data can help constrain the physical parameters of these white dwarfs. GALEX data will also be useful for cooler white dwarfs; UV photometry will be used to confirm the temperature derived from the optical data, or to constrain the far red wing of the Lyman \(\alpha\) line that dominates the opacity in the blue part of the spectral energy distribution of cool hydrogen atmosphere white dwarfs (Kowalski and Saumon, 2006). Yet, GALEX data are underutilized in the analysis of white dwarfs in the literature, perhaps due to the relatively strong extinction observed in the UV.
Wall et al. (2019) used 1837 DA white dwarfs with high signal-to-noise ratio spectra and Gaia parallaxes to verify the absolute calibration of the FUV and NUV data, and refined the linearity corrections derived by Camarota and Holberg (2014). They also empirically derived extinction coefficients for both bands, finding \(R_{\rm FUV}=8.01\) and \(R_{\rm NUV}=6.72\), where \(R\) is the ratio of the total absorption \(A_{\lambda}\) to the reddening \(E(B-V)\) along the line of sight to an object. Wall et al. (2019) highlighted the utility of their newly derived extinction coefficients for identifying white dwarfs with unusual UV photometry. By comparing the observed GALEX magnitudes to predictions from the model atmosphere calculations, they found 12 outliers in the UV, seven of which were previously known, including three double degenerates, two white dwarf + main-sequence star binaries, one ZZ Ceti, and one double degenerate candidate.
Lajoie and Bergeron (2007) compared the effective temperatures obtained from the optical and UV spectra of 140 DA white dwarfs from the _IUE_ archive. They found that the optical and UV temperatures of the majority of stars cooler than 40000 K and within 75 pc are in fairly good agreement with \(\Delta T_{\rm eff}/T_{\rm optical}\leq 10\%\). They also found that the majority of the discrepancies between the two temperature measurements were caused by interstellar reddening, which affects the UV more than the optical. By restricting their analysis to white dwarfs within 75 pc, where the extinction is negligible, they were able to identify several double degenerate candidates, as well as a DA + M dwarf system, and stars with unusual atmospheric compositions. Lajoie and Bergeron (2007) thus demonstrated that unusual white dwarfs can be identified by comparing temperatures derived solely from optical data and UV data.
In this work, we expand the analysis of optical and UV temperature measurements to the DA white dwarfs in the Montreal White Dwarf Database (MWDD) aided by GALEX UV data and Gaia Data Release 3 astrometry. To identify unusual white dwarfs, we use two methods. First, we compare the UV and optical temperatures in a manner similar to Lajoie & Bergeron (2007). We refer to this as the temperature comparison method. Our second method follows the analysis of Wall et al. (2019) and compares the observed and predicted GALEX magnitudes. We refer to this as the magnitude comparison method.
We provide the details of our sample selection in Section 2, the model atmosphere fitting procedure in Section 3, and the results from the temperature comparison method for the 100 pc sample and the entire MWDD sample in Section 4. Section 5 presents the results from the magnitude comparison method. We conclude in Section 6.
## 2 Sample selection
We started with all spectroscopically confirmed DA white dwarfs from the Montreal White Dwarf Database (Dufour et al., 2017) using the September 2022 version of the database. This sample includes over 30000 stars. We removed known white dwarf + main-sequence binaries and confirmed pulsating white dwarfs from the sample. We then collected the SDSS and Pan-STARRS1 photometry using the cross-match tables provided by Gaia DR3. We found 25840 DA white dwarfs with Gaia astrometry and Pan-STARRS1 photometry, 20898 of which are also detected in the SDSS.
Gaia DR3 does not provide a cross-matched catalog with GALEX, which performed its all-sky imaging survey between 2003 and 2009. The reference epoch for the Gaia DR3 positions is 2016. Assuming a 10 year baseline between the GALEX mission and Gaia DR3, we propagated the Gaia DR3 positions to the GALEX epoch using Gaia proper motions. We then cross-referenced our sample with the GALEX catalogue of unique UV sources from the all-sky imaging survey (GUVcat) presented in Bianchi et al. (2017). We used a cross-match radius of 3 arcseconds with GUVcat. We found 18456 DA white dwarfs with GALEX data.
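A small-angle sketch of the epoch propagation and cross-matching is given below; it ignores parallax and radial motion, assumes the Gaia convention that `pmra` already includes the \(\cos\delta\) factor, and is only meant to illustrate the procedure.

```python
import numpy as np

MAS_TO_DEG = 1.0 / 3.6e6

def propagate(ra_deg, dec_deg, pmra_masyr, pmdec_masyr, dt_yr=-10.0):
    """Shift Gaia DR3 positions back to the GALEX epoch using proper motions."""
    ra_new = ra_deg + pmra_masyr * dt_yr * MAS_TO_DEG / np.cos(np.radians(dec_deg))
    dec_new = dec_deg + pmdec_masyr * dt_yr * MAS_TO_DEG
    return ra_new, dec_new

def is_match(ra1, dec1, ra2, dec2, radius_arcsec=3.0):
    """Small-angle separation test against the 3 arcsec cross-match radius."""
    dra = (ra1 - ra2) * np.cos(np.radians(0.5 * (dec1 + dec2)))
    sep_arcsec = np.hypot(dra, dec1 - dec2) * 3600.0
    return sep_arcsec <= radius_arcsec
```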
Some of the DA white dwarfs in our sample are bright enough to be saturated in Pan-STARRS, SDSS, or GALEX. The saturation occurs at \(g,r,i\sim 13.5,z\sim 13\), and \(y\sim 12\) mag in Pan-STARRS (Magnier et al., 2013). We remove objects brighter than these limits. To make sure that there are at least three optical filters available for our model fits, we limit our sample to objects with at least Pan-STARRS \(g,r,i\) photometry available.
We apply the linearity corrections for the GALEX FUV and NUV bands as measured by Wall et al. (2019). These corrections are \(\geq 0.5\) mag for FUV and NUV magnitudes brighter than 13th mag. To avoid issues with saturation and large linearity corrections in the GALEX bands, we further remove objects with FUV and NUV magnitudes brighter than that limit. We further limit our sample to objects with a \(3\sigma\) significant distance measurement (Bailer-Jones et al., 2021) so that we can reliably constrain the radii (and therefore mass and surface gravity) of the stars in our sample. Our final sample contains 14001 DA white dwarfs with photometry in at least one of the GALEX filters and the Pan-STARRS \(gri\) filters. However, more than half of the stars in our final selection, 7574 of them, have GALEX FUV, NUV, SDSS \(u\), and Pan-STARRS \(gri(zy)\) photometry available.
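The selection cuts described above can be summarized as boolean masks; the column names below are placeholders, the \(z\) and \(y\) saturation cuts are applied analogously, and the parallax-over-error cut is used here as a stand-in for the \(3\sigma\) distance significance requirement.

```python
import numpy as np

def selection_mask(tbl):
    """Boolean mask implementing the brightness and astrometric cuts described above.

    `tbl` is assumed to be a dict of NumPy arrays with hypothetical keys;
    NaN marks missing photometry. The Pan-STARRS z (~13) and y (~12) saturation
    cuts would be applied in the same way as the g, r, i cuts shown here.
    """
    has_gri = np.isfinite(tbl["g"]) & np.isfinite(tbl["r"]) & np.isfinite(tbl["i"])
    unsaturated_opt = (tbl["g"] > 13.5) & (tbl["r"] > 13.5) & (tbl["i"] > 13.5)
    unsaturated_uv = (np.isnan(tbl["fuv"]) | (tbl["fuv"] > 13.0)) & \
                     (np.isnan(tbl["nuv"]) | (tbl["nuv"] > 13.0))
    significant_distance = tbl["parallax"] / tbl["parallax_error"] > 3.0
    return has_gri & unsaturated_opt & unsaturated_uv & significant_distance
```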
## 3 The fitting procedure
We use the photometric technique as detailed in Bergeron et al. (2019), and perform two sets of fits; 1) using only the optical data, and 2) using both the optical and the UV data. In the first set of fits we use the SDSS \(u\) (if available) along with the Pan-STARRS \(grizy\) photometry to model the spectral energy distribution of each DA white dwarf, and in the second set of fits we add the GALEX FUV (if available) and NUV data.
We correct the SDSS \(u\) magnitude to the AB magnitude system using the corrections provided by Eisenstein et al. (2006). For the reasons outlined in Bergeron et al. (2019), we adopt a lower limit of 0.03 mag uncertainty in all bandpasses, and use the de-reddening procedure outlined in Harris et al. (2006), where the extinction is assumed to be zero for stars within 100 pc, to be maximum for those located more than 250 pc away from the Galactic plane, and to vary linearly along the line of sight between these two regimes.
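One possible reading of this prescription is sketched below; the interpolation variable and the treatment of the 100 pc and 250 pc boundaries are our interpretation of Harris et al. (2006) rather than a verbatim transcription.

```python
import numpy as np

def extinction_fraction(d_pc, b_deg):
    """Fraction of the full line-of-sight E(B-V) applied to a star.

    Zero for d <= 100 pc, full once the star lies more than 250 pc from the
    Galactic plane, and linear in distance between those two points.
    """
    sin_b = abs(np.sin(np.radians(b_deg)))
    d_full = 250.0 / sin_b if sin_b > 0.0 else np.inf   # distance at which |z| = 250 pc
    if d_pc <= 100.0:
        return 0.0
    if d_pc >= d_full:
        return 1.0
    return (d_pc - 100.0) / (d_full - 100.0)

# Example: de-reddening a GALEX FUV magnitude with R_FUV = 8.01 (Wall et al. 2019)
# fuv_corrected = fuv_observed - extinction_fraction(d_pc, b_deg) * 8.01 * ebv_full
```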
We convert the observed magnitudes into average fluxes using the appropriate zero points, and compare with the average synthetic fluxes calculated from pure hydrogen atmosphere models. We define a \(\chi^{2}\) value in terms of the difference between observed and model fluxes over all bandpasses, properly weighted by the photometric uncertainties, which is then minimized using the nonlinear least-squares method of Levenberg-Marquardt (Press et al., 1986) to obtain the best fitting parameters. We obtain the uncertainties of each fitted parameter directly from the covariance matrix of the fitting algorithm, while we calculate the uncertainties for all other quantities derived from these parameters by propagating in quadrature the appropriate measurement errors.
We fit for the effective temperature and the solid angle, \(\pi(R/D)^{2}\), where \(R\) is the radius of the star and \(D\) is its distance. Since the distance is known from Gaia parallaxes, we constrain the radius of the star directly, and therefore the mass, based on the evolutionary models for white dwarfs. The details of our fitting method, including the model grids used, are further discussed in Bergeron et al. (2019) and Genest-Beaulieu & Bergeron (2019).
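A schematic version of this fit is given below; `model_flux_fn`, which should return the synthetic average fluxes interpolated from the pure hydrogen model grid, is a placeholder, so the sketch only captures the structure of the \(\chi^2\) minimization.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_photometry(obs_flux, obs_err, model_flux_fn, parallax_mas, p0):
    """Fit T_eff and the solid-angle scaling pi*(R/D)^2 to the observed average fluxes."""
    def residuals(params):
        teff, r_over_d = params
        model = np.pi * r_over_d ** 2 * model_flux_fn(teff)
        return (obs_flux - model) / obs_err          # chi^2 = sum(residuals^2)

    sol = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt minimization
    teff, r_over_d = sol.x
    d_pc = 1000.0 / parallax_mas                     # distance from the Gaia parallax
    radius = r_over_d * d_pc                         # radius in the same length unit as the distance
    return teff, radius
```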
## 4 Results from temperature comparison
### The 100 pc SDSS Sample
We use the 100 pc white dwarf sample in the SDSS footprint to test if the temperatures obtained from the optical and the UV data agree, and also to test the feasibility of identifying UV-excess or UV-deficit objects. Kilic et al. (2020) presented a detailed model atmosphere analysis of the 100 pc white dwarf sample in the SDSS footprint and identified 1508 DA white dwarfs. Cross-matching this sample with GUVcat (Bianchi et al., 2017), we find 847 DA white dwarfs with GALEX data; 377 have both FUV and NUV photometry available, while 470 have only NUV data available.
The top panels in Figure 1 show our fits for WD 1448+411, a spectroscopically confirmed DA white dwarf (Gianninas et al., 2011) in the 100 pc SDSS sample. The top left panel shows the SDSS \(u\) and Pan-STARRS \(grizy\) photometry (error bars) along with the predicted fluxes from the best-fitting pure hydrogen atmosphere model (filled dots). The labels in the same panel give the Pan-STARRS coordinates, Gaia DR3 Source ID, and the photometry used in the fitting. The top right panel shows the same model fits, but with the addition of the GALEX FUV and NUV photometry. The temperature and surface gravity estimates from both sets of fits, based on either the optical data only (left panel) or a combination of the optical and UV data (right panel), agree remarkably well for this star. Hence, the
spectral energy distribution of WD 1448+411 in the 0.1-1 \(\mu\)m range is consistent with an isolated pure hydrogen atmosphere white dwarf.
The bottom panels in Figure 1 show the model fits for another white dwarf in the 100 pc SDSS sample. GD 323 (WD 1302+597) is a spectroscopically confirmed DAB white dwarf (Wesemael et al., 1993). The use of pure hydrogen atmosphere models to fit its spectral energy distribution is obviously inappropriate. However, we use GD 323 to demonstrate how fitting the UV to optical spectral energy distribution can reveal objects with unusual atmospheric composition. The bottom left panel in Figure 1 shows our model fits using only the optical data from the SDSS and Pan-STARRS. Assuming a pure hydrogen composition, GD 323 would have the best-fitting \(T_{\rm eff}=26879\pm 1310\) K and \(\log g=8.230\pm 0.047\). This solution provides an excellent match to the optical photometry. The bottom right panel shows the same model fits with the addition of the GALEX FUV and NUV data. The best-fitting model parameters are significantly different, and clearly the pure hydrogen atmosphere models cannot match the UV portion of the spectral energy distribution of GD 323. Hence, a comparison between the two sets of model fits based on optical and/or UV data has the potential to identify DAB or other types of unusual objects among the DA white dwarf population in the solar neighborhood.
Figure 2 shows a comparison between the model fits using optical data only versus a combination of the optical + UV data for the DA white dwarfs in the 100 pc SDSS sample. Blue dots and red triangles mark the magnetic and DAB white dwarfs, respectively. The majority of the objects in this figure fall very close to the 1:1 line, shown in red, confirming that they are consistent with pure hydrogen atmosphere white dwarfs.
Excluding the five significant outliers labeled in the figure, the effective temperature and \(\log g\) derived from the GALEX+optical data are slightly higher than the values obtained from the optical data only by \(50^{+215}_{-71}\) K and \(0.01^{+0.04}_{-0.01}\) dex, respectively. Hence, there are no major systematic differences between the best-fit parameters derived from optical only data and the optical + UV photometry. However, the addition of the GALEX FUV and NUV data helps improve the statistical errors in the model fits, especially for the hotter white dwarfs where the spectral energy distribution peaks in the UV. For example, for white dwarfs with \(T_{\rm eff}<10000\) K, the statistical errors in optical + UV temperature estimates are on average better by a factor of 1.3 compared to the errors based on the optical data only, but they are better by a factor of 2.5 for \(T_{\rm eff}>15000\) K.
Figure 1: _Top:_ Model fits to WD 1448+411, a spectroscopically confirmed DA white dwarf in the 100 pc SDSS sample. Each panel shows the best-fitting pure hydrogen atmosphere white dwarf model (filled dots) to the photometry (error bars). The labels in each panel include the Pan-STARRS coordinates, the Gaia DR3 Source ID, and the photometry used in the fitting: \(F\,Nugrizy\) means GALEX FUV + NUV + SDSS \(u\) + Pan-STARRS \(grizy\). The left panel shows the model fits based on the optical data only, whereas the right panel shows the fit using both optical and the UV data. The best-fitting model parameters are given in each panel. _Bottom:_ Model fits to GD 323 (WD 1302+597), a spectroscopically confirmed DAB white dwarf, assuming a pure H atmosphere.
Figure 2: A comparison between the effective temperature derived from optical only data versus a combination of the UV and optical data for the DA white dwarfs in the 100 pc SDSS \(\cap\) GALEX sample. Unusual objects, magnetic DAH and mixed composition DAB white dwarfs, are labeled with blue dots and red triangles, respectively.
The five significant outliers in Figure 2 all appear to be fainter than expected in the UV, and that is why their best-fitting temperatures based on the optical + UV model fits are cooler than those based on the optical data. These outliers include two DA white dwarfs with unusual atmospheric composition: J1304+5927 (GD 323, see Figure 1) and J0234\(-\)0406 (PSO J038.5646\(-\)04.1025). The latter was originally classified as a DA white dwarf based on a low-resolution spectrum obtained by Kilic et al. (2020). Higher signal-to-noise ratio follow-up spectroscopy by Gentile Fusillo et al. (2021) demonstrated that J0234\(-\)0406 is in fact a DABZ white dwarf that hosts a gaseous debris disk. Even though its spectral appearance is visually dominated by broad Balmer absorption lines, the atmosphere of J0234\(-\)0406 is actually dominated by helium, and that is why it is an outlier in Figure 2.
J0842\(-\)0222 (PSO J130.5623\(-\)02.3741) and J1543+3021 (PSO J235.8127+30.3595) are both strongly magnetic and massive white dwarfs with \(M>1.1\)\(M_{\odot}\) and unusual optical spectra. Schmidt et al. (1986) noted problems with fitting the UV and optical spectral energy distribution of the strongly magnetic white dwarf PG 1031+234 with a field stronger than 200 MG. They found that the IUE and optical/infrared fits cannot be reconciled and that there is no Balmer discontinuity in the spectrum of this object. They attribute this to the blanketing due to hydrogen lines being grossly different, and the addition of a strong opacity source (cyclotron absorption). GD 229 is another example of a magnetic white dwarf with inconsistent UV and optical temperature estimates (Green & Liebert, 1981). Out of the 51 magnetic white dwarfs shown in Figure 2, only J0842\(-\)0222 and J1543+3021 have significantly discrepant UV and optical temperatures. Hence, such inconsistencies seem to impact a fraction of the magnetic DA white dwarfs in the solar neighborhood.
Another outlier, J0655+2939 (PSO J103.8966+29.6527), is also a massive white dwarf with \(M\sim 1.2\)\(M_{\odot}\). We obtained follow-up optical spectroscopy of J0655+2939 using the KOSMOS spectrograph on the APO 3.5m telescope on UT 2023 Jan 28. We used the blue grism in the high slit position with a 2.05\(\arcsec\) slit, providing wavelength coverage from 4150 A to 7050 A and a resolution of 1.42 A per pixel in the \(2\times 2\) binned mode.
Figure 3 shows our model fits for J0655+2939. The top panel shows the best-fitting H (filled dots) and He (open circles) atmosphere white dwarf models to the optical photometry (black error bars). Note that the GALEX photometry (red error bars) are not used in these fits. The middle panel shows the observed spectrum (black line) along with the predicted spectrum (red line) based on the pure H atmosphere solution. The bottom panel shows the entire KOSMOS spectrum. We confirm J0655+2939 as a DA white dwarf. Even though its Balmer lines and the optical + NUV photometry agree with the pure H atmosphere solution, J0655+2939 is significantly fainter than expected in the GALEX FUV band. The source of this discrepancy is unclear, but the observed H\(\alpha\) line core is also slightly shallower than expected based on the pure H atmosphere model.
### The MWDD DA sample
The 100 pc SDSS DA white dwarf sample discussed in the previous section clearly demonstrates that 1) there are no large-scale systematic differences between the model fits using optical only data (\(ugriz\)) and a combination of optical + UV data, and 2) GALEX FUV and NUV data can be used to identify unusual DA white dwarfs with helium-rich atmospheres or strong magnetic fields. We now expand our study to the entire Montreal White Dwarf Database DA white dwarf sample in the Pan-STARRS \(\cap\) GALEX footprint.
Figure 4 shows a comparison between the effective temperatures derived from optical and UV data for the DA white dwarfs in the SDSS footprint. The difference from Figure 2 is that the sample shown here extends beyond 100 pc, and therefore is corrected for reddening using the de-reddening procedure from Harris et al. (2006) and the GALEX extinction coefficients from Wall et al. (2019). The left panel includes objects with only NUV data, whereas the right panel includes objects with both FUV and NUV data. The red line shows the 1:1 line, and the green line is the best-fitting polynomial to the data. The magenta points mark the outliers that are 3\(\sigma\) away from both lines. The best-fitting polynomial takes the form
\[y=c_{2}x^{2}+c_{1}x+c_{0}, \tag{1}\]
where y is \(T_{\rm eff}(FNugrizy)/1000\) and x is \(T_{\rm eff}(ugrizy)/1000\). The coefficients are given in Table 1. The sample with the NUV data only (left panel) is limited mostly to white dwarfs with temperatures between 5000 and 12000 K.
Figure 3: Model atmosphere fits to the DA white dwarf J0655+2939. The top panel shows the best-fitting H (filled dots) and He (open circles) atmosphere white dwarf models to the optical photometry (black error bars). The middle panel shows the observed spectrum (black line) along with the predicted spectrum (red line) based on the pure H atmosphere solution. The bottom panel shows a broader wavelength range.
This is simply an observational bias; hotter white dwarfs would be brighter in the FUV, and therefore they would have been detected in both NUV and FUV bands.
A comparison between the model parameters obtained from \(ugrizy\) and \(Nugrizy\) (left panel) shows that there are no systematic differences between the two sets of fits. We find eight 3\(\sigma\) outliers based on this analysis, all very similar to the outliers shown in Figure 2 with UV flux deficits.
On the other hand, we do find a systematic trend in the temperature measurements from the fits using the GALEX FUV, NUV, SDSS \(u\), and Pan-STARRS \(grizy\) filters shown in the right panel. Here the best-fitting polynomial shows that the temperatures based on the optical + UV data are slightly underestimated compared to the temperatures obtained from the optical data only. The difference is \(-180\) K at 15000 K, \(-620\) K at 20000 K, and \(-1670\) K at 30000 K. Note that the average temperature errors based on the optical data are 670, 970, and 1850 K at 15000, 20000, and 30000 K, respectively.
Figure 4: A comparison between the effective temperature derived from optical only data (SDSS \(u\) and Pan-STARRS \(grizy\)) versus the optical + UV data for the DA white dwarfs in the SDSS footprint. The left panel shows objects with only NUV data, whereas the right panel includes objects with both FUV and NUV data. The 1:1 line is shown in red. The green line is the best-fitting polynomial to the data. The 3\(\sigma\) outliers are shown in magenta.
Figure 5: Same as Figure 4, but for the DA white dwarfs outside of the SDSS footprint.
Hence, the observed systematic shift in this figure is consistent with the optical constraints on the same systems within \(1\sigma\). We identify 83 outliers \(3\sigma\) away from both the 1:1 line and the best-fitting polynomial (red and green lines in the figure) including a number of UV-excess objects.
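In code, the trend fit and the outlier selection can be outlined as follows; defining the \(3\sigma\) threshold through the scatter of the residuals is a simplification on our part.

```python
import numpy as np

def flag_outliers(t_opt, t_optuv, nsigma=3.0):
    """Fit the quadratic trend of Eq. (1) and flag stars off both the trend and the 1:1 line.

    t_opt, t_optuv : temperatures (in units of 1000 K) from the optical-only and
                     optical + UV fits, respectively.
    """
    coeffs = np.polyfit(t_opt, t_optuv, deg=2)       # c2, c1, c0 of Eq. (1)
    res_trend = t_optuv - np.polyval(coeffs, t_opt)
    res_one2one = t_optuv - t_opt
    off_trend = np.abs(res_trend) > nsigma * np.std(res_trend)
    off_one2one = np.abs(res_one2one) > nsigma * np.std(res_one2one)
    return coeffs, off_trend & off_one2one
```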
Figure 5 shows a similar comparison for the DA white dwarfs outside of the SDSS footprint. These do not have SDSS \(u\)-band measurements, hence our model fits are based on the Pan-STARRS \(grizy\) and GALEX FUV and NUV bands. The left panel shows the model fits for the DA sample with only NUV data available. Here the 1:1 line provides an excellent match to the parameters obtained from both the optical and the optical + UV analysis. We identify only 3 outliers based on this subsample.
The right panel in Figure 5 reveals a systematic trend in the temperature measurements based on the GALEX FUV + NUV + \(grizy\) data compared to the temperatures derived from the optical only data. The best-fitting polynomial takes the form of Equation 1, where y is \(T_{\rm eff}(FNgrizy)/1000\) and x is \(T_{\rm eff}(grizy)/1000\). The coefficients are given in Table 1. This trend is similar to the one seen for the SDSS sample (right panel in Figure 4) but it is in the opposite direction. The optical + UV analysis leads to temperatures that are slightly overestimated compared to the analysis using the optical data only. The difference is +850, +950, and +1090 K at 15000, 20000, and 30000 K, respectively. The average temperature errors based on the optical data are 670, 2810, and 6040 K at 15000, 20000, and 30000 K, respectively. Again, the observed systematic trend is consistent with the results from the optical only analysis within \(1\sigma\). We identify 41 outliers, all of which are UV-excess objects, based on this diagram.
In total we identify 135 outliers based on this analysis. Because the full width at half-maximum of the GALEX point spread function is about 5 arcsec (Morrissey et al., 2007), blending and contamination from background sources is an issue. We checked the Pan-STARRS stacked images for each of these outliers to identify nearby sources that could impact GALEX, SDSS, or Pan-STARRS photometry measurements. We found that 24 of these outliers were likely impacted by blending sources, reducing the final sample size to 111 outliers.
Table 2 presents the list of 52 outliers that were previously known to be unusual. This list includes four objects that are confirmed or suspected to be double white dwarfs (PSO J010.0954\(-\)00.3584, J055.6249+00.4048, J063.1211\(-\)11.5012, and J173.7025+46.8094), 20 confirmed or suspected magnetic white dwarfs, seven DA + M dwarf systems, and 21 objects with an unusual atmospheric composition (DAB etc).
Figure 6 shows the spectral energy distributions for two of these outliers. The top panels show the fits to the optical and UV + optical spectral energy distributions of the previously known double-lined spectroscopy binary WD 0037\(-\)006 (Napiwotzki et al., 2020). Under the assumption of a single star, the Pan-STARRS photometry for WD 0037\(-\)006 indicates \(T_{\rm eff}=10330\pm 380\) K and \(\log g=7.36\pm 0.05\). Adding the GALEX FUV and NUV data, the best-fitting solution significantly changes to \(T_{\rm eff}=12590\pm 100\) K and \(\log g=7.63\pm 0.01\). In addition, this solution has problems matching the entire spectral energy distribution, indicating that there is likely a cooler companion contributing significant flux. This figure demonstrates that double-lined spectroscopic binaries with significant temperature differences between the primary and the secondary star could be identified based on an analysis similar to the one presented here. A similar and complementary method for identifying double-lined spectroscopic binaries was pioneered by Bedard et al. (2017), which use optical photometry and spectroscopy to identify systems with inconsistent photometric and spectroscopic solutions.
The bottom panels in Figure 6 show the fits to a previously confirmed DA + M dwarf system in our sample (Rebassa-Mansergas et al., 2016). Here the optical data are clearly at odds with a single DA white dwarf, and the GALEX FUV and NUV data reveal a UV excess from a hotter white dwarf.
Figure 6: Fits to the optical (left) and optical + UV (right) spectral energy distributions of two of the outliers in our DA white dwarf sample. The top panels show the fits for the double-lined spectroscopic binary WD 0037\(-\)006, and the bottom panels show the fits for a previously known DA + M dwarf binary.
\begin{table}
\begin{tabular}{c c c} \hline \hline Object & Gaia DR3 Source ID & Spectral Type & Reference \\ \hline PSO J012.0395\(-\)01.4109 & 2530629365419780864 & DA(He) & Kilic et al. (2020) \\ PSO J017.4701+18.0000 & 2785085218267094784 & DA(He) & Kepler et al. (2015) \\ PSO J025.4732+07.7206 & 2571609886069150592 & DAB & Kepler et al. (2015) \\ PSO J027.9384+24.0130 & 2191862120158592 & DZA & Gentile Fusillo et al. (2017) \\ PSO J038.03212+06.7391 & 252135817292538688 & DA:H: & Kleinmann et al. (2013) \\ PSO J038.5646\(-\)04.1025 & 2489275328645218560 & DABZ & Gentile Fusillo et al. (2021) \\ PSO J055.6249+00.4048 & 3263696071424152704 & DA+DB & Limoeges \& Bergeron (2010) \\ PSO J119.5813+35.7543 & 906771872872375104 & DAB & Kleinman et al. (2013) \\ PSO J123.8841+21.9719 & 67647349487352748 & DAB & Kleinman et al. (2013) \\ PSO J125.6983+12.0296 & 64930480753259520 & DA: & Kepler et al. (2015) \\ PSO J130.5623\(-\)02.3741 & 307243815677121280 & DAH?DBH? & Kilic et al. (2020) \\ PSO J131.8174+48.7057 & 10150284914889855776 & DBH: & Kleinman et al. (2013) \\ PSO J132.3710+28.9556 & 705246450842748288 & DAB & Kleinman et al. (2013) \\ PSO J132.6463+23.1245 & 706974394663604 & DABZ & Kong et al. (2019) \\ PSO J133.2881+58.7267 & 1037873899276147840 & DABZ & Gentile Fusillo et al. (2019) \\ PSO J136.6362+08.1209 & 584319855260594560 & DAH & Kleinman et al. (2013) \\ PSO J140.1791+04.8533 & 57947343742123904 & DA:BZ: & Kleinman et al. (2013) \\ PSO J141.7411+13.2557 & 5941622503766976 & DABZ & Kepler et al. (2015) \\ PSO J143.7587+44.4946 & 8815134799361707392 & DA: & Kepler et al. (2015) \\ PSO J144.9871+37.1739 & 799763528023185280 & DAB & Kepler et al. (2015) \\ PSO J150.9846+05.0456 & 38733967052044604 & DAB & Kleinman et al. (2013) \\ PSO J154.6494+30.5584 & 742652484343572408 & DAH & Kleinman et al. (2013) \\ PSO J179.9671+00.1309 & 3891115064506627840 & DA(He) & Kilic et al. (2020) \\ PSO J182.5106+18.0931 & 3949977724441143552 & DAB & Kepler et al. (2015) \\ PSO J196.1335+59.4594 & 1579147088331814144 & DAB & Wesemael et al. (1993) \\ PSO J198.6769+06.5415 & 3729586820010410496 & DA(He) & Kepler et al. (2015) \\ PSO J201.2108+29.5887 & 14606958792702384 & DA(He) & Kepler et al. (2016) \\ PSO J206.1217+21.0809 & 1249447115013660416 & DABZ & Kleinman et al. (2013) \\ PSO J211.9610+30.1917 & 1453322721887656448 & DA:H: & Kleinman et al. (2013) \\ PSO J218.8923+54.738 & 36689019778258959040 & DX & Kepler et al. (2015) \\ PSO J225.2676+06.8724 & 116093127169248416 & DA:H: & Kleinman et al. (2013) \\ PSO J223.9933+18.2145 & 11887559301361576064 & DA:H: & Kleinman et al. (2013) \\ PSO J234.3569+51.8575 & 1595298501827000960 & DBA & Kleinman et al. (2013) \\ PSO J240.2518+04.7101 & 442567655111360512 & DAH & Kleinman et al. (2013) \\ PSO J261.1393+32.5709 & 13338068572209600 & DAH & Kepler et al. (2015) \\ PSO J341.2484+33.1715 & 1890785517284104960 & DAH/DQ & Kepler et al. (2016) \\ PSO J356.5226+38.983 & 1919346461391649152 & DAH & Kleinman et al. (2013) \\ \hline PSO J010.0954\(-\)00.3584 & 2542961560852591744 & DA+DA & Napiwotzki et al. (2020) \\ PSO J042.5074\(-\)04.6175 & 81548598747536175104 & DA: & Kepler et al. (2016) \\ PSO J051.5805+31.5189 & 17709048709907584 & DAH & Kilic et al. (2020) \\ PSO J063.1211+11.5012 & 31891639274867576 & DA-DA & Napiwotzki et al. (2020) \\ PSO J065.0980+7.5929 & 25793382594461520 & DAB & Verbeek et al. (2012) \\ PSO J094.8914+55.6121 & 997854527884948992 & DAO & Gianninas et al. 
(2011) \\ PSO J109.2922+74.0109 & 1112171030998952256 & DAM & Marsh \& Duck (1996) \\ PSO J122.822+57.4396 & 1035077086847142144 & DAM & Rebass-Mansergas et al. (2016) \\ PSO J121.9537+47.6792 & 912318204322075968 & DAM & Farhi et al. (2010) \\ PSO J140.2868+13.0199 & 594229753561550208 & DAH & Kleinman et al. (2013) \\ PSO J173.7025+46.8094 & 785521450828261632 & DD? & Bedard et al. (2017) \\ PSO J182.0967+06.1655 & 3895444662122848512 & DAM & Rebass-Mansergas et al. (2016) \\ PSO J224.1602+10.6747 & 11802569442227204 & DAM & Rebass-Mansergas et al. (2016) \\ PSO J347.4922+30.4024 & 19005487461693760 & DAM? & Rebass-Mansergas et al. (2019) \\ PSO J344.9451+16.4879 & 282888597582293760 & DAM & Farhi et al. (2010) \\ \hline \end{tabular}
\end{table}
Table 2: The list of outliers that were previously known to be unusual. The horizontal line separates the UV-deficit (top) and the UV-excess (bottom) objects.
\begin{table}
\begin{tabular}{c c c} \hline \hline Coefficient & Figure 4 & Figure 5 \\ \(c_{0}\) & 0.82743420 & 0.483959599 \\ \(c_{1}\) & 0.94949536 & 1.02740975 \\ \(c_{2}\) & -0.00109481 & -0.00021316 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Coefficients for the best-fitting polynomials in figures 4 and 5.
The analysis using \(FNugrizy\) photometry confirms excess emission in the Pan-STARRS \(zy\)-bands, consistent with an M dwarf companion.
Table 3 presents the list of 59 newly identified outliers among the DA white dwarfs with GALEX data; 24 of them show flux deficits in the UV (their optical + UV temperatures are lower than the temperatures based on the optical data only), and 35 are UV-excess objects. We include the spectral types from the literature for each source.
Even though the 24 UV-deficit objects (shown in the top half of the table) are classified as DA in the literature, our analysis indicates that they are unusual. For example, re-inspecting the SDSS spectra for three of the sources classified as DAZ in the literature, we find that the Ca H and K lines are actually stronger than the Balmer lines, indicating that they are in fact DZA white dwarfs.
Similarly, re-inspecting the SDSS and LAMOST spectra for four of these sources (PSO J018.6848+35.4095, J151.1401+40.2417, J196.7725+49.1045, and J338.5445+25.1894), we find that their Balmer lines are much weaker than expected for these relatively warm white dwarfs with \(T_{\rm eff}>10,000\) K. Figure 7 shows the model fits to three of these objects based on the optical photometry. All three stars are significantly fainter than expected in the FUV and NUV bands compared to the pure H atmosphere models. The UV photometry and the weak Balmer lines indicate that these stars are likely DA(He) white dwarfs with helium dominated atmospheres.
The newly identified UV excess sample likely includes many binaries, including white dwarf + main-sequence and double white dwarf systems. We classify 14 of these systems as likely DA + M dwarfs based on their spectral energy distributions, which are dominated by the white dwarf in the UV and by a redder source in the Pan-STARRS \(zy\) bands. Four of these DA + M dwarf systems are also resolved in the Pan-STARRS \(zy\) band stacked images, but the resolved companions are not included in the Pan-STARRS photometric catalog. However, one of these resolved systems is confirmed to be a physical binary through Gaia astrometry. Both components of PSO J211.4189+74.6498 are detected in Gaia with source IDs Gaia DR3 1712016196599965312 and 1712016196599171840.
Figure 8 shows the fits to the optical and optical + UV spectral energy distributions for three of the newly identified UV excess sources that may be double white dwarfs. There are small but significant temperature discrepancies between the photometric solutions relying on optical and optical + UV data and also the optical spectroscopy. For example, for PSO J218.2047+01.7710 the model fits to the optical photometry give \(T_{\rm eff}=10341\pm 329\) K and \(\log g=7.51\pm 0.05\), while the fits to the optical + UV photometry give \(T_{\rm eff}=11793\pm 98\) K and \(\log g=7.74\pm 0.02\). Fitting the normalized Balmer line profiles, Tremblay et al. (2011) obtained \(T_{\rm eff}=11360\pm 120\) K and \(\log g=8.19\pm 0.06\) for the same star. The inconsistent \(\log g\) estimates can be explained if the photometry is contaminated by a companion (see also Bedard et al., 2017), and the small temperature differences between the different solutions favor a white dwarf companion rather than a cool, late-type M dwarf star. Follow-up spectroscopy of these three systems, as well as the rest of the UV excess sample would be helpful for constraining the nature of these objects and identifying additional double white dwarf binaries.
## 5 Results from UV magnitude comparison
The optical/UV temperature comparison method presented in the previous section provides an excellent method to identify sources with grossly different temperatures. However, it may miss some sources with unusual UV fluxes. Those model fits rely on three (\(gri\)) to six (\(ugrizy\)) optical filters versus one or two GALEX UV filters, hence the UV data have a lesser weight in constraining the temperatures.
To search for additional outliers that were potentially missed by the temperature comparison method, here we use model fits to the optical photometry plus Gaia parallaxes to predict the brightness of each star in the GALEX filters, and search for significant outliers using FUV and NUV data. To obtain the best constraints on the predicted FUV and NUV brightness of each source, we further require our stars to have photometry in the SDSS \(u\) filter as well as all of the Pan-STARRS filters. Our final magnitude comparison sample contains 10049 DA white dwarfs with photometry in at least one of the GALEX filters, the SDSS \(u\), and the Pan-STARRS \(grizy\) filters.
Figure 9 shows a comparison of the observed and predicted FUV (left) and NUV (right panel) magnitudes of the 10049 DA white dwarfs in our magnitude comparison sample. The blue dashed line is the 1:1 correlation between observed and model magnitudes. The green diamonds are previously known DAB white dwarfs while the green triangles are DA white dwarfs that have significant amounts of helium in their atmospheres, making the use of pure hydrogen atmosphere models inappropriate. The yellow diamonds are previously known magnetic white dwarfs and the black triangles are previously known DA + M dwarf systems. The blue diamonds are white dwarfs with uncertain (e.g., DA:) classifications.
As with the temperature comparison sample, blending and contamination from background sources is an issue for some sources. We checked the Pan-STARRS stacked images for each of these outliers to identify nearby sources that could impact GALEX, SDSS, or Pan-STARRS photometry measurements. The outliers that were affected by contamination are marked by blue triangles in Figure 9. The red squares are 30 newly identified 3\(\sigma\) outliers. Table 4 presents this list along with their photometric and spectroscopic temperatures based on the optical data.
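The selection can be summarized by the sketch below; combining the observed and predicted magnitude errors in quadrature is an assumption on our part.

```python
import numpy as np

def uv_outliers(mag_obs, err_obs, mag_pred, err_pred, nsigma=3.0):
    """Flag sources whose observed GALEX magnitude deviates from the prediction
    of the optical-only fit by more than nsigma combined uncertainties."""
    sigma = np.hypot(err_obs, err_pred)
    deviation = (mag_obs - mag_pred) / sigma
    excess = deviation < -nsigma    # brighter than predicted: UV-excess candidate
    deficit = deviation > nsigma    # fainter than predicted: UV-deficit candidate
    return excess, deficit
```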
Figure 10 displays the spectral energy distributions for four of these outliers. Outliers with UV excesses, such as PSO J226.4550+11.0849 shown in the top right panel of Figure 10, are likely binaries. Outliers with UV deficits, such as PSO J253.3655+27.5061 shown in the bottom left of Figure 10, do not fit the expectations from pure hydrogen atmosphere models in the UV. Their atmospheres might be dominated by helium or might contain metals, making the use of pure hydrogen models inappropriate. Alternatively, they could also be magnetic. Further observations are needed to confirm the nature of these UV excess and UV deficit objects.
## 6 Conclusions
We analyzed the UV to optical spectral energy distributions of 14001 DA white dwarfs from the Montreal White Dwarf Database, taking advantage of the GALEX FUV and NUV data and Gaia DR3 parallaxes. Using the 100 pc sample where extinction is negligible, we demonstrated that there are no major systematic differences between the best-fit parameters derived from optical only data and the optical + UV photometry. The effective temperatures derived from optical and UV + optical data differ by only \(50^{+215}_{-71}\) K. The addition of GALEX FUV and NUV data in the model atmosphere analysis helps improve the statistical errors in the fits, especially for hot white dwarfs.
We used two different methods to identify UV excess or UV deficit objects. In the first method, we compared the temperatures obtained from fitting the optical data with those obtained from fitting optical + UV data. We identified 111 significant outliers with this method, including 52 outliers that were previously known to be unusual.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Object & Gaia DR3 Source ID & Optical & Optical + UV & Spectral & Reference & Notes \\ & & \(T_{\rm eff}\) (K) & \(T_{\rm eff}\) (K) & Type & & \\ \hline PSO J018.6848\(+\)35.4095 & 321093335597030400 & 15369 \(\pm\) 678 & 11872 \(\pm\) 223 & DA & Gentile Fusillo et al. (2015) & DA(He) LAMOST \\ PSO J032.2011\(+\)21.2256 & 73623921366683008 & 27516 \(\pm\) 1379 & 21586 \(\pm\) 476 & DA & Kleinman et al. (2013) & \\ PSO J043.8655\(+\)02.6202 & 1559111783825792 & 8685 \(\pm\) 254 & 7788 \(\pm\) 117 & DA & Kilic et al. (2020) & \\ PSO J056.0479\(+\)15.1626 & 4287119614383616 & 853 \(\pm\) 241 & 7548 \(\pm\) 90 & DA & Andrews et al. (2015) & \\ PSO J103.8966\(+\)29.6527 & 88787180378480504 & 19249 \(\pm\) 849 & 15130 \(\pm\) 168 & DA & Kilic et al. (2020) & massive \\ PSO J130.7484\(+\)16.0677 & 89542140316534 & 15674 \(\pm\) 731 & 12139 \(\pm\) 293 & DAZ & Kepler et al. (2015) & DZA SDSS \\ PSO J132.2963\(+\)14.4454 & 608922974120358784 & 19793 \(\pm\) 1038 & 11354 \(\pm\) 226 & DA & Gentile Fusillo et al. (2019) & \\ PSO J139.0499\(+\)34.9872 & 714469355877947136 & 23646 \(\pm\) 1571 & 14667 \(\pm\) 545 & DA & Kleinman et al. (2013) & massive \\ PSO J151.1141\(+\)40.2417 & 8036926914983223 & 13881 \(\pm\) 797 & 1086 \(\pm\) 188 & DA & Kepler et al. (2015) & DA(He) SDSS \\ PSO J158.8932\(+\)27.510 & 728222390591647872 & 17876 \(\pm\) 998 & 12284 \(\pm\) 307 & DA & Gentile Fusillo et al. (2019) & \\ PSO J159.5929\(+\)37.5335 & 7519335511863040 & 14891 \(\pm\) 628 & 12383 \(\pm\) 194 & DA:DC & Kleinman et al. (2013) & DAB SDSS \\ PSO J163.5943\(-\)02.786 & 3801901270848297600 & 25209 \(\pm\) 1429 & 15858 \(\pm\) 522 & DA & Croom et al. (2001) & massive \\ PSO J172.6518\(-\)00.3655 & 37972016532086360 & 15198 \(\pm\) 761 & 11058 \(\pm\) 264 & DA:Z & Kleinman et al. (2013) & DZA SDSS \\ PSO J180.6015\(+\)05.4822 & 40349287794285184 & 16551 \(\pm\) 835 & 11880 \(\pm\) 377 & DA & Kleinman et al. (2013) & DC: SDSS \\ PSO J196.7725\(+\)49.1045 & 145854861883850476 & 14573 \(\pm\) 736 & 11689 \(\pm\) 225 & DA: & Kleinman et al. (2013) & DA(He) SDSS \\ PSO J213.8277\(+\)31.9308 & 147763319553214752 & 16960 \(\pm\) 829 & 13040 \(\pm\) 608 & DAZ & Gentile Fusillo et al. (2019) & DZA SDSS \\ PSO J215.2971\(+\)38.9912 & 14849315819492454 & 1773 \(\pm\) 873 & 14159 \(\pm\) 483 & DA: & Kleinman et al. (2013) & DC: SDSS \\ PSO J231.8495\(+\)06.7581 & 1162640920197098624 & 16792 \(\pm\) 1013 & 12897 \(\pm\) 411 & DA & Carter et al. (2013) & massive \\ PSO J249.3471\(+\)53.6494 & 14266645078068114 & 16315 \(\pm\) 761 & 12321 \(\pm\) 257 & DAZ & Kepler et al. (2016) & \\ PSO J309.1036\(+\)77.8178 & 229067158609770240 & 28040 \(\pm\) 1427 & 21372 \(\pm\) 486 & DA & Bédard et al. (2020) & \\ PSO J324.1725\(+\)01.0846 & 268825992222372196 & 16404 \(\pm\) 937 & 12078 \(\pm\) 389 & DA & Vidrih et al. (2007) & \\ PSO J338.5445\(+\)25.1894 & 187737462815704 & 16838 \(\pm\) 892 & 11787 \(\pm\) 266 & DA & Gentile Fusillo et al. (2019) & DA(He) SDSS \\ PSO J342.5362\(+\)27.580 & 2836803805058456 & 26580 \(\pm\) 1279 & 20752 \(\pm\) 605 & DA & Beddard et al. (2020) & \\ PSO J348.7601\(+\)22.1674 & 2838958711048617856 & 26837 \(\pm\) 1327 & 1973 \(\pm\) 571 & DA & Kleinman et al. (2013) & \\ \hline PSO J003.9449\(-\)30.1015 & 2320237751020937728 & 9768 \(\pm\) 375 & 13816 \(\pm\) 408 & DA & Vennes et al. (2002) & \\ PSO J009.0492\(-\)17.5443 & 2364297204875140224 & 13479 \(\pm\) 1408 & 21203 \(\pm\) 437 & DA & Gianninas et al. 
(2011) & \\ PSO J015.0435\(-\)28.1077 & 503974938207807488 & 13023 \(\pm\) 1105 & 17630 \(\pm\) 337 & DA & Croom et al. (2004) & \\ PSO J019.3103\(+\)24.6726 & 2940625378623126 & 12160 \(\pm\) 1007 & 17090 \(\pm\) 90 & DA & Kleinman et al. (2013) & \\ PSO J021.9568\(+\)37.4798 & 53548264113724400 & 6422 \(\pm\) 190 & 7259 \(\pm\) 75 & DA & Limogos et al. (2015) & \\ PSO J023.0575\(-\)28.1766 & 50350329665246593040 & 12745 \(\pm\) 931 & 17999 \(\pm\) 527 & DA & Croom et al. (2004) & \\ PSO J029.9572\(-\)27.8589 & 5024390701506507648 & 11176 \(\pm\) 562 & 14449 \(\pm\) 337 & DA & Croom et al. (2004) & \\ PSO J041.4724\(-\)12.7058 & 5158731712247303040 & 9493 \(\pm\) 307 & 12643 \(\pm\) 504 & DA & Kilkenny et al. (2016) & DAM? & \\ PSO J051.6792\(+\)69.4045 & 4946417617692834944 & 13855 \(\pm\) 1939 & 19655 \(\pm\) 343 & DA & Gianninas et al. (2011) & \\ PSO J052.094\(-\)25.9060 & 443475555046044 & 10729 \(\pm\) 46 3492
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Object & Gaia DR3 Source ID & Photometric & Spectroscopic & Spectral & Reference & Notes \\ & & \(T_{\rm eff}\) (K) & \(T_{\rm eff}\) (K) & Type & & \\ \hline PSO J001.0830+23.8334 & 2849729771768028544 & 27453 & 34738 & DA & Kepler et al. (2016) & \\ PSO J004.9372+33.6842 & 286401153016354816 & 7513 & 8982 & DA & Kepler et al. (2016) & \\ PSO J005.7002+00.7079 & 254689356055427840 & 6803 & 6992 & DA & Kleinman et al. (2013) & DAM? \\ PSO J021.8549+27.6214 & 24937240541642148 & 6823 & 6723 & DA & Kepler et al. (2016) & \\ PSO J056.0308+05.2121 & 324487211251826048 & 10331 & 12371 & DA & Kleinman et al. (2013) & DAM? \\ PSO J118.9063+21.1283 & 673549759340472272 & 9270 & 9941 & DA & Kleinman et al. (2013) & \\ PSO J126.3419+17.4310 & 662102679359467648 & 7867 & 7838 & DA & Kleinman et al. (2013) & \\ PSO J137.9380+35.2566 & 71437792891156992 & 14027 & 19527 & DA & Kleinman et al. (2013) & \\ PSO J149.14951+67.0718 & 146836711133757184 & 10292 & 11288 & DA & Kleinman et al. (2013) & \\ PSO J152.4805+00.1622 & 3831830527112439936 & 10569 & 10513 & DA & Kleinman et al. (2013) & \\ PSO J176.3500+24.1592 & 4004972723377902592 & 7188 & 7403 & DA & Kepler et al. (2016) & \\ PSO J189.9978+33.1080 & 1514768341766352992 & 10348 & 10957 & DA & Kleinman et al. (2013) & \\ PSO J204.9287+06.1751 & 166252484641427460 & 7888 & 9463 & DA & Kleinman et al. (2013) & \\ PSO J205.8897+23.2339 & 1443624343108095216 & 9516 & 10373 & DA & Kleinman et al. (2013) & \\ PSO J210.9347+37.1660 & 1483513830393895680 & 8703 & 11040 & DA & Kepler et al. (2015) & \\ PSO J213.9910+62.5129 & 16675059898974208 & 9593 & 10114 & DA & Kleinman et al. (2013) & \\ PSO J262.4550+11.0849 & 1180520439763502008 & 10315 & 11354 & DA & Kleinman et al. (2013) & \\ PSO J226.6898+06.4651 & 16900305585791618 & 9500 & 10670 & DA & Fairli et al. (2012) & \\ PSO J227.2923+37.129 & 1292306146987734784 & 8264 & 8526 & DA & Kepler et al. (2015) & \\ PSO J244.4451+40.3379 & 1308686815769537920 & 7600 & 13013 & DA & Kepler et al. (2015) & DAM? \\ PSO J248.9274+26.3827 & 1304383217063475968 & 30346 & 34544 & DA & Kleinman et al. (2013) & \\ PSO J249.2986+26.1882 & 45946170420927716 & 71824 & 7904 & DA & Kepler et al. (2015) & \\ PSO J250.5693+22.9411 & 1299405148103896832 & 11188 & 12763 & DA & Kleinman et al. (2013) & \\ PSO J251.3785+41.0348 & 1356243233471452288 & 7884 & 8068 & DA & Kepler et al. (2015) & \\ PSO J253.5654+52.5061 & 1309941963308160 & 29557 & 30472 & DA & Kleinman et al. (2013) & \\ PSO J238.6590+00.6697 & 268018672335282768 & 17608 & 20257 & DA & Kleinman et al. (2013) & \\ PSO J331.0859+24.2120 & 17953940716599190632 & 6847 & 6873 & DA & Kepler et al. (2015) & \\ PSO J341.3178+00.6951 & 2653703714870987648 & 8299 & 9611 & DA & Kleinman et al. (2013) & \\ PSO J349.8567+07.6224 & 2664938112366990080 & 7394 & 8519 & DA & Kepler et al. (2016) & \\ PSO J358.8416+16.8000 & 2773308246143281920 & 7149 & 7066 & DA & Kepler et al. (2016) & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Additional outliers identified through a comparison of the observed and predicted UV magnitudes.
Figure 7: Model atmosphere fits to three DA white dwarfs with UV flux deficits. The top panels show the best-fitting H (filled dots) and He (open circles) atmosphere white dwarf models to the optical photometry (black error bars). The middle panels show the observed spectrum (black line) along with the predicted spectrum (red line) based on the pure H atmosphere solution. The bottom panels show a broader wavelength range. GALEX FUV and NUV data clearly favor the He-dominated atmosphere solutions, which are also confirmed by the relatively weak Balmer lines in their spectra.
These include DA white dwarfs with helium dominated atmospheres, magnetic white dwarfs, double white dwarfs, and white dwarf + M dwarf systems. Out of the 59 newly identified systems, 35 are UV excess and 24 are UV deficit objects. In the second method, we used the optical photometry to predict the FUV and NUV magnitudes for each source, and classified sources with 3\(\sigma\) discrepant FUV and/or NUV photometry as outliers. Using this method, we identified 30 additional outliers.
Combining these two methods, our final sample includes 89 newly identified outliers. The nature of these outliers cannot be constrained by our analysis alone. Many of the UV excess objects are likely binaries, including double degenerates and white dwarfs with late-type stellar companions. Follow-up spectroscopy and infrared observations of these outliers would help constrain their nature.
There are several current and upcoming surveys that are specifically targeting large numbers of white dwarfs spectroscopically. For example, the Dark Energy Spectroscopic Instrument Data Release 1 is expected to contain spectra for over 47000 white dwarf candidates (Manser et al., 2023). DA white dwarfs make up the majority of the white dwarf population. Hence, the number of spectroscopically confirmed DA white dwarfs will increase significantly in the near future. The Ultraviolet Transient Astronomy Satellite (ULTRASAT, Ben-Ami et al., 2022) will perform an all-sky survey during the first 6 months of the mission to a limiting magnitude of 23 to 23.5 in its 230-290 nm NUV passband. This survey will be about an order of magnitude deeper than GALEX. Future analysis of these larger DA white dwarf samples with GALEX FUV/NUV or ULTRASAT NUV data would provide an excellent opportunity to identify unusual objects among the DA white dwarf population.
## Acknowledgements
This work is supported by NASA under grant 80NSSC22K0479, the NSERC Canada, the Fund FRQ-NT (Quebec), and by NSF under grants AST-1906379 and AST-2205736. The Apache Point Observatory 3.5-meter telescope is owned and operated by the Astrophysical Research Consortium. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)).
Figure 8: Fits to the optical (left) and optical + UV (right) spectral energy distributions of three of the newly identified UV excess sources in our DA white dwarf sample. The inconsistent temperature estimates from the optical and UV photometry and optical spectroscopy indicate that they may be double white dwarfs.
institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
## Data Availability
The data underlying this article are available in the MWDD at [http://www.montrealwhitedwarfdatabase.org](http://www.montrealwhitedwarfdatabase.org) and also from the corresponding author upon reasonable request.
|
2302.11190 | A Hitting Time Analysis for Stochastic Time-Varying Functions with
Applications to Adversarial Attacks on Computation of Markov Decision
Processes | Stochastic time-varying optimization is an integral part of learning in which
the shape of the function changes over time in a non-deterministic manner. This
paper considers multiple models of stochastic time variation and analyzes the
corresponding notion of hitting time for each model, i.e., the period after
which optimizing the stochastic time-varying function reveals informative
statistics on the optimization of the target function. The studied models of
time variation are motivated by adversarial attacks on the computation of value
iteration in Markov decision processes. In this application, the hitting time
quantifies the extent that the computation is robust to adversarial
disturbance. We develop upper bounds on the hitting time by analyzing the
contraction-expansion transformation appeared in the time-variation models. We
prove that the hitting time of the value function in the value iteration with a
probabilistic contraction-expansion transformation is logarithmic in terms of
the inverse of a desired precision. In addition, the hitting time is analyzed
for optimization of unknown continuous or discrete time-varying functions whose
noisy evaluations are revealed over time. The upper bound for a continuous
function is super-quadratic (but sub-cubic) in terms of the inverse of a
desired precision and the upper bound for a discrete function is logarithmic in
terms of the cardinality of the function domain. Improved bounds for convex
functions are obtained and we show that such functions are learned faster than
non-convex functions. Finally, we study a time-varying linear model with
additive noise, where hitting time is bounded with the notion of shape
dominance. | Ali Yekkehkhany, Han Feng, Donghao Ying, Javad Lavaei | 2023-02-22T07:54:28Z | http://arxiv.org/abs/2302.11190v1 | A Hitting Time Analysis for Stochastic Time-Varying Functions with Applications to Adversarial Attacks on Computation of Markov Decision Processes
###### Abstract
Stochastic time-varying optimization is an integral part of learning in which the shape of the function changes over time in a non-deterministic manner. This paper considers multiple models of stochastic time variation and analyzes the corresponding notion of hitting time for each model, i.e., the period after which optimizing the stochastic time-varying function reveals informative statistics on the optimization of the target function. The studied models of time variation are motivated by adversarial attacks on the computation of value iteration in Markov decision processes. In this application, the hitting time quantifies the extent that the computation is robust to adversarial disturbance. We develop upper bounds on the hitting time by analyzing the contraction-expansion transformation appeared in the time-variation models. We prove that the hitting time of the value function in the value iteration with a probabilistic contraction-expansion transformation is logarithmic in terms of the inverse of a desired precision. In addition, the hitting time is analyzed for optimization of unknown continuous or discrete time-varying functions whose noisy evaluations are revealed over time. The upper bound for a continuous function is super-quadratic (but sub-cubic) in terms of the inverse of a desired precision and the upper bound for a discrete function is logarithmic in terms of the cardinality of the function domain. Improved bounds for convex functions are obtained and we show that such functions are learned faster than non-convex functions. Finally, we study a time-varying linear model with additive noise, where hitting time is bounded with the notion of shape dominance.
_Keywords--_ Stochastic time-varying functions, stochastic operators, hitting time, probabilistic contraction-expansion mapping, probabilistic Banach fixed-point theorem, adversarial Markov decision process
## 1 Introduction and Related Work
In many practical applications of optimization, such as those in the training of neural networks [1, 2], online advertising [3], the decision-making process of power systems [4, 5], and the real-time state estimation of nonlinear systems [6], the parameters of the problem are often uncertain and change over time [7]. To capture both the time variation and the uncertainty of such systems, time-varying or online optimization aims to find the solution trajectories determined by
\[x_{t}^{*}=\operatorname*{argmin}_{x\in\mathcal{X}}\left\{f_{t}(x)=\mathbb{E}F_ {t}(x,\xi)\right\},\quad t\in\{1,2,\dots\}, \tag{1}\]
where the random variable \(\xi\) models the uncertainty in the objective that comes from disturbance, inexactness of model, use of small batches, or injected noise, and where \(\operatorname*{argmin}\) denotes any global minimizer of the input function. Note that the expectation \(\mathbb{E}\) over \(\xi\) can only be evaluated approximately since the probability
distribution is unknown, and therefore the target function \(f_{t}\) should be approximated by observed samples. The estimate of the target function may not capture the shape of the target function given a limited number of observed samples. However, there is a point of time, named _hitting time_, after which optimizing the estimated target function results in optimizing the target function up to some precision and confidence level. The hitting time captures the stochastic complexity of the time-varying problem in (1).
### 1.1 Motivating Applications
In order to motivate the analysis of hitting time for time-varying probabilistic transformations, we first explain its applications in Markov Decision Process (MDP) and reinforcement learning (RL). Consider an MDP with the set of states (state space) \(\mathcal{S}\), the set of actions (action space) \(\mathcal{A}\), the time-invariant state transition \(h\) such that \(s_{k+1}=h(s_{k},a_{k},w_{k})\), where \(w_{k}\) for \(k\in\{0,1,\dots\}\) is a sequence of independent and identically distributed (i.i.d.) random variables, and the immediate reward \(r(s_{k},a_{k},w_{k})\) received after taking action \(a_{k}\) in state \(s_{k}\). A state-contingent decision policy is a mapping \(\mu:\mathcal{S}\rightarrow\mathcal{A}\). Given a discount factor \(0<q<1\) and a policy \(\mu\), the value function \(V^{\mu}:\mathcal{S}\rightarrow\mathcal{R}\) is defined as
\[V^{\mu}(s)=\mathbb{E}\left[\sum_{k=0}^{\infty}q^{k}\cdot r(s_{k},\mu(s_{k}),w _{k})\bigg{|}s_{0}=s\right], \tag{2}\]
where expectation is taken over \(w_{k}\) for \(k\geq 0\). Then, the optimal value function \(V^{*}\) is defined by
\[V^{*}(s)=\max_{\mu}V^{\mu}(s). \tag{3}\]
For a finite action space, any policy \(\mu^{*}\) given by
\[\mu^{*}(s)=\underset{a\in\mathcal{A}}{\text{argmax}}\ \mathbb{E}\big{[}r(s,a,w)+q \cdot V^{*}(h(s,a,w))\big{]} \tag{4}\]
is optimal in the sense that \(V^{*}(s)=V^{\mu^{*}}(s)\), which gives rise to the Bellman equation
\[V^{*}(s)=\max_{a\in\mathcal{A}}\ \mathbb{E}\big{[}r(s,a,w)+q\cdot V^{*}(h(s,a, w))\big{]}\quad\forall s\in\mathcal{S}, \tag{5}\]
where \(w\) is a random variable with the same distribution as \(w_{k}\) for some \(k\). Define the Bellman operator \(\mathcal{T}\) as
\[(\mathcal{T}V)(s)=\max_{a\in\mathcal{A}}\ \mathbb{E}\big{[}r(s,a,w)+q\cdot V(h(s,a, w))\big{]} \tag{6}\]
Starting from an arbitrary \(V_{0}\), the value iteration method constructs a sequence \(\{V_{0},V_{1},V_{2},\dots\}\) with \(V_{t+1}=\mathcal{T}(V_{t})\) for \(t\in\{0,1,\dots\}\). It is well known that the Bellman operator is a contraction mapping, which guarantees convergence to \(V^{*}\). The optimal value function \(V^{*}\) is unknown in MDP and RL applications. The value function \(V_{t}\) is a time-varying function and may never be exactly equal to \(V^{*}\). Moreover, \(V_{t}\) is rarely computed exactly and is subject to adversarial attacks. We will introduce multiple models of attack and analyze the corresponding notion of hitting time for each model to be able to study the convergence of \(V_{t}\).
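As a concrete, hypothetical instance of the Bellman operator (6) and of exact value iteration, the following sketch iterates \(V_{t+1}=\mathcal{T}(V_{t})\) on a small finite MDP with known transition probabilities; the transition matrix, rewards, and discount factor below are made up purely for illustration.

```python
import numpy as np

# Hypothetical 3-state, 2-action MDP: P[a, s, s'] = transition probability, R[s, a] = expected reward.
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.6, 0.3], [0.0, 0.3, 0.7]],
              [[0.5, 0.5, 0.0], [0.0, 0.2, 0.8], [0.4, 0.0, 0.6]]])
R = np.array([[1.0, 0.5], [0.0, 2.0], [0.3, 0.1]])
q = 0.9   # discount factor

def bellman(V):
    """Bellman operator (6): (T V)(s) = max_a ( E[r(s,a,w)] + q * sum_{s'} P(s'|s,a) V(s') )."""
    return (R + q * np.einsum('asj,j->sa', P, V)).max(axis=1)

V = np.zeros(3)
for _ in range(1000):
    V_new = bellman(V)
    if np.max(np.abs(V_new - V)) < 1e-10:   # sup-norm contraction with factor q guarantees convergence
        V = V_new
        break
    V = V_new
greedy_policy = (R + q * np.einsum('asj,j->sa', P, V)).argmax(axis=1)   # cf. (4)
```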
\begin{table}
\begin{tabular}{c c c} \hline \hline Theorem & Assumptions & Hitting Time Definition \\ \hline
3 & Assumptions 3-4, bounded difference functions & (45) \\
4 & Assumptions 3-6, convex bounded difference functions & (45) \\
5 & Assumptions 3 and 7 & (65) \\
6 & Assumptions 3 and 7, unimodal functions & (65) \\
8 & linear dynamics and shape dominance & (84) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of Selected Theorems in Sections 2 and 3
### 1.2 Related Work
#### 1.2.1 Approximate Dynamic Programming
The field of approximate dynamic programming encompasses a wide range of techniques that overcome the curse of dimensionality in the computation of the Bellman operator. The adversarial attack model studied in this paper is motivated by the following approaches:
1. **Approximation in computing expectation:** There are different approaches to circumventing the costly computation of expectation in (6), e.g., a) assuming certainty equivalence by replacing stochastic quantities with deterministic ones to arrive at a deterministic optimization, b) using Monte Carlo tree search and adaptive simulation to determine which expectations associated with actions should be computed more accurately [8, 9, 10, 11, 12]. Both of these approaches introduce some errors in the expectation.
2. **Approximation in maximization:** The maximization in the Bellman operator in (6) can be over a large number of actions, possibly a continuous action space with an infinite number of actions. In addition to the discretization of the action space, nonlinear programming techniques are prone to errors especially when they are used in an online fashion.
3. **Approximation of value function:** Due to the large number of states in many recent applications of Markov decision processes and reinforcement learning, parametric feature-based approximation methods, such as neural network architectures, are used for value function representation [8, 13, 14, 15, 16]. The parameterization of the value function is another source of error in value iteration that can cause expansion in value iteration [13, 14].
4. **Adversarial value iteration:** The emergence of cloud, edge, and fog computing means that large-scale MDP and RL problems will likely be solved by distributed servers [17, 18, 19]. This swift shift to edge reinforcement learning brings a host of new adversarial attack challenges that can be catastrophic in critical applications of autonomous vehicles and Internet of Things (IoT) in general [20, 21, 22].
The first three causes have been studied extensively in the literature [23], while there is no mathematical analysis of adversarial attacks on the computation of the value functions.
#### 1.2.2 Reinforcement Learning in Time-varying Environment
Consider a reinforcement learning framework in which the model is being learned or there is a time-varying environment whose state transition probabilities and rewards change over time [24]. An example of a time-varying environment is the changing environment at which autonomous vehicles interact with each other, human drivers, and pedestrians. In the context of reinforcement learning and Markov decision processes, this gradual change is translated into time-varying reward functions and transition probabilities. The relevance of time-varying functions to MDP and RL problems presented above is one of the many problems that can be described by time-varying functions whose hitting time analysis is of interest. Other applications of a time-varying framework, such as bandit optimization, model predictive control, and empirical risk minimization, are discussed in [25].
#### 1.2.3 Scenario-based Approach for Optimization
Scenario-based approach for optimization [26, 27, 28] is concerned with decision making based on seen cases while having the ability to generalize to new situations. In this context, a bound on the violation probability captures the generalization of time-invariant decisions. The hitting time defined in this paper is related to the violation probability. Our work departs from this line of research in that we study a sequence of time-varying functions instead of a time-invariant function, which can potentially be corrupted by an adversary, and seeking to constantly adjusting our understanding of the optimal solution. The hitting time captures the time-varying aspect in our setting.
#### 1.2.4 Dynamical Systems
Our work is also related to asynchronous dynamical systems [29], which have been extensively studied in the literature. Despite the mathematical resemblance, our work is different from this line of research since our focus is on analyzing the associated hitting times of different models and the dynamics considered in this work may not even be linear.
### 1.3 Contributions
We propose a probabilistic model of adversarial attacks, in which both expansion up to a constant and contraction occur with certain probabilities in iterates of the value iteration method. We then study the hitting time of such stochastic time-varying value functions in Section 2. We develop an upper bound on the hitting time under a time-varying contraction mapping with additive noise and develop an upper bound on the distance between the fixed point and the value function.
In the rest of this paper, different models of stochastic time variation for continuous and discrete functions are studied in Sections 2 and 3, respectively. In particular, probabilistic contraction-expansion mappings are studied in Section 2.1, time-varying probabilistic contraction-expansion mappings with additive noise are studied in Section 2.2, time-varying continuous functions with additive noise are studied in Section 2.3, and improved bounds for convex functions with additive noise are studied in Section 2.4. Time-varying discrete functions with additive noise are studied in Section 3.1, improved bounds for unimodal functions with additive noise are studied in Section 3.2, and a time-varying linear model with additive noise with the notion of shape dominance are studied in Section 3.3. We summarize the theorems and the associated assumptions as well as the hitting times definitions in Table 1. Finally, the simulation results are presented in Section 4 and the paper is concluded in Section 5 in which a discussion of opportunities for future work is presented as well.
## 2 The Hitting Time Analysis for Continuous Functions
In this section, three variants of stochastic time-varying models are studied and their hitting times are analyzed. In the first model, a probabilistic contraction-expansion mapping is analyzed, where the classical Banach fixed-point theorem cannot be applied to this model due to the probabilistic contraction-expansion nature of the problem. In the second model, a time-varying probabilistic contraction-expansion mapping with additive noise is investigated. The above two models are applicable to both continuous and discrete functions. In the last model, an unknown time-varying continuous function is observed with additive noise whose estimated function changes over time.
To motivate the three stochastic time-varying models, we revisit the motivating example in the previous section, where a sequence of value functions \(V_{0},V_{1},\dots\) is generated by the Bellman operator \(\mathcal{T}\) defined in (6). Note that the theoretical proof of convergence behind the value iteration method depends heavily on the contraction mapping parameter \(q\) and the fact that \(d\big{(}\mathcal{T}(V_{t+1}),\mathcal{T}(V_{t})\big{)}\leq q\cdot d\big{(}V_{ t+1},V_{t}\big{)}\) deterministically, where \(d(\cdot,\cdot)\) is a translation-invariant distance function induced by a norm. However, in an online implementation of the value iteration with large state or action spaces, the actual calculation in practice may result in the value iteration method not to satisfy the contraction condition \(d\big{(}\mathcal{T}(V_{t+1}),\mathcal{T}(V_{t})\big{)}\leq q\cdot d\big{(}V_{ t+1},V_{t}\big{)}\) in some iterations. Instead, the distance may expand up to a factor greater than one in some iterations of the value iteration, i.e., \(d\big{(}\mathcal{T}(V_{t+1}),\mathcal{T}(V_{t})\big{)}\leq Q\cdot d\big{(}V_ {t+1},V_{t}\big{)}\), where \(Q\geq 1\). In this problem, the Bellman contraction mapping in value iteration may not be fixed anymore and could change over time. Hence, instead of applying the same transformation \(\mathcal{T}\) in value iteration, a time-varying transformation \(\mathcal{T}_{t}\) for \(t\in\{0,1,\dots\}\) may be applied to value iteration. Section 2.1 formalizes this observation.
### 2.1 Probabilistic Contraction-Expansion Mapping
Let \((X,\|\cdot\|)\) be a non-empty complete normed vector (linear) space, known as a Banach space, over the field \(\mathbb{R}\) of real scalars, where \(X\) is a vector space, e.g., a function space, together with a norm \(\|\cdot\|\). The norm induces a translation invariant distance function, called canonical induced metric, as \(d(f,g)=\|f-g\|\). Let \(\|f\|=\langle f,f\rangle^{1/2}\), where the inner product of \(f,g\in X\) in general is defined by \(\langle f,g\rangle=\int f(x)g(x)dx\). Consider
a contraction mapping \(\mathcal{T}:X\to X\) with the property that for all \(f,g\in X\), there exists a scalar \(q\in[0,1)\) such that
\[d\big{(}\mathcal{T}(f),\mathcal{T}(g)\big{)}\leq q\cdot d(f,g). \tag{7}\]
In light of the Banach-Caccioppoli fixed-point theorem, this contraction mapping has its own unique fixed point, i.e., there exists \(f^{*}\in X\) such that \(\mathcal{T}(f^{*})=f^{*}\). Furthermore, starting with an arbitrary function \(f^{0}\in X\), the sequence \(\{f^{n}\}\) with \(f^{n}=\mathcal{T}(f^{n-1})\) for \(n\geq 1\) converges to \(f^{*}\); in other words, \(f^{n}\to f^{*}\), where \(d\big{(}f^{*},f^{n}\big{)}\leq\frac{q^{n}}{1-q}\cdot d(f^{1},f^{0})\). Note that in all iterations of the above value iteration, the mapping \(\mathcal{T}\) operates as a contraction mapping according to (7) with probability one. However, in the rest of this subsection, we consider a probabilistic version of the Banach fixed-point theorem, where the mapping either contracts or expands the distance between any two points in a probabilistic manner.
Consider the time-varying function \(f_{t}\in X\) for \(t\in\{0,1,2,\dots\}\) evolving over time according to
\[f_{t+1}=\overline{\mathcal{T}}(f_{t}),\quad t\in\{0,1,2,\dots\}, \tag{8}\]
where \(\overline{\mathcal{T}}\) is a probabilistic contraction-expansion mapping such that
\[d\big{(}\overline{\mathcal{T}}(f_{t+1}),\overline{\mathcal{T}}(f_{t})\big{)} \leq\begin{cases}q\cdot d(f_{t+1},f_{t})&\text{w.p.}\quad p\\ Q\cdot d(f_{t+1},f_{t})&\text{otherwise}\end{cases},\ \ \forall t\in\mathbb{N}_{0} \tag{9}\]
for some constants \(q\in[0,1)\), \(Q\geq 1\), and \(p\in(0,1]\), where w.p. stands for "with probability" and \(\mathbb{N}_{0}\) is natural numbers with zero. The expansion in (9) is caused by an adversary in an attempt to move the function sequence away from the fixed point. The contraction or expansion of \(\overline{\mathcal{T}}\) is independent over time and \(f^{*}\) is a fixed point of the mapping if \(\overline{\mathcal{T}}(f^{*})=f^{*}\). The shape of the function \(f_{t}\) changes over time, but there can be a time, called hitting time \(T\), at which \(f_{T}\) reaches a neighborhood of \(f^{*}\), as formally defined below.
**Definition 1**.: _Given \(\epsilon>0\) and \(a\in(0,1]\), the hitting time \(T(\epsilon,a)\) for the stochastic function sequence introduced in (8) is defined as_
\[T(\epsilon,a)=\min\big{\{}T:\mathbb{P}\left\{d\big{(}f_{t},f^{*}\big{)}< \epsilon\right\}\geq 1-a,\ \forall t\geq T\big{\}}, \tag{10}\]
_where \(f^{*}\) is a fixed point whose existence and uniqueness is proven in Theorem 1 and \(\mathbb{P}\{\cdot\}\) takes the probability of the input event._
As a result, the complexity of optimizing the functions \(f_{t}\) for \(t<T\) can be irrelevant to the optimization complexity of the functions \(f_{t}\) for \(t\geq T\). Consequently, the hitting time \(T\) together with the optimization complexity of any function \(f_{t}\) for \(t\geq T\) captures the complexity of optimizing the time-varying sequence of functions \(\{f_{t}\}\). In the following theorem, the limiting behavior of the function sequence \(\{f_{t}\}\) is studied and an upper bound on the hitting time is derived.
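Before stating the theorem, the dynamics (8)-(9) and the hitting time of Definition 1 can be explored numerically. The sketch below uses a hypothetical affine mapping on \(\mathbb{R}^{2}\) whose modulus equals the contraction factor \(q\) with probability \(p\) and the expansion factor \(Q\) otherwise, and it estimates the empirical hitting time from repeated runs; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
q, Q, p = 0.7, 1.2, 0.9             # q^2*p + Q^2*(1-p) = 0.585 < 1
f_star = np.array([1.0, -2.0])       # fixed point of the hypothetical mapping
eps, a, runs, horizon = 0.05, 0.05, 2000, 200

def T_bar(f, rate):
    """Affine map with fixed point f_star that contracts (rate = q) or expands (rate = Q)."""
    return f_star + rate * (f - f_star)

hit = np.zeros(horizon)              # hit[t] = fraction of runs with d(f_t, f*) < eps
for _ in range(runs):
    f = np.array([10.0, 10.0])
    for t in range(horizon):
        f = T_bar(f, q if rng.random() < p else Q)
        hit[t] += np.linalg.norm(f - f_star) < eps
hit /= runs
# First time index after which the per-step probability of being eps-close stays above 1 - a.
T_hat = next((t for t in range(horizon) if hit[t:].min() >= 1 - a), None)
print("empirical hitting time:", T_hat)
```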
**Theorem 1**.: _Probabilistic Banach Fixed-Point Theorem. Let \((X,\|\cdot\|)\) be a non-empty complete normed vector space with a probabilistic contraction-expansion mapping \(\overline{\mathcal{T}}:X\to X\) defined in (9) such that \(q^{2}\cdot p+Q^{2}\cdot(1-p)<1\). Starting with an arbitrary element \(f_{0}\in X\), the sequence \(\{f_{t}\}\) defined in (8) converges to an element \(f^{*}\in X\) with an associated confidence level \(1-a\), where \(f^{*}\) is a unique fixed point for the mapping \(\overline{\mathcal{T}}\). Furthermore, for every \(0<L<\frac{\epsilon}{d(f_{1},f_{0})}\), the hitting time \(T(\epsilon,a)\) satisfies the inequality_
\[T(\epsilon,a)\leq\max\left\{\frac{\ln\left(\frac{a\cdot L^{2}\cdot\left(1-q\cdot p-Q\cdot(1-p)\right)\cdot\left(1-q^{2}\cdot p-Q^{2}\cdot(1-p)\right)}{1+q\cdot p+Q\cdot(1-p)}\right)}{\ln\left(q^{2}\cdot p+Q^{2}\cdot(1-p)\right)},\;\frac{\ln\left(\left(\frac{\epsilon}{d(f_{1},f_{0})}-L\right)\cdot\left(1-q\cdot p-Q\cdot(1-p)\right)\right)}{\ln\left(q\cdot p+Q\cdot(1-p)\right)}\right\}. \tag{11}\]
probability. Given arbitrary integer values \(n\) and \(m\) such that \(n>m\), one can write
\[\begin{split} d\big{(}f_{n},f_{m}\big{)}=d\big{(}\overline{\mathcal{T} }^{n}(f_{0}),\overline{\mathcal{T}}^{m}(f_{0})\big{)}\overset{(a)^{n-m}}{\leq }&\sum_{i=1}^{n-m}d\big{(}\overline{\mathcal{T}}^{n-i+1}(f_{0}), \overline{\mathcal{T}}^{n-i}(f_{0})\big{)}=\sum_{i=1}^{n-m}d\big{(}\overline{ \mathcal{T}}^{n-i}(f_{1}),\overline{\mathcal{T}}^{n-i}(f_{0})\big{)}\\ \overset{(b)}{\leq}&\sum_{i=1}^{n-m}\left(\prod_{j= 1}^{n-i}B_{j}\right)\cdot d\big{(}f_{1},f_{0}\big{)}=d\big{(}f_{1},f_{0}\big{)} \cdot\sum_{i=1}^{n-m}\prod_{j=1}^{n-i}B_{j},\end{split} \tag{12}\]
where triangular inequality is applied \(n-m-1\) times in \((a)\) and the independent and identically distributed random variables \(B_{j}\) for \(j\in\{1,2,\ldots,n-1\}\) used in \((b)\) have the distribution
\[B_{j}=\begin{cases}q&\text{w.p.}\quad p\\ Q&\text{otherwise}\end{cases}. \tag{13}\]
Next, we study the mean and variance of the random variable \(S_{n,m}=\sum_{i=1}^{n-m}\prod_{j=1}^{n-i}B_{j}\) in (12). Using the independence of \(B_{j}\) for \(j\in\{1,2,\ldots,n-1\}\), the mean can be upper-bounded as
\[\mathbb{E}[S_{n,m}]=\mathbb{E}\left[\sum_{i=1}^{n-m}\prod_{j=1}^{n-i}B_{j} \right]=\sum_{i=1}^{n-m}\prod_{j=1}^{n-i}\mathbb{E}\left[B_{j}\right]=\sum_{i= 1}^{n-m}\big{(}q\cdot p+Q\cdot(1-p)\big{)}^{n-i}\leq\frac{\big{(}q\cdot p+Q \cdot(1-p)\big{)}^{m}}{1-q\cdot p-Q\cdot(1-p)}. \tag{14}\]
On the other hand, \(\operatorname{Var}\left(S_{n,m}\right)\leq\mathbb{E}\left[S_{n,m}^{2}\right]\), where \(\operatorname{Var}\left(\cdot\right)\) takes the variance of the input random variable, and the second moment of \(S_{n,m}\) will be upper-bounded next. Note that
\[S_{n,m}=B_{1}\cdot B_{2}\cdots B_{m}\cdot\big{(}1{+}B_{m+1}+B_{m+1}\cdot B_{m +2}+\cdots+B_{m+1}\cdots B_{n-1}\big{)}. \tag{15}\]
Let \(\bar{S}_{n,m}=1+B_{m+1}+B_{m+1}\cdot B_{m+2}+\cdots+B_{m+1}\cdots B_{n-1}\), where \(\bar{S}_{n,m}\) is a random variable independent of \(B_{j}\) for \(j\in\{1,2,\ldots,m\}\), and \(\bar{S}=\lim_{n\to\infty}\bar{S}_{n,m}\). We leave out the subscript \(m\) since the limits \(\lim_{n\to\infty}\bar{S}_{n,m}\) and \(\lim_{n\to\infty}\bar{S}_{n,m^{\prime}}\) are identically distributed for all \(m,m^{\prime}\geq 0\). This is because \(\bar{S}\) is an infinite sum and \(\{B_{j}\}\) are i.i.d. random variables. Since \(\mathbb{E}[B_{j}]>0\) for \(j\geq 1\), we have \(\mathbb{E}[\bar{S}_{n,m}^{2}]\leq\mathbb{E}[\bar{S}^{2}]\); hence, it follows from (15) that
\[\mathbb{E}\left[S_{n,m}^{2}\right]=\mathbb{E}\left[B_{1}^{2}\right]\cdots \mathbb{E}\left[B_{m}^{2}\right]\cdot\mathbb{E}\left[\bar{S}_{n,m}^{2}\right] \leq\mathbb{E}\left[B_{1}^{2}\right]\cdots\mathbb{E}\left[B_{m}^{2}\right] \cdot\mathbb{E}\left[\bar{S}^{2}\right]. \tag{16}\]
In order to find an upper bound on \(\mathbb{E}\left[\bar{S}^{2}\right]\), we have
\[\bar{S}=1{+}B_{m+1}\cdot(1+B_{m+2}+B_{m+2}\cdot B_{m+3}+B_{m+2}\cdot B_{m+3} \cdot B_{m+4}+\ldots)=1+B_{m+1}\cdot\tilde{S}, \tag{17}\]
where \(\tilde{S}\) is independent of \(B_{m+1}\), and the random variables \(\bar{S}\) and \(\tilde{S}\) are identically distributed but not independent of each other. By taking expectation on both sides of \(\bar{S}^{2}=(1+B_{m+1}\cdot\tilde{S})^{2}\), and using the independence of \(\tilde{S}\) and \(B_{m+1}\) and the fact that \(\mathbb{E}\big{[}\bar{S}^{2}\big{]}=\mathbb{E}\big{[}\bar{S}^{2}\big{]}\), one can obtain
\[\mathbb{E}\left[\bar{S}^{2}\right]=1+\mathbb{E}\left[B_{m+1}^{2}\right]\cdot \mathbb{E}\left[\tilde{S}^{2}\right]+2\mathbb{E}\left[B_{m+1}\right]\cdot \mathbb{E}\left[\tilde{S}\right]\Longrightarrow\mathbb{E}\left[\bar{S}^{2} \right]=\frac{1+2\mathbb{E}\left[B_{m+1}\right]\cdot\mathbb{E}\left[\tilde{S} \right]}{1-\mathbb{E}\left[B_{m+1}^{2}\right]}. \tag{18}\]
In the same way as finding the mean of \(S_{n,m}\) in (14), it is derived that \(\mathbb{E}\left[\tilde{S}\right]=\frac{1}{1-q\cdot p-Q\cdot(1-p)}\); furthermore, \(\mathbb{E}\left[B_{m+1}\right]=q\cdot p+Q\cdot(1-p)\) and \(\mathbb{E}\left[B_{m+1}^{2}\right]=q^{2}\cdot p+Q^{2}\cdot(1-p)\). As a result, if \(q^{2}\cdot p+Q^{2}\cdot(1-p)<1\), Equation (18) results in
\[\mathbb{E}\left[\bar{S}^{2}\right]=\frac{1+q\cdot p+Q\cdot(1-p)}{\big{(}1-q \cdot p-Q\cdot(1-p)\big{)}\cdot\big{(}1-q^{2}\cdot p-Q^{2}\cdot(1-p)\big{)}}. \tag{19}\]
Using Equation (16), we have
\[\operatorname{Var}\left(S_{n,m}\right)\leq\mathbb{E}\left[S_{n,m}^{2}\right] \leq\big{(}q^{2}\cdot p+Q^{2}\cdot(1-p)\big{)}^{m}\times\frac{1+q\cdot p+Q\cdot(1 -p)}{\big{(}1-q\cdot p-Q\cdot(1-p)\big{)}\cdot\big{(}1-q^{2}\cdot p-Q^{2}\cdot(1 -p)\big{)}}. \tag{20}\]
So far, it is shown that \(d(\overline{\mathcal{T}}^{n}(f_{0}),\overline{\mathcal{T}}^{m}(f_{0}))\leq S_{n,m} \cdot d\big{(}f_{1},f_{0}\big{)}\), where \(S_{n,m}\) is a random variable with its mean and variance upper-bounded in (14) and (20), respectively. Using Chebyshev's inequality, for any \(L>0\), we have
\[\mathbb{P}\left\{\left|S_{n,m}-\mathbb{E}[S_{n,m}]\right|\leq L \right\}\geq 1-\frac{\operatorname{Var}\left(S_{n,m}\right)}{L^{2}}\Longrightarrow \tag{21}\] \[\mathbb{P}\left\{S_{n,m}\leq\frac{\big{(}q\cdot p+Q\cdot(1-p) \big{)}^{m}}{1-q\cdot p-Q\cdot(1-p)}+L\right\}\hskip-10.0pt\geq\hskip-10.0pt1- \frac{\big{(}q^{2}\cdot p+Q^{2}\cdot(1-p)\big{)}^{m}\cdot\big{(}1+q\cdot p+Q \cdot(1-p)\big{)}}{L^{2}\cdot\big{(}1-q\cdot p-Q\cdot(1-p)\big{)}\cdot\big{(}1 -q^{2}\cdot p-Q^{2}\cdot(1-p)\big{)}}.\]
As a result, for any \(\epsilon>0\) and \(a\in(0,1]\), we have \(d(f_{n},f_{m})=d\big{(}\overline{\mathcal{T}}^{n}(f_{0}),\overline{\mathcal{T }}^{m}(f_{0})\big{)}\leq\epsilon\) with the confidence level \(1-a\) if \(m\) satisfies the two inequalities
\[\frac{\big{(}q^{2}\cdot p+Q^{2}\cdot(1-p)\big{)}^{m}\cdot\big{(}1 +q\cdot p+Q\cdot(1-p)\big{)}}{L^{2}\cdot\big{(}1-q\cdot p-Q\cdot(1-p)\big{)} \cdot\big{(}1-q^{2}\cdot p-Q^{2}\cdot(1-p)\big{)}} \leq a \tag{22a}\] \[\left(\frac{\big{(}q\cdot p+Q\cdot(1-p)\big{)}^{m}}{1-q\cdot p-Q \cdot(1-p)}+L\right)\cdot d\big{(}f_{1},f_{0}\big{)} \leq\epsilon. \tag{22b}\]
Assume that \(d\big{(}f_{1},f_{0}\big{)}\neq 0\); otherwise, \(f_{0}\) is a fixed point by definition. Hence, for \(0<L<\frac{\epsilon}{d(f_{1},f_{0})}\), if \(q\cdot p+Q\cdot(1-p)<1\) and \(q^{2}\cdot p+Q^{2}\cdot(1-p)<1\), then the two inequalities in (22a) and (22b) are satisfied when
\[m\geq\max\left\{\frac{\ln\left(\frac{a\cdot L^{2}\cdot\left(1-q\cdot p-Q\cdot(1-p)\right)\cdot\left(1-q^{2}\cdot p-Q^{2}\cdot(1-p)\right)}{1+q\cdot p+Q\cdot(1-p)}\right)}{\ln\left(q^{2}\cdot p+Q^{2}\cdot(1-p)\right)},\;\frac{\ln\left(\left(\frac{\epsilon}{d(f_{1},f_{0})}-L\right)\cdot\left(1-q\cdot p-Q\cdot(1-p)\right)\right)}{\ln\left(q\cdot p+Q\cdot(1-p)\right)}\right\}. \tag{23}\]
Note that, by Jensen's inequality,
\[\big(q\cdot p+Q\cdot(1-p)\big)^{2}=\big(\mathbb{E}[B_{1}]\big)^{2}\leq\mathbb{E}\big[B_{1}^{2}\big]=q^{2}\cdot p+Q^{2}\cdot(1-p), \tag{24}\]
and therefore
\[q^{2}\cdot p+Q^{2}\cdot(1-p)<1\implies q\cdot p+Q\cdot(1-p)<1. \tag{25}\]
Hence, the assumption of the theorem suffices for (23), the sequence \(\{f_{t}\}\) is Cauchy with the confidence level \(1-a\), and it converges to the fixed point \(f^{*}\), which yields the upper bound on the hitting time \(T(\epsilon,a)\) stated in (11).
Theorem 1 states that if contraction of an operator in the iterates of the value iteration is compromised by an adversary via expansions in the iterates of value iteration, the value function sequence can still converge to the fixed point of the operator with high probability. The standard Banach fixed-point theorem is a special case of Theorem 1 by setting \(p=1\) and \(L=0\). The analysis in the proof of this theorem suggests that the compromised operator being contractive on expectation is not enough for the convergence of the value function sequence with high probability since the introduced randomness to the operator by the adversary can lead to high variance in the elements of the value function sequence. Hence, the additional assumption \(q^{2}\cdot p+Q^{2}\cdot(1-p)<1\) is required to bound such a variance rooted from the expansion caused by the adversary. Furthermore, this theorem provides an upper bound on the number of rounds for value iteration to defeat the effect of the adversary that attempts to move the value function sequence away from the fixed point. If the adversary is not modeled, the user who expects a normal scenario may perform fewer iterations of the value iteration. This can lead to a highly inaccurate estimate of the fixed point in the presence of an adversary.
**Remark 1**.: _The parameter \(L\in\left(0,\frac{\epsilon}{d(f_{1},f_{0})}\right)\) serves as an auxiliary parameter used in (21). We observe that the first term in the upper bound (11) is decreasing with respect to \(L\) and the second term is increasing with respect to \(L\). By minimizing the bound (11) over \(L\), we have that \(T(\epsilon,a)\) has the order \(\mathcal{O}\left(\frac{d(f_{1},f_{0})}{\epsilon}\right)\)._
### 2.2 Time-Varying Probabilistic Contraction-Expansion Mapping with Additive Noise
Let \((X,\|\cdot\|)\) be the same complete normed vector space as in Section 2.1. Consider time-varying probabilistic contraction-expansion mappings \(\overline{\mathcal{T}}_{t}(\cdot):X\to X\) for \(t\in\{0,1,2,\dots\}\) with parameters \(p_{t},q_{t}\), and \(Q_{t}\), i.e.,
\[d\big{(}\overline{\mathcal{T}}_{t}(f),\overline{\mathcal{T}}_{t}(g)\big{)} \leq\begin{cases}q_{t}\cdot d(f,g)&\text{w.p.}\quad p_{t}\\ Q_{t}\cdot d(f,g)&\text{otherwise}\end{cases},\ \ \forall t\in\mathbb{N}_{0}. \tag{26}\]
By Theorem 1, starting with an arbitrary function \(f^{0}\in X\), the sequence \(\{f^{n}\}\) with \(f^{n}=\overline{\mathcal{T}}_{t}(f^{n-1})\) for \(n\geq 1\), where the same probabilistic contraction-expansion mapping \(\overline{\mathcal{T}}_{t}\) is applied repeatedly, converges to \(f_{t}^{*}\) with high probability.
**Assumption 1**.: _The fixed points of every two consecutive mappings are at most \(\epsilon_{f}>0\) away from each other, i.e., \(d\big{(}f_{t}^{*},f_{t-1}^{*}\big{)}\leq\epsilon_{f}\) for all \(t\in\{1,2,3,\dots\}\)._
It is worth mentioning that, even under Assumption 1, there can be non-consecutive mappings \(\overline{\mathcal{T}}_{t}\) and \(\overline{\mathcal{T}}_{t^{\prime}}\) whose fixed points are arbitrarily far away from each other. Note that in all iterations of the probabilistic value iteration, the same probabilistic contraction-expansion mapping \(\overline{\mathcal{T}}_{t}\) is applied to the function sequence \(\{f^{n}\}\). However, in the remainder of this subsection, we consider a time-varying and noisy version of the probabilistic Banach fixed-point theorem, where the underlying mapping changes over time and noise functions are added to the outcome of the mapping in each iteration.
Consider the time-varying function \(f_{t}\in X\) for \(t\in\{0,1,2,\dots\}\) evolving over time according to
\[f_{t+1}=\widetilde{\mathcal{T}}_{t}(f_{t})=\overline{\mathcal{T}}_{t}(f_{t})+w _{t},\quad t\in\{0,1,2,\dots\}, \tag{27}\]
where \(w_{t}\in X\) is some additive noise.
**Assumption 2**.: _The additive noise is uniformly upper-bounded by a constant \(\epsilon_{w}>0\), i.e., \(\|w_{t}\|\leq\epsilon_{w}\) for all \(t\in\{0,1,2,\dots\}\)._
Note that the shape of the function \(f_{t}\) can change over time and can be non-convex. However, the following theorem shows that an upper bound can be established for the distance between \(f_{t}\) and the time-varying fixed point \(f_{t}^{*}\).
**Theorem 2**.: _Consider arbitrary time-varying probabilistic contraction-expansion mappings \(\mathcal{T}_{t}\) with fixed points \(f_{t}^{*}\), where \(\sup_{t}\big{(}q_{t}^{2}\cdot p_{t}+Q_{t}^{2}\cdot(1-p_{t})\big{)}<1\) for \(t\in\{0,1,2,\dots\}\). Let the time-varying function \(f_{t}\) evolve
over time according to the time-varying noisy probabilistic transformation in (27). Under Assumptions 1 and 2, it holds that_
\[d\big{(}f_{t},f_{t}^{*}\big{)}\leq P_{t}\cdot d\big{(}f_{0},f_{0}^{*}\big{)}+S_{t }\cdot(\epsilon_{f}+\epsilon_{w}), \tag{28}\]
_where \(P_{t}=\left(\prod_{i=0}^{t-1}B_{i}\right)\) and \(S_{t}=\left(1+\sum_{i=1}^{t-1}\prod_{j=1}^{t-i}B_{j}\right)\) are random variables with independent random variables \(B_{t}\) having the distribution_
\[B_{t}=\begin{cases}q_{t}&\text{w.p. }\ p_{t}\\ Q_{t}&\text{otherwise}\end{cases}. \tag{29}\]
_The means and variances of \(P_{t}\) and \(S_{t}\) are upper-bounded as_
\[\begin{split}\mathbb{E}\left[P_{t}\right]&\leq\left(\sup_ {t}\big{(}q_{t}\cdot p_{t}+Q_{t}\cdot(1-p_{t})\big{)}\right)^{t}\xrightarrow{ t\to\infty}0,\\ \operatorname{Var}\left(P_{t}\right)&\leq\left(\sup_{t} \big{(}q_{t}^{2}\cdot p_{t}+Q_{t}^{2}\cdot(1-p_{t})\big{)}\right)^{t} \xrightarrow{t\to\infty}0,\end{split} \tag{30}\]
_and_
\[\begin{split}\mathbb{E}\left[S_{t}\right]&\leq \frac{1}{1-\sup_{t}\big{(}q_{t}\cdot p_{t}+Q_{t}\cdot(1-p_{t})\big{)}},\\ \operatorname{Var}\left(S_{t}\right)&\leq\frac{ \left(\bar{q}^{2}\cdot\bar{p}+\bar{Q}^{2}\cdot(1-\bar{p})\right)\cdot\big{(}1 +\bar{q}\cdot\bar{p}+\bar{Q}\cdot(1-\bar{p})\big{)}}{\left(1-\bar{q}^{2}\cdot \bar{p}-\bar{Q}^{2}\cdot(1-\bar{p})\right)\cdot\big{(}1-\bar{q}\cdot\bar{p}- \bar{Q}\cdot(1-\bar{p})\big{)}},\end{split} \tag{31}\]
_where \(\bar{q}\), \(\bar{Q}\), and \(\bar{p}\) satisfy \(\bar{q}\cdot\bar{p}+\bar{Q}\cdot(1-\bar{p})\geq\sup_{t\geq 1}\mathbb{E}[B_{t}]\) and \(\bar{q}^{2}\cdot\bar{p}+\bar{Q}^{2}\cdot(1-\bar{p})\geq\sup_{t\geq 1} \mathbb{E}[B_{t}^{2}]\)._
Proof.: Under the time-varying probabilistic contraction-expansion mappings with added noise functions introduced in (27), the distance between \(f_{t}\) and \(f_{t}^{*}\) can be upper-bounded as
\[\begin{split} d\big{(}f_{t},f_{t}^{*}\big{)}&=d \big{(}\widetilde{\mathcal{T}}_{t-1}\circ\cdots\circ\widetilde{\mathcal{T}}_{ 0}(f_{0}),f_{t}^{*}\big{)}\\ &\stackrel{{(a)}}{{=}}d\big{(}\overline{\mathcal{T}}_{t -1}\big{(}\widetilde{\mathcal{T}}_{t-2}\circ\cdots\circ\widetilde{\mathcal{T}} _{0}(f_{0})\big{)}+w_{t-1},f_{t}^{*}\big{)}\\ &=\big{\|}\overline{\mathcal{T}}_{t-1}\big{(}\widetilde{\mathcal{T }}_{t-2}\circ\cdots\circ\widetilde{\mathcal{T}}_{0}(f_{0})\big{)}+w_{t-1}-f_{ t}^{*}\big{\|}\\ &\stackrel{{(b)}}{{\leq}}d\big{(}\overline{\mathcal{T }}_{t-1}\big{(}\widetilde{\mathcal{T}}_{t-2}\circ\cdots\circ\widetilde{ \mathcal{T}}_{0}(f_{0})\big{)},f_{t}^{*}\big{)}+\|w_{t-1}\|\\ &\stackrel{{(c)}}{{\leq}}d\big{(}\overline{\mathcal{T }}_{t-1}\big{(}\widetilde{\mathcal{T}}_{t-2}\circ\cdots\circ\widetilde{ \mathcal{T}}_{0}(f_{0})\big{)},f_{t-1}^{*}\big{)}+d\big{(}f_{t-1}^{*},f_{t}^{* }\big{)}+\|w_{t-1}\|\\ &\stackrel{{(d)}}{{\leq}}B_{t-1}\cdot d\big{(} \widetilde{\mathcal{T}}_{t-2}\circ\cdots\circ\widetilde{\mathcal{T}}_{0}(f_{0} ),f_{t-1}^{*}\big{)}+\epsilon_{f}+\epsilon_{w},\end{split} \tag{32}\]
where \(\circ\) denotes the composition of linear operators, the definition of the mapping \(\widetilde{\mathcal{T}}_{t-1}\) in (27) is used in \((a)\), inequalities \((b)\) and \((c)\) are true by the triangular inequality, and \((d)\) follows from Assumptions 1 and 2 in addition to the probabilistic contraction-expansion property of the operator \(\overline{\mathcal{T}}_{t-1}\) and the fact that \(\overline{\mathcal{T}}_{t-1}(f_{t-1}^{*})=f_{t-1}^{*}\). Furthermore, the independent random variables \(B_{t}\) for \(t\geq 0\) used in \((d)\) have the distribution as specified in (29). Taking similar steps as in (32), we have
\[\begin{split} d\big{(}f_{t},f_{t}^{*}\big{)}&\leq B_{t-1} \cdot\Big{(}B_{t-2}\cdot d\big{(}\widetilde{\mathcal{T}}_{t-3}\circ\cdots \circ\widetilde{\mathcal{T}}_{0}(f_{0}),f_{t-2}^{*}\big{)}+\epsilon_{f}+ \epsilon_{w}\Big{)}+\epsilon_{f}+\epsilon_{w}\\ &\leq B_{t-1}\cdot\Big{(}B_{t-2}\cdot\Big{(}B_{t-3}\cdot d\big{(} \widetilde{\mathcal{T}}_{t-4}\circ\cdots\circ\widetilde{\mathcal{T}}_{0}(f_{0} ),f_{t-3}^{*}\big{)}+\epsilon_{f}+\epsilon_{w}\Big{)}+\epsilon_{f}+\epsilon_{w} \Big{)}+\epsilon_{f}+\epsilon_{w}\\ &\leq\left(\prod_{i=0}^{t-1}B_{i}\right)\cdot d\big{(}f_{0},f_{0}^{ *}\big{)}+\left(1+\sum_{i=1}^{t-1}\prod_{j=1}^{t-i}B_{j}\right)\cdot(\epsilon_{f}+ \epsilon_{w})\\ &\leq P_{t}\cdot d\big{(}f_{0},f_{0}^{*}\big{)}+S_{t}\cdot(\epsilon _{f}+\epsilon_{w}),\end{split} \tag{33}\]
where \(P_{t}=\left(\prod_{i=0}^{t-1}B_{i}\right)\) and \(S_{t}=\left(1+\sum_{i=1}^{t-1}\prod_{j=1}^{t-i}B_{j}\right)\) are random variables whose means and variances will be calculated below. Using the independence of random variables \(B_{t}\) for \(t\geq 0\), we have
\[\mathbb{E}\left[P_{t}\right]=\mathbb{E}\left[\prod_{i=0}^{t-1}B_{i}\right]=\prod _{i=0}^{t-1}\mathbb{E}\left[B_{i}\right]=\prod_{i=0}^{t-1}\left(q_{t}\cdot p_{t }+Q_{t}\cdot(1-p_{t})\right)\leq\left(\sup_{t}\big{(}q_{t}\cdot p_{t}+Q_{t} \cdot(1-p_{t})\big{)}\right)^{t} \tag{34}\]
\[\begin{split}\operatorname{Var}\left(P_{t}\right)&=\mathbb{E} \left[P_{t}^{2}\right]-\left(\mathbb{E}\left[P_{t}\right]\right)^{2}\\ &=\mathbb{E}\left[\prod_{i=0}^{t-1}B_{i}^{2}\right]-\prod_{i=0}^{ t-1}\left(q_{t}\cdot p_{t}+Q_{t}\cdot\left(1-p_{t}\right)\right)^{2}\\ &\leq\prod_{i=0}^{t-1}\left(q_{t}^{2}\cdot p_{t}+Q_{t}^{2}\cdot \left(1-p_{t}\right)\right)\\ &\leq\left(\sup_{t}\left(q_{t}^{2}\cdot p_{t}+Q_{t}^{2}\cdot \left(1-p_{t}\right)\right)\right)^{t}.\end{split} \tag{35}\]
Note that it is already shown in (25) that \(q_{t}^{2}\cdot p_{t}+Q_{t}^{2}\cdot\left(1-p_{t}\right)<1\) implies \(q_{t}\cdot p_{t}+Q_{t}\cdot\left(1-p_{t}\right)<1\), and therefore it suffices to assume that \(\sup_{t}\left(q_{t}^{2}\cdot p_{t}+Q_{t}^{2}\cdot\left(1-p_{t}\right)\right)<1\). Furthermore,
\[\begin{split}\mathbb{E}\left[S_{t}\right]&= \mathbb{E}\left[1+\sum_{i=1}^{t-1}\prod_{j=1}^{t-i}B_{j}\right]\\ &=1+\sum_{i=1}^{t-1}\prod_{j=1}^{t-i}\mathbb{E}\left[B_{j}\right] \\ &=1+\sum_{i=1}^{t-1}\prod_{j=1}^{t-i}\left(q_{j}\cdot p_{j}+Q_{j} \cdot\left(1-p_{j}\right)\right)\\ &\leq 1+\sum_{i=1}^{t-1}\left(\sup_{j}\left(q_{j}\cdot p_{j}+Q_{j} \cdot\left(1-p_{j}\right)\right)\right)^{t-i}\\ &\leq\frac{1}{1-\sup_{j}\left(q_{j}\cdot p_{j}+Q_{j}\cdot\left(1 -p_{j}\right)\right)}\end{split} \tag{36}\]
and
\[\operatorname{Var}\left(S_{t}\right)=\operatorname{Var}\left(1+\sum_{i=1}^{t- 1}\prod_{j=1}^{t-i}B_{j}\right)=\operatorname{Var}\left(\sum_{i=1}^{t-1}\prod _{j=1}^{t-i}B_{j}\right)\leq\mathbb{E}\left[\left(\sum_{i=1}^{t-1}\prod_{j=1}^ {t-i}B_{j}\right)^{2}\right]. \tag{37}\]
Consider the sequence of independent and identically distributed random variables \(\bar{B}_{t}\) for \(t\in\{1,2,\dots\}\) that have the distribution
\[\bar{B}_{t}=\begin{cases}\bar{q}&\text{w.p. }\bar{p}\\ \bar{Q}&\text{otherwise}\end{cases} \tag{38}\]
such that \(\mathbb{E}[\bar{B}_{t}]\geq\sup_{i\geq 1}\mathbb{E}[B_{i}]\) and \(\mathbb{E}[\bar{B}_{t}^{2}]\geq\sup_{i\geq 1}\mathbb{E}[B_{i}^{2}]\). Proceeding with (37), one can write
\[\operatorname{Var}\left(S_{t}\right)\leq\mathbb{E}\left[\left(\sum_{i=1}^{t-1 }\prod_{j=1}^{t-i}B_{j}\right)^{2}\right]\leq\mathbb{E}\left[\left(\sum_{i=1}^ {t-1}\prod_{j=1}^{t-i}\bar{B}_{j}\right)^{2}\right]\leq\mathbb{E}\left[\left( \sum_{i=1}^{\infty}\prod_{j=1}^{i}\bar{B}_{j}\right)^{2}\right]=\mathbb{E} \left[\bar{S}^{2}\right], \tag{39}\]
where \(\bar{S}=\sum_{i=1}^{\infty}\prod_{j=1}^{i}\bar{B}_{j}\). We have \(\mathbb{E}[\bar{S}]=\frac{\bar{q}\cdot\bar{p}+\bar{Q}\cdot\left(1-\bar{p} \right)}{1-\bar{q}\cdot\bar{p}-\bar{Q}\cdot\left(1-\bar{p}\right)}\) and \(\bar{S}=\bar{B}_{1}\cdot\left(1+\bar{B}_{2}+\bar{B}_{2}\cdot\bar{B}_{3}+ \cdots\right)=\bar{B}_{1}\cdot\left(1+\bar{S}\right)\), where \(\bar{S}\) is independent of \(B_{1}\), and the random variables \(\bar{S}\) and \(\bar{S}\) are identically distributed but not independent of each other. Taking expectation on both sides of \(\bar{S}^{2}=\bar{B}_{1}^{2}\cdot(1+\bar{S})^{2}\), and using the independence of \(\bar{S}\) and \(B_{1}\) and the fact that \(\mathbb{E}[\bar{S}^{2}]=\mathbb{E}[\bar{S}^{2}]\), we have
\[\begin{split}\mathbb{E}[\bar{S}^{2}]&=\mathbb{E}[ \bar{B}_{1}^{2}]\cdot\mathbb{E}[1+2\bar{S}+\bar{S}^{2}]=\left(\bar{q}^{2}\cdot \bar{p}+\bar{Q}^{2}\cdot\left(1-\bar{p}\right)\right)\times\left(1+\frac{2 \left(\bar{q}\cdot\bar{p}+\bar{Q}\cdot\left(1-\bar{p}\right)\right)}{1-\bar{ q}\cdot\bar{p}-\bar{Q}\cdot\left(1-\bar{p}\right)}+\mathbb{E}[\bar{S}^{2}] \right)\\ \implies\mathbb{E}[\bar{S}^{2}]&=\frac{\left(\bar{q}^{2 }\cdot\bar{p}+\bar{Q}^{2}\cdot\left(1-\bar{p}\right)\right)\cdot\left(1+\bar{ q}\cdot\bar{p}+\bar{Q}\cdot\left(1-\bar{p}\right)\right)}{\left(1-\bar{q}^{2} \cdot\bar{p}-\bar{Q}^{2}\cdot\left(1-\bar{p}\right)\right)\cdot\left(1-\bar{q} \cdot\bar{p}-\bar{Q}\cdot\left(1-\bar{p}\right)\right)}.\end{split} \tag{40}\]
Putting (39) and (40) together, it can be concluded that \(\operatorname{Var}\left(S_{t}\right)\leq\frac{\left(\bar{q}^{2}\cdot\bar{p}+\bar{Q}^{2}\cdot(1-\bar{p})\right)\cdot\left(1+\bar{q}\cdot\bar{p}+\bar{Q}\cdot(1-\bar{p})\right)}{\left(1-\bar{q}^{2}\cdot\bar{p}-\bar{Q}^{2}\cdot(1-\bar{p})\right)\cdot\left(1-\bar{q}\cdot\bar{p}-\bar{Q}\cdot(1-\bar{p})\right)}\), which completes the proof.
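As a quick numerical sanity check of the moment bounds in Theorem 2, one can sample the random variables \(B_{t}\) from (29), form \(P_{t}\) and \(S_{t}\), and compare their empirical means against (30) and (31); the time-varying parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
t_max, runs = 50, 20000
# Hypothetical time-varying parameters satisfying sup_t (q_t^2*p_t + Q_t^2*(1-p_t)) < 1.
q_t = 0.6 + 0.1 * np.sin(np.arange(t_max))   # in [0.5, 0.7]
Q_t = np.full(t_max, 1.1)
p_t = np.full(t_max, 0.95)

B = np.where(rng.random((runs, t_max)) < p_t, q_t, Q_t)   # B[r, i] drawn as in (29)
P_T = B.prod(axis=1)                                      # P_t = prod_{i=0}^{t-1} B_i, with t = t_max
cp = np.cumprod(B[:, 1:], axis=1)                         # cp[:, m-1] = prod_{j=1}^{m} B_j
S_T = 1.0 + cp.sum(axis=1)                                # S_t = 1 + sum_{i=1}^{t-1} prod_{j=1}^{t-i} B_j

mu_sup = float(np.max(q_t * p_t + Q_t * (1 - p_t)))       # sup_t E[B_t] < 1
print("E[P_t]: empirical %.2e, bound %.2e" % (P_T.mean(), mu_sup ** t_max))
print("E[S_t]: empirical %.2f, bound %.2f" % (S_T.mean(), 1.0 / (1.0 - mu_sup)))
```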
In the absence of the adversary, the probabilistic contraction-expansion mapping \(\overline{\mathcal{T}}_{t}\) is purely a contraction with the rate \(q_{t}\). We obtain the following corollary as a direct consequence of Theorem 2.
**Corollary 1**.: _Consider arbitrary time-varying contraction mappings \(\overline{\mathcal{T}}_{t}\) with the contraction constants \(q_{t}\) and fixed points \(f_{t}^{*}\). Suppose that \(q=\sup_{t}q_{t}<1\) and that Assumption 1 holds. Let the time-varying function \(f_{t}\) evolve over time according to (27). For \(\epsilon>0\), we define the hitting time as \(T(\epsilon)=\min\left\{T:d\big{(}f_{t},f_{t}^{*}\big{)}<\epsilon,\ \forall t\geq T\right\}\). If \(\epsilon\in(\frac{1}{1-q}\cdot(\epsilon_{f}+\epsilon_{w}),\frac{1}{1-q}\cdot (\epsilon_{f}+\epsilon_{w})+D]\), then_
\[T(\epsilon)\leq 1+\ln\left(\bigg{(}\epsilon-\frac{1}{1-q}\cdot(\epsilon_{f} +\epsilon_{w})\bigg{)}\Big{/}D\right)\bigg{/}\ln(q), \tag{41}\]
_where \(\epsilon_{w}\) is an upper bound on the norm of each noise function and \(D>0\) is an upper bound on \(d\big{(}f_{0}^{*},f_{0}\big{)}\)._
Proof.: When the time-varying mappings \(\{\mathcal{T}_{t}\}\) are only contraction mappings, the random variable \(B_{t}\) is equal to \(q_{t}\) with probability 1 in (29). As a result, Equation (33) has the following form:
\[d\big{(}f_{t},f_{t}^{*}\big{)}\leq q^{t}\cdot d\big{(}f_{0},f_{0}^{*}\big{)}+ \frac{1}{1-q}\cdot(\epsilon_{f}+\epsilon_{w}), \tag{42}\]
where we use \(q=\sup_{t}q_{t}\). Since the right-hand side of (42) is decreasing in \(t\), the hitting time \(T(\epsilon)\) is upper-bounded by the minimum value of \(t\) that satisfies \(q^{t}\cdot d\big{(}f_{0},f_{0}^{*}\big{)}+\frac{1}{1-q}\cdot(\epsilon_{f}+ \epsilon_{w})\leq\epsilon\). The proof is completed by noticing that \(d\big{(}f_{0}^{*},f_{0}\big{)}\) is upper-bounded by a constant \(D>0\) and \(\frac{1}{1-q}\cdot(\epsilon_{f}+\epsilon_{w})\leq\epsilon\leq\frac{1}{1-q} \cdot(\epsilon_{f}+\epsilon_{w})+D\).
Corollary 1 formalizes how many iterations are required in the value iteration with additive noise and a time-varying contraction operator - that can be caused by a time-varying environment - to guarantee that the ultimate function value is in an \(\epsilon\)-neighborhood of the fixed point.
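For concreteness, the bound (41) can be evaluated directly once \(q\), \(\epsilon_{f}\), \(\epsilon_{w}\), and \(D\) are fixed; the values below are hypothetical.

```python
import math

def hitting_time_bound(eps, q, eps_f, eps_w, D):
    """Upper bound (41) on T(eps) for time-varying contractions with additive noise."""
    floor = (eps_f + eps_w) / (1.0 - q)      # unavoidable tracking error
    assert floor < eps <= floor + D, "eps must lie in the admissible interval"
    return 1 + math.log((eps - floor) / D) / math.log(q)

q, eps_f, eps_w, D = 0.9, 1e-3, 1e-3, 10.0
for eps in (0.05, 0.1, 0.5):
    print(eps, math.ceil(hitting_time_bound(eps, q, eps_f, eps_w, D)))
```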
**Remark 2**.: _Tighter bounds on the hitting time for Theorems 1 and 2 may be obtained by applying concentration inequalities involving higher moments instead of Chebyshev's inequality. However, since our bounds already have logarithmic dependence on the relevant parameters \(p\), \(Q\), \(L\), \(\epsilon\), and \(d(f_{1},f_{0})\), they are sufficient for most practical purposes as long as those parameters do not scale exponentially with the problem size._
### 2.3 Optimization of Time-Varying Functions with Additive Noise
Consider the unknown time-varying continuous function \(f_{t}:\mathcal{D}\to\mathcal{R}\) with the known bounded Lipschitz constant \(K_{t}\), over the discrete-time horizon \(t\in\{1,2,\dots\}\), where \(\mathcal{D}\subset\mathbb{R}^{d}\) is a compact set and \(\mathcal{R}\subset\mathbb{R}\). The goal is to \(\epsilon\)-optimize the unknown time-varying function \(f_{t}\), i.e., to find a possibly time-varying point \(\widetilde{x}_{t}^{*}\) such that \(|f_{t}(\widetilde{x}_{t}^{*})-f_{t}(x_{t}^{*})|\leq\epsilon\) for \(\epsilon>0\), where \(x_{t}^{*}=\operatorname*{argmin}_{x\in\mathcal{D}}f_{t}(x)\). Although the function \(f_{t}\) is unknown, inquiries of the function values at given input points can be made in consecutive rounds, which are evaluated with added noise. More precisely, at round \(t\in\{1,2,\dots\}\), we consider querying the function \(f_{t}\) on the set of input points \(\mathcal{P}=\{x_{1},\dots,x_{n}\}\subset\mathcal{D}\), and the revealed values are
\[\widetilde{f}_{t}(x_{i})=f_{t}(x_{i})+N_{t}(x_{i}), \tag{43}\]
where \(N_{t}(x_{i})\) is some noise satisfying the following assumption.
**Assumption 3**.: _The noise parameters \(N_{t}(x_{i})\) are bounded i.i.d. random variables with zero mean, i.e., \(\mathbb{E}[N_{t}(x_{i})]=0\), for which there exists \(L_{N}>0\) such that \([\sup\{N_{t}(x_{i})\}-\inf\{N_{t}(x_{i})\}]<L_{N}\) for all \(t\in\{1,2,\dots\}\) and \(x_{i}\in\mathcal{P}\)._
If the noise is disruptive enough, a single set of observed noisy function values \(\widetilde{f}_{t}(x_{i})\) for all \(x_{i}\in\mathcal{P}\) may not represent the unknown target function accurately, making it impossible to \(\epsilon\)-optimize the function with only a few observations. Furthermore, since the function changes over time, old observations may not
be useful in \(\epsilon\)-optimizing the time-varying function as \(t\) increases. Putting these two facts into perspective, the estimate of the target function \(f_{t}\) at round \(t-1\), namely \(\widehat{f}_{t-1}\), may need to be updated with the new observation at round \(t\), while discarding inaccurate old observations. We propose the following formula for estimating \(f_{t}\):
\[\widehat{f}_{t}(x_{i})=\frac{\min\{t,T+1\}-1}{\min\{t,T\}}\cdot\widehat{f}_{t-1}(x_{i})+\frac{1}{\min\{t,T\}}\cdot\widetilde{f}_{t}(x_{i})-\frac{1}{T}\cdot\widetilde{f}_{t-T}(x_{i})\cdot\mathbb{1}\{t>T\}, \tag{44}\]
where \(\mathbb{1}\{\cdot\}\) is the indicator function. The parameter \(T\), whose value to be specified, should be chosen such that old data is discarded due to the time-varying nature of the function while not harming accurate estimation of the function value in the presence of noise. The computational cost of (44) is on the same order of that of the moving average update in reinforcement learning, but in (44) there is a need for storing the previous \(T\) observations in order to have access to \(\widetilde{f}_{t-T}(x_{i})\).
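A minimal implementation sketch of the update (44) is given below: for \(t\leq T\) it is a running average, and for \(t>T\) it is a length-\(T\) moving average that stores only the last \(T\) noisy evaluation vectors. The drifting target function and noise in the usage example are hypothetical.

```python
from collections import deque
import numpy as np

class WindowedEstimator:
    """Implements (44): running average for t <= T, then a length-T moving
    average of the noisy evaluations at each grid point."""
    def __init__(self, n_points, T):
        self.T = T
        self.f_hat = np.zeros(n_points)
        self.buffer = deque(maxlen=T)    # last T noisy evaluation vectors
        self.t = 0

    def update(self, f_tilde):
        self.t += 1
        if self.t <= self.T:
            self.f_hat += (f_tilde - self.f_hat) / self.t      # running average
        else:
            oldest = self.buffer[0]                            # \tilde f_{t-T}
            self.f_hat += (f_tilde - oldest) / self.T          # slide the window
        self.buffer.append(f_tilde.copy())
        return self.f_hat

# Hypothetical slowly drifting 1-D target on a grid, observed with bounded noise.
rng = np.random.default_rng(2)
grid = np.linspace(0.0, 1.0, 51)
est = WindowedEstimator(grid.size, T=40)
for t in range(1, 201):
    f_t = (grid - 0.5 - 1e-4 * t) ** 2                         # slowly moving minimizer
    f_hat = est.update(f_t + rng.uniform(-0.1, 0.1, grid.size))
x_hat_star = grid[np.argmin(f_hat)]                            # \widehat{x}_t^* of Definition 2
```

The deque keeps the memory requirement at the \(\mathcal{O}(nT)\) storage noted above, while each update costs the same order as a moving-average step.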
The estimation function \(\widehat{f}_{t}(x_{i})\) changes over time and may not represent the target function for small values of \(t\). However, there may exist a hitting time \(T\) that is used in (44) after which optimizing the estimated function \(\widehat{f}_{t}\)\(\epsilon\)-optimizes the target function \(f_{t}\) with an associated confidence level \(1-a\), where \(0<a\leq 1\). As a result, the complexity of \(\epsilon\)-optimizing the unknown time-varying target function \(f_{t}\) in long-run is irrelevant to the complexity of optimizing function \(\widehat{f}_{t}\) up to the hitting time \(T\). Consequently, the hitting time \(T\) as well as the optimization complexity of \(\widehat{f}_{t}\) for \(t\geq T\) captures the difficulty of \(\epsilon\)-optimizing the target function \(f_{t}\) rather than the cumulative optimization complexities of functions \(\widehat{f}_{t}\) for \(t<T\). Formally speaking, the hitting time \(T(\epsilon,a)\) is defined below.
**Definition 2**.: _Given \(\epsilon>0\) and \(a\in(0,1]\), the hitting time \(T(\epsilon,a)\) is defined as_
\[T(\epsilon,a)=\min\Big{\{}T:\mathbb{P}\big{(}\big{|}f_{t}(\widehat{x}_{t}^{*} )-f_{t}(x_{t}^{*})\big{|}\leq\epsilon\big{)}\geq 1-a,\ \forall t\geq T\Big{\}}, \tag{45}\]
_where \(\widehat{x}_{t}^{*}=\operatorname*{argmin}_{x\in\mathcal{P}}\widehat{f}_{t}(x)\) and \(x_{t}^{*}=\operatorname*{argmin}_{x\in\mathcal{D}}f_{t}(x)\)._
To make the time-varying problem amenable to optimization, we also make the following assumption about the set of input points \(\mathcal{P}\).
**Assumption 4**.: _For a given \(\epsilon>0\), the set of input points \(\mathcal{P}=\{x_{1},x_{2},\ldots,x_{n}\}\) is a \(\delta\)-uniform grid of the function domain \(\mathcal{D}\) such that \(\delta<\frac{2\epsilon}{7\sqrt{d}K}\), where \(K=\sup_{t\geq 1}K_{t}\) with \(K_{t}\) being the Lipschitz constant of function \(f_{t}\)._
Recall that being a \(\delta\)-uniform grid means that \(\mathcal{P}\) satisfies two properties: (i) \(\{x_{i}+\delta e_{j},x_{i}-\delta e_{j}\}\cap\mathcal{D}\subseteq\mathcal{P}\) for all \(i\in\{1,\ldots,n\}\) and \(j\in\{1,\ldots,d\}\), where \(e_{1},\ldots,e_{d}\) are the standard basis of \(\mathbb{R}^{d}\), and (ii) for every \(x\in\mathcal{D}\) there exists \(x_{i}\in\mathcal{P}\) such that \(\|x_{i}-x\|\leq\sqrt{d}\delta/2\). The fine granularity assumption, i.e., \(\delta<\frac{2\epsilon}{7\sqrt{d}K}\), assures that there exists a grid point whose unknown function value at time \(t\) is within \(\frac{\epsilon}{7}\) of the minimum of function \(f_{t}\). Denote such points of the grid \(\mathcal{P}\) by \(\mathcal{N}_{t}(\frac{\epsilon}{7})=\{x_{i}\in\mathcal{P}:f_{t}(x_{i})-f_{t}(x_{t}^{*})\leq\frac{\epsilon}{7}\}\) and let \(\overline{\mathcal{N}}_{t}(\epsilon)=\{x_{i}\in\mathcal{P}:f_{t}(x_{i})-f_{t}(x_{t}^{*})>\epsilon\}\). Without loss of generality, we assume that \(\overline{\mathcal{N}}_{t}(\epsilon)\neq\emptyset\); otherwise, any point in \(\mathcal{P}\) \(\epsilon\)-optimizes function \(f_{t}\). The following theorem presents an upper bound on the hitting time.
**Theorem 3**.: _Consider the unknown time-varying function \(f_{t}\) with the property \(\,|f_{t}(x)-f_{t-1}(x)|\leq\frac{8\epsilon^{3}}{343L_{N}^{2}\cdot\ln(\frac{n}{a})}\), for all \(t\geq 1\) and \(x\in\mathcal{D}\). Given \(\epsilon>0\) and \(a\in(0,1]\), let Assumptions 3 and 4 hold. Then, the hitting time \(T(\epsilon,a)\) satisfies the inequality_
\[T(\epsilon,a)\leq\frac{49L_{N}^{2}}{8\epsilon^{2}}\cdot\ln\left(\frac{n}{a} \right)+1. \tag{46}\]
Proof.: In order to find an upper bound on the hitting time \(T(\epsilon,a)\), it is reasonable to assume that the function variation over time is upper-bounded; otherwise, there may not be enough time for learning the rapidly changing functions \(\{f_{t}\}\). Assume that the time-variation of the unknown time-varying target function \(f_{t}\) is upper-bounded by
\[|f_{t}(x)-f_{t-1}(x)|\leq\frac{\epsilon}{7T},\quad\forall t\geq 1,\forall x\in \mathcal{D}. \tag{47}\]
Then, under Assumption 4, the hitting event defined in (45) satisfies the following condition
\[\left\{\exists x_{i}\in\mathcal{N}_{t}(\frac{\epsilon}{7})\text{ such that }\frac{1}{T}\cdot\sum_{s=t-T+1}^{t}N_{s}(x_{i})\leq\frac{2\epsilon}{7} \text{ \bf and }\frac{1}{T}\cdot\sum_{s=t-T+1}^{t}N_{s}(x_{i})\geq-\frac{2\epsilon}{7}, \forall x_{i}\in\overline{\mathcal{N}}_{t}(\epsilon)\right\} \tag{48}\] \[\subseteq \left\{\left|f_{t}(\widehat{x}_{t}^{*})-f_{t}(x_{t}^{*})\right| \leq\epsilon\right\},\quad\forall t\geq T.\]
The above equation holds true because (43) and (44) result in \(\widehat{f}_{t}(x_{i})=\frac{1}{T}\cdot\sum_{s=t-T+1}^{t}f_{s}(x_{i})+\frac{1} {T}\cdot\sum_{s=t-T+1}^{t}N_{s}(x_{i})\) for \(t\geq T\), and by (47), one can write
\[\widehat{f}_{t}(x_{i}) \leq f_{t}(x_{i})+\frac{\epsilon}{7}+\frac{1}{T}\cdot\sum_{s=t-T +1}^{t}N_{s}(x_{i}),\quad\forall x_{i}\in\mathcal{N}_{t}(\frac{\epsilon}{7}), \tag{49}\] \[\widehat{f}_{t}(\overline{x}_{j}) \geq f_{t}(\overline{x}_{j})-\frac{\epsilon}{7}+\frac{1}{T}\cdot \sum_{s=t-T+1}^{t}N_{s}(\overline{x}_{j}),\quad\forall\overline{x}_{j}\in \overline{\mathcal{N}}_{t}(\epsilon).\]
Furthermore, \(f_{t}(\overline{x}_{j})-f_{t}(x_{i})>\frac{6\epsilon}{7}\) for all \(\overline{x}_{j}\in\overline{\mathcal{N}}_{t}(\epsilon)\) and \(x_{i}\in\mathcal{N}_{t}(\frac{\epsilon}{7})\). Taking the difference of the two inequalities in (49) yields that \(\widehat{f}_{t}(\overline{x}_{j})-\widehat{f}_{t}(x_{j})>\frac{4\epsilon}{7} +\sum_{s=t-T+1}^{t}N_{s}(\overline{x}_{j})-\sum_{s=t-T+1}^{t}N_{s}(x_{i})\). If the event on the left-hand side of (48) is true, then \(\widehat{f}_{t}(\overline{x}_{j})-\widehat{f}_{t}(x_{j})>0\), which means that there exists \(\widetilde{x}_{t}^{*}\in\mathcal{N}_{t}(\frac{\epsilon}{7})\) whose estimated function value is less than the estimated function value at all points \(\overline{x}_{j}\in\overline{\mathcal{N}}_{t}(\epsilon)\). Note that the estimated function value at a point \(\overline{x}_{t}^{*}\in\mathcal{P}\setminus\left(\mathcal{N}_{t}(\frac{ \epsilon}{7})\cup\overline{\mathcal{N}}_{t}(\epsilon)\right)\) can be less than \(\widehat{f}_{t}(\widetilde{x}_{t}^{*})\), but such a point also \(\epsilon\)-optimizes the function \(f_{t}\). Hence, \(\widehat{x}_{t}^{*}=\operatorname*{argmin}_{x\in\mathcal{P}}\widehat{f}_{t}(x)\)\(\epsilon\)-optimizes the function \(f_{t}\), which means that the event on right-hand side of (48) is true.
Denote the event on the left-hand side of (48) as \(E_{t}\), whose probability can be lower-bounded as
\[\mathbb{P}\{E_{t}\} \overset{(a)}{\geq} \mathbb{P}\left\{\frac{1}{T}\cdot\sum_{s=t-T+1}^{t}N_{s}(x_{i}) \leq\frac{2\epsilon}{7},x_{i}\in\mathcal{N}_{t}(\frac{\epsilon}{7})\right\} \times\prod_{x_{i}\in\overline{\mathcal{N}}_{t}(\epsilon)}\mathbb{P}\left\{ \frac{1}{T}\cdot\sum_{s=t-T+1}^{t}N_{s}(x_{i})\geq-\frac{2\epsilon}{7}\right\} \tag{50}\] \[\overset{(b)}{\geq} \prod_{x_{i}\in\mathcal{P}}\left(1-\exp\left(-\frac{8T\epsilon^{ 2}}{49L_{N}^{2}}\right)\right)\] \[> 1-n\cdot\exp\left(-\frac{8T\epsilon^{2}}{49L_{N}^{2}}\right),\]
where \((a)\) is true as the added noise signals are independent of each other and \((b)\) follows from Hoeffding's inequality and possibly multiplying by positive terms that are less than one. Putting (48) and (50) together, we have
\[\mathbb{P}\left\{\left|f_{t}(\widehat{x}_{t}^{*})-f_{t}(x_{t}^{*})\right|\leq \epsilon\right\}\geq 1-n\cdot\exp\left(-\frac{8T\epsilon^{2}}{49L_{N}^{2}} \right),\forall t\geq T. \tag{51}\]
If \(1-n\cdot\exp\left(-\frac{8T\epsilon^{2}}{49L_{N}^{2}}\right)\geq 1-a\) or equivalently \(T\geq\frac{49L_{N}^{2}}{8\epsilon^{2}}\cdot\ln\left(\frac{n}{a}\right)\), we have
\[\mathbb{P}\left\{\left|f_{t}(\widehat{x}_{t}^{*})-f_{t}(x_{t}^{*})\right|\leq \epsilon\right\}\geq 1-a,\quad\forall t\geq T. \tag{52}\]
As a result, an upper bound on the hitting time \(T(\epsilon,a)\) defined in (45) is provided as
\[T(\epsilon,a)\leq\frac{49L_{N}^{2}}{8\epsilon^{2}}\cdot\ln\left(\frac{n}{a} \right)+1. \tag{53}\]
We substitute the upper bound on \(T(\epsilon,a)\) into (47). It follows that the above analysis is valid if
\[|f_{t}(x)-f_{t-1}(x)|\leq\frac{8\epsilon^{3}}{343L_{N}^{2}\cdot\ln(\frac{n}{a} )},\quad\forall t\geq 1,\forall x\in\mathcal{D}. \tag{54}\]
This completes the proof.
**Remark 3**.: _Note that the cardinality of the \(\delta\)-grid with \(\delta<\frac{2\epsilon}{\gamma\sqrt{d}K}\) used in Theorem 3, namely \(n=|\mathcal{P}|\), depends on \(\epsilon\). As an example, if \(\mathcal{D}\) can be written as the Cartesian product of \(d\) intervals of length at most \(M\) as \(\mathcal{D}=\mathcal{D}_{1}\times\mathcal{D}_{2}\times\cdots\times\mathcal{D} _{d}\), then the cardinality of the \(\delta\)-grid would be \(n=\mathcal{O}\left(\left(\frac{\sqrt{d}KM}{\epsilon}\right)^{d}\right)\), and therefore the upper bound on the hitting time in Theorem 3 is given by \(T(\epsilon,a)\leq\mathcal{O}\left(\frac{dL_{N}^{2}}{\epsilon^{2}}\cdot\ln\left( \frac{\sqrt{d}KM}{\sqrt{a}\epsilon}\right)\right)\)._
Theorem 3 determines how fast the unknown function \(f_{t}\) is allowed to change over time such that one can still learn the estimation function \(\widehat{f_{t}}\) which is used to \(\epsilon\)-optimize the target function \(f_{t}\) with a confidence level. The parameter \(T\) in (44) can be set to the upper bound provided in Theorem 3 so that old inaccurate observations are discarded and at the same time enough observations are used for an accurate estimation of \(f_{t}\).
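For illustration, the following Python sketch evaluates the Theorem 3 bound (53) and the corresponding admissible drift (54), and implements the sliding-window average of (44) that produces \(\widehat{f}_{t}\); the numerical constants are hypothetical placeholders, not values used in the paper.

```python
import math
from collections import deque

# Hypothetical problem constants (placeholders, not taken from the paper).
L_N, eps, a, n = 1.0, 0.1, 0.05, 10_000   # noise parameter, precision, 1 - confidence, grid size

# Theorem 3: upper bound on the hitting time T(eps, a), Eq. (53).
T_bound = 49 * L_N**2 / (8 * eps**2) * math.log(n / a) + 1
# Admissible per-round variation of f_t for the analysis to hold, Eq. (54).
drift_bound = 8 * eps**3 / (343 * L_N**2 * math.log(n / a))
print(f"T(eps,a) <= {T_bound:.1f},   |f_t - f_(t-1)| <= {drift_bound:.2e}")

# Sliding-window estimator of Eq. (44): average the last T noisy evaluations at each grid point.
class WindowEstimator:
    def __init__(self, T, grid_size):
        self.buffers = [deque(maxlen=T) for _ in range(grid_size)]

    def update(self, noisy_values):           # noisy_values[i] = f_t(x_i) + N_t(x_i)
        for buf, v in zip(self.buffers, noisy_values):
            buf.append(v)

    def estimate(self):                       # \hat f_t(x_i); meaningful once t >= T
        return [sum(buf) / len(buf) if buf else 0.0 for buf in self.buffers]
```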
### Improved Bounds for Convex Functions
Consider the same framework as in Section 2.3 under additional assumptions to be stated here. Let \(f_{t}\) be a convex function for all \(t\geq 1\). Denote the lower contour set of the convex function \(f_{t}\) by \(C_{t}(c)=\{x\in\mathcal{D}:f_{t}(x)-f_{t}(x_{t}^{*})\leq c\}\) and the level set of the convex function \(f_{t}\) by \(L_{t}(c)=\{x\in\mathcal{D}:f_{t}(x)-f_{t}(x_{t}^{*})=c\}\) for \(c>0\). Define \(\overline{C}_{t}(c_{1},c_{2})=\{x\in\mathcal{D}:c_{1}<f_{t}(x)-f_{t}(x_{t}^{*} )\leq c_{2}\}\) when \(c_{2}>c_{1}\). Let \(\mathcal{M}_{t}(c)=\{x_{i}\in\mathcal{P}:x_{i}\in C_{t}(c)\}\) and \(\overline{\mathcal{M}}_{t}(c_{1},c_{2})=\{x_{i}\in\mathcal{P}:x_{i}\in \overline{C}_{t}(c_{1},c_{2})\}\).
**Assumption 5**.: _There exists \(M>0\) such that \(L_{t}(M)\) is homeomorphic to a \(d\)-dimensional sphere and is inside \(\mathcal{D}\) for all \(t\geq 1\)._
If \(d=1\) or \(d=2\), a sphere is understood as two distinct points or a circle, respectively. Note that a lower bound on \(M\) can be estimated up to a prescribed precision with high probability, but \(M\) is assumed to be known here to simplify the presentation of the proofs.
**Assumption 6**.: _There exists \(k>0\) such that \(\|\nabla f_{t}(x)\|\geq k\), for all \(t\geq 1\) and \(x\in\mathcal{D}\setminus C_{t}(\epsilon)\)._
Intuitively, Assumption 6 requires every convex function \(f_{t}\) to grow fast enough away from its minimizer, so that \(\|\nabla f_{t}(x)\|\) can be uniformly lower-bounded by a positive constant \(k\) on \(\mathcal{D}\setminus C_{t}(\epsilon)\) for all \(t\geq 1\).
Leveraging the new assumptions on the time-varying functions \(\{f_{t}\}\), the following theorem presents a tighter upper bound on the hitting time compared to Theorem 3.
**Theorem 4**.: _Consider the unknown time-varying convex function \(f_{t}\) with the property \(|f_{t}(x)-f_{t-1}(x)|\leq\frac{\epsilon^{3}}{43L_{N}^{2}\cdot\ln(\frac{n}{a})}\), for all \(t\geq 1\) and \(x\in\mathcal{D}\). Given \(\epsilon>0\) and \(a\in(0,1]\), suppose that Assumptions 3-6 hold. Then, the hitting time \(T(\epsilon,a)\) is upper-bounded by the minimum \(T\) satisfying the inequality_
\[\sum_{l=0}^{l_{m}}n_{l}\cdot\exp\Big{(}-\frac{2T\big{(}l+\frac{2}{7}\big{)}^{2}\epsilon^{2}}{L_{N}^{2}}\Big{)}\leq a, \tag{55}\]
_where \(\sum_{l=0}^{l_{m}}n_{l}=n\) and \(l_{m}\leq\lfloor\frac{M}{\epsilon}\rfloor-3\) such that \(n_{l}=\frac{m_{l}}{1+m_{l}}\cdot n+1\) for \(l\in\{0,1,\ldots,l_{m}-1\}\) with \(m_{l}=\frac{2^{d+1}\cdot K\cdot\epsilon}{k\cdot\big{(}M-(l+4)\epsilon\big{)}}\)._
Proof.: Following the same logic as in (48) and leveraging the convexity of \(\{f_{t}\}\), we obtain that the hitting event in (45) satisfies the condition
\[\begin{split}&\left\{\exists x_{i}\in\mathcal{M}_{t}(\frac{ \epsilon}{7})\text{ such that }\frac{1}{T}\cdot\sum_{s=t-T+1}^{t}N_{s}(x_{i})\leq\frac{2 \epsilon}{7}\text{ and }\frac{1}{T}\cdot\sum_{s=t-T+1}^{t}N_{s}(x_{i})\geq-\frac{2 \epsilon}{7},\forall x_{i}\in\overline{\mathcal{M}}_{t}\Big{(}\epsilon,2 \epsilon\Big{)}\text{ and }\\ &\frac{1}{T}\cdot\sum_{s=t-T+1}^{t}N_{s}(x_{i})\geq-\left(l+\frac {2}{7}\right)\epsilon,\forall x_{i}\in\overline{\mathcal{M}}_{t}\Big{(}(l+1) \epsilon,(l+2)\epsilon\Big{)},\forall 1\leq l\leq\Big{\lfloor}\frac{M}{\epsilon}\Big{\rfloor} \right\}\\ &\subseteq\left\{\left|f_{t}(\widetilde{x}_{t}^{*})-f_{t}(x_{t}^{*} )\right|\leq\epsilon\right\},\quad\forall t\geq T.\end{split} \tag{56}\]
Denote the event on the left-hand side of (56) as \(E_{t}\), whose probability can be lower-bounded as
\[\begin{split}\mathbb{P}\{E_{t}\}\overset{(a)}{\geq}& \,\mathbb{P}\left\{\frac{1}{T}\cdot\sum_{s=t-T+1}^{t}N_{s}(x_{i}) \leq\frac{2\epsilon}{7},x_{i}\in\mathcal{M}_{t}(\frac{\epsilon}{7})\right\} \times\prod_{x_{i}\in\overline{\mathcal{M}}_{t}\big{(}\epsilon,2\epsilon\big{)} }\mathbb{P}\left\{\frac{1}{T}\cdot\sum_{s=t-T+1}^{t}N_{s}(x_{i})\geq-\frac{2 \epsilon}{7}\right\}\\ &\times\prod_{l=1}^{\lfloor\frac{M}{2}\rfloor}\prod_{x_{i}\in \overline{\mathcal{M}}_{t}\big{(}\left(l+1\right)\epsilon,\left(l+2\right) \epsilon\big{)}}\mathbb{P}\left\{\frac{1}{T}\cdot\sum_{s=t-T+1}^{t}N_{s}(x_{i })\geq-\big{(}l+\frac{2}{7}\big{)}\epsilon\right\}\\ &\overset{(b)}{\geq}\left[1-\exp\left(-\frac{8T\epsilon^{2}}{49 L_{N}^{2}}\right)\right]^{\overline{n}_{0}+1}\times\prod_{l=1}^{l_{m}}\left[1-\exp \left(-\frac{2T\big{(}l+\frac{2}{7}\big{)}^{2}\epsilon^{2}}{L_{N}^{2}}\right) \right]^{n_{l}}\\ &\geq 1-\sum_{l=0}^{l_{m}}n_{l}\cdot\exp\left(-\frac{2T\big{(}l+ \frac{2}{7}\big{)}^{2}\epsilon^{2}}{L_{N}^{2}}\right)\end{split} \tag{57}\]
where \((a)\) is true as the added noise signals are independent of each other and \((b)\) follows from Hoeffding's inequality, \(\overline{n}_{0}\) is an upper bound on the number of grid points in the set \(\overline{\mathcal{M}}_{t}\big{(}\epsilon,2\epsilon\big{)}\) and \(n_{0}=\overline{n}_{0}+1\), and \(n_{l}\) is an upper bound on the number of grid points in the set \(\overline{\mathcal{M}}_{t}\big{(}\big{(}l+1\big{)}\epsilon,\left(l+2\right) \epsilon\big{)}\), where \(l_{m}\) satisfies \(\sum_{l=0}^{l_{m}}n_{l}=n\) and \(l_{m}\leq\lfloor\frac{M}{\epsilon}\rfloor-3\). Note that the last nonzero \(n_{l}\) is not a free parameter since the sum of all \(n_{l}\) should be \(n\). Putting (56) and (57) together, we have \(\mathbb{P}\left\{\big{|}f_{t}(\widehat{x}_{t}^{*})-f_{t}(x_{t}^{*})\big{|} \leq\epsilon\right\}\geq 1-a\) for all \(t\geq T\) provided that
\[\sum_{l=0}^{l_{m}}n_{l}\cdot\exp\Big{(}-\frac{2T\big{(}l+\frac{2}{7}\big{)}^{2 }\epsilon^{2}}{L_{N}^{2}}\Big{)}\leq a, \tag{58}\]
which provides an upper bound on the hitting time \(T(\epsilon,a)\) defined in (45). As stated earlier in (47), the above analysis is true if \(|f_{t}(x)-f_{t-1}(x)|\leq\frac{\epsilon}{7T(\epsilon,a)}\) for all \(t\geq 1\) and \(x\in\mathcal{D}\). Using the general upper bound on the hitting time provided in Theorem 3, the analysis holds if \(|f_{t}(x)-f_{t-1}(x)|\leq\frac{\epsilon^{3}}{43L_{N}^{2}\ln\left(\frac{n}{a}\right)}\) for all \(t\geq 1\) and \(x\in\mathcal{D}\).
In the rest of the proof, the values of \(n_{l}\) for \(0\leq l\leq l_{m}\) are computed. The key ideas behind finding these upper bounds are that the level sets \(L_{t}\big{(}(l+1)\epsilon\big{)}\) for \(0\leq l\leq l_{m}+2\) are nested surfaces that are homeomorphic to a \(d\)-dimensional sphere inside the function domain, and that the distance between any point of a level set and any other level set is controlled by \(K\) and \(k\). Let \(Vol(\cdot)\) denote the volume of an input \(d\)-dimensional set and \(A(\cdot)\) denote the area of an input \((d-1)\)-dimensional surface. By convention, the area of a \(d\)-dimensional sphere is taken to be \(2\) for \(d=1\) and the circumference of the circle for \(d=2\). For every \(l\in\{0,1,\ldots,l_{m}\}\), one can write
\[\begin{split}& n_{l}-1\leq\frac{2^{d}\cdot Vol\left(\overline{C}_{t}\big{(}(l+1)\epsilon,(l+3)\epsilon\big{)}\right)}{\delta^{d}}\leq\frac{2^{d}\cdot\frac{2\epsilon}{k}\cdot A\left(P_{t}\big{(}(l+1)\epsilon,(l+3)\epsilon\big{)}\right)}{\delta^{d}},\\ &\sum_{l^{\prime}=l+1}^{l_{m}}n_{l^{\prime}}\geq\frac{Vol\left(\overline{C}_{t}\big{(}(l+3)\epsilon,M-\epsilon\big{)}\right)}{\delta^{d}}\geq\frac{\frac{M-(l+4)\epsilon}{K}\cdot A\left(P_{t}\big{(}(l+3)\epsilon,M-\epsilon\big{)}\right)}{\delta^{d}},\end{split} \tag{59}\]
where the term \(2^{d}\) comes from the facts that each \(d\)-dimensional cube has at most \(2^{d}\) endpoints and \(P_{t}\big{(}(l+1)\epsilon,(l+3)\epsilon\big{)}\subset C_{t}\big{(}(l+1)\epsilon,( l+3)\epsilon\big{)}\) and \(P_{t}\big{(}(l+3)\epsilon,M-\epsilon\big{)}\subset C_{t}\big{(}(l+3)\epsilon,M- \epsilon\big{)}\) are two \((d-1)\)-dimensional planes such that \(A\left(P_{t}\big{(}(l+1)\epsilon,(l+3)\epsilon\big{)}\right)\leq A\left(L_{t} \big{(}(l+3)\epsilon\big{)}\right)\leq A\left(P_{t}\big{(}(l+3)\epsilon,M- \epsilon\big{)}\right)\). Then,
\[\frac{n_{l}-1}{n-n_{l}}\leq\frac{n_{l}-1}{\sum_{l^{\prime}=l+1}^{l_{m}}n_{l^{\prime}}}\leq\frac{2^{d+1}\cdot K\cdot\epsilon}{k\cdot\big{(}M-(l+4)\epsilon\big{)}}=m_{l}\implies n_{l}\leq\frac{m_{l}}{1+m_{l}}\cdot n+1, \tag{60}\]
which completes the proof.
**Remark 4**.: _We note that, since the left-hand side of (55) is monotone decreasing in \(T\), a number \(T\) satisfying (55) always exists. By substituting the bound in (46) into (55), it can be verified that Theorem 4 provides a better bound than Theorem 3 since some properties of convex functions are leveraged. A comparison of the results of Theorems 3 and 4 along with the simulation details is depicted in Figure 1._
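As a rough illustration of this comparison, the following sketch evaluates both bounds numerically with the parameters quoted in the caption of Figure 1 (the remaining constants \(L_{N}\), \(\epsilon\), \(a\) and the grid size are hypothetical); for simplicity it assigns the per-band counts \(n_{l}\) via the formula of Theorem 4 and puts the remainder in the last band, so the output should be read as an illustration of the search in (55) rather than a certified bound.

```python
import math

# M, K, k, d as in the Figure 1 caption; L_N, eps, a and the grid size are hypothetical.
M, K, k, d = 16.0, 16.0, 2e-2, 2
L_N, eps, a = 1.0, 0.3, 0.05
n = int((math.sqrt(d) * K * M / eps) ** d)        # grid-size scaling from Remark 3

# Theorem 3 bound, Eq. (53).
T3 = 49 * L_N**2 / (8 * eps**2) * math.log(n / a) + 1

# Theorem 4: per-band counts n_l, then the smallest T satisfying (55).
l_m = int(M / eps) - 3
n_l = []
for l in range(l_m):
    m_l = 2 ** (d + 1) * K * eps / (k * (M - (l + 4) * eps))
    n_l.append(m_l / (1 + m_l) * n + 1)
n_l.append(max(n - sum(n_l), 0.0))                # last band: remainder, clamped at zero

def lhs(T):
    return sum(nl * math.exp(-2 * T * (l + 2 / 7) ** 2 * eps**2 / L_N**2)
               for l, nl in enumerate(n_l))

T4 = 1
while lhs(T4) > a:
    T4 += 1
print(f"Theorem 3 bound: {T3:.0f}    Theorem 4 bound: {T4}")
```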
## 3 The Hitting Time Analysis for Discrete Functions
In this section, two variants of stochastic time-varying models are studied for discrete functions. In the first model, an unknown discrete function is observed with additive noise, so its estimate changes over time due to the noise. In the second model, a time-varying linear model with additive noise is studied.
### Optimization of Functions with Additive Noise
Consider an unknown discrete function \(f:\mathcal{X}\rightarrow\mathcal{R}\), where \(\mathcal{X}\subset\mathbb{Z}^{d}\) is a bounded set of \(d\)-dimensional integer tuples and \(\mathcal{R}\subset\mathbb{R}\) is a subset of real numbers (\(\mathbb{Z}\) denotes the set of integer numbers). Denote the strict local minima and maxima, known collectively as strict local extrema, of the unknown function \(f\) by \(\mathcal{X}^{*}\), defined as
\[\mathcal{X}^{*}= \{x^{*}\in\mathcal{X}:f(x^{*})<f(x),\forall x\in\mathcal{B}(x^{* })\}\cup\{x^{*}\in\mathcal{X}:f(x^{*})>f(x),\forall x\in\mathcal{B}(x^{*})\} \tag{61}\]
where \(\mathcal{B}(x^{*})=\cup_{j=1}^{d}\{x^{*}+h_{j},x^{*}-h_{j}\}\cap\mathcal{X}\) with \(h_{1},\dots,h_{d}\) being the standard basis of \(\mathbb{Z}^{d}\). The goal is to find \(\mathcal{X}^{*}\), the set of strict local extrema of the unknown function \(f\). Although the function \(f\) is unknown, queries of the function values at points in the domain can be made in consecutive rounds; the returned values are corrupted by additive noise signals that are zero-mean, independent, and identically distributed over time and over \(\mathcal{X}\). Formally speaking, the revealed values of the target function \(f\) at round \(t\in\{1,2,\dots\}\) are
\[f_{t}(x)=f(x)+N_{t}(x),\quad\forall x\in\mathcal{X}, \tag{62}\]
where \(N_{t}(x)\) are noise signals satisfying Assumption 3. Note that if the noise is disruptive enough, a single set of observed noisy function values \(f_{t}(x)\) for all \(x\in\mathcal{X}\) may not represent the unknown target function accurately, making it impossible to find the local extrema of the function. To address this issue, at each round \(t\in\{2,3,\dots\}\) we update the estimate of the target function \(f\) by combining the estimate from round \(t-1\) with the new observation as
\[\widehat{f}_{t}(x)=\frac{t-1}{t}\cdot\widehat{f}_{t-1}(x)+\frac{1}{t}\cdot f_ {t}(x),\quad\forall x\in\mathcal{X}. \tag{63}\]
Note that the estimation function \(\widehat{f}_{t}(x)\) changes over time and may not represent the shape of the unknown target function \(f\) when \(t\) is small. However, there may exist a hitting time \(T\) after which the estimation function \(\widehat{f}_{t}\) shares the same set of local extrema as the target function \(f\) with an associated confidence level \(1-a\), where \(0<a\leq 1\). As a result, the complexity of finding the local extrema of the target function \(f\) may bear no relation to the complexity of finding the local extrema of the function \(\widehat{f}_{t}\) before the hitting time \(T\).
Figure 1: A comparison of the upper bounds in Theorems 3 and 4 when \(M=K=16,k=2\times 10^{-2}\), and \(d=2\). In Figure 0(b), the value of \(n\) depends on \(\epsilon\), which is taken into account for drawing the plots.
Consequently, the complexity of finding the local extrema of the unknown target function \(f\) is related to the hitting time \(T\) as well as the computational complexity of optimizing function \(\widehat{f}_{T}\). Denote the set of strict local extrema of \(\widehat{f}_{t}\) by \(\widehat{\mathcal{X}}_{t}^{*}\), defined as
\[\widehat{\mathcal{X}}_{t}^{*}= \left\{\widehat{x}^{*}\in\mathcal{X}:\widehat{f}_{t}(\widehat{x} ^{*})<\widehat{f}_{t}(x),\forall x\in\mathcal{B}(\widehat{x}^{*})\right\} \cup\left\{\widehat{x}^{*}\in\mathcal{X}:\widehat{f}_{t}(\widehat{x}^{*})> \widehat{f}_{t}(x),\forall x\in\mathcal{B}(\widehat{x}^{*})\right\}. \tag{64}\]
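A minimal sketch of (61)-(64) on a one-dimensional integer grid is given below; the particular target function, the Gaussian noise model, and the number of rounds are illustrative assumptions only.

```python
import random

# Illustrative target on a 1-D grid (adjacent values differ, so Assumption 7 holds with delta_m = 1).
vals = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4]
X = list(range(len(vals)))
f = dict(zip(X, vals))

def neighbors(x):                              # B(x) for d = 1, Eq. (61)
    return [y for y in (x - 1, x + 1) if y in f]

def strict_extrema(g):                         # Eqs. (61) / (64) applied to a function g on X
    mins = {x for x in X if all(g[x] < g[y] for y in neighbors(x))}
    maxs = {x for x in X if all(g[x] > g[y] for y in neighbors(x))}
    return mins | maxs

f_hat = {x: 0.0 for x in X}
for t in range(1, 501):                        # online averaging of the noisy rounds, Eq. (63)
    noisy = {x: f[x] + random.gauss(0.0, 0.5) for x in X}
    f_hat = {x: (t - 1) / t * f_hat[x] + noisy[x] / t for x in X}

print("true extrema     :", sorted(strict_extrema(f)))
print("estimated extrema:", sorted(strict_extrema(f_hat)))
```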
**Definition 3**.: _Given \(a\in(0,1]\), the hitting time \(T(a)\) for an unknown discrete function \(f\) is defined as_
\[T(a)=\min\left\{T:\mathbb{P}\left(\widehat{\mathcal{X}}_{t}^{*}=\mathcal{X}^{ *}\right)\geq 1-a,\ \forall t\geq T\right\}, \tag{65}\]
_where \(\mathcal{X}^{*}\) and \(\widehat{\mathcal{X}}_{t}^{*}\) are defined in (61) and (64), respectively._
The hitting time \(T(a)\) depends on the minimum distance of the function values of \(f\) at point \(x\in\mathcal{X}\) from the function values at its neighbor points. This distance, denoted by \(\delta(x)\), is defined as
\[\delta(x)=\min_{x^{\prime}\in\mathcal{B}(x)}|f(x)-f(x^{\prime})|. \tag{66}\]
In order to simplify the analysis, we make the following assumption about the target function \(f\).
**Assumption 7**.: _The minimum distance \(\delta(x)\) of function \(f\) is uniformly lower-bounded by a positive number for all \(x\in\mathcal{X}\), i.e., \(\delta_{m}=\min_{x\in\mathcal{X}}\delta(x)>0\)._
Intuitively, Assumption 7 ensures that function values of \(f\) at adjacent points are different, so that their noisy values become distinguishable after enough observations. The following theorem presents an upper bound on the hitting time \(T(a)\).
**Theorem 5**.: _Consider the time-varying function \(\widehat{f}_{t}\) in (63). Under Assumptions 3 and 7, given \(a\in(0,1]\), the associated hitting time \(T(a)\) defined in (65), satisfies the inequality_
\[T(a)\leq\frac{2L_{N}^{2}}{\delta_{m}^{2}}\cdot\ln\left(\frac{2|\mathcal{X}|}{a }\right), \tag{67}\]
_where \(|\mathcal{X}|\) denotes the number of elements in the set \(\mathcal{X}\)._
Proof.: In order to find an upper bound on the hitting time \(T(a)\), note that the hitting event used in (65) satisfies the condition
\[\left\{\frac{1}{T}\cdot\Big{\|}\sum_{t=1}^{T}N_{t}(x)\Big{\|}<\frac{\delta(x) }{2},\ \forall x\in\mathcal{X}\right\}\subseteq\left\{\widehat{\mathcal{X}}_{T}^{*}= \mathcal{X}^{*}\right\}. \tag{68}\]
The above equation holds because (62) and (63) result in \(\widehat{f}_{T}(x)=f(x)+\frac{1}{T}\cdot\sum_{t=1}^{T}N_{t}(x)\), and if the magnitude of the noise added to the true value of function \(f\) at point \(x\) is less than \(\delta(x)/2\) for all \(x\in\mathcal{X}\), then the set of local extrema of the function \(\widehat{f}_{T}\) coincides with the set \(\mathcal{X}^{*}\), the local extrema of function \(f\). The probability of the event on the left-hand side of (68) can be lower-bounded as
\[\mathbb{P}\left\{\frac{1}{T}\cdot\Big{\|}\sum_{t=1}^{T}N_{t}(x) \Big{\|}<\frac{\delta(x)}{2},\ \forall x\in\mathcal{X}\right\} \stackrel{{(a)}}{{=}} \prod_{x\in\mathcal{X}}\mathbb{P}\left\{\frac{1}{T}\cdot \Big{\|}\sum_{t=1}^{T}N_{t}(x)\Big{\|}<\frac{\delta(x)}{2}\right\} \tag{69}\] \[\stackrel{{(b)}}{{\geq}} \prod_{x\in\mathcal{X}}\left(1-2\exp\left(-\frac{T\delta(x)^ {2}}{2L_{N}^{2}}\right)\right)\] \[> 1-2\sum_{x\in\mathcal{X}}\exp\left(-\frac{T\delta(x)^{2}}{2L_ {N}^{2}}\right)\] \[\geq 1-2|\mathcal{X}|\cdot\exp\left(-\frac{T\delta_{m}^{2}}{2L_{N} ^{2}}\right),\]
where \((a)\) holds because the added noise signals are independent of each other and \((b)\) follows from Hoeffding's inequality. Putting (68) and (69) together, we have
\[\mathbb{P}\left\{\widehat{\mathcal{X}}_{T}^{*}=\mathcal{X}^{*}\right\}>1-2| \mathcal{X}|\cdot\exp\left(-\frac{T\delta_{m}^{2}}{2L_{N}^{2}}\right). \tag{70}\]
If \(1-2|\mathcal{X}|\cdot\exp\left(-\frac{T\delta_{m}^{2}}{2L_{N}^{2}}\right)\geq 1 -a\) or equivalently \(T\geq\frac{2L_{N}^{2}}{\delta_{m}^{2}}\cdot\ln\left(\frac{2|\mathcal{X}|}{a}\right)\), we have \(\mathbb{P}\left\{\widehat{\mathcal{X}}_{T}^{*}=\mathcal{X}^{*}\right\}>1-a\), from which the upper bound in (65) follows.
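The bound is easy to evaluate and to probe empirically. In the sketch below, the parameter values are hypothetical and the noise is taken to be uniform on an interval of length \(L_{N}\), one possible reading of Assumption 3 that reproduces the Hoeffding exponent used in (69).

```python
import math, random

L_N, delta_m, a, n_points = 1.0, 1.0, 0.05, 20                   # hypothetical values
T_bound = 2 * L_N**2 / delta_m**2 * math.log(2 * n_points / a)   # Eq. (67)
T = math.ceil(T_bound)
print(f"Theorem 5: T(a) <= {T_bound:.1f}")

# Monte Carlo estimate of the probability of the event on the left of (68).
trials, hits = 2000, 0
for _ in range(trials):
    hits += all(abs(sum(random.uniform(-L_N / 2, L_N / 2) for _ in range(T))) / T
                < delta_m / 2 for _ in range(n_points))
print(f"empirical probability of the event in (68): {hits / trials:.3f}  (target >= {1 - a})")
```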
### A Special Case for Unimodal Functions
A function \(f\) over a bounded set \(\mathcal{X}\subset\mathbb{Z}\) is called unimodal if it has only one global minimum \(x^{*}\in\mathcal{X}\) and \(f(i)>f(j)\) for all \(i<j\leq x^{*}\), \(i,j\in\mathcal{X}\), while \(f(i)<f(j)\) for all \(x^{*}\leq i<j\). Assume that the unknown target function \(f\) is unimodal over \(\mathcal{X}\), which implies it has a single global minimum. As mentioned earlier, the time-varying function \(\widehat{f}_{t}\) may not even be unimodal for small values of \(t\) under disruptive noise, and therefore it could have multiple local extrema. However, the single global minimum of the function \(f\) becomes known after the hitting time with an associated confidence level. In this section, a new notion of hitting time is proposed for unimodal functions that captures the complexity of finding the global minimum of the function and does not take the local extrema of the estimated function \(\widehat{f}_{t}\) into account.
Without loss of generality, we additionally assume that the noise signals \(N_{t}(x)\) are continuous random variables. This implies that the estimation function \(\widehat{f}_{t}\) has a single global minimum with probability 1. Let \(\widehat{x}_{t}^{*}=\operatorname*{argmin}_{x\in\mathcal{X}}~{}\widehat{f}_{ t}(x)\) denote the global minimum. The hitting time for a unimodal function \(f\) is defined below.
**Definition 4**.: _Given \(a\in(0,1],\) the hitting time \(T_{u}(a)\) for a unimodal function \(f\) with its global minimum at \(x^{*}=\operatorname*{argmin}_{x\in\mathcal{X}}~{}f(x)\) and its estimated global minimum \(\widehat{x}_{t}^{*}=\operatorname*{argmin}_{x\in\mathcal{X}}~{}\widehat{f}_{ t}(x)\) is defined as_
\[T_{u}(a)=\min\left\{T:\mathbb{P}\big{(}\widehat{x}_{t}^{*}=x^{*}\big{)}\geq 1 -a,~{}\forall t\geq T\right\}. \tag{71}\]
The distance of the function value at point \(x\in\mathcal{X}\) from the minimum function value is denoted by \(\Delta(x)\), which is defined as
\[\Delta(x)=\begin{cases}f(x)-f(x^{*}),&\text{if }x\in\mathcal{X}\setminus\{x^{*} \},\\ \min\{f(x^{*}-1)-f(x^{*}),f(x^{*}+1)-f(x^{*})\},&\text{if }x=x^{*}.\end{cases} \tag{72}\]
The following theorem presents an upper bound on the hitting time for a unimodal function.
**Theorem 6**.: _Consider the time-varying function \(\widehat{f}_{t}\) defined in (63) with \(f\) being a unimodal function. Suppose that Assumptions 3 and 7 hold. Given \(a\in(0,1]\), the associated hitting time \(T_{u}(a)\) satisfies the inequality \(T_{u}(a)\leq T\), where \(T\) is the smallest number such that_
\[\exp\left(-\frac{\delta_{m}^{2}T}{2L_{N}^{2}}\right)+2\sum_{i=1}^{\lceil|\mathcal{X}|/2\rceil}\exp\left(-\frac{i^{2}\delta_{m}^{2}T}{2L_{N}^{2}}\right)\leq a. \tag{73}\]
Proof.: By construction, we have \(\Delta(x)>0\) for all \(x\in\mathcal{X}\). In order to find an upper bound on the hitting time \(T_{u}(a)\), note that the hitting event used in (71) satisfies the condition
\[\left\{\frac{1}{T}\cdot\sum_{t=1}^{T}N_{t}(x)>-\frac{\Delta(x)}{2},\forall x \in\mathcal{X}\setminus\{x^{*}\}\text{ \bf and }\frac{1}{T}\cdot\sum_{t=1}^{T}N_{t}(x^{*})<\frac{\Delta(x^{*})}{2}\right\} \subseteq\Big{\{}\widehat{x}_{T}^{*}=x^{*}\Big{\}}. \tag{74}\]
Denote the event on the left-hand side of (74) as \(E_{t}\), whose probability can be lower-bounded as
\[\begin{split}\mathbb{P}\{E_{t}\}&\stackrel{{(a)}}{{=}}\mathbb{P}\left\{\frac{1}{T}\cdot\sum_{t=1}^{T}N_{t}(x^{*})<\frac{\Delta(x^{*})}{2}\right\}\times\prod_{x\in\mathcal{X}\setminus\{x^{*}\}}\mathbb{P}\left\{\frac{1}{T}\cdot\sum_{t=1}^{T}N_{t}(x)>-\frac{\Delta(x)}{2}\right\}\\ &\stackrel{{(b)}}{{\geq}}\left(1-\exp\left(-\frac{T\Delta(x^{*})^{2}}{2L_{N}^{2}}\right)\right)\times\prod_{x\in\mathcal{X}\setminus\{x^{*}\}}\left(1-\exp\left(-\frac{T\Delta(x)^{2}}{2L_{N}^{2}}\right)\right)\\ &\geq 1-\exp\left(-\frac{\delta_{m}^{2}T}{2L_{N}^{2}}\right)-2\sum_{i=1}^{\lceil|\mathcal{X}|/2\rceil}\exp\left(-\frac{i^{2}\delta_{m}^{2}T}{2L_{N}^{2}}\right),\end{split} \tag{75}\]
where \((a)\) holds because the added noise signals are independent of each other, \((b)\) follows from Hoeffding's inequality, and the last step uses \(\Delta(x^{*})\geq\delta_{m}\) together with the fact that, by unimodality, the \(i\)-th closest point to \(x^{*}\) on either side satisfies \(\Delta(x)\geq i\,\delta_{m}\), so the summands can be grouped and bounded by the stated sum. Requiring the right-hand side of (75) to be at least \(1-a\) yields exactly the condition (73), and therefore \(T_{u}(a)\leq T\) for the smallest such \(T\). This completes the proof.
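The smallest \(T\) satisfying (73) can be found by a direct scan, as in the following sketch (the values of \(\delta_{m}\), \(L_{N}\), \(|\mathcal{X}|\) and \(a\) are hypothetical). For these particular numbers the resulting \(T\) is noticeably smaller than the Theorem 5 bound \(\frac{2L_{N}^{2}}{\delta_{m}^{2}}\ln(2|\mathcal{X}|/a)\approx 16.6\), illustrating the gain from unimodality.

```python
import math

delta_m, L_N, card_X, a = 1.0, 1.0, 101, 0.05     # hypothetical values

def lhs(T):                                        # left-hand side of (73)
    tail = sum(math.exp(-i**2 * delta_m**2 * T / (2 * L_N**2))
               for i in range(1, math.ceil(card_X / 2) + 1))
    return math.exp(-delta_m**2 * T / (2 * L_N**2)) + 2 * tail

T = 1
while lhs(T) > a:
    T += 1
print("Theorem 6 bound on T_u(a):", T)            # prints 9 for these values
```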
### A Time-Varying Linear Model with Additive Noise

We further note that, for any scalar \(\lambda>0\), the functions \(f\) and \(\lambda f\) share the same set of local minima, so rescaling by a positive number does not affect the complexity of the optimization problem. Hence, restricting the linear operators \(\mathcal{T}\) to have norm \(1\) incurs no loss of generality.
In practice, the functions to be minimized are often not specified exactly, due to the rounding error of numerical computation or the inexact nature of the model. We model this limitation by the random perturbation \(w\) sampled from some distribution. Given a sequence of linear operators \(\{\mathcal{A}_{t}\}\) such that \(\|\mathcal{A}_{t}\|=\sup_{f\neq 0}\frac{\|\mathcal{A}_{t}f\|}{\|f\|}=1\) together with the perturbations \(\{w_{t}\}\), consider the following model of linear time variation:
\[f_{t+1}=\mathcal{T}_{t}f_{t}=\mathcal{A}_{t}f_{t}+w_{t},\quad\text{for }t\in\{0,1,\dots\}. \tag{76}\]
_What properties should the operators \(\{\mathcal{T}_{t}\}\) satisfy in order for \(f_{t}\) to (almost) reach a target function \(f^{*}\) at time \(t=T\)?_ We will provide an answer using the notion of a shape dominant operator. To understand the importance of this problem, suppose that at time \(t=0\), we optimize \(f_{0}\) around a poor local minimum \(x_{0}^{*}\). If at \(t=T\), the function \(f_{T}\) becomes convex with a unique global minimum \(x_{T}^{*}\), then no matter how optimization is carried out for \(f_{1}\) through \(f_{T-1}\), minimizing \(f_{T}\) will yield the same solution \(x_{T}^{*}\), which is globally optimal. The effect of minimizing \(f_{T}\) cancels out the sub-optimality at time \(t=0\). Moreover, under some technical conditions, the global solution at time \(T\) can be used to find global solutions at future times using tracking methods [30, 31, 32]. In other words, the shape of \(f_{T}\) affects the complexity of online optimization in the long run.
Now, we introduce the notion of shape dominant operator. Consider time-varying functions \(\{f_{t}\}\) defined on a finite discrete set \(\mathcal{X}=\{x_{1},\dots,x_{n}\}\subset\mathbb{Z}^{d}\). Equivalently, \(f_{t}\) can be viewed as a vector in \(\mathbb{R}^{n}\). For the noisy linear operator \(\mathcal{T}_{t}\) defined in (76), let \(A_{t}\) denote the associated matrix of the linear operator \(\mathcal{A}_{t}\) represented under the standard basis, for \(t\in\{1,2,\dots\}\). Let \(P(A_{t},w_{t})\) denote the joint distribution of \(A_{t}\) and \(w_{t}\).
**Definition 5**.: _The joint distribution \(P(A,w)\) is said to be \((\delta,\sigma,f^{*},\phi^{*})\) shape dominant if the following conditions hold with probability \(1\): 1) the unit vector \(f^{*}\) is the eigenvector of \(A\) associated with eigenvalue \(1\); 2) the unit vector \(\phi^{*}\) is the eigenvector of \(A^{\top}\) associated with eigenvalue \(1\); 3) \(\langle f^{*},\phi^{*}\rangle\neq 0\); 4) all other eigenvalues of \(A\) have absolute values less than \(1-\delta\); 5) conditioned on \(A\), the noise \(w\) has zero mean and is sub-Gaussian with parameter \(\sigma^{2}\) in the sense that for all \(u\in\mathbb{R}^{n}\) with \(\|u\|\leq 1\), it holds that \(\mathbb{E}[\exp(su^{\top}w)]\leq\exp\left(\frac{\sigma^{2}s^{2}}{2}\right)\)._
**Theorem 7**.: _For the time-varying operator \(\mathcal{T}_{t}\) defined in (76), suppose that \(P(A_{t},w_{t})\) is \((\delta,\sigma_{t},f^{*},\phi^{*})\) shape dominant and independent for all \(t\in\{0,1,\dots,T-1\}\), then,_
\[f_{T}=\frac{\langle\phi^{*},f_{0}+\sum_{t=0}^{T-1}w_{t}\rangle}{\langle\phi^{ *},f^{*}\rangle}f^{*}+v+w, \tag{77}\]
_where \(\|v\|\leq(1-\delta)^{T}\left(\|f_{0}\|+\left|\frac{\langle\phi^{*},f_{0}\rangle}{\langle\phi^{*},f^{*}\rangle}\right|\right)\) and \(w\) is sub-Gaussian with parameter \(\sigma^{2}=\left(1+\frac{1}{\langle\phi^{*},f^{*}\rangle^{2}}\right)\sum_{t=0}^{T-1}(1-\delta)^{2(T-t)}\sigma_{t}^{2}\)._
Proof.: Consider the subspace \(\mathcal{G}=\{g\in\mathbb{R}^{n},\langle\phi^{*},g\rangle=0\}\). Since \(\langle\phi^{*},f^{*}\rangle\neq 0\), we have \(f^{*}\notin\mathcal{G}\). Since \(\phi^{*}\) is the eigenvector of \(A_{t}^{\top}\), the following holds for all \(g\in\mathcal{G}\)
\[\langle\phi^{*},A_{t}g\rangle=\langle{A_{t}}^{\top}\phi^{*},g\rangle=\langle \phi^{*},g\rangle=0. \tag{78}\]
Therefore, \(A_{t}g\in\mathcal{G}\), and \(\mathcal{G}\) is an invariant subspace of \(A_{t}\) in \(\mathbb{R}^{n}\) for \(t\in\{0,1,\dots,T-1\}\). Let a basis of \(\mathcal{G}\) be given by \(\{g_{1},\dots,g_{n-1}\}\). Then, \(B=\{f^{*},g_{1},\dots,g_{n-1}\}\) is a basis of \(\mathbb{R}^{n}\), under which the linear operator \(A_{t}\) takes the form
\[A_{t}=\begin{bmatrix}1&0&\dots&0\\ 0&&&\\ \vdots&&A_{t}^{\prime}&\\ 0&&&\end{bmatrix}, \tag{79}\]
where \(A^{\prime}_{t}\) is a random matrix in \(\mathbb{R}^{(n-1)\times(n-1)}\). With a slight abuse of notation, we regard \(A^{\prime}_{t}\) as a linear transformation from \(\mathcal{G}\) to \(\mathcal{G}\). Note that \(\|A^{\prime}_{t}\|\leq 1-\delta\) because all other eigenvalues of \(A_{t}\) have norm less than \(1-\delta\). Under the basis \(B\), \(f_{0}\) has the representation \(f_{0}=\frac{\langle\phi^{*},f_{0}\rangle}{\langle\phi^{*},f^{*}\rangle}f^{*}+g\), where \(g\in\mathcal{G}\). As a result,
\[\begin{split} f_{T}&=\mathcal{T}_{T-1}\circ\cdots \circ\mathcal{T}_{0}f_{0}\\ &=A_{T-1}\cdots A_{0}f_{0}+\sum_{t=0}^{T-1}A_{T-1}\cdots A_{t+1} w_{t}\\ &=\frac{\langle\phi^{*},f_{0}\rangle}{\langle\phi^{*},f^{*} \rangle}f^{*}+A^{\prime}_{T-1}\ldots A^{\prime}_{1}g+\sum_{t=0}^{T-1}A_{T-1} \cdots A_{t+1}w_{t}.\end{split} \tag{80}\]
The norm estimate gives rise to
\[\left\|A^{\prime}_{T-1}\ldots A^{\prime}_{1}g\right\|\leq(1-\delta)^{T}\cdot \left\|g\right\|\leq(1-\delta)^{T}\cdot\left(\left\|f_{0}\right\|+\left|\frac{ \langle\phi^{*},f_{0}\rangle}{\langle\phi^{*},f^{*}\rangle}\right|\right), \tag{81}\]
where the triangle inequality is used. Similarly, one can write \(w_{t}=\frac{\langle\phi^{*},w_{t}\rangle}{\langle\phi^{*},f^{*}\rangle}f^{*} +h_{t}\), where \(h_{t}\in\mathcal{G}\). We have
\[A_{T-1}\cdots A_{t+1}w_{t}=\frac{\langle\phi^{*},w_{t}\rangle}{\langle\phi^{* },f^{*}\rangle}f^{*}+A^{\prime}_{T-1}\cdots A^{\prime}_{t+1}h_{t}. \tag{82}\]
For all \(u\in\mathbb{R}^{n}\) with \(\|u\|\leq 1\), it holds that
\[\begin{split}&\quad\mathbb{E}\left[\exp\left(s\left\langle u,A^{ \prime}_{T-1}\cdots A^{\prime}_{t+1}h_{t}\right\rangle\right)\right]\\ &=\mathbb{E}\left[\exp\left(s\left\langle A^{\prime\top}_{t+1} \cdots A^{\prime\top}_{T-1}u,h_{t}\right\rangle\right)\right]\\ &=\mathbb{E}\left[\exp\left(s\left\langle A^{\prime\top}_{t+1} \cdots A^{\prime\top}_{T-1}u,w_{t}-\frac{\langle\phi^{*},w_{t}\rangle}{ \langle\phi^{*},f^{*}\rangle}f^{*}\right\rangle\right)\right]\\ &=\mathbb{E}\Bigg{[}\exp\left(s\left\langle A^{\prime\top}_{t+1} \cdots A^{\prime\top}_{T-1}u,w_{t}\right\rangle\right)\times\exp\left(s\left \langle-\frac{\langle A^{\prime\top}_{t+1}\cdots A^{\prime\top}_{T-1}u,f^{*} \rangle}{\langle\phi^{*},f^{*}\rangle}\phi^{*},w_{t}\right\rangle\right) \Bigg{]}\\ &\leq\exp\left(\frac{\sigma_{t}^{2}s^{2}\big{\|}A^{\prime\top}_{t+ 1}\cdots A^{\prime\top}_{T-1}u\big{\|}^{2}}{2}\right)\times\exp\left(\frac{ \sigma_{t}^{2}s^{2}}{2}\left(\frac{\langle A^{\prime\top}_{t+1}\cdots A^{ \prime\top}_{T-1}u,f^{*}\rangle}{\langle\phi^{*},f^{*}\rangle}\right)^{2} \right)\\ &\leq\exp\left(\frac{\sigma_{t}^{2}s^{2}(1-\delta)^{2(T-t)}\left( 1+\frac{1}{\langle\phi^{*},f^{*}\rangle^{2}}\right)}{2}\right),\end{split} \tag{83}\]
which implies that \(A^{\prime}_{T-1}\cdots A^{\prime}_{t+1}h_{t}\) is sub-Gaussian with parameter \(\sigma_{t}^{2}(1-\delta)^{2(T-t)}\left(1+\frac{1}{\langle\phi^{*},f^{*}\rangle ^{2}}\right)\), and thereby, \(\sum_{t=0}^{T-1}A^{\prime}_{T-1}\cdots A^{\prime}_{t+1}h_{t}\) is sub-Gaussian with parameter \(\sigma^{2}=\left(1+\frac{1}{\langle\phi^{*},f^{*}\rangle^{2}}\right)\sum_{t =0}^{T-1}(1-\delta)^{2(T-t)}\sigma_{t}^{2}\). This completes the proof.
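To see the decomposition of Theorem 7 at work, the following sketch builds one fixed shape dominant matrix (a simple construction chosen for this illustration, not the paper's), iterates the noisy linear model (76), and reports how much of \(f_{T}\) lies outside the span of \(f^{*}\); the dimensions, \(\delta\), and the Gaussian noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, delta, sigma, T = 30, 0.2, 0.01, 60

# f* and phi* are the right/left eigenvectors of A for eigenvalue 1; every other
# eigenvalue equals 0.9 * (1 - delta) < 1 - delta, so Definition 5 is satisfied.
f_star = rng.normal(size=n); f_star /= np.linalg.norm(f_star)
g = rng.normal(size=n); g -= (g @ f_star) * f_star; g /= np.linalg.norm(g)
phi_star = f_star + 0.5 * g; phi_star /= np.linalg.norm(phi_star)

B = np.outer(f_star, phi_star) / (phi_star @ f_star)   # rank-one part carrying eigenvalue 1
P = np.eye(n) - B                                       # oblique projector onto the subspace G
A = B + 0.9 * (1 - delta) * P

f = rng.normal(size=n)                                  # arbitrary initial function f_0
for _ in range(T):                                      # Eq. (76): f_{t+1} = A f_t + w_t
    f = A @ f + rng.normal(scale=sigma, size=n)

coef = f @ f_star                                       # component of f_T along f*
residual = np.linalg.norm(f - coef * f_star)            # bias + accumulated noise of Theorem 7
print(f"component along f*: {coef:.3f}    orthogonal residual: {residual:.3f}")
```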
Theorem 7 states that if the time-varying model is given by shape dominant operators, the function \(f_{T}\) decomposes into the sum of the dominating shape \(f^{*}\) (up to scaling), a bias term \(v\) that decays geometrically, and an accumulated noise term in which the noise from earlier iterations is discounted. We provide a bound on the hitting time below.
**Theorem 8**.: _Under the same assumptions made in Theorem 7, for a given \(\epsilon>0\), define the associated hitting time \(T(\epsilon)\) as_
\[T(\epsilon)=\min\big{\{}T:\exists\lambda\in\mathbb{R}\text{ s.t. }\|f_{T}-\lambda f^{*}\|<\epsilon\big{\}}. \tag{84}\]
_Then, for all \(T>\frac{\log\left(2\left(\left\|f_{0}\right\|+\left|\frac{\langle\phi^{*},f_{0}\rangle}{\langle\phi^{*},f^{*}\rangle}\right|\right)\right)-\log\epsilon}{\log\frac{1}{1-\delta}}\), it holds that_
\[\mathbb{P}(T(\epsilon)\geq T)\leq C_{n}\exp\left(-\frac{\epsilon^{2}}{32\left(1+ \frac{1}{\langle\phi^{*},f^{*}\rangle^{2}}\right)\sum_{t=0}^{T-1}(1-\delta)^{2( T-t)}\sigma_{t}^{2}}\right), \tag{85}\]
_where \(C_{n}\) is a universal constant depending only on \(n\)._
Proof.: By Theorem 7, for a fixed number \(T\), we have the following decomposition for \(f_{T}\):
\[f_{T}=\frac{\left\langle\phi^{*},f_{0}+\sum_{t=0}^{T-1}w_{t}\right\rangle}{ \left\langle\phi^{*},f^{*}\right\rangle}f^{*}+v^{(T)}+w^{(T)}, \tag{86}\]
where \(\left\|v^{(T)}\right\|<(1-\delta)^{T}\left(\left\|f_{0}\right\|+\left|\frac{ \left\langle\phi^{*},f_{0}\right\rangle}{\left\langle\phi^{*},f^{*}\right\rangle }\right|\right)\) and \(w^{(T)}=\sum_{t=0}^{T-1}A^{\prime}_{T-1}\cdots A^{\prime}_{t+1}h_{t}\) is sub-Gaussian with parameter \(\sigma^{2}=\left(1+\frac{1}{\left\langle\phi^{*},f^{*}\right\rangle^{2}} \right)\sum_{t=0}^{T-1}(1-\delta)^{2(T-t)}\sigma_{t}^{2}\). From the definition of the hitting time \(T(\epsilon)\) in (84), we have
\[\mathbb{P}(T(\epsilon)<T)\geq\mathbb{P}\left(\left\|v^{(T)}\right\|<\epsilon/ 2,\left\|w^{(T)}\right\|<\epsilon/2\right). \tag{87}\]
When \(T>\frac{\log\left(2\left(\left\|f_{0}\right\|+\left|\frac{\left\langle\phi^{*},f_{0}\right\rangle}{\left\langle\phi^{*},f^{*}\right\rangle}\right|\right)\right)-\log\epsilon}{\log\frac{1}{1-\delta}}\), the bound \(\left\|v^{(T)}\right\|<\epsilon/2\) is satisfied. Since \(w^{(T)}\) is sub-Gaussian with parameter \(\sigma^{2}\), the tail bound for \(w^{(T)}\) yields
\[\mathbb{P}\left(\left\|w^{(T)}\right\|<\epsilon/2\right)=1-\mathbb{P}\left( \left\|w^{(T)}\right\|>\epsilon/2\right)\geq 1-C_{n}\exp\left(-\frac{\epsilon^{2} }{32\sigma^{2}}\right), \tag{88}\]
where \(C_{n}\) is a universal constant depending only on \(n\). This completes the proof.
To understand the above bound, consider a fixed time \(T\). When \(\sigma_{t}\) decreases, the bound becomes smaller. As a result, with a smaller random perturbation, it is more likely to reach the target function faster. When \(\epsilon\) increases, the bound also becomes smaller, which matches the intuition that a larger neighborhood is easier to reach than a smaller one.
**Remark 6**.: _The analysis in this section can be generalized to continuous functions by working with eigenfunctions as opposed to eigenvectors. We briefly discuss this in the special case where \(L^{2}(\mathcal{X})\) is spanned by a finite number of basis functions. Let the inner product be \(\left\langle f,g\right\rangle=\int_{\mathcal{X}}f(x)\cdot g(x)dx\) and let the function space have an orthonormal basis given by the set of functions \(\{u_{1},u_{2},\ldots,u_{n}\}\) such that_
\[\left\langle u_{i},u_{j}\right\rangle=\int_{\mathcal{X}}u_{i}(x)\cdot u_{j}(x) dx=\begin{cases}1&\text{if }i=j\\ 0&\text{if }i\neq j\end{cases}. \tag{89}\]
_Note that any function can be decomposed into a linear combination of the basis functions, i.e., \(f(x)=\sum_{j=1}^{n}a_{j}\cdot u_{j}(x)\), where the coefficients can be stacked into a column vector \(a=[a_{1},a_{2},\ldots,a_{n}]^{T}\). Define the matrix \(A\) representing the linear operator \(\mathcal{T}\) with the elements_
\[A_{ij}=\left\langle u_{i},\mathcal{T}(u_{j})\right\rangle=\int_{\mathcal{X}}u _{i}(x)\cdot\mathcal{T}\big{(}u_{j}(x)\big{)}dx. \tag{90}\]
_There exists a vector \(b=[b_{1},b_{2},\ldots,b_{n}]^{T}\) such that applying the operator \(\mathcal{T}\) on the decomposed form of \(f(x)\) yields_
\[\mathcal{T}\big{(}f(x)\big{)}=\sum_{j=1}^{n}a_{j}\cdot\mathcal{T}\big{(}u_{j}( x)\big{)}=\sum_{j=1}^{n}b_{j}\cdot u_{j}(x). \tag{91}\]
_Taking the inner product of both sides of the above equation with an arbitrary basis function \(u_{i}\) leads to_
\[\sum_{j=1}^{n}a_{j}\!\cdot\!\left\langle u_{i},\mathcal{T}\big{(}u_{j}\big{)} \right\rangle=\sum_{j=1}^{n}b_{j}\!\cdot\!\left\langle u_{i},u_{j}\right\rangle \Rightarrow\sum_{j=1}^{n}a_{j}\!\cdot\!A_{ij}=b_{i}. \tag{92}\]
_The above equation is the matrix multiplication \(Aa=b\), which is the matrix associated with \(\mathcal{T}\) acting upon the function \(f(x)\) expressed in the orthonormal basis. If \(f(x)\) is an eigenfunction of transformation \(\mathcal{T}\) with eigenvalue \(\lambda\), we have \(Aa=\lambda a\). Hence, the results of Theorem 8 can be applied to continuous functions in a function space with a finite number of bases. The extension to the case with an infinite, but countable, number of bases is similar under some technical assumptions._
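A small numerical sketch of this construction is given below; the interval, the three-function trigonometric basis, and the shift operator \(\mathcal{T}(f)(x)=f(x-s)\) are choices made only for this example.

```python
import numpy as np

s = 0.7                                             # shift used by the example operator
dx = 2 * np.pi / 4000
xs = np.arange(0.0, 2 * np.pi, dx)                  # periodic grid on [0, 2*pi)
basis = [np.full_like(xs, 1 / np.sqrt(2 * np.pi)),  # orthonormal trigonometric basis
         np.cos(xs) / np.sqrt(np.pi),
         np.sin(xs) / np.sqrt(np.pi)]

def inner(u, v):                                    # <u, v> = int u(x) v(x) dx (rectangle rule)
    return float(np.sum(u * v) * dx)

def T_op(u):                                        # samples of T(u)(x) = u(x - s), periodic
    return np.interp((xs - s) % (2 * np.pi), xs, u)

A = np.array([[inner(ui, T_op(uj)) for uj in basis] for ui in basis])   # Eq. (90)

a_vec = np.array([0.0, 1.0, 0.0])                   # f = cos(x)/sqrt(pi) in this basis
b_vec = A @ a_vec                                   # coefficients of T(f), Eq. (92)
print("T(cos) coefficients:", np.round(b_vec, 3),
      "  expected ~", np.round([0.0, np.cos(s), np.sin(s)], 3))
```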
## 4 Simulation Results
In this section, the adversarial attack on the computation of value iteration is simulated for an agent interacting with an environment depicted in Figure 3.
The agent can take any of the four actions Up, Down, Right, and Left in each of the non-terminal states. By taking an action, the agent moves one block in the intended direction 90% of the time, and moves one block to the right or left of the intended direction, uniformly at random, 10% of the time. The agent bounces back to its original state if the resulting movement is not possible, either because of the walls marked with diagonal stripes or because it would exit the environment. The agent incurs a cost of 0.02 for each move, and there are two terminal states in which the agent receives an immediate reward of +1 and -1, as shown in Figure 3. In order to determine the optimal path for the agent starting from any of the states, the value function is calculated using synchronous value iteration. In our simulated example, an adversary contaminates the value function 20% of the time by withholding the contraction and instead expanding the value function by a factor of up to \(Q=1.8\) in a random direction. As a result, the distance of the time-varying value function from the true value function, measured in the \(L^{2}\)-norm, is affected negatively, as depicted in Figure 4(a); the starting function is the all-zero function in our simulations, and the average and standard deviations are estimated from 1000 independent runs of the value iteration. Furthermore, the negative effect of the adversary worsens as the cardinality of the state space in the studied example increases. To show this, the number of intermediate blocks in Figure 3 is changed from 1 to 10, i.e., the number of states is changed from 9 to 27, and the distance between the value function at the tenth iterate and the true value function is depicted in Figure 4(b). As shown in Figure 4(b), \(\mathbb{E}\big{[}d(V_{10}^{a},V^{*})\big{]}-d(V_{10},V^{*})\) has an increasing trend as the number of states increases, where \(V_{10}^{a}\) is the value function at the tenth iterate in the presence of an adversary and \(V_{10}\) is the corresponding function in the absence of an adversary; the dependence of the value function on the number of states is suppressed in the notation for simplicity.

Figure 3: (a) the agent interacts with an environment, (b) the agent has a set of four actions in each state.

Figure 4: The effect of an adversary on the convergence of value iteration.
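The simulation protocol can be sketched as follows; the 3\(\times\)3 layout (the walls of Figure 3 are not reproduced), the discount factor, the terminal positions, and the way the adversary "expands in a random direction" (pushing the iterate away from \(V^{*}\) by a random factor up to \(Q\)) are assumptions of this sketch rather than the exact setup behind Figure 4.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal grid world with the stated dynamics: 90% intended move, 5% to each perpendicular
# direction, bounce-back at the border, a cost of 0.02 per move, and terminal rewards +1 / -1.
H, W, gamma, cost, Q = 3, 3, 0.95, 0.02, 1.8
terminals = {(0, 2): 1.0, (1, 2): -1.0}          # assumed terminal positions
moves = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}
perp = {"U": ("L", "R"), "D": ("L", "R"), "L": ("U", "D"), "R": ("U", "D")}
states = [(r, c) for r in range(H) for c in range(W)]

def step(s, a):
    r, c = s[0] + moves[a][0], s[1] + moves[a][1]
    return (r, c) if 0 <= r < H and 0 <= c < W else s        # bounce back at the border

def bellman(V):                                              # synchronous value iteration sweep
    out = {}
    for s in states:
        if s in terminals:
            out[s] = terminals[s]
            continue
        best = -float("inf")
        for a in moves:
            succ = [(0.9, step(s, a))] + [(0.05, step(s, b)) for b in perp[a]]
            best = max(best, sum(p * (-cost + gamma * V[sp]) for p, sp in succ))
        out[s] = best
    return out

V_star = {s: 0.0 for s in states}
for _ in range(200):                                         # clean value iteration -> V*
    V_star = bellman(V_star)

V = {s: 0.0 for s in states}
for k in range(1, 11):                                       # ten iterations under attack
    V = bellman(V)
    if rng.random() < 0.2:                                   # adversary withholds the contraction
        u = rng.normal(size=len(states)); u /= np.linalg.norm(u)
        dist = np.linalg.norm([V[s] - V_star[s] for s in states])
        V = {s: V_star[s] + rng.uniform(1.0, Q) * dist * u[i] for i, s in enumerate(states)}
    print(f"iteration {k}: d(V, V*) = "
          f"{np.linalg.norm([V[s] - V_star[s] for s in states]):.3f}")
```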
## 5 Conclusion and Future work
Multiple models of stochastic time variation, along with their corresponding notions of hitting time, are studied in this paper. In particular, we develop a probabilistic Banach fixed-point theorem that establishes, with an associated confidence level, the convergence of the value iteration method under a probabilistic contraction-expansion transformation; this finds applications to adversarial attacks on the computation of value iteration. We prove that the hitting time of the value function in the value iteration method with a probabilistic contraction-expansion transformation is logarithmic in terms of the inverse of a desired precision. Furthermore, we develop upper bounds on the hitting time for the optimization of unknown discrete and continuous time-varying functions whose noisy evaluations are revealed over time. The upper bound for a discrete function is logarithmic in terms of the cardinality of the function domain, and the upper bound for a continuous function is super-quadratic (but sub-cubic) in terms of the inverse of a desired precision. In this framework, we show that convex functions are learned faster than non-convex functions. Finally, an upper bound on the hitting time is developed for a time-varying linear model with additive noise under the notion of shape dominance for discrete functions. Future research directions include: studying how an environment with time-varying parameters modeled by transition probabilities and rewards affects the Bellman transformation and its fixed point, obtaining upper bounds on the rate of change of the time-varying parameters such that the time-varying fixed points are achievable after a hitting time, and studying the effect of an adversary in applications of reinforcement learning whose computations are performed via edge computing.
|
2307.14636 | Final results of Borexino on CNO solar neutrinos | We report the first measurement of CNO solar neutrinos by Borexino that uses
the Correlated Integrated Directionality (CID) method, exploiting the
sub-dominant Cherenkov light in the liquid scintillator detector. The
directional information of the solar origin of the neutrinos is preserved by
the fast Cherenkov photons from the neutrino scattered electrons, and is used
to discriminate between signal and background. The directional information is
independent from the spectral information on which the previous CNO solar
neutrino measurements by Borexino were based. While the CNO spectral analysis
could only be applied on the Phase-III dataset, the directional analysis can
use the complete Borexino data taking period from 2007 to 2021. The absence of
CNO neutrinos has been rejected with >5{\sigma} credible level using the
Bayesian statistics. The directional CNO measurement is obtained without an
external constraint on the $^{210}$Bi contamination of the liquid scintillator,
which was applied in the spectral analysis approach. The final and the most
precise CNO measurement of Borexino is then obtained by combining the new
CID-based CNO result with an improved spectral fit of the Phase-III dataset.
Including the statistical and the systematic errors, the extracted CNO
interaction rate is $R(\mathrm{CNO})=6.7^{+1.2}_{-0.8} \, \mathrm{cpd/100 \,
tonnes}$. Taking into account the neutrino flavor conversion, the resulting CNO
neutrino flux at Earth is $\Phi_\mathrm{CNO}=6.7 ^{+1.2}_{-0.8} \times 10^8 \,
\mathrm{cm^{-2} s^{-1}}$, in agreement with the high metallicity Standard Solar
Models. The results described in this work reinforce the role of the event
directional information in large-scale liquid scintillator detectors and open
up new avenues for the next-generation liquid scintillator or hybrid neutrino
experiments. | D. Basilico, G. Bellini, J. Benziger, R. Biondi, B. Caccianiga, F. Calaprice, A. Caminata, A. Chepurnov, D. D'Angelo, A. Derbin, A. Di Giacinto, V. Di Marcello, X. F. Ding, A. Di Ludovico, L. Di Noto, I. Drachnev, D. Franco, C. Galbiati, C. Ghiano, M. Giammarchi, A. Goretti, M. Gromov, D. Guffanti, Aldo Ianni, Andrea Ianni, A. Jany, V. Kobychev, G. Korga, S. Kumaran, M. Laubenstein, E. Litvinovich, P. Lombardi, I. Lomskaya, L. Ludhova, I. Machulin, J. Martyn, E. Meroni, L. Miramonti, M. Misiaszek, V. Muratova, R. Nugmanov, L. Oberauer, V. Orekhov, F. Ortica, M. Pallavicini, L. Pelicci, Ã. Penek, L. Pietrofaccia, N. Pilipenko, A. Pocar, G. Raikov, M. T. Ranalli, G. Ranucci, A. Razeto, A. Re, N. Rossi, S. Schönert, D. Semenov, G. Settanta, M. Skorokhvatov, A. Singhal, O. Smirnov, A. Sotnikov, R. Tartaglia, G. Testera, E. Unzhakov, F. L. Villante, A. Vishneva, R. B. Vogelaar, F. von Feilitzsch, M. Wojcik, M. Wurm, S. Zavatarelli, K. Zuber, G. Zuzel | 2023-07-27T05:56:32Z | http://arxiv.org/abs/2307.14636v1 | # Final results of Borexino on CNO solar neutrinos
###### Abstract
In this paper, we report the first measurement of CNO solar neutrinos by Borexino that uses the Correlated Integrated Directionality (CID) method, exploiting the sub-dominant Cherenkov light in the liquid scintillator detector. The directional information of the solar origin of the neutrinos is preserved by the fast Cherenkov photons from the neutrino scattered electrons, and is used to discriminate between signal and background. The directional information is independent from the spectral information on which the previous CNO solar neutrino measurements by Borexino were based. While the CNO spectral analysis could only be applied on the Phase-III dataset, the directional analysis can use the complete Borexino data taking period from 2007 to 2021. The absence of CNO neutrinos has been rejected with \(>\)5\(\sigma\) credible level using the Bayesian statistics. The directional CNO measurement is obtained without an external constraint on the \({}^{210}\)Bi contamination of the liquid scintillator, which was applied in the spectral analysis approach. The final and the most precise CNO measurement of Borexino is then obtained by combining the new CID-based CNO result with an improved spectral fit of the Phase-III dataset. Including the statistical and the systematic errors, the extracted CNO interaction rate is \(R(\rm{CNO})=6.7^{+1.2}_{-0.8}\,\rm{pcd}/100\,\rm{tonnes}\). Taking into account the neutrino flavor conversion, the resulting CNO neutrino flux at Earth is \(\rm{\Phi_{CNO}}=6.7^{+1.2}_{-0.8}\times 10^{8}\,\rm{cm^{-2}s^{-1}}\), which is found to be in agreement with the high metallicity Standard Solar Models. The results described in this work reinforce the role of the event directional information in large-scale liquid scintillator detectors and open up new avenues for the next-generation liquid scintillator or hybrid neutrino experiments. A particular relevance is expected for the latter detectors, which aim to combine the advantages from both Cherenkov-based and scintillation-based detection techniques.
###### Contents
* 1 Introduction
* 2 The Borexino experiment
* 3 Correlated and Integrated Directionality for CNO
* 3.1 CID strategy for the CNO measurement with the full dataset
* 3.2 N\({}^{\text{th}}\)-hit analysis approach
* 3.3 CID fit procedure
* 3.3.1 Fit in the RoI\({}_{\text{gvc}}\)
* 3.3.2 Fit in the RoI\({}_{\text{CNO}}\)
* 3.4 Systematic uncertainties
* 3.5 Results of the CID analysis
* 3.5.1 Effective gv\({}_{\text{ch}}\) calibration on the \({}^{7}\)Be edge
* 3.5.2 CNO measurement with CID
* 4 Combined CID and multivariate analysis
* 4.1 Results
* 5 Conclusions
## 1 Introduction
Solar neutrinos are produced in the core of the Sun by nuclear reactions in which hydrogen is transformed into helium. The dominant sequence of reactions is the so-called \(pp\) chain [1; 2] which is responsible for most of the solar luminosity, while approximately 1 % of the solar energy is produced by the so-called Carbon-Nitrogen-Oxygen (CNO) cycle. Even though the CNO cycle plays only a marginal role in the solar fusion mechanisms, it is expected to take over the luminosity budget for main sequence stars more massive, older, and hotter than the Sun [3]. Solar neutrinos have proven to be a powerful tool to study the solar core [4; 5; 6; 7] and, at the same time, have been of paramount importance in shedding light on the neutrino oscillation phenomenon [8; 9; 10; 11; 12; 13].
One important open question concerning solar physics regards the metallicity of the Sun, that is, the abundance of elements with \(Z>2\). In fact, different analyses of spectroscopic data yield significantly different metallicity results, that can be grouped in two classes: the so-called High-Metallicity (HZ) [14; 15] and Low-Metallicity (LZ) [16; 17; 18] models. The solar neutrino fluxes, in particular that from the CNO cycle reactions, can address this issue. Indeed, the SSM predictions of the CNO neutrino flux depend on the solar metallicity directly, via the abundances of C and N in the solar core, and indirectly, via its effect on the solar opacity and temperature profile.
Borexino delivered the first direct experimental proof of the existence of the CNO cycle in the Sun with a significance of \(\sim\)7 \(\sigma\), also providing a slight preference towards High-Metallicity models [4; 19]. This result was obtained with a multivariate analysis of the energy and radial distributions of selected events. To disentangle the CNO signal from the background, the multivariate fit requires an independent external constraint on the \(pep\) neutrino rate and on the \({}^{210}\)Bi rate; the latter is obtained by tagging \({}^{210}\)Bi-\({}^{210}\)Po coincidences in a temperature stabilized, layered scintillator fluid (see [4; 19] for more details). For this reason, the CNO measurement has been performed only on approximately one third of the Borexino data, the so-called Phase-III.
In this paper, we present new results on CNO neutrinos obtained exploiting the "Correlated and Integrated Directionality" (CID) technique, which uses the directional information encoded in the Cherenkov light emitted alongside the scintillation, to separate the solar signal from non-solar backgrounds. Borexino demonstrated the viability of this technique using \({}^{7}\)Be solar neutrinos [20; 21]. Here we apply the CID technique to the CNO analysis, obtaining two important results: we show that we can extract the evidence of solar CNO neutrinos on the entire Borexino dataset following an alternative approach with respect to the standard multivariate analysis and, consequently, without the help of the \({}^{210}\)Bi constraint; we also show that by combining the information coming from the directionality with the standard multivariate analysis performed on Phase-III data we obtain an improved measurement of the CNO neutrino interaction rate.
The paper is structured as follows. Section 2 describes the Borexino detector and summarizes the event reconstruction techniques. The CID analysis for the CNO neutrino measurement is illustrated in Sec. 3, outlining the methods, reporting the results, and detailing the main sources of systematic uncertainties. Finally, in Sec. 4 we show our best result on CNO neutrinos obtained combining the CID and the standard multivariate analysis.
## 2 The Borexino experiment
Borexino was a liquid scintillator (LS) neutrino detector [22] that ran until October 2021 with unprecedented radiopurity levels [23; 5], a necessary feature of its solar neutrino measurements. The detector was located deep underground at the Laboratori Nazionali del Gran Sasso (LNGS) in Italy, with about 3800 m water equivalent rock shielding suppressing the cosmic muon flux by a factor of \(\sim\)10\({}^{6}\).
The detector layout is schematically shown in Fig. 1. The Stainless Steel Sphere (SSS) with a 6.85 m radius supported 2212 8-inch photomultiplier tubes (PMTs) and contained 280 tonnes of pseudocumene (1,2,4-trimethylbenzene, PC) doped with 1.5% of PPO (2,5-diphenyloxazole) wavelength shifter, confined in a nylon Inner Vessel (IV) of 4.25 m radius. The density of the scintillator was (0.878 \(\pm\) 0.004) g cm\({}^{-3}\) with an electron density of (3.307 \(\pm\) 0.015) \(\times\) 10\({}^{31}\)\(e^{-}\)/100 tonnes. The PC-based buffer liquid in the region between the SSS and the IV shielded the LS from external \(\gamma\) radiation and neutrons. The nylon Outer Vessel, which separated the buffer into two sub-volumes, prevented the inward diffusion of \({}^{222}\)Rn. The SSS itself was submerged in a domed, cylindrical tank filled with \(\sim\)1 kton of ultra-pure water, equipped with 208 PMTs. The water tank provided shielding against external backgrounds and also served as an active Cherenkov veto for residual cosmic muons passing through the detector.
Borexino detected solar neutrinos via their elastic scattering on electrons of the LS, a process sensitive, with different probabilities, to all neutrino flavors. Electrons, and charged particles in general, deposit their energy in the LS and excite its molecules, and the resulting scintillation light is emitted isotropically. Given the scintillator index of refraction \(n\approx 1.55\) at 400 nm wavelength, sub-dominant but directional Cherenkov light is emitted when the electron kinetic energy exceeds 0.165 MeV. Cherenkov light is emitted over a picosecond timescale, while the fastest scintillation light component from the LS has an emission time constant at the nanosecond level. The fraction of light emitted as Cherenkov photons in Borexino was less than 0.5% for 1 MeV recoiling electrons.
The effective total light yield was \(\sim\)500 photoelectrons per MeV of electron equivalent deposited energy, normalized to 2000 PMTs [23]. The energy scale is intrinsically non-linear due to ionization quenching and the emission of Cherenkov radiation. The Geant4-based Monte Carlo (MC) software [24] simulates all relevant physics processes. It is tuned using the data obtained during calibration campaigns with radioactive sources [25]. Distinct energy estimators have been defined, based on different ways of counting the number of detected photons [23; 20]. The position reconstruction of each event is performed by using the time-of-flight corrected detection times of photons on the hit PMTs [23]. Particle identification is also possible in Borexino [23], in particular \(\alpha/\beta\) discrimination [4], by exploiting different scintillation light emission time profiles.
The Borexino data-taking period is divided into three phases: Phase-I (May 2007-May 2010), Phase-II (December 2011-May 2016), and Phase-III (July 2016-October 2021). Phase-II started after the detector calibration [25] and an additional purification of the LS, that enabled a comprehensive measurement of the \(pp\) chain solar neutrinos [5]. Phase-III is characterized by a thermally stable detector with greatly suppressed seasonal convective currents. This condition has made it possible to extract an upper limit constraint on the \({}^{210}\)Bi contamination in the LS, and thus, to provide the first direct observation of solar CNO neutrinos [4].
## 3 Correlated and Integrated Directionality for CNO
Cherenkov photons emitted by the electrons scattered in neutrino interactions retain information about the original direction of the incident neutrino. Typically, in water Cherenkov neutrino detectors, this information is accessed through an event-by-event direction reconstruction, as demonstrated by the measurements of \({}^{8}\)B neutrinos, at energies larger than 3.5 MeV [26; 7]. Instead, the Borexino experiment has provided a proof-of-principle for the use of this Cherenkov hit information in a LS detector and at neutrino energies below 1 MeV through the so-called "Correlated and Integrated Directionality" (CID) technique. A detailed explanation of the method can be found in [21; 20].
The CID method discriminates the signal originating in the Sun - due to solar neutrinos - from the background. Cherenkov light is sub-dominant in Borexino, but it is emitted almost instantaneously with respect to the slower scintillation light. Consequently, directional information is contained in the first hits of an event (after correcting for the time-of-flight of each photon). The CID analysis is based on the \(\cos\alpha\) observable: for a given PMT hit in an event, \(\alpha\) is the aperture angle between the Sun and the hit PMT at the reconstructed position of the event (see also Fig. 3 in [20]). For background events, the \(\cos\alpha\) distribution is nearly uniform regardless of which hit is considered. For solar neutrino events, the \(\cos\alpha\) distribution is flat for scintillation photons, which are emitted isotropically, but has a characteristic non-flat distribution peaked at \(\cos\alpha\sim 0.7\) for Cherenkov hits correlated with the position of the Sun. Since we cannot distinguish Cherenkov and scintillation photons, in our previous work [21] we used only the 1\({}^{\rm st}\) and 2\({}^{\rm nd}\) hits of each event, which have the largest probability of being Cherenkov hits. In the new analysis presented in this paper we fully exploit the directional information contained in the first several hits. This choice is supported by Monte Carlo simulations and sensitivity studies as discussed in Sec. 3.2.
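For illustration, the \(\cos\alpha\) observable can be evaluated per hit from the reconstructed event vertex, the hit-PMT position and the solar direction at the event time. The following is a minimal sketch with hypothetical inputs; in particular, `sun_dir` is assumed here to be the unit vector of the solar-neutrino propagation direction at the detector, and none of the names correspond to the actual Borexino software.

```python
import numpy as np

def cos_alpha(event_pos, pmt_pos, sun_dir):
    """Cosine of the aperture angle between the event->PMT photon direction
    and the solar-neutrino direction, evaluated at the reconstructed vertex."""
    photon_dir = pmt_pos - event_pos
    photon_dir = photon_dir / np.linalg.norm(photon_dir)
    sun_dir = sun_dir / np.linalg.norm(sun_dir)
    return float(np.dot(photon_dir, sun_dir))

# hypothetical positions in metres (detector frame) and a hypothetical solar direction
print(cos_alpha(np.array([0.5, -0.2, 1.0]),
                np.array([4.0, 2.0, 3.5]),
                np.array([0.3, 0.8, 0.52])))
```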
The solar neutrino signal is obtained by fitting the \(\cos\alpha\) distributions of the selected first several hits, as a sum of signal and backgrounds contributions. The expected \(\cos\alpha\) distributions for signal and background are obtained from Monte Carlo simulations. As in Ref. [20], for each selected data event we simulate 200 MC events of solar neutrinos (represented by \({}^{7}\)Be or \(pep\) according to RoI, see below) and the same amount of the background events (represented by \({}^{210}\)Bi). These events are simulated with the same astronomical time as the data event and with the position smeared around the reconstructed vertex. From the fit we then obtain the total number of solar neutrinos \(N_{\nu}\) detected in the RoI. The fit also includes two nuisance parameters. The effective Cherenkov group velocity correction \(\rm{gv_{ch}}\) nuisance parameter accounts for small differences in the relative hit time distribution between scintillation and Cherenkov hits in data relative to the MC. The second parameter is the event position mis-reconstruction in the initial electron direction \(\Delta r_{\rm dir}\), an indirect effect of the Cherenkov hits, where the reconstructed position is slightly biased towards early hit PMTs of the corresponding event. Here \(\Delta r_{\rm dir}\) is a free parameter of the fit, while \(\rm{gv_{ch}}\) is obtained independently and is constrained in the fit.
Compared to the previous proof-of-principle analysis [21; 20], the current CID analysis has been improved in a variety of ways. The full detector live time can be used now thanks to a novel \(\rm{gv_{ch}}\) correction calibration. In the previous publication, \(\rm{gv_{ch}}\) has been obtained using \({}^{40}\)K \(\gamma\) calibration data (see [20]). In this work instead we calibrate
Figure 1: Scheme of the Borexino detector.
\(\mathrm{gv_{ch}}\) by exploiting the \({}^{7}\)Be solar neutrino events which allows us to extend the analysis to the full Borexino dataset, as explained in Sec. 3.1. Additionally, indirect Cherenkov information from the systematic influence on the vertex reconstruction and consequently on the \(\cos\alpha\) distribution was exploited by the inclusion in the analysis of later hits with negligible contribution of Cherenkov photons, see Sec. 3.2. Technical details of the CID fitting procedure can be found in Sec. 3.3, while different systematic effects are discussed in Sec. 3.4. The final CID results regarding the CNO measurement are reported in Sec. 3.5.
### CID strategy for the CNO measurement with the full dataset
In the previous Borexino works [21; 20], the calibration of the Cherenkov light group velocity \(\mathrm{gv_{ch}}\) was performed using \(\gamma\) sources deployed during the Borexino calibration campaign in 2009 [20]. The solar neutrino analysis was performed on the Phase-I dataset, which was taken close in time to the source calibration of the detector [25]. The \(\mathrm{gv_{ch}}\) found in this way was used to obtain the first measurement of \({}^{7}\)Be solar neutrinos with the CID method [21]. For the CID measurement of CNO in this paper, the entire Borexino dataset is used (from 2007 until 2021). Since the sub-nanosecond stability of the detector time response cannot be guaranteed for long periods, and no more calibrations have been performed after 2009, we developed a method to calibrate \(\mathrm{gv_{ch}}\) on the \({}^{7}\)Be shoulder data. This is done by using the same RoI as in [21; 20] (here called \(\mathrm{RoI_{gvc}}\), with an electron equivalent energy range of \(0.5\,\mathrm{MeV}\lesssim T_{e}\lesssim 0.8\,\mathrm{MeV}\)) and performing the CID analysis with the \({}^{7}\)Be rate constrained to the Standard Solar Model (SSM) predictions [2]. The \(\mathrm{gv_{ch}}\) correction obtained in this way is then used in the CID analysis of the \(\mathrm{RoI_{CNO}}\), in which the CNO contribution is maximized, and which is fully independent from \(\mathrm{RoI_{gvc}}\). This step has been found to be justified according to MC studies, as the wavelength distribution of the detected Cherenkov photons produced by electrons from \(\mathrm{RoI_{gvc}}\) and \(\mathrm{RoI_{CNO}}\) is the same. With this new strategy, the Cherenkov light \(\mathrm{gv_{ch}}\) can be calibrated on the same data-taking period as the one used for the CNO analysis. Two analyses have been performed in parallel for Phase-I (May 2007 to May 2010, 740.7 days) and Phase-II+III (December 2011 to October 2021, 2888.0 days). The \(\mathrm{gv_{ch}}\) correction obtained for Phase-I can be compared to the one previously obtained from the \({}^{40}\)K \(\gamma\) source [20]. Additionally, the analyses of the two independent datasets allow for the investigation of any variation of the detector response over time. The \(\mathrm{RoI_{gvc}}\) and the \(\mathrm{RoI_{CNO}}\) are shown for the Phase-II+III dataset in Fig. 2 for illustrative purposes. The results on \(\mathrm{gv_{ch}}\) are provided in Sec. 3.5.1.
In the final analysis, the Three-Fold-Coincidence algorithm [4] is applied to the \(\mathrm{RoI_{CNO}}\) to suppress the cosmogenic \({}^{11}\)C background, preserving the exposure with a signal survival fraction of \(55.77\%\pm 0.02\%\) for Phase-I and \(63.97\%\pm 0.02\%\) for Phase-II+III. The radial (\(R_{\mathrm{FV}}\)) and \(T_{e}\) energy cuts of \(\mathrm{RoI_{CNO}}\) were optimized considering the expected number of solar neutrinos over the statistical uncertainty of the total number of events. The optimized cuts are \(R_{\mathrm{FV}}<3.05\,(2.95)\,\mathrm{m}\) and \(0.85\,(0.85)\,\mathrm{MeV}<T_{e}<1.3\,(1.29)\,\mathrm{MeV}\) for Phase-I (Phase-II+III). In addition, all other cuts including the muon veto and data quality cuts have been applied as in Refs. [4; 19]. The overall exposures for the CID CNO analysis are \(740.7\,\mathrm{days}\times 104.3\,\mathrm{tonnes}\times 55.77\%\) for Phase-I and \(2888.0\,\mathrm{days}\times 94.4\,\mathrm{tonnes}\times 63.97\%\) for Phase-II+III. The total exposure of Phase-II+III (\(477.81\,\mathrm{years}\times\mathrm{tonnes}\)) is about four times larger than that of Phase-I (\(118.04\,\mathrm{years}\times\mathrm{tonnes}\)).
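As a quick arithmetic cross-check, the quoted exposures follow directly from the live times, fiducial masses and TFC survival fractions given above; the short sketch below (assuming 365-day years for the conversion) reproduces the quoted 118.04 and 477.81 years \(\times\) tonnes and their ratio of about four.

```python
# exposure = live time x fiducial mass x TFC signal survival fraction,
# converted from days x tonnes to years x tonnes (365-day years assumed)
phase1  = 740.7  * 104.3 * 0.5577 / 365.0   # Phase-I
phase23 = 2888.0 *  94.4 * 0.6397 / 365.0   # Phase-II+III
print(round(phase1, 2), round(phase23, 2), round(phase23 / phase1, 1))
```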
### N\({}^{\text{th}}\)-hit analysis approach
As mentioned above, the CID analysis is performed on the first several hits, ordered by ToF-corrected hit times, of each event in the RoI. In this subsection we describe the optimization of the number of hits from each event to be used in the CID analysis. The procedure is based on the comparison of the MC-produced \(\cos\alpha\) distributions of signal and background.
First, PMT hits of each individual event are sorted according to their ToF-corrected hit times and are labeled in this order as "N\({}^{\text{th}}\)-hit", with N = 1, 2,... up to the total number of hits. Second, the \(\cos\alpha\) distributions are constructed for each N\({}^{\text{th}}\)-hit for both the signal and background MC. Third, for each N\({}^{\text{th}}\)-hit \(\cos\alpha\) distribution 10,000 toy MC samples are simulated with the same number of events as observed in the real data. Next, we perform a direct signal to background comparison, based on a standard \(\chi^{2}\)-test. Figure 3 shows the resulting \(\Delta\chi^{2}\) for Phase-II+III in the \(\mathrm{RoI_{CNO}}\) averaged over the 10,000 toy datasets as a function of N\({}^{\text{th}}\)-hit. A larger average \(\Delta\chi^{2}\) corresponds to a greater difference between the MC signal and background and thus a larger expected sensitivity for the CID fit, independent of the true signal to background ratio. While only the earliest \(\sim\)4 N\({}^{\text{th}}\)-hits have a relevant, _direct_ contribution of Cherenkov hits to the \(\cos\alpha\) distribution of the neutrino signal, later N\({}^{\text{th}}\)-hits also contribute to the CID sensitivity due to the _indirect_ Cherenkov influence on \(\Delta r_{\text{dir}}\). A more in-depth explanation of the \(\Delta r_{\text{dir}}\) effect is shown in Fig. 9 in [20].
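The N\({}^{\text{th}}\)-hit bookkeeping described above can be sketched schematically as follows. This is an illustrative example with an assumed data layout (per-event arrays of ToF-corrected hit times and \(\cos\alpha\) values), not the actual analysis framework, and the simple bin-wise \(\chi^{2}\) shown here is only a stand-in for the toy-MC based \(\Delta\chi^{2}\) comparison used in the text.

```python
import numpy as np

def nth_hit_histograms(events, n_max, bins=60):
    """events: iterable of (tof_corrected_times, cos_alpha_values) per event.
    Returns one cos(alpha) histogram per N-th hit (N = 1 ... n_max)."""
    edges = np.linspace(-1.0, 1.0, bins + 1)
    per_n = [[] for _ in range(n_max)]
    for times, cosa in events:
        order = np.argsort(times)              # label hits as 1st, 2nd, ..., N-th
        for n, idx in enumerate(order[:n_max]):
            per_n[n].append(cosa[idx])
    return np.array([np.histogram(v, bins=edges)[0] for v in per_n]), edges

def shape_chi2(signal_hist, background_hist):
    """Simplified bin-wise chi^2 distance between two normalised shapes."""
    s = signal_hist / signal_hist.sum()
    b = background_hist / background_hist.sum()
    mask = (s + b) > 0
    return float(np.sum((s[mask] - b[mask]) ** 2 / (s[mask] + b[mask])))
```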
A possible impact of the \(\mathrm{gv_{ch}}\) and \(\Delta r_{\mathrm{dir}}\) nuisance parameters on the N\({}^{\text{th}}\)-hit selection has been investigated and is presented in Fig. 3. The first hits of the events provide the largest
Figure 2: Illustration of the two RoIs used in the analysis on the energy spectrum of the Phase-II+III data in a fiducial volume of \(2.95\,\mathrm{m}\) radius. Monte Carlo PDFs of the different solar neutrino components are scaled to the high-metallicity SSM prediction [2]. The grey band shows the \({}^{7}\)Be-\(\nu\) edge region used for the estimation of the \(\mathrm{gv_{ch}}\) correction (\(\mathrm{RoI_{gvc}}\)), while the CNO region used to measure the CNO-\(\nu\) rate is shown as a yellow band (\(\mathrm{RoI_{CNO}}\)).
values thanks to the direct Cherenkov light. A decrease of \(\mathrm{gv_{ch}}\) decreases the group velocity of Cherenkov photons and thus their contribution to the early hits. The impact of \(\Delta r_{\mathrm{dir}}\) can be seen for \(\mathrm{N^{th}}\)-hit \(>4\), where the contribution of direct Cherenkov hits becomes negligible relative to the scintillation hits, but the signal and background MC \(\cos\alpha\) histograms still differ from each other (\(\Delta\chi^{2}>0\)).
In conclusion, the early-hit selection for the CID analysis in both \(\mathrm{RoI_{gvc}}\) and \(\mathrm{RoI_{CNO}}\) is then performed from the first hit up to the \(\mathrm{N^{th}}\)-hit(max) \(=15,17\) for Phase-I and Phase-II+III, respectively. This is an optimization where all direct and indirect Cherenkov information is used, while at the same time this selection keeps the contribution from delayed scintillation photons, which undergo various optical processes during their propagation through the detector, relatively small.
The Cherenkov-to-scintillation photon ratio as a function of \(\mathrm{N^{th}}\)-hit has also been checked explicitly, as is shown in Fig. 4 for \(\mathrm{RoI_{CNO}}\). As expected, it can be seen that the early \(\mathrm{N^{th}}\)-hits benefit from the largest Cherenkov-to-scintillation ratio of \(\sim 13\%\) for the first hit. The overall total Cherenkov-to-scintillation ratio is small and found to be \(0.475\%\) in the MC.
### CID fit procedure
The fitting strategy follows the procedure developed in our previous CID analysis [20]. The data \(\cos\alpha\) distributions from the selected RoI, constructed for each \(\mathrm{N^{th}}\)-hit from the first up to the \(\mathrm{N^{th}}\)-hit(max), are fitted simultaneously with the MC-produced, expected \(\cos\alpha\) distributions of the neutrino signal and background, where the signal \(\cos\alpha\) distribution depends on \(\mathrm{gv_{ch}}\) and \(\Delta r_{\mathrm{dir}}\). The nuisance parameter \(\Delta r_{\mathrm{dir}}\) cannot be calibrated in Borexino and is left free to vary without a dedicated pull term. The number of \(\cos\alpha\) histogram bins used in the analyses is \(I=60\) for all energy regions and phases, as values of \(I<30\) reduce the expected CID sensitivity.
#### 3.3.1 Fit in the \(\mathrm{RoI_{gvc}}\)
The CID analysis in \(\mathrm{RoI_{gvc}}\) used for the \(\mathrm{gv_{ch}}\) calibration is based on the \(\chi^{2}\)-test:
\[\chi^{2}_{\mathrm{gv_{ch}}}(N_{\nu},\mathrm{gv_{ch}},\Delta r_{\mathrm{dir}})=\sum_{n=1}^{\mathrm{N^{th}}\text{-hit(max)}}\sum_{i=1}^{I}\left[\frac{\left(\mathcal{N}\cdot M_{i}^{n}-D_{i}^{n}\right)^{2}}{\mathcal{N}\cdot M_{i}^{n}+\mathcal{N}^{2}\cdot M_{i}^{n}}\right]-2\ln\left(P(N_{\nu})\right), \tag{1}\]
where \(D_{i}^{n}\) and \(M_{i}^{n}\) are the numbers of \(\cos\alpha\) histogram entries at bin \(i\) for a given \(\mathrm{N^{th}}\)-hit \(n\), for data and MC, respectively. The term \(\mathcal{N}\) is the scaling factor between the MC and the data event statistics and the term "\(\mathcal{N}^{2}\cdot M_{i}^{n}\)" in the denominator takes into account the finite statistics of MC. The explicit dependence of the fit on \(N_{\nu}\), \(\mathrm{gv_{ch}}\), and \(\Delta\sigma_{\mathrm{dir}}\) can be expressed by decomposing the MC contribution to the one from the signal \(S\) and the background \(B\):
\[M_{i}^{n}=\frac{N_{\nu}}{N_{\mathrm{data}}}\cdot M_{\mathrm{S},i}^{n}(\Delta r_{\mathrm{dir}},\mathrm{gv_{ch}})+\left(1-\frac{N_{\nu}}{N_{\mathrm{data}}}\right)\cdot M_{\mathrm{B},i}^{n}. \tag{2}\]
The number of neutrino events \(N_{\nu}\) and \(\Delta r_{\mathrm{dir}}\) are treated as nuisance parameters to produce the \(\chi^{2}(\mathrm{gv_{ch}})\) profile, where \(N_{\nu}\) is constrained by the SSM expectation. For this, the neutrino prior probability distribution \(P(N_{\nu})\) is given by the sum of the Gaussian probability distributions with mean and sigma from the high-metallicity (HZ) SSM and low-metallicity (LZ) SSM [2] predictions on the number of \({}^{7}\)Be+_pep_-\(\nu\) in the \({}^{7}\)Be-\(\nu\) shoulder region, which is then convoluted with a uniform distribution of CNO-\(\nu\) between zero and the HZ-SSM CNO expectation + \(5\sigma\). In this way, by leaving CNO reasonably free to vary, we avoid a potential correlation between the \(\mathrm{gv_{ch}}\) calibration and the subsequent measurement of the CNO-\(\nu\) rate using this \(\mathrm{gv_{ch}}\) constraint.
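A compact numerical transcription of Eqs. (1) and (2) is sketched below. The array names and the prior callable are assumptions made for illustration: `D`, `M_S` and `M_B` are (N\({}^{\text{th}}\)-hit, bin) arrays of the data, signal-MC and background-MC \(\cos\alpha\) histograms, with the signal histograms already evaluated at the chosen \(\mathrm{gv_{ch}}\) and \(\Delta r_{\mathrm{dir}}\) values.

```python
import numpy as np

def chi2_gvch(n_nu, D, M_S, M_B, n_data, mc_scale, ln_prior):
    """Eq. (1): sum over N-th hits and bins of the squared residual with the
    MC statistical contribution in the denominator, minus 2 ln P(N_nu)."""
    frac = n_nu / n_data                         # Eq. (2): signal mixing fraction
    M = frac * M_S + (1.0 - frac) * M_B          # expected MC cos(alpha) histograms
    num = (mc_scale * M - D) ** 2
    den = mc_scale * M + mc_scale ** 2 * M
    chi2 = float(np.sum(num[den > 0] / den[den > 0]))
    return chi2 - 2.0 * ln_prior(n_nu)
```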
#### 3.3.2 Fit in the \(\mathrm{RoI_{CNO}}\)
The \(\chi^{2}\)-test for the measurement of number of solar neutrinos (\(N_{\nu}\)) in \(\mathrm{RoI_{CNO}}\) is:
\[\chi^{2}_{\nu}(N_{\nu},\mathrm{gv_{ch}},\Delta r_{\mathrm{dir}})=\sum_{n=1}^{\mathrm{N^{th}}\text{-hit(max)}}\sum_{i=1}^{I}\left[\frac{\left(\mathcal{N}\cdot M_{i}^{n}-D_{i}^{n}\right)^{2}}{\mathcal{N}\cdot M_{i}^{n}+\mathcal{N}^{2}\cdot M_{i}^{n}}\right]+\Delta\chi^{2}_{\mathrm{gv_{ch}}}\left(\mathrm{gv_{ch}}\right) \tag{3}\]
Figure 4: Cherenkov-to-scintillation PMT hit ratio as a function of the time-of-flight sorted \(\mathrm{N^{th}}\)-hits for the _pep_ neutrino Monte Carlo of Phase II+III in the \(\mathrm{RoI_{CNO}}\) (\(0.85\,\mathrm{MeV}-1.3\,\mathrm{MeV}\)).
Figure 3: \(\Delta\chi^{2}\) between the Phase-II+III \(\mathrm{RoI_{CNO}}\) neutrino signal and background MC \(\cos\alpha\) distributions for different selections of nuisance parameters. \(\Delta r_{\mathrm{dir}}=2.7\,\mathrm{cm}\) corresponds to the nominal value observed in the neutrino MC.
using \(\mathrm{gv_{ch}}\) and \(\Delta r_{\mathrm{dir}}\) as nuisance parameters. The \(\mathrm{gv_{ch}}\) parameter is now constrained by the previous calibration in \(\mathrm{RoI_{gvc}}\).
time from May 2007 to October 2021. The \(\mathrm{g}\mathrm{v}_{\mathrm{ch}}\) values presented in Sec. 3.5.1 are used as independent pull terms in the Eq. 3 for the fit in \(\mathrm{Ro}\mathrm{I}_{\mathrm{CNO}}\) of their respective phases. This takes into account the potential systematic differences of the detector response between Phase-I and Phase-II+III. The resulting number of solar neutrino events \(N_{\nu}\) in the \(\mathrm{Ro}\mathrm{I}_{\mathrm{CNO}}\) can be converted into the number of CNO neutrinos detected in the same energy region after constraining the contributions from \(pep\) and \({}^{8}\)B neutrinos, but without any a-priori knowledge of the backgrounds. This number of CNO events can be further transformed into the measurement of the CNO-\(\nu\) interaction rate in Borexino and the CNO flux at Earth.
The best fit values for the number of solar neutrinos in \(\mathrm{RoI_{CNO}}\) are \(N_{\nu}=691^{+235}_{-224}\) (stat) for Phase-I and \(N_{\nu}=2828^{+518}_{-494}\) (stat) for Phase-II+III, without inclusion of any systematic uncertainties or corrections. The compatibility between the data and the MC model is good, with \(\chi^{2}/\mathrm{ndf}=884.8/897\) (\(p\) value \(=0.61\)) for Phase-I and \(\chi^{2}/\mathrm{ndf}=1000.7/1017\) (\(p\) value \(=0.64\)) for Phase-II+III. The MC model is able to reproduce the data \(\cos\alpha\) distribution, which has also been investigated for the individual \(\mathrm{N^{th}}\)-hit \(\cos\alpha\) histograms.
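As a side note, the quoted \(p\) values follow directly from the \(\chi^{2}/\mathrm{ndf}\) numbers; a one-line check with the \(\chi^{2}\) survival function (using SciPy here purely for illustration, not as part of the analysis chain) reproduces them.

```python
from scipy.stats import chi2
# p values for the Phase-I and Phase-II+III fits quoted above
print(round(chi2.sf(884.8, 897), 2), round(chi2.sf(1000.7, 1017), 2))  # 0.61 0.64
```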
Figure 6 illustrates the best fit results (red) relative to a pure background hypothesis (blue), in which the CID \(\cos\alpha\) histograms of data (black) are shown for the sum of Phase-I + Phase-II+III, as well as for the sum of the early first to fourth \(\mathrm{N^{th}}\)-hits (top) and the sum of the later \(\mathrm{N^{th}}\)-hits from the fifth to \(\mathrm{N^{th}}\)-hit(max) (bottom). The actual fit is performed on Phase-I and Phase-II+III independently. The same observations made for Fig. 5 also hold true for Fig. 6, where the early hits show the Cherenkov peak and the later hits show the impact of \(\Delta r_{\mathrm{dir}}\).
### Fit response bias correction
Toy-MC analyses found that the fit of the number of solar neutrinos in \(\mathrm{RoI_{CNO}}\) shows a small systematic shift between the injected number of neutrinos and the best fit number of neutrinos, due to the correlation between the nuisance parameters (\(\mathrm{gv_{ch}}\), \(\Delta r_{\mathrm{dir}}\)) and the relatively low total number of neutrino events. This fit response bias is induced by the two nuisance parameters as they only impact the shape of the neutrino signal MC \(\cos\alpha\) distribution but not that of the background. We note that this effect was found to be negligible in \(\mathrm{RoI_{gvc}}\) due to the relatively large number of signal events and the large signal to background ratio.
The value of the fit response bias in \(\mathrm{Ro}\mathrm{I}_{\mathrm{CNO}}\) is estimated
\begin{table}
\begin{tabular}{c|c c} \hline \hline Source of \(\mathrm{g}\mathrm{v}_{\mathrm{ch}}\) uncertainty & Phase-I & Phase-II+III \\ \hline PMT selection & 2.1\% & 1.6\% \\ PMT time corrections & 3.7\% & 2.1\% \\ MLP event selection & 1.0\% & 1.0\% \\ Fiducial mass & \(\left(\begin{subarray}{c}+0.2\\ -1.2\end{subarray}\right)\) \% & \(\left(\begin{subarray}{c}+0.2\\ -1.2\end{subarray}\right)\) \% \\ Fraction of neutrinos in RoI & 1.3\% & 0.9\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Systematic uncertainties of the \(\mathrm{g}\mathrm{v}_{\mathrm{ch}}\) measurement in the \(\mathrm{Ro}\mathrm{I}_{\mathrm{gvc}}\), relative to the best fit value.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Source of uncertainty & Phase-I & Phase-II+III \\ \hline \multicolumn{3}{c}{For \(\mathrm{N}_{\nu}\)} \\ \hline PMT selection & 1.3\% & 0.6\% \\ PMT time corrections & 4.2\% & 2.4\% \\ Low number of signal events & 2.2\% & – \\ CNO-\(\nu\) vs. \(pep\)-\(\nu\) MC & 2.2\% & 2.0\% \\ \hline \hline \multicolumn{3}{c}{For \(\mathrm{N}_{\mathrm{CNO}}\)} \\ \hline \(pep\)+\({}^{8}\)B-\(\nu\) constraint & 4.6\% & 1.8\% \\ \hline \hline \multicolumn{3}{c}{For \(\mathrm{R}_{\mathrm{CNO}}\)} \\ \hline Fiducial mass & \(\left(\begin{subarray}{c}+0.2\\ +1.2\end{subarray}\right)\) \% & \(\left(\begin{subarray}{c}+0.2\\ -1.2\end{subarray}\right)\) \% \\ Fraction of CNO-\(\nu\) in RoI & 1.4\% & 1.4\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: Systematic uncertainties on the number of solar neutrino events \(N_{\nu}\) in \(\mathrm{Ro}\mathrm{I}_{\mathrm{CNO}}\), relative to the best fit value. The uncertainty from \(pep\)+\({}^{8}\)B-\(\nu\) constraint is relevant only for \(\mathrm{N}_{\mathrm{CNO}}\). The last two rows are relevant only for the CNO-\(\nu\) rate (\(\mathrm{R}_{\mathrm{CNO}}\)) calculation.
Figure 5: The CID data (black) and the best fit results (red) for the measurement of the \(\mathrm{gv_{ch}}\) parameter. While the analysis is done separately for Phase-I and Phase-II+III, here the sum of Phase-I+II+III is shown for illustration purposes. There are in total 78632 events in the \(\mathrm{RoI_{gvc}}\). The best fit for the constrained number of neutrino events is \(N_{\nu}=50063\), while the best fit values for the parameter of interest are \(\mathrm{gv_{ch}}=0.140\,\mathrm{ns}\,\mathrm{m}^{-1}\) for Phase-I and \(\mathrm{gv_{ch}}=0.089\,\mathrm{ns}\,\mathrm{m}^{-1}\) for Phase-II+III. For comparison, the background MC (blue) scaled to the same total number of events is shown. (a) The sum of the first to fourth \(\mathrm{N^{th}}\)-hits \(\cos\alpha\) histograms shows the Cherenkov peak. (b) The sum of the fifth to the \(\mathrm{N^{th}}\)-hit(max) \(\cos\alpha\) histograms shows the effect of the \(\Delta r_{\mathrm{dir}}\) parameter on the later hits.
using the Bayesian posterior distribution of \(N_{\nu}\) [27], which is produced through a toy-MC rejection sampling, described in summary below. The prior distribution for the number of neutrino events is chosen to be uniform between zero and the number of selected data events (2990 for Phase-I and 5974 for Phase-II+III), the prior distribution of \(\Delta r_{\rm dir}\) is also uniform, and the prior distribution of \(\rm gv_{ch}\) is given by the measurement at the \({}^{7}\)Be-\(\nu\) edge \(\rm RoI_{gvc}\)\(\left(P\left(gv_{ch}\right)\propto\exp\left(-\frac{1}{2}\Delta\chi^{2}(gv_{ch})\right)\right)\). The pseudo-data inputs \(\left(N_{\nu}^{\rm sim},gv_{ch}^{\rm sim},\Delta r_{\rm dir}^{\rm sim}\right)\) are sampled from the MC signal and background \(\cos\alpha\) distributions following these model parameter prior distributions. The analysis is then performed in the same way as for the real data and results in best fit values of the pseudo-data \(\left(N_{\nu}^{\rm fit},gv_{ch}^{\rm fit},\Delta r_{\rm dir}^{\rm fit}\right)\). The real data result now defines a multivariate Gaussian distribution \(\rm P_{accept}(N_{\nu},gv_{ch},\Delta r_{\rm dir})\) with a mean value given by its best fit values and with a standard deviation given by the systematic uncertainty of the PMT time corrections. The sampled true values of the triplet \(\left(N_{\nu}^{\rm sim},gv_{ch}^{\rm sim},\Delta r_{\rm dir}^{\rm sim}\right)\) are then saved only with a probability of \(\rm P_{accept}(N_{\nu}^{\rm fit},gv_{ch}^{\rm fit},\Delta r_{\rm dir}^{\rm fit})\), given by the best fit result of the pseudo-data; otherwise they are rejected. The resulting distributions of the true values for \(\left(N_{\nu},gv_{ch},\Delta r_{\rm dir}\right)\) then correspond to their Bayesian posterior distributions.
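Schematically, the rejection-sampling loop described above can be written as follows, with the pseudo-data generation, the fit, and the acceptance function left as stand-in callables (their implementations are assumptions of this sketch, not the actual analysis chain).

```python
import numpy as np

def sample_posterior(n_samples, sample_prior, fit_pseudo_data, p_accept, seed=0):
    """sample_prior() -> true (N_nu, gv_ch, dr_dir) drawn from the priors;
    fit_pseudo_data(truth) -> best-fit triplet of a simulated dataset;
    p_accept(fit) -> acceptance probability built from the real-data best fit.
    The accepted true triplets form the Bayesian posterior sample."""
    rng = np.random.default_rng(seed)
    accepted = []
    while len(accepted) < n_samples:
        truth = sample_prior()
        fit = fit_pseudo_data(truth)
        if rng.random() < p_accept(fit):
            accepted.append(truth)
    return np.array(accepted)
```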
The fit response bias is illustrated in Fig. 7 for Phase-II+III. The likelihood distribution \(P(N_{\nu})\propto\exp\left(-\frac{1}{2}\Delta\chi^{2}(N_{\nu})\right)\) given by the \(\chi^{2}\) fit of data with Eq. 3 and averaged over the 1000 fits with different PMT time offsets is shown in blue. The black distribution is given by the simulation of 20k pseudo-data analyses, selected through the rejection sampling MC described above. The red distribution is produced by shifting \(P(N_{\nu})\) by a value of \(\Delta N_{\nu}=-109\pm 4\) and is in good agreement with the black rejection sampled distribution. It is therefore used as the posterior distribution of the CID analyses. We note that for Phase-I the situation is similar and the shift is found to be \(-50\pm 4\) events.
### Inclusion of systematics
The final result of the CID analysis for the number of solar neutrinos is given by the Bayesian posterior distribution of \(N_{\nu}\), marginalized over the nuisance parameters and convoluted with the systematic uncertainties. The relevant systematic uncertainties are shown in Table 2 and assumed to be normally distributed.
The posterior distributions \(P\left(N_{\nu}\right)\) in Phase-I and Phase-II+III including these systematics are shown in Fig. 8. The resulting number of solar neutrinos detected in the \(\rm RoI_{CNO}\) is \(N_{\nu}=643^{+235}_{-230}\,(\rm stat)^{+37}_{-30}\,(\rm sys)\) for Phase-I and \(N_{\nu}=2719^{+518}_{-494}\,(\rm stat)^{+85}_{-85}\,(\rm sys)\) for Phase-II+III, including all systematics and correcting for the fit response bias. The quoted uncertainties are calculated from the posterior distributions using a 68% equal-tailed credible interval (CI). The one-sided zero neutrino hypothesis can be excluded with \(P(N_{\nu}=0)=2.8\times 10^{-5}\) (\(\sim 4.2\sigma\)) for Phase-I and \(P(N_{\nu}=0)=6.4\times 10^{-11}\) (\(\sim 6.5\sigma\)) for Phase-II+III.
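For completeness, the extraction of a 68% equal-tailed CI from a binned posterior is straightforward; the sketch below assumes a discretised posterior `probs` defined on the grid `values` and is only meant to illustrate the interval definition used for the quoted uncertainties.

```python
import numpy as np

def equal_tailed_ci(values, probs, level=0.68):
    """Return (mode, lower, upper) of an equal-tailed credible interval."""
    probs = probs / probs.sum()
    cdf = np.cumsum(probs)
    lower = values[np.searchsorted(cdf, (1.0 - level) / 2.0)]
    upper = values[np.searchsorted(cdf, 1.0 - (1.0 - level) / 2.0)]
    return values[np.argmax(probs)], lower, upper
```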
Figure 6: Illustration of the CID data (black) and the best fit results (red) summed for Phase-I + Phase-II+III with a total of 8964 events in the \(\rm RoI_{CNO}\). The best fit of the total number of neutrino events is \(N_{\nu}=3519\) without any systematic correction. For comparison, the background MC (blue) scaled to the same total number of events is shown. (a) The sum of the first to fourth \(\rm N^{th}\)-hits \(\cos\alpha\) histograms shows the Cherenkov peak. (b) The sum of the fifth to the \(\rm N^{th}\)-hit(max) \(\cos\alpha\) histograms shows the effect of the \(\Delta r_{\rm dir}\) parameter on these later hits.
Figure 7: Illustration of the fit response bias for Phase-II+III in the \(\rm RoI_{CNO}\). The data fit result, i.e. the likelihood \(P(N_{\nu})\) given by the \(\Delta\chi^{2}\) profile of Eq. 3 and averaged over 1000 fits with different PMT time offsets, is shown in blue. The posterior distribution of 20k pseudo-data analyses, selected through rejection sampling, is shown in black. The red line corresponds to the posterior distribution that includes only the systematics from the PMT time alignment correction.
### CID results on CNO
The interpretation of the CID results requires the correct treatment of the physical boundaries of the analysis, i.e. \(0\leq N_{\nu}\leq 2990\) (5974) for Phase-I (Phase-II+III), respectively. This is done in a Bayesian interpretation, based on the posterior distribution \(P\left(N_{\nu}\right)\) shown in Fig. 8.
Next, the distribution of the number of CNO-\(\nu\) events is estimated by constraining the expected number of \(pep\) and \({}^{8}\)B neutrino events (\(N_{pep+^{8}\text{B}}\)) where the constraint on the number of \(pep\) neutrinos uses the SSM predictions [2] and \({}^{8}\)B is constrained using the high precision flux measurement of Super-Kamiokande [6] including model uncertainties, the difference between HZ-SSM and LZ-SSM predictions, as well as the Borexino FV and energy systematic uncertainties from Table 2. This is done through the convolution of the \(N_{\nu}\) posterior distributions from Fig. 8 with the predicted \(P(N_{pep+^{8}\text{B}})\) probability distribution: \(P(N_{\text{CNO}})=P(N_{\nu})*P(-N_{pep+^{8}\text{B}})\). The resulting \(P(N_{\text{CNO}})\) posterior distributions are shown in Fig. 9. The CID measurement for the number of CNO-\(\nu\) events is then \(N_{\text{CNO}}=270^{+218}_{-169}\,\text{(stat)}^{+33}_{-25}\,\text{(sys)}\) for Phase-I and \(N_{\text{CNO}}=1146^{+518}_{-486}\,\text{(stat)}^{+492}_{-89}\,\text{(sys)}\) for Phase-II+III, where the uncertainty corresponds to the equal-tail 68% CI within the physical boundaries, including all systematics.
It has been observed that Phase-I and Phase-II+III do not show significantly different behavior for the full CID analysis chain, and the MC model is well able to reproduce the data \(\cos\alpha\) histograms for both phase selections and for each selected energy region. It is then reasonable to combine the conditionally independent results of Phase-I and Phase-II+III through the convolution of both posterior distributions \(P\left(N_{\text{CNO}}\right)^{\text{I+II+III}}=P\left(N_{\text{CNO}}\right)^{\text{I}}*P\left(N_{\text{CNO}}\right)^{\text{II+III}}\). The probability that exactly zero CNO-\(\nu\) events contribute to the measured data CID \(\cos\alpha\) distribution is \(P(N_{\text{CNO}}=0)=1.35\times 10^{-3}\) for Phase-I, \(P(N_{\text{CNO}}=0)=5.87\times 10^{-5}\) for Phase-II+III, and \(P(N_{\text{CNO}}=0)=7.93\times 10^{-8}\) for the combined result. This corresponds to a one-sided exclusion of the zero-CNO hypothesis at about the 5.3\(\sigma\) credible level for the combination of Phase-I and Phase-II+III.
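Both convolution steps above (the statistical subtraction of the \(pep\)+\({}^{8}\)B expectation and the combination of the two phases) amount to discrete convolutions of binned posteriors. The following is a minimal sketch on a common one-event grid; the clipping to the physical region and the renormalisation are simplifications assumed here for illustration.

```python
import numpy as np

def subtract_constraint(p_nnu, p_pep_b8):
    """P(N_CNO) = P(N_nu) * P(-N_pep+8B): correlate the N_nu posterior with the
    pep+8B prediction and keep the physical N_CNO >= 0 part."""
    full = np.convolve(p_nnu, p_pep_b8[::-1])
    p_cno = full[len(p_pep_b8) - 1:]
    return p_cno / p_cno.sum()

def combine_phases(p_phase1, p_phase23):
    """Combined posterior of the conditionally independent phases."""
    comb = np.convolve(p_phase1, p_phase23)
    return comb / comb.sum()
```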
The CNO-\(\nu\) rate probability density function is calculated from the measured posterior distribution of CNO-\(\nu\) events, using the exposure of the respective phases. The effective exposure is given by the product of the fiducial mass, the detector live time, the TFC-exposure, the trigger efficiency, and the fraction of CNO-\(\nu\) events within the selected energy region. The final CID result for the CNO-\(\nu\) rate, using the full dataset of Phase-I + Phase-II+III, is \(R_{\text{CNO}}^{\text{CID}}=7.2\pm 2.5\,\text{(stat)}\pm 0.4\,\text{(sys)}^{+1.}_{-0.8}\,\text{(nuisance)}\,\text{cpd/}100\,\text{tonnes}=7.2^{+2.8}_{-2.7}\,\text{cpd/}100\,\text{tonnes}\). The quoted uncertainties now also show the systematic uncertainties from Table 2 separately from the influence of the nuisance parameters \(\mathrm{gv_{ch}}\) and \(\Delta r_{\text{dir}}\). The quoted statistical uncertainty corresponds to a hypothetical, perfect calibration of these CID nuisance parameters. The results are summarized in Table 3.
These CID results are in good agreement with the HZ-SSM prediction of (\(4.92\pm 0.78\)) \(\text{cpd/}100\,\text{tonnes}\) (\(0.6\,\sigma\)), while the LZ-SSM prediction of (\(3.52\pm 0.52\)) \(\text{cpd/}100\,\text{tonnes}\) (\(1.1\,\sigma\)) is 1.7 times less likely to be true, given the results of the Borexino CID analysis.
Figure 8: The CID measured posterior probability distributions for the number of solar neutrinos \(N_{\nu}\) in the RoICNO for Phase-I (blue) and Phase-II+III (red). All systematic effects are included.
Figure 10: The combined CID Phase-I + Phase-II+III CNO-\(\nu\) rate posterior distribution is shown in red. The blue, violet and grey bands show the 68% CI for the low metallicity SSM B16-AGSS09met ((\(3.52\pm 0.52\)) \(\text{cpd/}100\,\text{tonnes}\)) prediction, the high metallicity SSM B16-GS98 ((\(4.92\pm 0.78\)) \(\text{cpd/}100\,\text{tonnes}\)) prediction [2, 28], and the combined CID result, respectively. All systematic effects are included.
Figure 9: The CID measured posterior probability for the number of CNO-\(\nu\) events after constraining \(pep\) and \({}^{8}\)B neutrinos for Phase-I (blue) and Phase-II+III (red). All systematic effects are included.
## 4 Combined CID and multivariate analysis
In this Section we combine the CID analysis with the standard multivariate fit of Phase-III to improve the result on CNO neutrinos. This is done by including the posterior distributions of solar neutrinos from the CID analysis, shown in Fig. 8, in the multivariate analysis likelihood. By statistically subtracting the sub-dominant \({}^{8}\)B neutrino contribution and converting \(N_{\text{CNO+pep}}\) to the corresponding interaction rate, it is possible to use these posterior distributions as external likelihood terms in the minimization routine. Following this procedure, two multiplicative pull terms constraining the number of CNO and \(pep\) neutrino events are used: the first one is related to the Phase-I dataset (\(\mathcal{L}_{\text{CID}}^{\text{P-I}}\)), while the second one refers to the Phase-II+III dataset (\(\mathcal{L}_{\text{CID}}^{\text{P-II+III}}\)).
The overall combined likelihood used for this analysis becomes:
\[\mathcal{L}_{\text{MV+CID}}=\mathcal{L}_{\text{MV}}\cdot\mathcal{L}_{pep}\cdot\mathcal{L}_{{}^{210}\text{Bi}}\cdot\mathcal{L}_{\text{CID}}^{\text{P-I}}\cdot\mathcal{L}_{\text{CID}}^{\text{P-II+III}} \tag{4}\]
where the first three terms correspond to an improved version of the standard multivariate analysis described in [19]. This approach couples the one-dimensional Poisson likelihood for the TFC-tagged dataset with a two-dimensional one (energy and radius) for the TFC-subtracted dataset, to enhance the separation between signal and backgrounds. We have improved the binning optimization and used an updated version of the Monte Carlo.
The \(pep\) neutrino interaction rate is constrained with 1.4% precision to the value of 2.74\(\pm\)0.04 cpd/100 tonnes, obtained by combining the Standard Solar Model predictions [2], the most recent set of flavor oscillation parameters [29], and the solar neutrino data [30; 31]. This constraint is applied with the Gaussian pull term \(\mathcal{L}_{pep}\). An upper limit on the \({}^{210}\)Bi rate of (10.8 \(\pm\) 1.0) cpd/100 tonnes is applied with the half-Gaussian term \(\mathcal{L}_{{}^{210}\text{Bi}}\). This upper limit is obtained from the rate of the \({}^{210}\)Bi daughter \({}^{210}\)Po (see [4; 19] for more details).
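The two additional pull terms can be written as negative log-likelihood contributions; the sketch below uses the constraint values quoted above (rates in cpd/100 tonnes) and is only an illustration of their functional form, not the fitting code.

```python
def nll_pep(rate_pep, mu=2.74, sigma=0.04):
    """Gaussian pull constraining the pep neutrino interaction rate."""
    return 0.5 * ((rate_pep - mu) / sigma) ** 2

def nll_bi210(rate_bi, limit=10.8, sigma=1.0):
    """Half-Gaussian pull: no penalty below the 210Bi upper limit,
    Gaussian penalty above it."""
    return 0.0 if rate_bi <= limit else 0.5 * ((rate_bi - limit) / sigma) ** 2
```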
### Results
As in [19], the energy RoI for the multivariate analysis is 0.32 MeV \(<T_{e}<\) 2.64 MeV in electron recoil kinetic energy. The reconstructed energy spectrum is expressed in terms of the \(N_{h}\) estimator, representing the total number of detected hits for a given event (see Sec. 2). The dataset is the same one analyzed in [19], in which the exposure amounts to 1431.6 days \(\times\) 71.3 tonnes.
Along with CNO solar neutrinos, the free parameters of the fit are divided into three categories: internal (\({}^{85}\)Kr and \({}^{210}\)Po) and external (\({}^{208}\)Tl, \({}^{214}\)Bi, and \({}^{40}\)K) backgrounds, cosmogenic backgrounds (\({}^{11}\)C, \({}^{6}\)He, and \({}^{10}\)C), and solar neutrinos (\({}^{7}\)Be). Since \({}^{8}\)B solar neutrinos exhibit a flat and marginal contribution, the corresponding interaction rate is fixed at the high-metallicity expectations from the Standard Solar Model. As discussed for Eq. 4, the interaction rates of \(pep\) neutrinos and the \({}^{210}\)Bi background are constrained with likelihood pull terms, and the CID results reported in Sec. 3.5 are accounted for as additional external constraints. The result of the fit for the energy and radial projections is shown in Fig. 11.
The multivariate fit returns an interaction rate of CNO neutrinos of \(6.7^{+1.2}_{-0.7}\,\text{cpd}/100\,\text{tonnes}\) (statistical error only). The agreement between the model and data is quantified with a \(p\) value of 0.2.
To account for sources of systematic uncertainty, the same Monte Carlo method described in [4; 19] has been adopted. In a nutshell, hundreds of thousands of Monte Carlo pseudo-experiments were generated, including relevant effects able to introduce a systematic error, such as the energy response non-linearity and non-uniformity, the time variation of the scintillator light yield and the different theoretical models for the \({}^{210}\)Bi spectral shape. The analysis is performed on these pseudo-datasets assuming the standard response to study how this impacts the final result on CNO, yielding a total systematic uncertainty of \({}^{+0.31}_{-0.24}\,\text{cpd}/100\,\text{tonnes}\). Other
Figure 11: Multivariate fit results for the TFC-subtracted dataset, projected over the energy (top panel) and the radius (bottom panel) dimensions. For both projections, the sum of the individual components from the fit (magenta) is superimposed on the data (grey points). The CNO neutrino, \({}^{210}\)Bi and \(pep\) neutrino contributions are displayed as solid red, dashed blue and dotted green lines, respectively, while the other spectral components (\({}^{7}\)Be and \({}^{8}\)B neutrinos, other backgrounds) are shown in grey. The analysis has been performed using \(N_{h}\) as energy estimator and the conversion to the keV energy scale was performed only for plotting purposes. The radial fit components, which are the uniform and external background contributions, are shown as solid blue and dashed grey lines, respectively.
\begin{table}
\begin{tabular}{c|c c} \hline \hline CID results & \(P(N_{\text{CNO}}=0)\) & \(R_{\text{CNO}}\left[\frac{\text{cpd}}{100\,\text{tonnes}}\right]\) \\ \hline Phase-I & \(1.35\times 10^{-3}\) & \(6.4^{+5.2}_{-4.1}\) \\ Phase-II+III & \(5.87\times 10^{-5}\) & \(7.3^{+3.4}_{-3.2}\) \\ \hline Combined & \(7.93\times 10^{-8}\) & \(7.2^{+2.8}_{-2.7}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: CID CNO-\(\nu\) results with systematic uncertainties.
sources of systematic error are included in the estimation of the upper limit on \({}^{210}\)Bi contamination, as discussed in more detail in [4; 19].
The negative log-likelihood profile as a function of the CNO rate is reported in Fig. 12. The solid and dashed black lines show the results with and without systematic uncertainty, respectively. The result without the CID constraint reported in [19] is included (blue line) for comparison. The improvement is clear, especially for the upper value of the CNO rate. The CNO interaction rate is extracted from the 68% quantile of the likelihood profile convoluted with the resulting systematic uncertainty, as \(R^{\rm MV+CID}(\rm CNO)=6.7^{+1.2}_{-0.8}\) cpd/100 tonnes. The significance with respect to the no-CNO hypothesis reaches about \(8\sigma\) C.L., while the resulting CNO flux at Earth is \(\Phi(\rm CNO)=6.7^{+1.2}_{-0.8}\times 10^{8}\) cm\({}^{-2}\) s\({}^{-1}\). Following the same procedure used in [19], we use this result together with the \({}^{8}\)B flux obtained from the global analysis of all solar data to determine the abundance of C + N with respect to H in the Sun with an improved precision, for which we find \(\rm N_{CN}=5.81^{+1.22}_{-0.94}\times 10^{-4}\). This error includes both the statistical uncertainty due to the CNO measurement, and the systematic errors due to the additional contribution of the SSM inputs, to the \({}^{8}\)B flux measurement, and to the \({}^{13}\)N/\({}^{15}\)O fluxes ratio. Similar to what was inferred from our previous publication, this result is in agreement with the High Metallicity measurements [14; 15], and features a \(2\sigma\) tension with Low Metallicity ones [16; 17; 18]. Similarly, if we combine the new CNO result with the other Borexino results on \({}^{7}\)Be and \({}^{8}\)B in a frequentist hypothesis test based on a likelihood-ratio test statistic, we find that, assuming the HZ-SSM to be true, our data disfavours the LZ-SSM at the \(3.2\sigma\) level.
## 5 Conclusions
In this work we have presented the results on CNO solar neutrinos obtained using the "Correlated and Integrated Directionality" (CID) technique.
We have shown that the CID technique can be used to extract the CNO signal without any a priori assumptions on the backgrounds, in particular that of \({}^{210}\)Bi. The Phase-I (May 2007 to May 2010, 740.7 days) and Phase-II+III (December 2011 to October 2021, 2888.0 days) datasets have been analyzed independently to investigate possible variations of the detector response over time. By adopting Bayesian statistics, we have combined the conditionally independent results of Phase-I and Phase-II+III: the resulting CNO rate obtained with CID only is \(7.2^{+2.8}_{-2.7}\) cpd/100 tonnes. The no-CNO hypothesis, with only the _pep_ constraint applied, is rejected at the \(5.3\sigma\) level. This result, albeit less precise than the one published by Borexino using the standard multivariate analysis, is the first obtained without the application of a \({}^{210}\)Bi constraint.
We have also obtained an improved CNO solar neutrino result by combining the standard multivariate analysis with the CID technique. The CID technique helps in separating the solar signal from non-solar backgrounds, improving the significance and precision of the CNO measurement with respect to the result previously published by Borexino. The resulting CNO interaction rate is \(6.7^{+1.2}_{-0.8}\) cpd/100 tonnes and the significance against the absence of a CNO signal, considered as the null hypothesis, is about \(8\sigma\). The C+N abundance with respect to H is calculated from this result following the procedure adopted in [19] and is found to be \(\rm N_{CN}=5.81^{+1.22}_{-0.94}\times 10^{-4}\), compatible with the SSM-HZ metallicity measurements.
In conclusion, we have shown that the directional information of the Cherenkov radiation can be effectively combined with the spectral information coming from scintillation for solar neutrino studies. This combined detection approach provides a measurement that is more powerful than the individual methods on their own. The sensitivity of the CID method could be significantly improved in future liquid scintillator-based detectors by optimizing the Cherenkov-to-scintillation ratio and by performing dedicated calibration campaigns.
_Acknowledgments:_ We acknowledge the generous hospitality and support of the Laboratori Nazionali del Gran Sasso (Italy). The Borexino program is made possible by funding from Istituto Nazionale di Fisica Nucleare (INFN) (Italy), National Science Foundation (NSF) (USA), Deutsche Forschungsgemeinschaft (DFG), Cluster of Excellence PRISMA+ (Project ID 39083149), and recruitment initiative of Helmholtz-Gemeinschaft (HGF) (Germany), Russian Foundation for Basic Research (RFBR) (Grants No. 19-02-00097A), Russian Science Foundation (RSF) (Grant No. 21-12-00063) and Ministry of Science and Higher Education of the Russian Federation (Project FSWU-2023-0073) (Russia), and Narodowe Centrum Nauki (NCN) (Grant No. UMO 2017/26/M/ST2/00915) (Poland). We gratefully acknowledge the computing services of Bologna INFN-CNAF data centre and U-Lite Computing Center and Network Service at LNGS (Italy).
|
2310.17928 | Two-dimensional Rayleigh-Bénard convection without boundaries | We study the effects of Prandtl number $Pr$ and Rayleigh number $Ra$ in
two-dimensional Rayleigh-B\'enard convection without boundaries, i.e. with
periodic boundary conditions. In the limits of $Pr \to 0$ and $\infty$, we find
that the dynamics are dominated by vertically oriented elevator modes that grow
without bound, even at high Rayleigh numbers and with large scale dissipation.
For finite Prandtl number in the range $10^{-3} \leq Pr \leq 10^2$, the Nusselt
number tends to follow the `ultimate' scaling $Nu \propto Pr^{1/2} Ra^{1/2}$,
and the viscous dissipation scales as $\epsilon_\nu \propto Pr^{1/2}
Ra^{-1/4}$. The latter scaling is based on the observation that enstrophy
$\langle \omega^2 \rangle \propto Pr^0 Ra^{1/4}$. The inverse cascade of
kinetic energy forms the power-law spectrum $\hat E_u(k) \propto k^{-2.3}$,
while the direct cascade of potential energy forms the power-law spectrum $\hat
E_\theta(k) \propto k^{-1.2}$, with the exponents and the turbulent convective
dynamics in the inertial range found to be independent of Prandtl number.
Finally, the kinetic and potential energy fluxes are not constant in the
inertial range, invalidating one of the assumptions underlying Bolgiano-Obukhov
phenomenology. | Philip Winchester, Vassilios Dallas, Peter D. Howell | 2023-10-27T06:58:53Z | http://arxiv.org/abs/2310.17928v2 | # Two-dimensional Rayleigh-Benard convection without boundaries
###### Abstract
We study the effects of Rayleigh number \(\mathit{Ra}\) and Prandtl number \(\mathit{Pr}\) in two-dimensional Rayleigh-Benard convection without boundaries, i.e., with periodic boundary conditions. In the limits of \(\mathit{Pr}\to 0\) and \(\infty\), we find that the dynamics is dominated by unidirectional solutions that grow without bound, even at high Rayleigh numbers and with large scale dissipation. In the Prandtl number range \(10^{-3}\leq\mathit{Pr}\leq 10^{2}\), the Nusselt number follows the 'ultimate' scaling \(\mathit{Nu}\propto\mathit{Pr}^{1/2}\mathit{Ra}^{1/2}\), and the viscous dissipation \(\epsilon_{\nu}\propto\mathit{Pr}^{1/2}\mathit{Ra}^{-1/4}\). This latter scaling is based on the observation that enstrophy \(\langle\omega^{2}\rangle\propto\mathit{Pr}^{0}\mathit{Ra}^{1/4}\). The inverse cascade of kinetic energy forms the power-law velocity spectrum \(\widetilde{E}_{u}(k)\propto k^{-2.3}\), while the direct cascade of potential energy forms the power-law temperature spectrum \(\widetilde{E}_{\theta}(k)\propto k^{-1.2}\), with the exponents and the turbulent convective dynamics in the inertial range found to be independent of Prandtl number. Finally, the kinetic and potential energy fluxes are not constant in the inertial range, invalidating one of the assumptions underlying Bolgiano-Obukhov phenomenology.
## I Introduction
The fundamental challenge to understand thermally driven flow in the strongly nonlinear regime has puzzled the fluid dynamics community for more than a century [1]. The typical set-up that is studied theoretically consists of a fluid confined between two horizontal plates that is heated from below and cooled at the top. This is the so called Rayleigh-Benard convection (RBC) problem after Rayleigh's proposed model [2] for Benard's experiment on buoyancy-driven thermal convection [3; 4]. The classical problem depends on two dimensionless parameters: the Rayleigh number \(\mathit{Ra}\), which measures the driving effect of buoyancy relative to the stabilising effects of viscosity and thermal diffusivity; and the Prandtl number \(\mathit{Pr}\), which is the ratio of kinematic viscosity to thermal diffusivity. The global flow properties may be characterised by the Nusselt number \(\mathit{Nu}\), a further dimensionless parameter that measures the total heat flux relative to the purely conductive heat flux. For the strongly nonlinear regime of thermal convection, which is of paramount importance for geophysical and astrophysical applications, there are two competing theories for the behaviour of \(\mathit{Nu}\) as \(\mathit{Ra}\) tends to infinity for arbitrary \(\mathit{Pr}\). These two proposed asymptotic scaling laws are conventionally called the 'classical' and 'ultimate' theories.
The classical theory claims \(\mathit{Nu}\propto\mathit{Pr}^{0}\mathit{Ra}^{1/3}\)[5] while the ultimate theory claims \(\mathit{Nu}\propto\mathit{Pr}^{1/2}\mathit{Ra}^{1/2}\)[6; 7], with the latter including logarithmic corrections in \(\mathit{Ra}\)[6]. The classical theory is based on the argument that, as \(\mathit{Ra}\to\infty\), the dimensional heat flux should become independent of the depth of the cell [5]. For the ultimate theory, Kraichnan [6] argued that the velocity boundary layers undergo a transition to shear turbulence near the rigid plates, which increases the system's efficiency in transporting the heat, while Spiegel [7] argued that the heat flux becomes independent of the molecular properties of the fluid, with both deriving the same scaling law. The Rayleigh number at which the transition to the ultimate scaling is presumed to occur is not known, and laboratory experiments and numerical simulations [8; 9; 10; 11; 12; 13] have reported different power-laws for a wide range of Rayleigh numbers.
For stably stratified turbulence, Bolgiano [14] and Obukhov [15] proposed the so-called Bolgiano-Obukhov (BO) phenomenology, in which buoyancy balances inertia in the momentum equation and the potential energy flux is approximately constant for length scales larger than the Bolgiano scale. These assumptions lead to kinetic and potential energy spectra of the forms \(\widetilde{E}_{u}(k)\propto k^{-11/5}\) and \(\widetilde{E}_{\theta}(k)\propto k^{-7/5}\), respectively, where \(k\) is the wavenumber. Despite originally being proposed for stably stratified flows, BO scaling has been reported for three-dimensional (3D) [16; 17; 18; 19] and 2D RBC [20; 21; 22]. On the other hand, Kolmogorov (K41) scaling [23], with \(\widetilde{E}(k)\propto k^{-5/3}\), has been observed for both kinetic and potential energy spectra in simulations of 3D RBC [24; 25; 26].
Boundary conditions and boundary layers strongly affect the turbulence properties and play central roles in the two competing theories for the Nusselt number scaling. In this study, we choose the most obvious theoretical approach to bypass boundary layer effects by considering a fully periodic domain [21; 24; 27; 28] for the Rayleigh-Benard problem, with an imposed constant, vertical temperature gradient. This set-up, however, places no bound on the energy of the resulting flows, owing to unbounded runaway solutions [29]. To avoid this unbounded growth of energy, we include a large-scale dissipation term that mimics the effect of friction when boundaries are present and allows statistically stationary solutions to be reached.
The majority of the literature on two-dimensional (2D) RBC focuses on the Rayleigh number dependence of the dynamics, with only a few studies (e.g., [30; 31; 32]) considering the effects of the Prandtl number. In this paper, we extensively study the effects of varying both the Prandtl and the Rayleigh number using numerical simulations of 2D RBC in a periodic domain driven by constant temperature gradient, while also considering hyperviscous simulations to permit the large scale separation which is crucial for the analysis of the multi-scale dynamics. Sec. II contains the dynamical equations, numerical methods and the definitions of the spectral and global variables under study. The results of our simulations are presented in Sec. III. After briefly investigating the behaviour of the system in the limits of zero and infinite Prandtl number, we analyse how the global variables scale with Prandtl and Rayleigh numbers and then discuss the effects of these dimensionless parameters on the spectral dynamics. Finally, we summarise our conclusions in Sec. IV.
## II Problem description
### Governing equations
We consider two-dimensional Rayleigh-Benard convection of a fluid heated from below in a periodic square cell \((x,y)\in[0,L]^{2}\). The temperature \(T(x,y,t)\) is decomposed as \(T=-\Delta Ty/L+\theta\), where \(\Delta T/L\) is the constant imposed temperature gradient and the temperature perturbation \(\theta(x,y,t)\) satisfies periodic boundary conditions. As usual, for simplicity we employ the Oberbeck-Boussinesq approximation [33; 34; 35], in which the kinematic viscosity \(\nu\) and the thermal diffusivity \(\kappa\) are taken to be constant while temperature dependence of the fluid density \(\rho\) is neglected except in the buoyancy term of the momentum equation.
The governing equations of the problem in two dimensions can be written in terms of \(\theta(x,y,t)\) and the streamfunction \(\psi(x,y,t)\) as follows:
\[\partial_{t}\nabla^{2}\psi+\{\psi,\nabla^{2}\psi\} =\alpha g\partial_{x}\theta+(-1)^{n+1}\nu\nabla^{2n+2}\psi+\mu\psi, \tag{1a}\] \[\partial_{t}\theta+\{\psi,\theta\} =\frac{\Delta T}{L}\partial_{x}\psi+(-1)^{n+1}\kappa\nabla^{2n}\theta, \tag{1b}\]
where \(\{A,B\}=\partial_{x}A\partial_{y}B-\partial_{y}A\partial_{x}B\) is the standard Poisson bracket, \(\alpha\) is the thermal expansion coefficient and \(g\) is the gravitational acceleration. In the presence of an inverse cascade, the kinetic energy of the large-scale modes grows to extreme values, forming a condensate whose amplitude growth is balanced by the viscous forces [36; 37]. Thus, to reach a turbulent stationary regime we have supplemented our system with a large-scale dissipative term \(\mu\psi\) that is responsible for saturating the inverse cascade when present. We consider both normal and hyperviscosity by raising the Laplacian to the power of \(n=1\) and \(n=4\), respectively. The hyperviscous case, albeit not physically realisable, gives a wider inertial range, as the diffusive and viscous terms act abruptly at much smaller scales compared to the normal viscosity case. In the limit of \(\nu\to 0\), \(\kappa\to 0\), and \(\mu\to 0\), the conserved quantity is \(E_{u}-\frac{\alpha gL}{\Delta T}E_{\theta}\), where the kinetic energy \(E_{u}\) and the potential energy \(E_{\theta}\) are defined by
\[E_{u}=\frac{1}{2}\langle|\nabla\psi|^{2}\rangle,\qquad E_{\theta}=\frac{1}{2} \langle\theta^{2}\rangle, \tag{2}\]
with the angle brackets \(\langle\cdot\rangle\) here denoting the spatiotemporal average.
Equations (1) depend on three dimensionless parameters, namely
\[Pr=\frac{\nu}{\kappa},\qquad Ra=\frac{\alpha g\Delta TL^{4n-1}}{\nu\kappa}, \qquad Rh=\mu\left(\frac{L^{5}}{\alpha g\Delta T}\right)^{1/2}, \tag{3}\]
which are the Prandtl number, Rayleigh number and friction Reynolds number, respectively, in accordance with
\[\mathbf{x}\sim L,\quad t\sim\frac{L^{2n}}{\kappa},\quad\psi\sim\frac{\kappa}{ L^{2n-2}},\quad\theta\sim\Delta T. \tag{4}\]
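As a minimal illustration of these definitions (not part of the original study), the dimensionless groups in Eq. (3) can be evaluated directly from the dimensional parameters; all variable names below are hypothetical.

```python
def dimensionless_numbers(nu, kappa, mu, alpha, g, dT, L, n=1):
    """Prandtl, Rayleigh and friction Reynolds numbers as defined in Eq. (3)."""
    Pr = nu / kappa
    Ra = alpha * g * dT * L ** (4 * n - 1) / (nu * kappa)
    Rh = mu * (L ** 5 / (alpha * g * dT)) ** 0.5
    return Pr, Ra, Rh
```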
We perform direct numerical simulations (DNS) of Eqs. (1) using the pseudospectral method [38]. We decompose the stream function into basis functions with Fourier modes in both the \(x\) and \(y\) directions, viz.
\[\psi(\mathbf{x},t)=\sum_{\mathbf{k}=-N/2}^{N/2}\widehat{\psi}_{\mathbf{k}}(t) e^{i\mathbf{k}\cdot\mathbf{x}}, \tag{5}\]
where \(\widehat{\psi}_{\mathbf{k}}\) is the amplitude of the \(\mathbf{k}=(k_{x},k_{y})\) mode of \(\psi\), and \(N\) denotes the number of aliased modes in the \(x\)- and \(y\)-directions. We decompose \(\theta\) in the same way. A third-order Runge-Kutta scheme is used for time advancement and the aliasing errors are removed with the two-thirds dealiasing rule [39]. In both the normal and hyperviscous simulations, we find that \(\mathit{Rh}=(2\pi)^{5/2}\) yields a saturated turbulent state that dissipates enough kinetic energy at large scales such that the kinetic energy spectrum peaks at \(k=2\) without over-damping the system. So, we fix \(\mathit{Rh}=(2\pi)^{5/2}\simeq 100\) while varying the Rayleigh and Prandtl numbers in the ranges \(6.2\times 10^{7}\leq\mathit{Ra}\leq 6.2\times 10^{11}\) and \(10^{-3}\leq\mathit{Pr}\leq 10^{2}\). To model large Rayleigh number dynamics, we set \(\mathit{Ra}=9.4\times 10^{49}\) in our hyperviscous simulations. Fig. 1 shows the parameter values simulated in the \((\mathit{Ra},\mathit{Pr})\)-plane as well as the resolution, \(N\), used in each case. Time-averaged quantities are computed over 1000 realisations once the system has reached a statistically stationary regime, sufficiently separated in time (at least 5000 numerical time steps) to ensure statistically independent realisations.
### Global and spectral quantities
Next we briefly outline the global and spectral flow properties that will be explored in our numerical simulations below. The energy spectra of the velocity field \(\widehat{E}_{u}(k,t)\) and the temperature field \(\widehat{E}_{\theta}(k,t)\), referred to as the kinetic energy and potential energy spectra, are defined as
\[\widehat{E}_{u}(k,t) =\frac{1}{2}\sum_{k\leq|\mathbf{k}|<k+\Delta k}k^{2}\left|\widehat {\psi}_{\mathbf{k}}(t)\right|^{2}, \tag{6a}\] \[\widehat{E}_{\theta}(k,t) =\frac{1}{2}\sum_{k\leq|\mathbf{k}|<k+\Delta k}\left|\widehat{ \theta}_{\mathbf{k}}(t)\right|^{2}, \tag{6b}\]
where the sum is performed over the Fourier modes with wavenumber amplitude \(k=|\mathbf{k}|=\sqrt{k_{x}^{2}+k_{y}^{2}}\) in a shell of width \(\Delta k=2\pi/L\). Using the Fourier transform, one can derive the evolution equations of kinetic and potential energy spectra from Eqs. (1), namely
\[\partial_{t}\widehat{E}_{u}(k,t) =-\partial_{k}\Pi_{u}(k,t)-D_{\nu}(k,t)-D_{\mu}(k,t)+\alpha gF_{ B}(k,t), \tag{7a}\] \[\partial_{t}\widehat{E}_{\theta}(k,t) =-\partial_{k}\Pi_{\theta}(k,t)-D_{\kappa}(k,t)+\frac{\Delta T}{ L}F_{B}(k,t). \tag{7b}\]
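A sketch of how the shell sums of Eqs. (6) can be computed from the Fourier coefficients is given below; FFT normalisation factors are omitted, and the wavenumber arrays `kx`, `ky` are assumed to be built as in the earlier sketch.

```python
import numpy as np

def shell_spectra(psi_hat, theta_hat, kx, ky, dk=1.0):
    """Shell-binned kinetic and potential energy spectra, Eqs. (6a)-(6b)."""
    kmag = np.sqrt(kx ** 2 + ky ** 2)
    shells = (kmag / dk).astype(int)          # shell index of each Fourier mode
    E_u = np.zeros(shells.max() + 1)
    E_t = np.zeros(shells.max() + 1)
    np.add.at(E_u, shells.ravel(), (0.5 * kmag ** 2 * np.abs(psi_hat) ** 2).ravel())
    np.add.at(E_t, shells.ravel(), (0.5 * np.abs(theta_hat) ** 2).ravel())
    return E_u, E_t
```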
The energy flux \(\Pi\) is a measure of the nonlinear cascades in turbulence [40]. The energy flux for a circle of radius \(k\) in the 2D wavenumber space is the total energy transferred from the modes within the circle to the modes outside
Figure 1: Parameter values simulated in the \((\mathit{Ra},\mathit{Pr})\)-plane with the resolution, \(N\), used in each case colour-coded in the legend. The black dashed line separates runs with normal viscosity (below line) from those with hyperviscosity (above line).
the circle. Consequently, we define the flux of kinetic energy \(\Pi_{u}(k,t)\) and potential energy \(\Pi_{\theta}(k,t)\) as
\[\Pi_{u}(k,t) =\sum_{k^{\prime}\leq k}T_{u}(k^{\prime},t), \tag{8a}\] \[\Pi_{\theta}(k,t) =\sum_{k^{\prime}\leq k}T_{\theta}(k^{\prime},t), \tag{8b}\]
where \(T_{u}(k,t)\) and \(T_{\theta}(k,t)\) are the non-linear kinetic and potential energy transfer across \(k\):
\[T_{u}(k,t) =-\sum_{k\leq|\mathbf{k}|<k+\Delta k}\widehat{\psi}_{\mathbf{k}} ^{*}(t)\widehat{\{\psi,\nabla^{2}\psi\}}_{\mathbf{k}}(t), \tag{9a}\] \[T_{\theta}(k,t) =\sum_{k\leq|\mathbf{k}|<k+\Delta k}\widehat{\theta}_{\mathbf{k} }^{*}(t)\widehat{\{\psi,\theta\}}_{\mathbf{k}}(t). \tag{9b}\]
The notation \(\widehat{\{\cdot\}}_{\mathbf{k}}\) represents the Fourier mode of the Poisson bracket expanded using Eq. (5), and the asterisk denotes complex conjugation.
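The transfer and flux definitions in Eqs. (8)-(9) translate into a short pseudospectral computation: the Poisson bracket is evaluated in physical space, projected onto each Fourier mode, shell-summed and accumulated. The sketch below uses a random placeholder field and omits FFT normalisation factors; it is not the authors' implementation.

```python
import numpy as np

N, L = 256, 2 * np.pi
k1d = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
kmag = np.sqrt(kx ** 2 + ky ** 2)

def bracket_hat(f_hat, g_hat):
    """Fourier coefficients of {f, g} = f_x g_y - f_y g_x, computed pseudospectrally."""
    f_x = np.fft.ifft2(1j * kx * f_hat).real
    f_y = np.fft.ifft2(1j * ky * f_hat).real
    g_x = np.fft.ifft2(1j * kx * g_hat).real
    g_y = np.fft.ifft2(1j * ky * g_hat).real
    return np.fft.fft2(f_x * g_y - f_y * g_x)

rng = np.random.default_rng(0)
psi_hat = np.fft.fft2(rng.standard_normal((N, N)))     # placeholder streamfunction

# Kinetic energy transfer T_u(k), Eq. (9a), and its cumulative flux Pi_u(k), Eq. (8a)
nl_hat = bracket_hat(psi_hat, -kmag ** 2 * psi_hat)    # {psi, nabla^2 psi}
shells = kmag.astype(int)
T_u = np.zeros(shells.max() + 1)
np.add.at(T_u, shells.ravel(), (-(np.conj(psi_hat) * nl_hat).real).ravel())
Pi_u = np.cumsum(T_u)                                  # flux across each wavenumber shell
```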
The spectra of the small-scale viscous dissipation \(D_{\nu}(k,t)\), the large-scale friction \(D_{\mu}(k,t)\) and the thermal dissipation \(D_{\kappa}(k,t)\) are defined as
\[D_{\nu}(k,t) =2\nu k^{2n}\widehat{E}_{u}(k,t), \tag{10a}\] \[D_{\mu}(k,t) =2\mu k^{-2}\widehat{E}_{u}(k,t),\] (10b) \[D_{\kappa}(k,t) =2\kappa k^{2n}\widehat{E}_{\theta}(k,t), \tag{10c}\]
and the buoyancy term \(F_{B}\) is given by
\[F_{B}(k,t)=\sum_{k\leq|\mathbf{k}|<k+\Delta k}ik_{x}\widehat{\psi}^{*}{}_{ \mathbf{k}}(t)\widehat{\theta}_{\mathbf{k}}(t). \tag{11}\]
The Nusselt number is a dimensionless measure of the averaged vertical heat flux, defined mathematically by
\[\mathit{Nu}=1+\frac{\langle\psi_{x}\theta\rangle}{\kappa\Delta T/L^{2n-1}}. \tag{12}\]
Using the above definition, one can derive the following exact relations for the kinetic and potential energy balances in the statistically stationary regime (as in [41]):
\[\epsilon_{u} =\epsilon_{\nu}+\epsilon_{\mu}=\frac{\nu^{3}}{L^{6n-2}}(\mathit{ Nu}-1)\frac{\mathit{Ra}}{Pr^{2}} \tag{13a}\] \[\epsilon_{\theta} =\epsilon_{\kappa}=\frac{\kappa\Delta T^{2}}{L^{2n}}(\mathit{Nu}-1) \tag{13b}\]
where \(\epsilon_{u}=\alpha g\langle\psi\theta_{x}\rangle\) is the injection rate of kinetic energy due to buoyancy, \(\epsilon_{\nu}=\nu\langle\psi\nabla^{2(n+1)}\psi\rangle\) is the viscous dissipation rate, \(\epsilon_{\mu}=\mu\langle\psi^{2}\rangle\) is the large scale dissipation rate, \(\epsilon_{\theta}=\frac{\Delta T}{L}\langle\psi_{x}\theta\rangle\) is the injection rate of potential energy due to buoyancy and \(\epsilon_{\kappa}=\kappa\langle\theta\nabla^{2n}\theta\rangle\) is the thermal dissipation rate.
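The Nusselt number of Eq. (12) can be evaluated from snapshots of \(\psi\) and \(\theta\); the sketch below computes \(\langle\psi_{x}\theta\rangle\) spectrally on a periodic grid and averages over a list of snapshots to approximate the spatiotemporal average. Variable names are hypothetical and this is not the diagnostic code used for the reported runs.

```python
import numpy as np

def nusselt(snapshots, kappa, dT, L, n=1):
    """Nu = 1 + <psi_x theta> / (kappa dT / L^(2n-1)), Eq. (12).
    `snapshots` is an iterable of (psi, theta) arrays on an N x N periodic grid,
    with axis 0 taken as the x direction."""
    flux = []
    for psi, theta in snapshots:
        N = psi.shape[0]
        kx = (2 * np.pi * np.fft.fftfreq(N, d=L / N))[:, None]   # d/dx along axis 0
        psi_x = np.fft.ifft2(1j * kx * np.fft.fft2(psi)).real
        flux.append(np.mean(psi_x * theta))
    return 1.0 + np.mean(flux) / (kappa * dT / L ** (2 * n - 1))
```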
### Runaway solutions
Upon linearising (1) about the conductive state (\(\psi=\theta=0\)), we find that infinitesimal solutions with \(\psi(\mathbf{x},t)=e^{i\mathbf{k}\cdot\mathbf{x}+\sigma t}\) are possible provided the normalised linear growth rate \(\sigma\) satisfies the relation
\[\left(\sigma+k^{2n}\right)\left(\sigma+Prk^{2n}+\frac{\mathit{Rh}\sqrt{RaPr}}{ k^{2}}\right)=\frac{\mathit{RaPr}(2\pi k_{x})^{2}}{k^{2}}. \tag{14}\]
Eq. (14) has two real roots for \(\sigma\), of which one is positive if and only if
\[\mathit{Rh}\sqrt{\frac{\mathit{Ra}}{Pr}}<\frac{\mathit{Ra}(2\pi k_{x})^{2}}{k ^{2n}}-k^{2n+2}. \tag{15}\]
One can show that \(\sigma\) is a monotonic decreasing function of \(k_{y}\), so the most dangerous modes are independent of \(y\), with
\[\psi(\mathbf{x},t)=e^{ik_{x}x+\sigma t},\qquad k_{x}\in\mathbb{Z}_{>0}. \tag{16}\]
Indeed, such a unidirectional mode satisfies the nonlinear governing equations (1) _exactly_ (because the nonlinear Poisson bracket terms are identically zero).
Although the maximum growth rate does not necessarily occur at the minimum wavenumber \(k_{x}=1\), it is straightforward to show that (15) is satisfied for _some_ (\(k_{x},k_{y}\)) if and only if it is satisfied at \((k_{x},k_{y})=(1,0)\). In other words, exact solutions of the problem (1) that grow exponentially without bound exist whenever
\[Rh\sqrt{\frac{Ra}{Pr}}<\frac{Ra}{(2\pi)^{2(n-1)}}-(2\pi)^{2n+2}. \tag{17}\]
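A small sketch of these linear-theory results is given below: a check of condition (17) and the largest root of the dispersion relation (14), written as a quadratic in \(\sigma\). The wavenumber convention of Eqs. (14)-(16) (with \(k_{x}\) entering the buoyancy term through the factor \(2\pi k_{x}\)) is taken as stated in the text; this is an assumption of the sketch, not a statement about the authors' code.

```python
import numpy as np

def runaway_possible(Ra, Pr, Rh, n=1):
    """Condition (17): exact single-mode solutions growing without bound exist."""
    return Rh * np.sqrt(Ra / Pr) < Ra / (2 * np.pi) ** (2 * (n - 1)) - (2 * np.pi) ** (2 * n + 2)

def growth_rate(Ra, Pr, Rh, kx, ky, n=1):
    """Largest root sigma of the dispersion relation (14) for a mode (kx, ky)."""
    k2 = kx ** 2 + ky ** 2
    b = k2 ** n + Pr * k2 ** n + Rh * np.sqrt(Ra * Pr) / k2
    c = k2 ** n * (Pr * k2 ** n + Rh * np.sqrt(Ra * Pr) / k2) - Ra * Pr * (2 * np.pi * kx) ** 2 / k2
    return 0.5 * (-b + np.sqrt(b ** 2 - 4 * c))
```

For instance, with \(Rh=(2\pi)^{5/2}\), \(n=1\) and the finite-Prandtl parameter ranges quoted above, `runaway_possible` evaluates to `True`, consistent with the discussion of runaway solutions in Sec. IV.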
## III Results
### Zero and infinite Prandtl number
Before considering the \(Pr\to\infty\) limit, we simplify the analysis by writing the non-dimensional form of Eqs. (1) in accordance with Eqs. (4), which yields
\[\partial_{t}\nabla^{2}\psi+\{\psi,\nabla^{2}\psi\} =RaPr\theta_{x}+Pr(-1)^{n+1}\nabla^{2(n+1)}\psi+Rh\sqrt{\frac{Ra} {Pr}}\,\psi, \tag{18a}\] \[\partial_{t}\theta+\{\psi,\theta\} =\psi_{x}+(-1)^{n+1}\nabla^{2n}\theta. \tag{18b}\]
As we take \(Pr\) to infinity, we ensure that \(Rh\sqrt{Ra/Pr}\) remains finite and non-zero, so as to maintain the effects of the large-scale dissipation. The system (18) thus reduces to
\[Ra\theta_{x} =\left[(-1)^{n}\nabla^{2(n+1)}-Rh\sqrt{\frac{Ra}{Pr}}\right]\psi, \tag{19a}\] \[\partial_{t}\theta+\{\psi,\theta\} =\psi_{x}+(-1)^{n+1}\nabla^{2n}\theta. \tag{19b}\]
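In this limit, Eq. (19a) is a linear relation that can be inverted mode-by-mode. Using the Fourier convention of Eq. (5), \(\nabla^{2(n+1)}\) acts as \((-1)^{n+1}k^{2(n+1)}\), so (19a) gives \(\widehat{\psi}_{\mathbf{k}}=-i\,Ra\,k_{x}\,\widehat{\theta}_{\mathbf{k}}/(k^{2n+2}+Rh\sqrt{Ra/Pr})\). The sketch below is an illustration of this inversion (the finite product \(Rh\sqrt{Ra/Pr}\) is passed as a single parameter); it is a derived convenience, not code from the study.

```python
import numpy as np

def psi_hat_infinite_pr(theta_hat, kx, ky, Ra, friction, n=1):
    """Invert Eq. (19a) mode-by-mode; `friction` stands for the finite product Rh*sqrt(Ra/Pr)."""
    k2 = kx ** 2 + ky ** 2
    denom = k2 ** (n + 1) + friction
    denom = np.where(denom == 0.0, np.inf, denom)      # guard the k = 0 mode
    return -1j * Ra * kx * theta_hat / denom
```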
In the complementary limit of zero Prandtl number, we have to rescale the variables according to
\[\{\psi,\theta,t\}\mapsto\{Pr\,\psi,\mathit{Pr}\,\theta,t/\mathit{Pr}\} \tag{20}\]
before letting \(\mathit{Pr}\to 0\), which removes the time derivative and advective term from the heat transport equation (18b). As in the \(\mathit{Pr}\to\infty\) case, we maintain the effects of the large-scale dissipation by ensuring that \(Rh\sqrt{Ra/Pr}\) is finite. This process reduces the system (18) to
\[\partial_{t}\nabla^{2}\psi+\{\psi,\nabla^{2}\psi\} =\mathit{Ra}\theta_{x}+(-1)^{n+1}\nabla^{2(n+1)}\psi+Rh\sqrt{ \frac{\mathit{Ra}}{Pr}}\psi, \tag{21a}\] \[\psi_{x} =(-1)^{n}\nabla^{2n}\theta. \tag{21b}\]
In Figure 2 we plot time series of the kinetic energy, \(E_{u}\) in both the \(\mathit{Pr}\to 0\) and \(\mathit{Pr}\to\infty\) limits. We set \(\mathit{Ra}=10^{7}\), \(n=1\), for four different values of \(\mathit{Rh}\sqrt{Ra/Pr}\), and the simulations are initialised with random initial data. The flow converges to a statistically steady state only in the case where \(\mathit{Pr}\to\infty\) and \(\mathit{Rh}\sqrt{Ra/Pr}=9\times 10^{6}\), a value only \(10\%\) smaller than the maximum value given by (17) at which all convection is suppressed. For all other parameter values attempted, a runaway solution takes over the dynamics and the energy grows without bound.
At present, we are not able to reliably obtain turbulent saturated states in the extreme Prandtl number limits, unless the large-scale dissipation is made very strong. Instead, for the remainder of the paper we restrict our attention to finite Prandtl number, for which we find that \(Rh=(2\pi)^{5/2}\) is sufficient to prevent runaway solutions and allow the system to reach turbulent saturated states.
### Finite Prandtl number: global variables
In Fig. 3(a)-(c) we show how the global quantities in the kinetic and potential energy balances (13) vary with \(\mathit{Pr}\) keeping \(\mathit{Ra}=6.2\times 10^{11}\), while in (d)-(f) we show how the same quantities vary with \(\mathit{Ra}\) while keeping \(\mathit{Pr}=1\). In both cases we use normal viscosity (\(n=1\)) and keep \(\mathit{Rh}=(2\pi)^{5/2}\) constant. Firstly, Fig. 3(a) and (d) show that the kinetic and potential energies remain virtually constant while \(\mathit{Pr}\) and \(\mathit{Ra}\) vary by at least four orders of magnitude. Secondly, Fig. 3(b) and (e) show that \(\epsilon_{u}\approx\epsilon_{\mu}\), which indicates that the majority of the kinetic energy, injected by buoyancy in the flow, is dissipated at large scales. This effect is caused by the inverse cascade of kinetic energy, which will be investigated in more detail in Sec. III.3. Note that \(\epsilon_{u}\) and \(\epsilon_{\mu}\) are almost independent of both \(\mathit{Pr}\) and \(\mathit{Ra}\), while the viscous dissipation scales like
\[\epsilon_{\nu}\propto\mathit{Pr}^{1/2}\mathit{Ra}^{-1/4}. \tag{22}\]
Finally, in Fig. 3(c) and (f), we see that \(\epsilon_{\theta}=\epsilon_{\kappa}\), as required by the potential energy balance (13b), and both quantities are also virtually independent of both \(\mathit{Pr}\) and \(\mathit{Ra}\).
To explain the scaling of the viscous dissipation rate (22), we now look at how the enstrophy \(\langle\omega^{2}\rangle\) varies with the Prandtl and Rayleigh numbers, where \(\omega=\nabla^{2}\psi\) is the vorticity of the flow. In Fig. 4 we plot enstrophy as a function of \(\mathit{Pr}\) with \(\mathit{Ra}=6.2\times 10^{11}\) fixed (a), and as a function of \(\mathit{Ra}\) with \(\mathit{Pr}=1\) fixed (b). We keep \(\mathit{Rh}=(2\pi)^{5/2}\) fixed throughout. From Fig. 4(a) we observe that enstrophy can be considered approximately independent of \(\mathit{Pr}\), because it varies by less than a factor of two over the five decades of Prandtl numbers considered, while Fig. 4(b) demonstrates that enstrophy scales like \(\mathit{Ra}^{1/4}\), i.e.,
\[\langle\omega^{2}\rangle\propto\mathit{Pr}^{0}\mathit{Ra}^{1/4}. \tag{23}\]
With normal viscosity, by definition we have \(\epsilon_{\nu}=\nu\langle\psi\nabla^{4}\psi\rangle=\nu\langle\omega^{2}\rangle\) and, writing \(\nu\propto(\mathit{Pr}/\mathit{Ra})^{1/2}\), we thus obtain the scaling of Eq. (22) that is observed in Fig. 3.
According to Fig. 3, the normalised energy injection rates \(\epsilon_{u}\) and \(\epsilon_{\theta}\) are both approximately constant in the range of \(\mathit{Pr}\) and \(\mathit{Ra}\) we considered. With \(n=1\) and \(\mathit{Ra}\mathit{Pr}\gg 1\), the net energy balances (13) thus produce the Nusselt number scaling
\[\mathit{Nu}\propto\mathit{Pr}^{1/2}\mathit{Ra}^{1/2}. \tag{24}\]
Figure 2: Time series of the kinetic energy, \(E_{u}\), in the \(\mathit{Pr}\to 0\) and \(\mathit{Pr}\rightarrow\infty\) limits at \(\mathit{Ra}=10^{7}\), \(n=1\), and four different values of \(\mathit{Rh}\sqrt{\mathit{Ra}/\mathit{Pr}}\). The time series are initiated with random initial conditions and a resolution \(N=256\) is used.
This relation is in agreement with the 'ultimate' scaling. In Fig. 5, we plot the Nusselt number compensated by the classical scaling, \(\mathit{Nu}\propto Pr^{0}Ra^{1/3}\), and by the ultimate scaling, \(\mathit{Nu}\propto Pr^{1/2}Ra^{1/2}\). The ultimate scaling provides a much more convincing collapse of the data, with the fit becoming increasingly accurate as \(\mathit{Ra}\) increases. Indeed, the ultimate scaling was expected to be exhibited by our simulations as we have effectively removed the boundary layers by applying periodic boundary conditions on the computational domain.
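The exponents quoted above can be estimated by an ordinary least-squares fit in log space. A minimal sketch is given below, where `Ra`, `Pr` and `Nu` are hypothetical arrays collecting one entry per run; a compensated-plot check like Fig. 5 then corresponds to verifying that `Nu / (Pr**0.5 * Ra**0.5)` is approximately constant across runs.

```python
import numpy as np

def fit_nu_scaling(Ra, Pr, Nu):
    """Fit log Nu = log C + a*log(Ra) + b*log(Pr) and return (a, b, C)."""
    Ra, Pr, Nu = map(np.asarray, (Ra, Pr, Nu))
    A = np.column_stack([np.log(Ra), np.log(Pr), np.ones_like(Ra)])
    (a, b, logC), *_ = np.linalg.lstsq(A, np.log(Nu), rcond=None)
    return a, b, np.exp(logC)
```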
Figure 4: Enstrophy \(\langle\omega^{2}\rangle\) as a function of \(\mathit{Pr}\) with \(\mathit{Ra}=6.2\times 10^{11}\) fixed (a), and as a function of \(\mathit{Ra}\) with \(\mathit{Pr}=1\) fixed (b). We keep \(n=1\) (normal viscosity) and \(\mathit{Rh}=(2\pi)^{5/2}\) fixed in all plots.
Figure 3: Spatiotemporal averages of the kinetic and potential energies and the terms in the kinetic and potential energy balances (13) as functions of \(\mathit{Pr}\) with \(\mathit{Ra}=6.2\times 10^{11}\) fixed (a–c) and as functions of \(\mathit{Ra}\) with \(\mathit{Pr}=1\) fixed (d–f). We keep \(n=1\) (normal viscosity) and \(\mathit{Rh}=(2\pi)^{5/2}\) fixed in all plots.
### Finite Prandtl number: spectra
In this section, we examine the time-averaged kinetic and potential energy spectra and spectral fluxes. To eliminate finite Rayleigh number effects and to have large enough scale separation, we focus on the hyperviscous simulations (i.e., \(n=4\) in Eqs. (1)) at \(\mbox{\it Ra}=9.4\times 10^{49}\) and \(\mbox{\it Rh}=(2\pi)^{5/2}\). Results with normal viscosity (i.e. \(n=1\) in Eqs. (1)) at \(\mbox{\it Ra}=6.2\times 10^{11}\) are also presented for comparison. According to the Bolgiano-Obukhov (BO) scaling [14; 15], the ratio of the kinetic to the potential energy spectra \(\widehat{E}_{u}(k)/\widehat{E}_{\theta}(k)\propto k^{-4/5}\), since \(\widehat{E}_{u}\propto k^{-11/5}\) and \(\widehat{E}_{\theta}\propto k^{-7/5}\). In Fig. 6 we plot \(\widehat{E}_{u}(k)/\widehat{E}_{\theta}(k)\) compensated by \(k^{4/5}\) for the hyperviscous simulations and, in the inset, for the runs with normal viscosity. Instead of finding a wavenumber range where this scaling is valid, we observe a \(k^{-0.3}\) power-law in the inertial range for all Prandtl numbers considered, leading us to the conclusion that BO scaling is not followed in our simulations. For the normal viscosity simulations we find the \(k^{-0.3}\) power-law again to be followed, but within a narrower wavenumber range, as expected. Fig. 6 also shows that, for \(\mbox{\it Pr}\ll 1\), the kinetic energy is much larger than the potential energy at large wavenumbers. This is expected as the small scales are dominated by thermal diffusivity when \(\mbox{\it Pr}\ll 1\) and so the potential energy is dissipated much more effectively than the kinetic energy. The opposite is true for \(\mbox{\it Pr}\gg 1\).
In Fig. 7(a) and (b), we plot the kinetic and potential energy spectra multiplied by powers of \(k\) chosen to best compensate for the power-laws exhibited by the spectra. The hyperviscous runs are shown in the main plots, and normal viscosity runs in the insets. The observed behaviour, with \(\hat{E}_{u}(k)\propto k^{-2.3}\) and \(\hat{E}_{\theta}(k)\propto k^{-1.2}\), is in contrast to 3D RBC with periodic boundary conditions, where the kinetic and potential energy exhibit \(k^{-5/3}\) spectra [24; 28; 30], similar to those observed in passive scalar turbulence [42]. It is close to, but not fully consistent with, BO phenomenology, according to which the exponents should be \(-11/5=-2.2\) and \(-7/5=-1.4\), respectively.
To understand the turbulent cascades, in Fig. 7(c) and (d), we plot the associated kinetic and potential energy
Figure 5: The Nusselt number compensated with the ‘classical’ and the ‘ultimate’ scaling in (a) and (b), respectively. The Prandtl numbers are colour-coded in the legend.
Figure 6: The ratio of the kinetic energy to the potential energy spectra \(\widehat{E}_{u}(k)/\widehat{E}_{\theta}(k)\) compensated by \(k^{4/5}\) for the hyperviscous simulations with \(\mbox{\it Ra}=9.4\times 10^{49}\), \(\mbox{\it Rh}=(2\pi)^{5/2}\) and different Prandtl numbers as colour-coded in the legend. The inset shows the same ratio but for runs with normal viscosity at \(\mbox{\it Ra}=6.2\times 10^{11}\).
fluxes normalised by the time-averaged injection rates of energy due to buoyancy. The positive potential energy flux suggests a strong direct cascade, while there is a weak inverse cascade of kinetic energy, which is typical for 2D flows [40]. For \(\mathit{Pr}=1\) these types of cascades are in agreement with [22]. The negative kinetic energy flux peaks at low wavenumbers, while the potential energy flux peaks at high wavenumbers. The inverse cascade of kinetic energy is not affected by the Prandtl number; however, the direct cascade of potential energy moves to higher wavenumbers along with the peak of \(\Pi_{\theta}(k)\) as \(\mathit{Pr}\) increases. We emphasise the wavenumber dependence of kinetic and potential energy fluxes, with \(\partial_{k}\Pi_{u}(k)>0\) and \(\partial_{k}\Pi_{\theta}(k)>0\) in the inertial range of wavenumbers for all of the Prandtl numbers considered.
In Fig. 8, we present the time-averaged spectra of the magnitudes of the terms in the kinetic and potential energy balances (7) for runs with hyperviscosity and three different values of \(\mathit{Pr}\in\left\{10^{-2},1,10^{2}\right\}\). The corresponding plots with normal viscosity and \(\mathit{Ra}=6.2\times 10^{11}\) are shown in Fig. 9. In both figures, the red dots indicate where the inertial flux terms \(\partial_{k}\Pi_{u}\) and \(\partial_{k}\Pi_{\theta}\) become negative. In the kinetic energy balance, we identify three distinct wavenumber ranges, labeled I to III in the plots. In region I, the kinetic energy injected by buoyancy is dissipated by the large-scale friction. In region II, the inertial term balances buoyancy, which is positive for all wavenumbers in this region, i.e.
\[\partial_{k}\Pi_{u}(k)\approx\alpha gF_{B}(k)>0. \tag{25}\]
This relation shows how the kinetic energy injected by buoyancy is cascaded to larger scales in the inertial range of wavenumbers and explains the \(k\) dependence of the kinetic energy flux we see in Fig. 7(c). Note that region II is largest for small Prandtl numbers, especially in the runs with normal viscosity, as evident from the results presented in Fig. 9(a)-(c). In region III, the balances between terms depend on the Prandtl number. For \(\mathit{Pr}=10^{-2}\) buoyancy decays rapidly and so small-scale viscous dissipation is balanced by the inertial term, which is negative in this range of wavenumbers. As \(\mathit{Pr}\) increases, buoyancy becomes more significant in the balance of region III between the small-scale viscous dissipation and the inertial term. For \(\mathit{Pr}\gtrsim 10^{2}\), small-scale viscous dissipation seems to be balanced by buoyancy rather than by the inertial term. This effect is shown more clearly in the runs with normal viscosity shown in Fig. 9(c), where the small-scale dissipation range is much larger.
In the potential energy balance, we identify two distinct wavenumber ranges, labeled A and B in the plots (see (d)-(f) in Figs. 8 and 9). In these plots we observe the inertial term to be balanced by buoyancy in region A and by small-scale thermal dissipation in region B. In other words, the potential energy injected by buoyancy in region A is cascaded to larger wavenumbers, where it is dissipated by thermal diffusivity.
We recall that the red dots in Figs. 8 and 9 show where the inertial terms of the kinetic and potential energy become negative. This sign change occurs primarily in region III for \(\partial_{k}\Pi_{u}(k)\) and region B for \(\partial_{k}\Pi_{\theta}(k)\), the latter corresponding to the large negative gradient of \(\Pi_{\theta}(k)\) observed in Fig. 7 at large wavenumbers. Near the boundary between regions A and B in Figs. 8 and 9(d)-(f), we observe that the inertial term exhibits fluctuations between positive and negative
Figure 7: The time averaged energy spectra compensated by best fit power laws (a,b), and spectral fluxes normalised by the time-averaged dissipation rates (c,d) for the runs with hyperviscosity at \(\mathit{Ra}=9.4\times 10^{49}\), \(\mathit{Rh}=(2\pi)^{5/2}\) and different Prandtl numbers as colour-coded in the legend. The insets shows the same quantities but for runs with normal viscosity at \(\mathit{Ra}=6.2\times 10^{11}\).
values over these wavenumbers. However, for the majority of the wavenumbers in region A we find that
\[\partial_{k}\Pi_{\theta}(k)\approx\frac{\Delta T}{L}F_{B}(k)>0. \tag{26}\]
This relation explains the \(k\) dependence of the potential energy flux we observe in Fig. 7(d) and demonstrates that the BO phenomenology, which assumes that the potential energy flux is constant in the inertial range of wavenumbers, does not hold for any of the Prandtl numbers we studied. Note that in the wavenumber range where region II and
Figure 8: Time-averaged spectra of the terms in the kinetic energy balance (7a) (a)–(c); and the potential energy balance (7b) (d)–(f) for the runs with hyperviscosity, \(\mathit{Ra}=9.4\times 10^{49}\), \(\mathit{Rh}=(2\pi)^{5/2}\), and \(\mathit{Pr}=10^{-2}\) (a), (d), \(\mathit{Pr}=1\) (b), (e), and \(\mathit{Pr}=10^{2}\) (c), (f). The terms displayed in the legends are defined in (8), (10) and (11). In (a)–(c) and (d)–(f), we observe three and two distinct dominant balances, respectively, annotated I–III and A–B. The red dots indicate where the inertial terms become negative.
Figure 9: Time-averaged spectra of the terms in the kinetic energy balance (7a) (a)–(c); and the potential energy balance (7b) (d)–(f) for the runs with normal viscosity, \(\mathit{Ra}=6.2\times 10^{11}\), \(\mathit{Rh}=(2\pi)^{5/2}\), and \(\mathit{Pr}=10^{-2}\) (a), (d), \(\mathit{Pr}=1\) (b), (e), and \(\mathit{Pr}=10^{2}\) (c), (f). The terms displayed in the legend are defined in (8), (10) and (11). In (a)–(c) and (d)–(f), we observe three and two distinct dominant balances, respectively, annotated I–III and A–B. The red dots indicate where the inertial terms become negative.
A overlap, we have \(\partial_{k}\Pi_{u}(k)/\alpha g\approx L\partial_{k}\Pi_{\theta}(k)/\Delta T \approx F_{B}(k)\), such that \(\Pi_{u}(k)-\frac{\alpha gL}{\Delta T}\Pi_{\theta}(k)\) is approximately constant. This is the inertial range of scales, where the viscous, diffusive and friction effects can be neglected and so the energy flux is constant. This is expected for the quantity \(E_{u}-\frac{\alpha gL}{\Delta T}E_{\theta}\), which is constant in the limit of \(\nu\to 0\), \(\kappa\to 0\), and \(\mu\to 0\).
## IV Conclusions
In this paper we study the effects of varying the Rayleigh and Prandtl numbers on the dynamics of two-dimensional Rayleigh-Benard convection without boundaries, i.e., with periodic boundary conditions. First, we focus on the limits of \(Pr\to 0\) and \(Pr\to\infty\). Our findings indicate that, unless large-scale dissipation is made so strong as to almost suppress convection completely, large-scale runaway solutions dominate the dynamics. Such runaway solutions at extreme Prandtl numbers have long been known, even with boundaries [43], and they have recently been studied more generally at low Rayleigh numbers [29]. Instead, we turn to finite Prandtl numbers.
For all of the parameter values simulated, the inequality (17) is satisfied, implying that the system admits exact single-mode solutions that grow exponentially without bound. Nevertheless, with finite Prandtl numbers, we find that non-linear interactions between modes continue to allow the system to find a turbulent stationary state. In general, whether or not the solution blows up must depend on the initial conditions, in a way that is not currently understood.
Examining the Prandtl and Rayleigh number dependence of the terms in the kinetic and potential energy balances, we find that the enstrophy scales as \(\langle\omega^{2}\rangle\propto Pr^{0}\mathit{Ra}^{1/4}\) and hence that the small-scale viscous dissipation scales as \(\epsilon_{\nu}=\nu\langle\omega^{2}\rangle\propto Pr^{1/2}\mathit{Ra}^{-1/4}\). On the other hand, we observe that the injection rate of kinetic energy \(\epsilon_{u}\) due to buoyancy is effectively independent of both the Prandtl and the Rayleigh number. Using this observation, we find that \(\mathit{Nu}\propto Pr^{1/2}\mathit{Ra}^{1/2}\), which agrees with the so-called ultimate scaling.
Looking at the kinetic and potential energy spectral fluxes, we find an inverse cascade of kinetic energy and a direct cascade of potential energy, in contrast to 3D RBC [26], where both kinetic and potential energies cascade toward small scales. The inverse cascade is independent of the Prandtl number, while the peak of the potential energy flux moves to higher wavenumbers as \(\mathit{Pr}\) increases. The kinetic and potential energy fluxes, \(\Pi_{u}(k)\) and \(\Pi_{\theta}(k)\), are not constant in the inertial range because both are balanced by the buoyancy term \(F_{B}(k)\), which is predominantly positive in this range of wavenumbers. These two balances imply a positive slope in \(k\) for the fluxes. Although we observe no range of wavenumbers where either \(\Pi_{u}(k)\) or \(\Pi_{\theta}(k)\) is constant, we find that they are connected by the relation \(\Pi_{u}(k)-\frac{\alpha gL}{\Delta T}\Pi_{\theta}(k)\approx\mathrm{constant}\) in the overlap between regions II and A shown in Figs. 8 and 9.
The kinetic energy spectra scale as \(\widehat{E}_{u}(k)\propto k^{-2.3}\), which is close to the \(k^{-11/5}\) behaviour of the BO phenomenology. However, the potential energy spectra scale as \(\widehat{E}_{\theta}(k)\propto k^{-1.2}\), which deviates significantly from the \(k^{-7/5}\) scaling predicted by the BO arguments. The deviation from the BO phenomenology is clearer when we test the prediction \(\widehat{E}_{u}(k)/\widehat{E}_{\theta}(k)\,k^{4/5}\approx\mathrm{constant}\): our spectra instead follow \(\widehat{E}_{u}(k)/\widehat{E}_{\theta}(k)\,k^{4/5}\propto k^{-0.3}\). For the hyperviscous simulations, the observed power-laws in the inertial range of the kinetic and potential energy spectra do not show any dependence on the Prandtl number. The only dependence we observe is in the dissipative range of wavenumbers, where the viscosity and the thermal diffusivity dominate the dynamics. In the spectra from the normal viscosity simulations, the effects of the Prandtl number are more significant due to the comparatively low scale separation. Hence, the inertial range over which a power-law behaviour can be observed is truncated.
This study clearly demonstrates that large scale separation is necessary to draw firmer conclusions about the spectral dynamics and the power-law exponents of two-dimensional Rayleigh-Benard convection. This requirement makes similar studies in three dimensions more challenging. The development of a phenomenology in which buoyancy acts as a broadband spectral forcing is required to interpret the current observations. Numerical simulations at Prandtl and Rayleigh numbers outside the ranges we investigated, i.e., \(Pr<10^{-3}\), \(Pr>10^{2}\) and \(\mathit{Ra}>10^{12}\), are challenging but would be of great interest, both to see if they agree with the hyperviscous simulations we have performed and to provide a more complete picture of the asymptotic regime of buoyancy driven turbulent convection.
|
2305.15351 | Algorithms for the Bin Packing Problem with Scenarios | This paper presents theoretical and practical results for the bin packing
problem with scenarios, a generalization of the classical bin packing problem
which considers the presence of uncertain scenarios, of which only one is
realized. For this problem, we propose an absolute approximation algorithm
whose ratio is bounded by the square root of the number of scenarios times the
approximation ratio for an algorithm for the vector bin packing problem. We
also show how an asymptotic polynomial-time approximation scheme is derived
when the number of scenarios is constant. As a practical study of the problem,
we present a branch-and-price algorithm to solve an exponential model and a
variable neighborhood search heuristic. To speed up the convergence of the
exact algorithm, we also consider lower bounds based on dual feasible
functions. Results of these algorithms show the competence of the
branch-and-price in obtaining optimal solutions for about 59% of the instances
considered, while the combined heuristic and branch-and-price optimally solved
62% of the instances considered. | Yulle G. F. Borges, Vinícius L. de Lima, Flávio K. Miyazawa, Lehilton L. C. Pedrosa, Thiago A. de Queiroz, Rafael C. S. Schouery | 2023-05-24T17:02:10Z | http://arxiv.org/abs/2305.15351v1 | # Algorithms for the Bin Packing Problem with Scenarios
###### Abstract
This paper presents theoretical and practical results for the bin packing problem with scenarios, a generalization of the classical bin packing problem which considers the presence of uncertain scenarios, of which only one is realized. For this problem, we propose an absolute approximation algorithm whose ratio is bounded by the square root of the number of scenarios times the approximation ratio for an algorithm for the vector bin packing problem. We also show how an asymptotic polynomial-time approximation scheme is derived when the number of scenarios is constant. As a practical study of the problem, we present a branch-and-price algorithm to solve an exponential model and a variable neighborhood search heuristic. To speed up the convergence of the exact algorithm, we also consider lower bounds based on dual feasible functions. Results of these algorithms show the competence of the branch-and-price in obtaining optimal solutions for about 59% of the instances considered, while the combined heuristic and branch-and-price optimally solved 62% of the instances considered.
**Keywords** Bin Packing Problem; Scenarios; Approximation Algorithm; Variable Neighborhood Search; Branch-and-Price Algorithm;
## 1 Introduction
Cutting and packing problems have been widely studied in the context of Operations Research, mainly because of their properties and real-world applicability (Wascher et al., 2007). Among these problems, the _Bin Packing Problem_ (BPP) asks to pack a set of one-dimensional items, each of a given size, into the least possible number of bins of given identical capacities, where the total size of items in a bin does not exceed the bin capacity. Several variants of this problem have been studied in the literature, either due to their theoretical interest, or their high applicability in different industries (Sweeney and Paternoster, 1992; Dahmani et al., 2013).
The BPP and its variants have been extensively investigated since the thirties (Kantorovich, 1960). Such problems are among the most studied in approximation contexts. A recent survey by Coffman et al. (2013) presented over 180 references related to approximation results for the BPP and its variants. In particular, the BPP is shown to be APX-hard, and indeed no algorithm has an approximation factor smaller than 3/2, unless P = NP (Vazirani, 2001). Regarding practical techniques for the BPP, a survey by Delorme et al. (2016a) reviewed models and solution methods, and experimentally compared the available software tools to solve the problem. Recent practical contributions for the BPP include the generalized arc-flow formulation by Brandao and Pedroso (2016), the cooperative parallel genetic algorithm by Kucukyilmaz and Kiziloz (2018), the reflect formulation by Delorme and Iori (2019), the branch-and-cut-and-price algorithm by Wei et al. (2020b) and the framework of de Lima et al. (2023).
Several variants of the BPP have also been extensively investigated in the literature. For example, in the variant with fragile objects, the capacity of a bin is directly influenced by its most fragile item (Clautiaux et al., 2014). In the BPP with precedence constraints, a precedent item cannot be packed in a bin later than its successors (Pereira, 2016). The variant with overlapping items assumes that some subsets of items, when packed in the same bin, occupy less capacity than the sum of their individual size (Grange et al., 2018). In the BPP with item fragmentation, items can be split and fragmentally packed (Bertazzi et al., 2019). In the generalized BPP, bins may have different cost and capacity and items are associated to a profit and are divided by compulsory (i.e., mandatory to load) and non-compulsory. The aim is to minimize the overall cost based on the bins cost and items profit (Baldi et al., 2019). In the online BPP, items arrive sequentially and each must be packed before the arrival of the next one, with no information about the next
items (Balogh et al., 2019). The temporal variant considers one additional dimension in the BPP, and the bin capacity should not be violated at any unit of a discretized time horizon (Dell'Amico et al., 2020).
One of the current challenges to solving practical logistic problems is to deal with the uncertainty arising from real-world applications (Saint-Guillan et al., 2021; Xu et al., 2016; Juan et al., 2018; Baghalian et al., 2013). In particular, one popular way to deal with this issue is by describing scenarios. A scenario is defined as a possible outcome that may arise depending on the problem's uncertain variables. Some authors use scenarios to consider the problem description as combinatorial rather than stochastic, as, e.g., Feuerstein et al. (2014). Considering the scarcity of works that apply this concept of scenarios to deal with uncertainty in bin packing problems and the existence of practical applications, Bodis and Balogh (2018) recently introduced the _Bin Packing Problem with Scenarios_ (BPPS).
The BPPS is a generalization of the BPP where each item belongs to a subset of scenarios, and a packing must respect the capacity constraints of the bins for each scenario individually. Whilst the set of scenarios is known in advance, only one of the scenarios will be realized. This introduces the possibility of packing in a single bin items whose combined size exceeds the bin capacity, as long as the bin capacity is respected in each individual scenario. Although Bodis and Balogh (2018) introduced three objective functions for the BPPS, we are concerned with the objective of minimizing the number of bins of the worst-case scenario, which is the most challenging one. In this way, the BPP is the particular case of the BPPS where there is a single scenario.
A generalization of the BPP that is similar to the BPPS is the _Vector Bin Packing Problem_ (VBPP), where bins and items have multiple dimensions and the bin capacity must be respected in all of its dimensions. The objective is the same as in the BPP, to minimize the number of bins. Caprara and Toth (2001) presented lower bounds, heuristics, and a branch-and-bound (B&B) algorithm for the VBPP with two dimensions. Alves et al. (2014) investigated dual feasible functions for this problem. Buljubasic and Vasquez (2016) proposed a local search algorithm in which a tabu search and a descent search with add and drop moves are used to explore the search space. In Hu et al. (2017), a set-covering model is solved by a branch-and-price (B&P) algorithm. Hessler et al. (2018) presented a stabilized branch-and-price algorithm with dual optimal cuts. Recently, Wei et al. (2020a) developed a branch-and-price algorithm for the vector packing with two dimensions, where a goal cut approach is used to obtain lower bounds and a dynamic programming with branch-and-bound solves the pricing problem, improving previous results of the literature.
To the best of our knowledge, the recent work by Bodis and Balogh (2018) is the only one in the literature concerned with the BPPS. Motivated by its interesting theoretical aspects, its general applicability and its relation to well-studied problems, this paper proposes theoretical and practical results for the BPPS. The contributions of the current work are the following: (i) an absolute approximation algorithm for the BPPS based on the VBPP; (ii) an _asymptotic polynomial time approximation scheme_ (APTAS) for the version of the problem with a constant number of scenarios; (iii) an exact method for the problem based on a branch-and-price algorithm for an exponential _Integer Linear Programming_ (ILP) model; and (iv) a _Variable Neighborhood Search_ (VNS) heuristic.
This paper is organized as follows. In Section 2 we provide formal definitions to be used to contextualize the contributions of the paper. In Section 3, we show that an approximate solution of the BPPS can be obtained by solving an instance of the VBPP, leading to an absolute approximation algorithm for the BPPS. In Section 4, we present an APTAS for the BPPS, when the number of scenarios is constant. As for practical results, Section 5 describes a VNS algorithm for the BPPS and Section 6 presents a branch-and-price algorithm to solve an exponential model for the BPPS. The evaluation of the proposed VNS and branch-and-price algorithms is performed in Section 7, from experiments based on the solution of randomly generated instances. Finally, Section 8 presents the conclusions and directions for future research.
## 2 Formal Definitions
In this section, we provide some definitions to contextualize the contributions of this work. For simplicity, throughout the paper, we denote, for any natural \(n\), the set \(\{1,\ldots,n\}\) simply by \([n]\).
Bin Packing Problem with Scenarios (BPPS)In the BPPS, we are given a number \(d\) of scenarios, a set \(\mathcal{I}=[n]\) of \(n\) items, with each \(i\in\mathcal{I}\) having a size \(s_{i}\in\mathbb{Q}_{+}\) and a set of scenarios \(\mathcal{K}_{i}\subseteq[d]\), and an unlimited number of identical bins of capacity \(W\in\mathbb{Q}_{+}\). For each \(k\in[d]\), the set of items in scenario \(k\) is denoted by \(S_{k}=\{i\in\mathcal{I}\mid k\in\mathcal{K}_{i}\}\). A solution \(\mathcal{B}\) of the BPPS is a partition of \(\mathcal{I}\) such that for each part \(B\in\mathcal{B}\) and for each scenario \(k\in[d]\), \(\sum_{i\in B\cap S_{k}}s_{i}\leq W\). The objective of the BPPS is to find a solution that minimizes
\[V_{BPPS}(\mathcal{B})=\max_{k\in[d]}|\{B\in\mathcal{B}:B\cap S_{k}\neq\emptyset \}|.\]
The objective corresponds to minimizing the number of bins of the worst-case scenario, i.e., the scenario with the largest number of bins. A closely related problem is the VBPP.
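To make the definition concrete, the following is a minimal sketch (not from the paper) of how a candidate packing can be checked and evaluated; `size` maps each item to \(s_{i}\) and `scen` maps each item to its scenario set \(\mathcal{K}_{i}\).

```python
def bpps_feasible(bins, size, scen, W=1.0):
    """Every bin must respect the capacity W in every scenario separately."""
    scenarios = set().union(*scen.values())
    return all(sum(size[i] for i in B if k in scen[i]) <= W
               for B in bins for k in scenarios)

def bpps_value(bins, scen):
    """V_BPPS: number of non-empty bins in the worst-case scenario."""
    scenarios = set().union(*scen.values())
    return max(sum(1 for B in bins if any(k in scen[i] for i in B))
               for k in scenarios)
```

For example, `bpps_value([{1, 2}], {1: {1}, 2: {2}})` returns 1 even if the two items have combined size larger than the capacity, since they never appear together in a common scenario.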
Vector Bin Packing Problem (VBPP)In the VBPP, we are given a number \(d\) of resources, a set \(\mathcal{I}=[n]\) of \(n\) items, each \(i\in\mathcal{I}\) having a vector \(s_{i}\in\mathbb{Q}_{+}^{d}\) of resource consumption, and an unlimited number of identical bins of resource capacity given as a \(d\)-dimensional vector \(W=(w_{1},w_{2},\ldots,w_{d})\in\mathbb{Q}_{+}^{d}\). A solution \(\mathcal{B}\) is a partition of \(\mathcal{I}\) such that for each part \(B\in\mathcal{B}\) and each resource \(k\in[d]\), \(\sum_{i\in B}s_{ik}\leq w_{k}\). The objective is to find a solution that minimizes
\[V_{\text{VBPP}}(\mathcal{B})=|\mathcal{B}|.\]
The VBPP is a generalization of the BPP, and thus it is also APX-hard. Woeginger (1997) showed that there is no APTAS for the two-dimensional version of this problem unless P = NP. Chekuri and Khanna (2004) gave an asymptotic \(O(\ln d)\)-approximation for the case where \(d\) is a fixed constant. Later on, this result was improved to \(1+\ln d\) by Bansal et al. (2009). Also, as noted by Christensen et al. (2017), the APTAS of Fernandez de la Vega and Lueker (1981) implies a \((d+\varepsilon)\)-approximation for the VBPP.
For the theoretical results of this paper in Sections 3 and 4, we assume, without loss of generality, that the instances of the BPPS and the VBPP are normalized by a proper scaling of the items, so that the capacity of the bins correspond \(1\) on each dimension.
An important concept in both theoretical and practical results for the BPP and related variants is the one of _cutting pattern_: a feasible combination of items in a single bin. For the BPPS, we represent a cutting pattern as a vector \(p=(a_{1p},\ldots,a_{np},b_{1p},\ldots,b_{dp})\) of binary coefficients, such that: each coefficient \(a_{ip}\) is equal to \(1\) if and only if item \(i\) is in pattern \(p\), and each coefficient \(b_{kp}\) is equal to \(1\) if and only if some item of the scenario \(k\) is in pattern \(p\). We denote by \(\mathcal{P}\) the set of all feasible patterns for the BPPS.
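A cutting pattern can be materialised directly from this definition. The sketch below (hypothetical helper, not part of the algorithms in this paper) builds the binary vector \((a_{1p},\ldots,a_{np},b_{1p},\ldots,b_{dp})\) for a given subset of items and reports whether the pattern is feasible.

```python
def make_pattern(B, n, d, size, scen, W=1.0):
    """Return (a, b, feasible) for the bin B: a[i-1] = 1 iff item i is in B,
    b[k-1] = 1 iff some item of scenario k is in B."""
    a = [1 if i in B else 0 for i in range(1, n + 1)]
    b = [1 if any(k in scen[i] for i in B) else 0 for k in range(1, d + 1)]
    feasible = all(sum(size[i] for i in B if k in scen[i]) <= W
                   for k in range(1, d + 1))
    return a, b, feasible
```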
## 3 An Absolute Approximation Algorithm
In the following, we consider a mapping between the set of instances of the BPPS and a subset of instances of the VBPP. Starting with a normalized instance \(I=(d,\mathcal{I},\mathcal{K},s)\) of the BPPS, we construct a normalized instance \(I^{\prime}=(d,\mathcal{I},s^{\prime})\) of the VBPP. For that, let, for each \(i\in\mathcal{I}\) and \(k\in[d]\),
\[s_{ik}=\begin{cases}s_{i}&\text{if }i\in S_{k},\\ 0&\text{otherwise}.\end{cases}\]
It is simple to verify that a solution \(\mathcal{B}\) for \(I\) is a solution for \(I^{\prime}\), and vice-versa.
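The construction of \(I^{\prime}\) is straightforward to write down; a sketch with hypothetical containers (items mapped to sizes and scenario sets) follows.

```python
def to_vbpp(size, scen, d):
    """Resource-consumption vectors of the reduced VBPP instance:
    s_ik = s_i if item i belongs to scenario k, and 0 otherwise."""
    return {i: [size[i] if k in scen[i] else 0.0 for k in range(1, d + 1)]
            for i in size}
```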
**Definition 1**.: A solution \(\mathcal{B}\) is minimal for the BPPS if, for any two bins \(A,B\in\mathcal{B}\), there exists a scenario \(k\), such that
\[\sum_{i\in A\cap S_{k}}s_{i}+\sum_{i\in B\cap S_{k}}s_{i}>1.\]
Notice that the value of \(\mathcal{B}\) for the BPPS is not larger than the value of \(\mathcal{B}\) for the VBPP. In the opposite direction, we have Theorem 1.
**Theorem 1**.: If \(\mathcal{B}\) is a minimal solution for the BPPS, then \(\,V_{\text{VBPP}}(\mathcal{B})\leq\sqrt{d}\,V_{\text{BPPS}}(\mathcal{B})\).
Proof.: We say that two bins \(A\) and \(B\) are incompatible in a scenario \(k\) if both bins, \(A\) and \(B\), are non-empty in scenario \(k\). Let \(G\) be a multigraph where each bin represents a vertex, and for each pair of bins \(A\) and \(B\) and scenario \(k\), there is one edge between \(A\) and \(B\) (labeled as \(k\)) if they are incompatible in \(k\).
For each scenario \(k\in[d]\), we define \(h_{k}=|\{B\in\mathcal{B}:B\cap S_{k}\neq\emptyset\}|\), i.e., \(h_{k}\) is the number of bins used by \(\mathcal{B}\) in \(k\). Let \(h=\max_{k\in[d]}h_{k}\), and notice that, since, for each scenario, there is an edge for each two used bins, the number of edges in \(G\) is
\[|E(G)|=\sum_{k=1}^{d}\frac{h_{k}(h_{k}-1)}{2}\leq d\frac{h(h-1)}{2}.\]
Let \(r\) be the number of bins. Since \(\mathcal{B}\) is minimal, any two bins are incompatible in at least one scenario and \(G\) must have at least one edge between any two vertices, i.e., \(|E(G)|\geq\frac{r(r-1)}{2}\). Thus,
\[\frac{r(r-1)}{2}\leq d\frac{h(h-1)}{2},\]
and, since \(r\leq dh\), as each bin is non-empty in at least one scenario, we have that
\[r^{2}\leq dh^{2}+r-dh\leq dh^{2}.\]
The result follows as \(r=\,V_{\text{VBPP}}(\mathcal{B})\) and \(h=\,V_{\text{BPPS}}(\mathcal{B})\)
**Corollary 1**.: Suppose there exists an \(\alpha\)-approximation for the VBPP, then there exists an \(\alpha\sqrt{d}\)-approximation for the BPPS.
Proof.: Consider an instance \(I\) of BPPS and the instance \(I^{\prime}\) of VBPP obtained as described above. Also, let \(\mathcal{B}\) be the solution found by the \(\alpha\)-approximation for instance \(I^{\prime}\), \(\mathcal{B}^{*}_{V}\) be an optimal solution for VBPP with instance \(I^{\prime}\) and \(\mathcal{B}^{*}_{S}\) be a minimal optimal solution for BPPS with instance \(I\). Then, we have
\[V_{\textit{BPPS}}(\mathcal{B})\leq\,V_{\textit{VBPP}}(\mathcal{B})\leq\alpha\, V_{\textit{VBPP}}(\mathcal{B}^{*}_{V})\leq\alpha\,V_{\textit{VBPP}}( \mathcal{B}^{*}_{S})\leq\alpha\sqrt{d}\,V_{\textit{BPPS}}(\mathcal{B}^{*}_{S}).\qed\]
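The arguments above use minimal (optimal) solutions; such solutions always exist, since merging two bins whenever their union is feasible in every scenario never increases the number of bins used in any scenario. A greedy sketch of this normalisation (illustrative, not an algorithm from the paper):

```python
def make_minimal(bins, size, scen, W=1.0):
    """Repeatedly merge any two bins whose union is feasible in every scenario."""
    bins = [set(B) for B in bins if B]
    merged = True
    while merged:
        merged = False
        for a in range(len(bins)):
            for b in range(a + 1, len(bins)):
                union = bins[a] | bins[b]
                scenarios = set().union(*(scen[i] for i in union))
                if all(sum(size[i] for i in union if k in scen[i]) <= W
                       for k in scenarios):
                    bins[a] = union
                    del bins[b]
                    merged = True
                    break
            if merged:
                break
    return bins
```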
The bound given by this strategy cannot improve the dependency on \(d\), by the following lemma.
**Theorem 2**.: For each \(d\geq 1\), there exists an instance \(I\) of the BPPS with \(d\) scenarios and a solution \(\mathcal{B}\) for \(I\), which is minimal for the BPPS, such that \(\,V_{\textit{VBPP}}(\mathcal{B})\geq\frac{\sqrt{d}}{2}\,V_{\textit{BPPS}}( \mathcal{B})\).
Proof.: Let \(r=\left\lceil\sqrt{d}\right\rceil\). We construct an instance \(I=(d,\mathcal{I},\mathcal{K},s)\) that contains \(r\) items.
We consider a complete graph \(G\) with \(r\) vertices. For each vertex \(i\) of \(G\), create an item \(i\) with size \(s_{i}=1\), and for each edge \(\{i,j\}\) of \(G\), create a scenario \(k\) with items \(S_{k}=\{i,j\}\). Notice that each item is in exactly \(r-1\) scenarios, and that the number of scenarios is
\[|E(G)|=\frac{r(r-1)}{2}=\frac{\left\lceil\sqrt{d}\right\rceil(\left\lceil \sqrt{d}\right\rceil-1)}{2}\leq\frac{(\sqrt{d}+1)\sqrt{d}}{2}\leq d.\]
Now, we create a solution \(\mathcal{B}\) with \(r\) bins, such that, for each item \(i\), there is one bin \(B_{i}=\{i\}\). We claim that \(\mathcal{B}\) is minimal for the BPPS. Indeed, for any two bins \(B_{i}\) and \(B_{j}\), items \(i\) and \(j\) are contained in a common scenario \(k\), with \(S_{k}=\{i,j\}\), and \(s_{i}+s_{j}>1\).
Notice that \(\,V_{\textit{VBPP}}(\mathcal{B})=r\). Let \(h=V_{\textit{BPPS}}(\mathcal{B})\), so \(h\) is the maximum number of non-empty bins in a given scenario. Since each scenario contains exactly 2 items, then \(h=2\). Therefore,
\[r=\lceil\sqrt{d}\rceil\geq\frac{\sqrt{d}}{2}2=\frac{\sqrt{d}}{2}h.\qed\]
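The family of instances used in this proof is easy to generate explicitly, which can be handy for testing; a sketch:

```python
from itertools import combinations
from math import ceil, sqrt

def tight_instance(d):
    """Theorem 2 construction: r = ceil(sqrt(d)) unit-size items and one scenario
    per pair of items (an edge of the complete graph on the items)."""
    r = ceil(sqrt(d))
    size = {i: 1.0 for i in range(1, r + 1)}
    scen = {i: set() for i in range(1, r + 1)}
    for k, (i, j) in enumerate(combinations(range(1, r + 1), 2), start=1):
        scen[i].add(k)
        scen[j].add(k)
    return size, scen
```

For \(d\geq 2\), packing each item into its own bin gives a minimal solution with \(\,V_{\textit{VBPP}}=r\) and \(\,V_{\textit{BPPS}}=2\), as in the proof above.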
Finally, using the \(1+\ln d\)-asymptotic approximation algorithm of Bansal et al. (2009) for the VBPP (for constant \(d\)) and the fact that the APTAS of Fernandez de la Vega and Lueker (1981) implies a \((d+\varepsilon)\)-approximation for the VBPP Christensen et al. (2017), we have the following result.
**Corollary 2**.: There exists a \((d+\varepsilon)\sqrt{d}\)-approximation for the BPPS. Also, for any constant \(d\), there exists a \((1+\ln d)\sqrt{d}\)-approximation for the BPPS.
## 4 An Asymptotic Polynomial Time Approximation Scheme
We present an APTAS for the BPPS when the number of scenarios \(d\) is a constant. First, we observe that, if all items are large and the number of distinct item sizes is bounded by a constant, then the number of valid patterns of items in a bin is also bounded by a constant. It implies that for this restricted case, a polynomial algorithm can be readily obtained by simply enumerating the frequencies of all patterns.
To obtain such a restricted instance, we use the _linear grouping_ technique, following Fernandez de la Vega and Lueker (1981), by grouping items of similar sizes and creating a map from smaller to larger items. A naive approach does not work, however, since one must take into account the scenarios. Thus, we group items according to their scenarios separately, but we do the mapping simultaneously.
Finally, to solve a general instance, we combine small items into artificial large items. Again, this is done on a per-scenario basis, so as only compatible items are combined. If the small items do not fit exactly into the area of the artificial items, then this change may cause bins to be slightly augmented. The surpassing items are then relocated to new bins greedily, increasing the number of bins by just a fraction of the optimal value.
In the following, we use the notion of type. The _type_ of item \(i\) is the set of scenarios which contain \(i\). The set of all types is denoted by \(\mathcal{T}\). Also, the value of an optimal solution for an instance \(I\) is denoted by \(\text{OPT}(I)\).
### Restricted instances
In this subsection, we consider instances of the BPPS in which the items are large and the number of distinct sizes is bounded by a constant. Precisely, let \(V\) be a set of positive numbers and consider a constant \(\varepsilon\) such that \(0<\varepsilon\leq 1/4\) and \(1/\varepsilon\) is an integer. An instance \((d,\mathcal{I},\mathcal{K},s)\) of the BPPS is said to be _\(V\)-restricted_ if \(\{s_{i}:i\in\mathcal{I}\}=V\) and \(s_{i}\geq\varepsilon^{2}\) for every \(i\in\mathcal{I}\).
Recall that a bin \(B\subseteq\mathcal{I}\) is feasible if, for any scenario \(k\in[d]\), \(\sum_{i\in B\cap S_{k}}s_{i}\leq 1\). For a type \(t\in\mathcal{T}\) and \(v\in V\), let \(B_{tv}\) be the subset of items in \(B\) with type \(t\) and size \(v\). Observe that the family of all sets \(B_{tv}\) form a partition of \(B\). The _pattern_ of \(B\) is the vector \(\left(|B_{tv}|\right)_{t\in\mathcal{T},v\in V}\).
**Lemma 1**.: If \(|V|\) is constant, the number of distinct patterns for a \(V\)-restricted instance is bounded by a constant.
Proof.: Consider a feasible bin \(B\). Since \(s_{i}\geq\varepsilon^{2}\), for any \(i\in B\),
\[|B|\varepsilon^{2}\leq\sum_{i\in B}s_{i}\leq\sum_{k=1}^{d}\sum_{i\in B\cap S_{ k}}s_{i}\leq\sum_{k=1}^{d}1=d,\]
where the last inequality follows because \(B\) is feasible. It means that the number of items in any bin is at most \(d/\varepsilon^{2}\). Since a vector corresponding to a pattern has \(|\mathcal{T}||V|\) elements, the number of patterns is at most \(\left(1+d/\varepsilon^{2}\right)^{|\mathcal{T}||V|}\), which is constant since \(d\) is constant.
**Lemma 2**.: If \(|V|\) is constant, then an optimal solution of a \(V\)-restricted instance can be computed in polynomial time.
Proof.: Consider a \(V\)-restricted instance \(I\), and let \(M\) be the set of distinct patterns for \(I\). Given a solution \(\mathcal{B}\) and a pattern \(p\in M\), the number of bins in \(\mathcal{B}\) whose (non-empty) pattern is \(p\) is denoted by \(n_{p}\). Clearly \(n_{p}\leq|\mathcal{I}|\) for every \(p\in M\), as there are only \(|\mathcal{I}|\) items, and each bin has at least one item. The configuration of \(\mathcal{B}\) is the vector \(\left(n_{p}\right)_{p\in M}\). Thus, the number of possible configurations is bounded by \(|\mathcal{I}|^{|M|}\), which is polynomial since, by Lemma 1, \(|M|\) is constant.
Notice that, given a vector \(\left(n_{p}\right)_{p\in M}\), one can either find a solution \(\mathcal{B}\) with this configuration, or decide there is no such solution. Indeed, it is sufficient to count the total number of items of each type and size over all the patterns. Therefore, one can list all configurations in polynomial time and return the one with the minimum value.
### An APTAS for large items
Suppose now that we have an instance \(I=(d,\mathcal{I},\mathcal{K},s)\), such that all items are large, i.e., for each \(i\in\mathcal{I}\), we have \(s_{i}\geq\varepsilon^{2}\), but the number of distinct sizes is not necessarily bounded by a constant.
**Definition 2**.: Given instances \(I=(d,\mathcal{I},\mathcal{K},s)\) and \(\bar{I}=(d,\bar{\mathcal{I}},\bar{\mathcal{K}},\bar{s})\), we say that \(I\) dominates \(\bar{I}\) if there exists a subset \(\mathcal{I}^{\prime}\subseteq\mathcal{I}\) and a bijection \(f:\mathcal{I}^{\prime}\to\bar{\mathcal{I}}\), such that, for every \(i\in\mathcal{I}^{\prime}\), items \(i\) and \(f(i)\) have the same type and \(s_{i}\geq\bar{s}_{f(i)}\). In this case, we write \(I\succcurlyeq\bar{I}\).
**Lemma 3**.: If \(I\succcurlyeq\bar{I}\), then \(\operatorname{OPT}(I)\geq\operatorname{OPT}(\bar{I})\).
In the following, we create an instance \(\bar{I}=(d,\bar{\mathcal{I}},\bar{\mathcal{K}},\bar{s})\) with \(I\succcurlyeq\bar{I}\), such that all items of \(\bar{I}\) are large and the number of distinct sizes is bounded by a constant.
First, for each \(t\in\mathcal{T}\), let \(\mathcal{I}_{t}\) be the set of items of type \(t\) and let \(m=\frac{2^{d}}{\varepsilon^{3}}-1\). We consider the set \(\mathcal{I}_{t}\) sorted in decreasing order of size, and build \(m+1\) groups of consecutive items, obtaining a partition \(\{G_{t}^{0},G_{t}^{1},\ldots,G_{t}^{m}\}\), where the first \(m\) groups have size \(\lceil|\mathcal{I}_{t}|/(m+1)\rceil\), and the last group either has the same size, or is smaller.
Now, for each \(t\in\mathcal{T}\) and \(\ell\in\{1,2,\ldots,m\}\), we create a set \(\bar{G}_{t}^{\ell}\) as follows: for each item \(i\in G_{t}^{\ell}\), add an item to \(\bar{G}_{t}^{\ell}\) with the same set of scenarios \(t\), and with size equal to the size of the largest item in \(G_{t}^{\ell}\). Let \(\bar{\mathcal{I}}\) be the union of all sets \(\bar{G}_{t}^{\ell}\), and \(\bar{I}\) be the instance induced by \(\bar{\mathcal{I}}\).
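For one type \(t\), this linear-grouping step can be sketched in Python as follows (an illustration under the paper's assumption that \(1/\varepsilon\) is an integer; it is not the authors' implementation).

```python
import math

def round_type(sizes_of_type_t, d, eps):
    """Linear grouping for one type t: returns the items of G_t^0 (to be packed
    separately) and the rounded sizes forming bar{G}_t^1, ..., bar{G}_t^m."""
    m = int(2 ** d / eps ** 3) - 1                 # as in the text; 1/eps is an integer
    sizes = sorted(sizes_of_type_t, reverse=True)
    q = math.ceil(len(sizes) / (m + 1))            # size of the first m groups
    groups = [sizes[l * q:(l + 1) * q] for l in range(m + 1)]
    rounded = []
    for g in groups[1:]:
        if g:
            rounded.extend([g[0]] * len(g))        # round each item up to the group maximum
    return groups[0], rounded
```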
**Lemma 4**.: \(I\succcurlyeq\bar{I}\)_._
Proof.: For each \(t\in\mathcal{T}\) and \(\ell\in\{1,2,\ldots,m\}\), as \(|G_{t}^{\ell-1}|\geq|G_{t}^{\ell}|=|\bar{G}_{t}^{\ell}|\), we can find a bijection \(f\) between a subset of \(G_{t}^{\ell-1}\) and \(\bar{G}_{t}^{\ell}\) such that \(i\) and \(f(i)\) have the same type. Finally, as \(s_{i}\geq s_{j}\) for every \(i\in G_{t}^{\ell-1}\) and \(j\in G_{t}^{\ell}\), and \(\bar{s}_{f(i)}\) is the maximum size of an item in \(G_{t}^{\ell}\), we have that \(s_{i}\geq\bar{s}_{f(i)}\).
For a type \(t\in\mathcal{T}\), the group of the largest items is \(G_{t}^{0}\). For each item \(i\) in this group, we pack it into a separate bin \(\{i\}\). By joining all these bins, we obtain a packing \(\mathcal{P}^{0}\).
**Lemma 5**.: \(V_{\text{BPPS}}(\mathcal{P}^{0})\leq\varepsilon\operatorname{OPT}(I)+2^{d}\)_._
Proof.: Let \(t\in\mathcal{T}\) and fix an arbitrary scenario \(k\) of \(t\). We consider an optimal solution for \(I\), and let \(\mathcal{P}_{k}\) be the set of bins used in scenario \(k\) in this solution. Therefore, \(|\mathcal{P}_{k}|\leq\operatorname{OPT}(I)\). Observe that, since \(k\) belongs to \(t\), the set of items included in bins of \(\mathcal{P}_{k}\) must contain \(\mathcal{I}_{t}\). As, for each \(i\in\mathcal{I}_{t}\), \(s_{i}\geq\varepsilon^{2}\),
\[\varepsilon^{2}|\mathcal{I}_{t}|\leq\sum_{i\in\mathcal{I}_{t}}s_{i}\leq\sum_{ B\in\mathcal{P}_{k}}\sum_{i\in B\cap S_{k}}s_{i}\leq\sum_{B\in\mathcal{P}_{k}}1=| \mathcal{P}_{k}|\leq\operatorname{OPT}(I).\]
This implies that the value of \(\mathcal{P}^{0}\) can be bounded by
\[\sum_{t\in\mathcal{T}}|G_{t}^{0}|=\sum_{t\in\mathcal{T}}\left\lceil\frac{\varepsilon^{3}}{2^{d}}\left|\mathcal{I}_{t}\right|\right\rceil\leq 2^{d}+\sum_{t\in\mathcal{T}}\frac{\varepsilon\operatorname{OPT}(I)}{2^{d}}\leq\varepsilon\operatorname{OPT}(I)+2^{d}.\qed\]
To obtain a packing of the remaining items, we solve instance \(\bar{I}\). For this, let \(V\) be the set of distinct sizes in \(\bar{I}\). For each \(t\in\mathcal{T}\), we created \(m\) groups, where each group has items of the same size. Therefore, \(|V|\leq 2^{d}m\). This means that \(\bar{I}\) is a \(V\)-restricted instance for \(|V|\) bounded by a constant. Using Lemma 2, we obtain an optimal solution \(\bar{\mathcal{P}}^{1}\) for \(\bar{I}\) in polynomial time. Since \(I\succcurlyeq\bar{I}\), we can obtain a solution \(\mathcal{P}^{0}\cup\mathcal{P}^{1}\) for \(I\), where \(\mathcal{P}^{1}\) is obtained from \(\bar{\mathcal{P}}^{1}\) by replacing the items of \(\bar{G}_{t}^{\ell}\) by the items of \(G_{t}^{\ell}\), for every \(t\in\mathcal{T}\) and \(\ell\in\{1,\ldots,m\}\). Observe that \(\mathcal{P}^{1}\) is feasible: each item of \(\bar{I}\) is replaced by an item of the same type and of no larger size, so for each scenario, the capacity of each bin remains respected.
**Lemma 6**.: \(V_{\text{BPPS}}(\mathcal{P}^{1})\leq\operatorname{OPT}(I)\)_._
Proof.: Note that the number of used bins in each scenario is the same for \(\mathcal{P}^{1}\) and \(\bar{\mathcal{P}}^{1}\), thus \(V_{\text{BPPS}}(\mathcal{P}^{1})=V_{\text{BPPS}}(\bar{\mathcal{P}}^{1})\). Since \(\bar{\mathcal{P}}^{1}\) is optimal for \(\bar{I}\), we have \(V_{\text{BPPS}}(\bar{\mathcal{P}}^{1})=\operatorname{OPT}(\bar{I})\). Now, using Lemma 3 and the fact that \(I\succcurlyeq\bar{I}\), we conclude \(\operatorname{OPT}(\bar{I})\leq\operatorname{OPT}(I)\).
Combining the previous results, one obtains an APTAS for instances with large items only.
**Lemma 7**.: Suppose for every \(i\in\mathcal{I}\), \(s_{i}\geq\varepsilon^{2}\). Then one can find in polynomial time a packing \(\mathcal{P}\) of \(\mathcal{I}\) with \(V_{\text{BPPS}}(\mathcal{P})\leq(1+\varepsilon)\operatorname{OPT}(I)+2^{d}\).
Proof.: Define \(\mathcal{P}=\mathcal{P}^{0}\cup\mathcal{P}^{1}\). Notice that \(\mathcal{P}\) packs every item of \(\mathcal{I}\) and, since both \(\mathcal{P}^{0}\) and \(\mathcal{P}^{1}\) are feasible, so is \(\mathcal{P}\). Using Lemmas 5 and 6,
\[V_{\text{BPPS}}(\mathcal{P})\leq\,V_{\text{BPPS}}(\mathcal{P}^{0})+\,V_{\text{ BPPS}}(\mathcal{P}^{1})\leq\varepsilon\operatorname{OPT}(I)+2^{d}+ \operatorname{OPT}(I).\qed\]
### An APTAS for the general case
In the general case of the BPPS, an instance \(I=(d,\mathcal{I},\mathcal{K},s)\) may contain large and small items. Let \(\mathcal{L}=\{i\in\mathcal{I}:s_{i}\geq\varepsilon^{2}\}\) be the set of large items, and \(\mathcal{S}=\mathcal{I}\setminus\mathcal{L}\) be the set of small items.
To obtain a packing of \(\mathcal{L}\cup\mathcal{S}\), we first replace \(\mathcal{S}\) by the set of large items, \(\hat{\mathcal{S}}\), which we define as follows. For each type \(t\in\mathcal{T}\), let \(\mathcal{S}_{t}\) be the set of small items with type \(t\). Now, let \(\hat{\mathcal{S}}_{t}\) be a set of \(\lceil\sum_{i\in\mathcal{S}_{t}}s_{i}/(\varepsilon-\varepsilon^{2})\rceil\) items, such that each \(j\in\hat{\mathcal{S}}_{t}\) has size \(\hat{s}_{j}=\varepsilon\) and is of type \(t\). The set \(\hat{\mathcal{S}}\) is the union of sets \(\hat{\mathcal{S}}_{t}\) for every type \(t\).
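The construction of \(\hat{\mathcal{S}}\) can be sketched as follows (illustrative Python, not the authors' code; items are assumed to be given as (size, type) pairs).

```python
import math
from collections import defaultdict

def build_S_hat(items, eps):
    """items: list of (size, type) pairs, with the type given as a frozenset of scenarios.
    Returns the placeholder items hat{S}: for each type t, ceil(area_t / (eps - eps^2))
    items of size eps and type t, where area_t is the total size of the small items of t."""
    area = defaultdict(float)
    for s, t in items:
        if s < eps ** 2:                           # small item
            area[t] += s
    S_hat = []
    for t, a in area.items():
        S_hat += [(eps, t)] * math.ceil(a / (eps - eps ** 2))
    return S_hat
```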
Thus, we create an instance \(\hat{I}=(d,\hat{\mathcal{I}},\hat{\mathcal{K}},\hat{s})\) whose set of items is \(\hat{\mathcal{I}}=\mathcal{L}\cup\hat{\mathcal{S}}\), each of which has size at least \(\varepsilon^{2}\). Lemma 8 compares the optimal value of \(I\) and \(\hat{I}\).
**Lemma 8**.: \(\operatorname{OPT}(\hat{I})\leq(1+2^{d+1}(d+1)\varepsilon)\operatorname{OPT}(I)+1\)_._
Proof.: Consider an optimal solution \(\mathcal{P}^{*}\) for \(I\). For each bin \(B\in\mathcal{P}^{*}\) and type \(t\in\mathcal{T}\), we define \(B_{t}=B\cap\mathcal{S}_{t}\), i.e., \(B_{t}\) is the set of small items of type \(t\) packed in \(B\). Then, we replace the items in \(B_{t}\) by \(\lceil\sum_{i\in B_{t}}s_{i}/(\varepsilon-\varepsilon^{2})\rceil\) new items of size \(\varepsilon\), with the same scenarios as \(t\). We call the modified packing \(\mathcal{P}^{\prime}\).
Observe that \(\mathcal{P}^{\prime}\) may be infeasible, since new items may surpass the bins' capacity. We can obtain a feasible solution \(\mathcal{P}^{\prime\prime}\) by relocating some created items from \(\mathcal{P}^{\prime}\) to new bins. For each type \(t\) and each bin \(B\in\mathcal{P}^{\prime}\), the number of items needed to be picked is the additional area divided by \(\varepsilon\) rounded up, that is,
\[\left\lceil\frac{\left\lceil\sum_{i\in B_{t}}\frac{s_{i}}{\varepsilon-\varepsilon^{2}}\right\rceil\varepsilon-\sum_{i\in B_{t}}s_{i}}{\varepsilon}\right\rceil \leq 1+\frac{\varepsilon+\varepsilon\sum_{i\in B_{t}}\frac{s_{i}}{\varepsilon-\varepsilon^{2}}-\sum_{i\in B_{t}}s_{i}}{\varepsilon}\] \[= 2+\frac{1}{1-\varepsilon}\sum_{i\in B_{t}}s_{i}\] \[\leq 2+2\sum_{i\in B_{t}}s_{i}\]
where the last inequality follows from the fact that \(\varepsilon\leq 1/2\). Let \(R\) be the set of all picked items. Observe that \(|\mathcal{P}^{*}|\leq d\operatorname{\textsc{OPT}}(I)\), since each bin is non-empty in at least one scenario and, in each scenario, at most \(\operatorname{\textsc{OPT}}(I)\) bins are used. Thus, as there are at most \(2^{d}\) types and \(\sum_{B\in\mathcal{P}^{*}}\sum_{i\in B_{t}}s_{i}\leq\operatorname{\textsc{OPT}}(I)\) for each type \(t\), the number of items in \(R\) is at most
\[\sum_{t\in\mathcal{T}}\sum_{B\in\mathcal{P}^{*}}\left(2+2\sum_{i\in B_{t}}s_{i}\right) \leq 2^{d+1}|\mathcal{P}^{*}|+2^{d+1}\operatorname{\textsc{OPT}}(I)\] \[\leq 2^{d+1}d\operatorname{\textsc{OPT}}(I)+2^{d+1}\operatorname{\textsc{OPT}}(I)\] \[=2^{d+1}(d+1)\operatorname{\textsc{OPT}}(I).\]
Since \(1/\varepsilon\) is an integer, one can rearrange all items of \(R\) into at most
\[\left\lceil\frac{2^{d+1}(d+1)\operatorname{\textsc{OPT}}(I)}{1/\varepsilon}\right\rceil\]
new bins of unit capacity. Let \(\mathcal{R}\) be this set of new bins, and obtain a feasible packing \(\mathcal{P}^{\prime\prime}\) of \(I\), by making a copy of \(\mathcal{P}^{\prime}\), removing the items of \(R\), and adding the bins of \(\mathcal{R}\).
Notice that \(\mathit{V}_{\mathit{BPPS}}(\mathcal{P}^{\prime})=\mathit{V}_{\mathit{BPPS}}( \mathcal{P}^{*})=\operatorname{\textsc{OPT}}(I)\). Using these facts, the value of \(\mathcal{P}^{\prime\prime}\) can be estimated as
\[\mathit{V}_{\mathit{BPPS}}(\mathcal{P}^{\prime\prime}) \leq\mathit{V}_{\mathit{BPPS}}(\mathcal{P}^{\prime})+|\mathcal{R}|\] \[\leq\operatorname{\textsc{OPT}}(I)+\left\lceil\frac{2^{d+1}(d+1)\operatorname{\textsc{OPT}}(I)}{1/\varepsilon}\right\rceil\] \[\leq\operatorname{\textsc{OPT}}(I)+2^{d+1}(d+1)\varepsilon\operatorname{\textsc{OPT}}(I)+1.\]
We notice that \(\mathcal{P}^{\prime\prime}\) contains all large items \(\mathcal{L}\) and, for each type \(t\), a certain number of items with scenarios \(t\) and size \(\varepsilon\). Thus, to obtain a packing for \(\hat{\mathcal{I}}\) from \(\mathcal{P}^{\prime\prime}\), it is sufficient to show that for each \(t\), the number of created items corresponding to \(t\) is not smaller than \(|\hat{\mathcal{S}}_{t}|\). Indeed, for a given \(t\), the number of created items is
\[\sum_{B\in\mathcal{P}^{*}}\left\lceil\sum_{i\in B_{t}}s_{i}/(\varepsilon-\varepsilon^{2})\right\rceil\geq\left\lceil\sum_{i\in\mathcal{S}_{t}}s_{i}/(\varepsilon-\varepsilon^{2})\right\rceil=|\hat{\mathcal{S}}_{t}|.\]
To conclude the proof, we create a packing \(\hat{\mathcal{P}}\) for \(\hat{\mathcal{I}}\) by removing the exceeding items of \(\mathcal{P}^{\prime\prime}\). Since removing such items can only decrease the objective value, \(\mathit{V}_{\mathit{BPPS}}(\hat{\mathcal{P}})\leq\mathit{V}_{\mathit{BPPS}}( \mathcal{P}^{\prime\prime})\).
Finally, to obtain a packing for \(I\), one can first obtain a packing of \(\hat{I}\), then replace items from \(\hat{\mathcal{S}}_{t}\) by items in \(\mathcal{S}_{t}\). This leads to Theorem 3.
**Theorem 3**.: For every constant \(\varepsilon^{\prime}>0\), one can find in polynomial time a packing \(\mathcal{P}\) of \(\mathcal{I}\) such that \(\mathit{V}_{\mathit{BPPS}}(\mathcal{P})\leq(1+\varepsilon^{\prime}) \operatorname{\textsc{OPT}}(I)+2+2^{d}\).
Proof.: Define \(r=2^{d+1}(d+1)\) and define \(\varepsilon\) as the largest number such that \(\varepsilon\leq\min(\frac{\varepsilon^{\prime}}{3r},1/4)\) and \(1/\varepsilon\) is an integer. Let \(\hat{I}\) be the instance obtained from \(I\) by replacing, for each type, the items smaller than \(\varepsilon^{2}\) with items of size \(\varepsilon\), as previously mentioned. Since each item of \(\hat{I}\) has size at least \(\varepsilon^{2}\), by Lemma 7, we obtain a packing \(\hat{\mathcal{P}}\) for \(\hat{I}\) with value \(\mathit{V}_{\mathit{BPPS}}(\hat{\mathcal{P}})\leq(1+\varepsilon)\operatorname{\textsc{OPT}}(\hat{I})+2^{d}\) in polynomial time.
For each \(t\in\mathcal{T}\), we place all items of \(\mathcal{S}_{t}\) over the items of \(\hat{\mathcal{S}}_{t}\). Precisely, we do the following: start by making a copy \(\mathcal{P}\) of \(\hat{\mathcal{P}}\); select a set \(A\) of unpacked items in \(\mathcal{S}_{t}\) whose total size is at least \(\varepsilon-\varepsilon^{2}\) and at most \(\varepsilon\). If there is not such a set \(A\), then let \(A\) be the remaining set of unpacked items in \(\mathcal{S}_{t}\). Find a bin \(B\in\mathcal{P}\) with an item \(j\in\hat{\mathcal{S}}_{t}\cap B\) and replace \(j\) by the items of \(A\). Since there are \(\lceil\sum_{i\in\mathcal{S}_{t}}s_{i}/(\varepsilon-\varepsilon^{2})\rceil\) items in \(\hat{\mathcal{S}}_{t}\), and each group \(A\), except perhaps one of them, has size at least \(\varepsilon-\varepsilon^{2}\), we always find an unused item \(j\in\hat{\mathcal{S}}_{t}\); repeat these steps while there are unpacked items in \(\mathcal{S}_{t}\) for some \(t\).
The resulting packing \(\mathcal{P}\) contains all items in \(\mathcal{I}\). As no bin was added, \(\mathcal{P}\) has the same value as \(\hat{\mathcal{P}}\). Using Lemma 8, we finally obtain
\[\mathit{V}_{\mathit{BPPS}}(\mathcal{P}) =\mathit{V}_{\mathit{BPPS}}(\hat{\mathcal{P}})\leq(1+\varepsilon)\operatorname{\textsc{OPT}}(\hat{I})+2^{d}\] \[\leq(1+\varepsilon)[(1+2^{d+1}(d+1)\varepsilon)\operatorname{\textsc{OPT}}(I)+1]+2^{d}\] \[=(1+r\varepsilon^{2}+r\varepsilon+\varepsilon)\operatorname{\textsc{OPT}}(I)+2^{d}+1+\varepsilon\] \[\leq(1+3r\varepsilon)\operatorname{\textsc{OPT}}(I)+2^{d}+1+\varepsilon\] \[\leq(1+\varepsilon^{\prime})\operatorname{\textsc{OPT}}(I)+2+2^{d}.\qed\]
## 5 A Variable Neighborhood Search Algorithm
The Variable Neighborhood Search (VNS) is a metaheuristic that systematically changes the neighborhood structure during the search in order to find better solutions and escape from local optima. Mladenovic and Hansen (1997) proposed the VNS not only for combinatorial optimization problems, but for optimization in general. Due to its successful application to similar problems (Puchinger and Raidl, 2008; Fleszar and Hindi, 2002; Beltran et al., 2004), we apply the VNS to solve the BPPS.
To implement it successfully, we need to specify three key components: the local search neighborhood structures and how the search space is navigated; a sensitive and computationally viable objective function; and the stopping criterion. In the following, we detail each of these and describe the resulting algorithm.
### Neighborhood structures
A neighborhood structure specifies a well-defined way to get from any given solution to another solution that is "close" to the first one. We refer to each neighborhood structure as a "movement" that takes one solution as input and, depending on some structure-dependent input parameters, returns a different but "close" solution. We define four neighborhood structures for the BPPS, in the following order:
* \(N_{1}\): Given items \(i\) and \(j\) packed into different bins, move \(i\) to the bin currently containing \(j\), then move \(j\) to the bin that contained \(i\);
* \(N_{2}\): Given an item \(i\) and a bin \(B\) that currently does not contain \(i\), move \(i\) to \(B\);
* \(N_{3}\): Given bins \(B_{1},B_{2}\), and \(B_{3}\) and items \(i\) and \(j\) packed into \(B_{1}\), move these items to \(B_{2}\) and \(B_{3}\), respectively;
* \(N_{4}\): Given a bin \(B\), remove it from the solution and repack the items in the first bin that can accommodate each of them, respecting the order that they appeared in \(B\).
Neighborhoods \(N_{1},N_{2}\), and \(N_{3}\) are prioritized according to their complexity. Although the neighborhood structure \(N_{4}\) is the smallest, it can take comparatively more computational effort to compute since we have to repack all items of a given bin, so we consider it as the last one.
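For concreteness, the four movements could be implemented along the following lines (a Python sketch under the assumption that a solution is a list of bins, each bin a list of item indices; it is not the authors' code, and it omits the feasibility checks, which would reuse the per-scenario capacity test).

```python
import copy

def n1_swap(sol, i, j):
    """N1: swap items i and j, which are packed into different bins."""
    s = copy.deepcopy(sol)
    bi = next(b for b in s if i in b)
    bj = next(b for b in s if j in b)
    bi.remove(i); bj.remove(j)
    bi.append(j); bj.append(i)
    return s

def n2_move(sol, i, target):
    """N2: move item i to the bin with index `target`."""
    s = copy.deepcopy(sol)
    for b in s:
        if i in b:
            b.remove(i)
    s[target].append(i)
    return [b for b in s if b]                     # drop bins that became empty

def n3_split(sol, b1, i, j, b2, b3):
    """N3: move items i and j from bin b1 to bins b2 and b3, respectively."""
    s = copy.deepcopy(sol)
    s[b1].remove(i); s[b1].remove(j)
    s[b2].append(i); s[b3].append(j)
    return [b for b in s if b]

def n4_dissolve(sol, b, fits):
    """N4: remove bin b and repack its items, in order, into the first bin where
    they fit; `fits(bin_items, item)` is the per-scenario capacity test."""
    s = copy.deepcopy(sol)
    removed = s.pop(b)
    for i in removed:
        for target in s:
            if fits(target, i):
                target.append(i)
                break
        else:
            s.append([i])                          # no existing bin fits: open a new one
    return s
```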
### Objective function
The objective function for the BPPS may not be sensitive enough to rank two neighbor solutions. Thus, to guide the VNS, we define a fitness function that penalizes solutions in which the bin occupation per scenario is small.
Let \(\mathcal{B}\) be a solution for the BPPS and \(\mathcal{B}_{k}\) be the subset of bins that contain items in scenario \(k\), where \(|\mathcal{B}_{k}|\) is the number of bins used in \(k\). Furthermore, let \(\mathrm{ocp}(k,B)\) be the total size of the items in bin \(B\) from scenario \(k\). Given a solution \(\mathcal{B}\), the VNS moves to a neighbor solution \(\mathcal{B}^{\prime}\) such that \(f(\mathcal{B})>f(\mathcal{B}^{\prime})\), for \(f\) defined as:
\[f(\mathcal{B})=\mathit{V}_{\mathit{BPPS}}(\mathcal{B})-\sum_{k\in\mathcal{K}} \sum_{B\in\mathcal{B}_{k}}\left(\frac{\mathrm{ocp}(k,B)}{|\mathcal{B}_{k}|W} \right)^{2}\]
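A minimal Python sketch of this fitness function (illustrative only; it assumes a solution is given as a list of bins of item indices, that \(W\) denotes the bin capacity, and that \(V_{\mathit{BPPS}}\) is the maximum number of bins used over the scenarios):

```python
def fitness(solution, size, scenarios, num_scenarios, W):
    """f(B) = V_BPPS(B) - sum_k sum_{B in B_k} (ocp(k, B) / (|B_k| * W))^2."""
    v_bpps, penalty = 0, 0.0
    for k in range(num_scenarios):
        occupations = [sum(size[i] for i in b if k in scenarios[i]) for b in solution]
        occupations = [o for o in occupations if o > 0]
        used = len(occupations)                    # |B_k|
        v_bpps = max(v_bpps, used)
        if used:
            penalty += sum((o / (used * W)) ** 2 for o in occupations)
    return v_bpps - penalty
```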
### Stopping criterion
Since the VNS cannot tell whether an optimal solution is reached, we use two stopping criteria to decide when the algorithm stops. The first criterion is the _timeout_, where a preset CPU time limit (\(t_{\max}\)) is imposed, so the VNS stops as soon as the time limit is exceeded. The second criterion is the _local-convergence_, where a preset number of iterations (\(c_{\max}\)) is imposed, so the VNS stops if the best known solution is not updated within this number of iterations.
### Algorithm
The VNS takes as input an initial solution \(x\), a number of neighborhood structures \(N_{\max}=4\), a time limit \(t_{\max}=1800\) seconds, and a convergence limit \(c_{\max}=500\) iterations (these values were chosen after preliminary computational tests). The initial solution \(x\) is obtained as follows: items are sorted in non-increasing order of their sizes, and each item is then packed into the first open bin where it fits, considering the capacity constraints in every scenario and opening a new bin whenever necessary.
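This scenario-aware first-fit-decreasing construction can be sketched as follows (illustrative Python, not the authors' implementation).

```python
def initial_solution(items, size, scenarios, W):
    """First-fit decreasing that respects the bin capacity in every scenario."""
    def fits(bin_items, i):
        for k in scenarios[i]:
            load = sum(size[j] for j in bin_items if k in scenarios[j])
            if load + size[i] > W:
                return False
        return True

    bins = []
    for i in sorted(items, key=lambda j: size[j], reverse=True):
        for b in bins:
            if fits(b, i):
                b.append(i)
                break
        else:
            bins.append([i])
    return bins
```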
The overall structure of the proposed VNS is given in the following algorithm:
```
VNS(x, N_max, t_max, c_max)
    t = 0, c = 0
    while t < t_max and c < c_max
        κ = 1
        improvement = False                 // Assume there will be no improvement
        repeat
            x' = Shake(x, κ)                // Get a random solution in N_κ(x)
            x'' = Local-Search(x', N_max)   // Perform VND to improve x'
            if f(x'') < f(x)                // There has been an improvement
                x = x''
                κ = 1
                improvement = True
            else                            // Local optimum, change neighborhood
                κ = κ + 1
        until κ == N_max
        if improvement == True              // There has been an improvement
            c = 0
        else                                // No improvement in this iteration
            c = c + 1
        t = CPU-Time()                      // Update processing time
```
The VNS uses a Shake procedure that simply returns a solution \(x^{\prime}\) randomly selected from the neighborhood \(N_{\kappa}\) of \(x\). The Local-Search procedure is based on the Variable Neighborhood Descent (VND) approach, which is the deterministic descent-only version of the VNS. Our local search takes as input a solution \(x\) and the number of neighborhood structures \(N_{\text{max}}=4\), resulting in the following algorithm:
```
Local-Search(x, N_max)
    κ = 1
    repeat
        x' = argmin_{y ∈ N_κ(x)} f(y)   // Find the best neighbor of x in N_κ(x)
        if f(x') < f(x)                 // There has been an improvement
            x = x'
            κ = 1
        else                            // Local optimum, change neighborhood
            κ = κ + 1
    until κ == N_max
```
## 6 A Branch-and-Price Algorithm
We propose an exact method for the BPPS based on column generation and branch-and-bound, that is, a branch-and-price algorithm. This method consists of modelling the problem as an ILP with a very large (usually exponential) number of variables, which are generated and added to the model only as they are needed, in each node of the branch-and-bound tree. Branch-and-price has been used successfully to solve practical instances of the BPP and several of its variants (Delorme et al., 2016; Sadykov and Vanderbeck, 2013; Dell'Amico et al., 2020; Borges et al., 2020; Bettinelli et al., 2010).
One of the most successful formulations for the classical BPP and variants is based on associating a variable to each cutting pattern (see, e.g., Delorme et al. (2016b)). In practice, such kinds of formulations lead to models with an exponential number of variables, as the number of cutting patterns is usually exponential with respect to the number of items. Due to this, to solve practical instances with such models, one has to rely on column generation to solve the corresponding linear relaxation.
The _column generation method_, proposed by Ford and Fulkerson (1958) and generalized by Dantzig and Wolfe (1961), solves the linear relaxation of models with a large number of variables. Gilmore and Gomory (1961, 1963) were the first to present practical experiments based on column generation, by solving the linear relaxation of an exponential model for the BPP. The column generation method solves the linear relaxation by iteratively solving the _restricted master problem_ (RMP), which considers only a subset of variables (columns) of the original model. At each iteration, the method has to determine the existence of non-basic columns which are candidates to improve the current solution of the RMP, i.e., columns with negative reduced cost (on minimization problems). If the method determines that no such column exists, then the solution of the RMP is an optimal solution for the original linear relaxation, providing a bound for the integer problem.
### A pattern-based model
Recalling that \(\mathcal{P}\) is the set of all cutting patterns for the BPPS, the following exponential ILP model solves the BPPS:
\[\text{minimize} \mathcal{F}, \tag{1}\] \[\text{s.t.}: \sum_{p\in\mathcal{P}}a_{ip}X_{p}\geq 1, \forall i\in\mathcal{I};\] (2) \[\sum_{p\in\mathcal{P}}b_{kp}X_{p}\leq\mathcal{F}, \forall k\in\mathcal{K};\] (3) \[X_{p}\in\{0,1\}, \forall p\in\mathcal{P};\] (4) \[\mathcal{F}\in\mathbb{Z}_{+}. \tag{5}\]
For each pattern \(p\in\mathcal{P}\), binary variable \(X_{p}\) is equal to \(1\) if and only if \(p\) is used in the solution. The integer variable \(\mathcal{F}\) represents an upper bound on the number of bins (patterns) that is used by each individual scenario. The objective function (1) is to minimize \(\mathcal{F}\). Constraints (2) guarantee that every item belongs to at least one cutting pattern of the solution. Constraints (3) ensure that, for every scenario \(k\in\mathcal{K}\), the number of bins that pack at least one item from \(k\) is at most \(\mathcal{F}\). The domains of the variables are given in (4) and (5).
Next, we describe a column generation algorithm to solve the linear relaxation of (1)-(5). As the initial set of columns for the RMP, we consider each cutting pattern with a single item. The dual solution of the RMP consists of a value \(\alpha_{i}\geq 0\) for each item \(i\) associated with a constraint in (2), and a value \(\beta_{k}\leq 0\) for each scenario \(k\) associated with a constraint in (3). The dual solution is used to solve the _pricing problem_, which determines a column with minimum reduced cost, that is
\[\min_{p\in\mathcal{P}}\left\{c_{p}-\left(\sum_{i\in\mathcal{I}}\alpha_{i}a_{ip }+\sum_{k\in K}\beta_{k}b_{kp}\right)\right\},\]
which, since \(c_{p}=0\) for every \(p\in\mathcal{P}\) (the variables \(X_{p}\) do not appear in the objective function (1)), is equivalent to
\[\max_{p\in\mathcal{P}}\left\{\sum_{i\in\mathcal{I}}\alpha_{i}a_{ip}+\sum_{k \in K}\beta_{k}b_{kp}\right\}. \tag{6}\]
### The pricing problem
In order to generate a new column, we need to find a feasible pattern that optimizes (6). Since it includes a term for each item and a term for each scenario used in the pattern, we can describe the pricing problem as the following Knapsack Problem with Scenarios.
Knapsack Problem with Scenarios (KPS)In the KPS, we are given: a number \(d\) of scenarios; a set \(\mathcal{I}=[n]\) of \(n\) items, with each item \(i\in\mathcal{I}\) having a size \(s_{i}\), item value \(v_{i}\in\mathbb{Q}\), and a set of scenarios \(\mathcal{K}_{i}\subseteq[d]\); a vector \(u\in\mathbb{Q}^{d}\) of scenario values, and a knapsack of capacity \(W\in\mathbb{Q}_{+}\). For simplicity, for each item \(i\in\mathcal{I}\), define:
\[s_{i}^{k}=\begin{cases}s_{i},\text{ if item }i\text{ is in the scenario }k;\\ 0,\text{ otherwise.}\end{cases}\]
A solution is a subset \(B\) of \(\mathcal{I}\) such that, for each scenario \(k\in[d]\), \(\sum_{i\in B}s_{i}^{k}\leq W\). The objective is to find a solution that maximizes
\[\sum_{i\in B}v_{i}+\sum_{k\in[d]:\,\exists i\in B,\;s_{i}^{k}>0}u_{k},\]

that is, each scenario that contains at least one packed item contributes its value exactly once.
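For small instances, the KPS can be solved by brute force, as in the following Python sketch (an illustration only; in the actual algorithm the pricing is solved through the ILP model (7)-(9) below).

```python
from itertools import combinations

def solve_kps_bruteforce(n, d, size, scen, v, u, W):
    """size[i], v[i]: size and value of item i; scen[i]: set of scenarios of item i;
    u[k]: value of scenario k; W: knapsack capacity."""
    best_value, best_set = float("-inf"), set()
    for r in range(n + 1):
        for B in combinations(range(n), r):
            # feasibility: in every scenario, the packed items of that scenario must fit
            if any(sum(size[i] for i in B if k in scen[i]) > W for k in range(d)):
                continue
            used = {k for i in B for k in scen[i]}            # scenarios used by the pattern
            value = sum(v[i] for i in B) + sum(u[k] for k in used)
            if value > best_value:
                best_value, best_set = value, set(B)
    return best_value, best_set
```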
**Theorem 4**.: KPS is NP-hard in the strong sense.
Proof.: Let us recall the Maximum Independent Set problem, which given a graph \(G\) consists of finding a subset of maximum cardinality of its vertices such that no two of its elements are adjacent in \(G\). This problem is known to be strongly NP-hard (Garey and Johnson, 1978), so all we need to do is present a polynomial reduction from this problem into the KPS.
Let \(I=G(V,E)\) be an instance of the Maximum Independent Set problem. We construct an instance \(I^{\prime}=(d,\mathcal{I},s,v,\mathcal{K},u,W)\) of the KPS such that \(|\mathcal{I}|=|V|\), \(d=|V|\). For each item \(i\in\mathcal{I}\) there is a scenario \(k_{i}\in[d]\) associated with it. We define \(\mathcal{K}_{i}=\{k_{i}\}\cup\{k_{j}\colon(i,j)\in E\}\) and we let \(s_{i}^{k}=1\) if \(k\in\mathcal{K}_{i}\). We define \(v_{i}=1\) for all \(i\in\mathcal{I}\), \(u_{k}=0\) for all \(k\in[d]\) and \(W=1\).
Now notice that there is a bijection between solutions of \(I\) and \(I^{\prime}\) which preserves the value of the objective function: two items corresponding to two adjacent vertices of \(G\) cannot be in the same solution of \(I^{\prime}\), the items are unit-valued, and the scenarios are zero-valued.
As a consequence of Theorem 4, our pricing subproblem does not admit a pseudo-polynomial time algorithm, unless \(P=\mathrm{NP}\). Thus, we propose the following ILP model to solve it.
We can use the dual solution \((\alpha,\beta)\) of the RMP as the item values and scenario values, respectively, to describe the following model for the pricing subproblem
maximize \[\sum_{i\in\mathcal{I}}\alpha_{i}A_{i}+\sum_{k\in\mathcal{K}}\beta_ {k}B_{k}\] (7) s.t. \[\sum_{i\in\mathcal{I}}s_{i}^{k}A_{i}\leq WB_{k}, \forall k\in\mathcal{K};\] (8) \[A_{i},B_{k}\in\{0,1\}, \forall i\in\mathcal{I},k\in\mathcal{K}.\] (9)
In model (7)-(9), variables \(A_{i}\) are related to the coefficients \(a_{ip}\), and variables \(B_{k}\) are related to the coefficients \(b_{kp}\). Hence, variables \(A\) and \(B\) determine the cutting pattern that is generated. The objective function (7) corresponds to the pricing formula in (6). Constraints (8) ensure that the solution of the model represents a valid cutting pattern for the BPPS. Notice that \(W\) is the capacity of a bin; hence, if a scenario \(k\in\mathcal{K}\) is used (\(B_{k}=1\)), the sum of the sizes of all items that belong to the scenario \(k\) must be at most \(W\), and if this scenario is not used (\(B_{k}=0\)), then the sum of the sizes of all items that belong to \(k\) must be equal to \(0\).
### The branch-and-price algorithm
To solve the model (1)-(5), we propose a branch-and-price algorithm. The branch-and-price is based on an enumeration scheme over the fractional solutions produced by the column generation algorithm. To this end, we use the branching scheme of Ryan and Foster (1981). Given a fractional solution \(X^{*}\), we know, from Vance et al. (1994), that there exist rows \(l\) and \(m\) such that
\[0<\sum_{p:a_{lp}=1,a_{mp}=1}X_{p}<1.\]
We use this pair of rows to create two branches for the current node. On one side we enforce that items \(l\) and \(m\) must be packed in the same pattern, and on the other side we enforce that they must be packed in different patterns. This is achieved by the following branching constraints
\[\sum_{p:\,a_{lp}=1,\,a_{mp}=1}X_{p} \geq 1 \tag{10}\] \[\sum_{p:\,a_{lp}=1,\,a_{mp}=1}X_{p} \leq 0 \tag{11}\]
Rather than explicitly adding these constraints to the RMP, they are enforced by setting the upper bound of the infeasible columns to zero. On the branch of (10), this is equivalent to combining items \(l\) and \(m\). On the branch of (11), this is equivalent to making rows \(l\) and \(m\) disjoint. However, this is not enough, since we must also prevent infeasible columns from being re-generated on each branch, which can be achieved by modifying the underlying pricing subproblem as follows:
* For the branches in which items \(l\) and \(m\) are combined, it suffices to modify the set of items to \(\tilde{\mathcal{I}}=\{1,\ldots,n\}\setminus\{l,m\}\cup\{n+1\}\), with \(\alpha_{n+1}=\alpha_{l}+\alpha_{m}\), \[s_{n+1}^{k}=\begin{cases}s_{l}^{k},&\text{if }s_{l}^{k}>0\text{ and }s_{m}^{k}=0,\\ s_{m}^{k},&\text{if }s_{l}^{k}=0\text{ and }s_{m}^{k}>0,\\ s_{l}^{k}+s_{m}^{k},&\text{if }s_{l}^{k}>0\text{ and }s_{m}^{k}>0,\\ 0,&\text{otherwise};\end{cases}\] (12)
* For branches in which items \(l\) and \(m\) must be packed in different patterns, we simply add the following disjunction constraint \[A_{l}+A_{m}\leq 1.\] (13)
These changes can be easily accommodated into the model (7)-(9) without much effort.
To obtain an upper bound at each node, we solve the current restricted master problem with the integrality constraint. In other words, we solve an ILP model that consists only of the columns that have been already generated. The use of this method at each node can be computationally expensive, but it usually provides tight upper bounds.
Some instances of the problem may be too complex to be solved by the column generation algorithm in practical time. For this reason, we need faster methods for obtaining lower bounds for such instances. Two methods are proposed to get a lower bound at the root node of the branch-and-price algorithm. One of them is based on the continuous lower bound for the bin packing problem, and it is given by the following equation:
\[\mathit{LB}_{\mathit{CON}} =\max_{k\in\mathcal{K}}\left\{\left\lceil\sum_{i\in S_{k}}s_{i} ^{k}/W\right\rceil\right\}. \tag{14}\]
The idea behind the continuous lower bound (14) is to relax the problem by allowing items to be split arbitrarily, so that the bins of each scenario can be filled completely. The \(\mathit{LB}_{\mathit{CON}}\) computes the continuous lower bound for every scenario, and returns the largest value among them.
The other lower bound that we use is based on dual feasible functions. A _dual feasible function_ (DFF) is a function \(f\) such that, for any finite set \(X\) of real numbers, if \(\sum_{x\in X}x\leq 1\), then \(\sum_{x\in X}f(x)\leq 1\). For a survey on DFFs, we refer to Clautiaux et al. (2010). Given a DFF \(f\), we can construct a lower bound based on it, similarly to the continuous lower bound, as given in the following equation:
\[\mathit{LB}_{\mathit{DFF}}=\max_{k\in K}\left\{\left\lceil\sum_{i\in S_{k}}f(s_{i}^{k})/W\right\rceil\right\} \tag{15}\]
In the proposed branch-and-price algorithm, given \(\lambda\in\{1,\ldots,W/2\}\), we consider the DFF proposed by Fekete and Schepers (2001) and described in (16):
\[f(s)=\begin{cases}W,\text{ if }s>W-\lambda;\\ 0,\text{ if }s\leq\lambda;\\ s,\text{ otherwise.}\end{cases} \tag{16}\]
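Both root-node bounds can be computed in a few lines, as in the following illustrative Python sketch (not the authors' code); consistently with (16) being defined on the \(0,\ldots,W\) scale, the DFF values are divided by \(W\) before rounding up.

```python
import math

def lb_con(d, items, size, scen, W):
    """Continuous lower bound (14): per-scenario total size divided by W, rounded up."""
    return max(math.ceil(sum(size[i] for i in items if k in scen[i]) / W)
               for k in range(d))

def dff(s, W, lam):
    """The dual feasible function of Eq. (16), defined for sizes in [0, W]."""
    if s > W - lam:
        return W
    if s <= lam:
        return 0
    return s

def lb_dff(d, items, size, scen, W, lam):
    """DFF-based lower bound (15), normalising by W after applying f."""
    return max(math.ceil(sum(dff(size[i], W, lam) for i in items if k in scen[i]) / W)
               for k in range(d))
```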
## 7 Numerical Experiments
The efficiency of the proposed VNS and B&P algorithms is evaluated with computational experiments. Both solution methods were implemented in the C++ programming language, with the B&P algorithm using the Gurobi Solver version 8.1 (Gurobi Optimization, 2023) to solve the linear programming models. The experiments were carried out on a computer with an Intel(R) Xeon(R) CPU E5-2630 v4 processor at 2.20 GHz, with 64 GB of RAM, under the Ubuntu 18.04 operating system.
A total of 120 instances were generated, divided into 12 different classes of 10 instances each. The classes are combinations of the number of items \(n\in\{10,50,100,200\}\) and the number of scenarios \(d\in\{0.5n,n,2n\}\), which depends on the number of items of the class. For every instance, the size of the items is randomly generated over the set \(\{1,\ldots,99\}\) under a uniform probability distribution, whereas the size of the bin is fixed to 100. The scenarios were generated randomly, similarly to Bodis and Balogh (2018), such that an item \(i\in\mathcal{I}\) belongs to the scenario \(k\in\mathcal{K}\) with probability 0.5.
A summary of the computational results is presented in Table 1. Columns \(n\) and \(d\) present, respectively, the number of items and the number of scenarios of each class of instances. We consider three sets of columns, each representing one of the tested algorithms, with the last one, VNS+B&P, being the B&P algorithm using the solution obtained from the VNS as a warm-start. The columns \(\mathit{gap}\) represent the average optimality gap that each algorithm achieved in each class, computed as \(\frac{\mathit{UB}-\mathit{LB}}{\mathit{UB}}\), where \(\mathit{UB}\) and \(\mathit{LB}\) are respectively the upper and lower bounds obtained. Notice that since the VNS does not produce a lower bound on its own, we used the lower bound from the VNS+B&P to compute its optimality gap. Columns \(\mathit{time}\) represent the average time in seconds that each algorithm spent to solve each instance of the corresponding class, with a time limit of 1800 seconds for the VNS and VNS + B&P algorithms, and of 3600 seconds for the B&P. Columns \(|OPT|\) represent the number of instances from each class that were solved to optimality (out of 10) by each algorithm. Columns \(cols\) and \(nodes\) represent the average number of columns generated and the number of explored nodes, respectively, considering the B&P-based algorithms.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{VNS} & \multicolumn{5}{c}{B\&P} & \multicolumn{5}{c}{VNS + B\&P} \\ \cline{3-15} \(n\) & \(d\) & _gap_ & _time_ & \(|\)OPT\(|\) & _gap_ & _nodes_ & _cols_ & _time_ & \(|\)OPT\(|\) & _gap_ & _nodes_ & _cols_ & _time_ & \(|\)OPT\(|\) \\ \hline
10 & 5 & 0.00\% & 0.00 & 10 & 0.00\% & 1.0 & 6.9 & 0.02 & 10 & 0.00\% & 0 & 0 & 0.00 & 10 \\
10 & 10 & 0.00\% & 0.00 & 10 & 0.00\% & 1.6 & 14.0 & 0.03 & 10 & 0.00\% & 0.3 & 5.4 & 0.00 & 10 \\
10 & 20 & 1.67\% & 0.00 & 9 & 0.00\% & 1.2 & 11.5 & 0.03 & 10 & 0.00\% & 0.3 & 3.4 & 0.00 & 10 \\
50 & 25 & 7.11\% & 14.80 & 1 & 0.00\% & 6.4 & 156.1 & 11.21 & 10 & 1.01\% & 955.8 & 89256.4 & 364.99 & 8 \\
50 & 50 & 5.24\% & 28.80 & 1 & 0.00\% & 15.7 & 289.8 & 30.08 & 10 & 0.50\% & 321.8 & 34501.3 & 187.00 & 9 \\
50 & 100 & 7.26\% & 28.80 & 0 & 0.00\% & 270.2 & 2401.1 & 569.41 & 10 & 0.93\% & 129.8 & 10871.7 & 370.18 & 8 \\
100 & 50 & 9.34\% & 134.30 & 0 & 1.44\% & 80.8 & 2381.6 & 1832.76 & 5 & 0.56\% & 6.6 & 4575.5 & 516.95 & 8 \\
100 & 100 & 8.25\% & 181.20 & 0 & 2.21\% & 85.3 & 2399.6 & 3019.09 & 2 & 1.38\% & 29.2 & 12419.1 & 1193.98 & 5 \\
100 & 200 & 7.78\% & 240.40 & 0 & 1.51\% & 41.0 & 1028.1 & 2710.84 & 4 & 1.72\% & 5.8 & 3577.8 & 1052.04 & 6 \\
200 & 100 & 16.32\% & 1048.80 & 0 & 13.74\% & 1.0 & 532.7 & 3814.57 & 0 & 20.13\% & 1 & 3323.1 & 1817.98 & 0 \\
200 & 200 & 18.56\% & 1407.70 & 0 & 17.52\% & 1.0 & 460.3 & 3925.27 & 0 & 20.07\% & 1 & 4801.9 & 1828.71 & 0 \\
200 & 400 & 18.31\% & 1595.30 & 0 & 20.16\% & 1.0 & 362.7 & 3866.15 & 0 & 20.29\% & 1 & 4832.4 & 1817.93 & 0 \\ \hline \multicolumn{2}{c}{Average} & 8.32\% & 390.01 & 2.58 & 4.71\% & 42.21 & 837.0 & 1648.28 & 5.91 & 5.55\% & 121.05 & 14764 & 762.45 & 6.16 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the computational results for all the algorithms tested.
First, we discuss the performance of the VNS algorithm. From Table 1, we can notice that most instances with 10 items are easily solved by the heuristic. However, for instances of size 50 and 100 we notice a significantly larger gap, even though the running time never reaches the time limit. This indicates that the VNS quickly found a local optimum and was not able to escape it, thus converging prematurely. Something similar can be observed for the instances of size 200, although the running time gets closer to the time limit in these cases. For this algorithm, only two instances of sizes 50, 100, and 200 combined are solved to optimality, and it achieves a total average gap of \(8.32\%\) in an average time of \(390.01\) seconds.
Observing the results for the B&P algorithm, we can notice a significant improvement. All instances with 10 and 50 items are solved to optimality very quickly (around 100 seconds on average). We start to notice a drop in performance for the instances of size 100 and 200. For instances of size 100, the algorithm was able to solve 11 instances to optimality, out of 30, averaging over 1800 seconds for every number of scenarios. We can also observe that the average gap for these instances is very small (around \(2\%\) for all numbers of scenarios). The algorithm times out for all instances of size 200, with an average gap of \(14\%\). Interestingly, the average gaps achieved by the B&P for these instances are not far from the ones obtained by the VNS, with the latter having half the time limit of the former. The total average gap for this algorithm is \(4.71\%\) in an average time of \(1648.28\) seconds.
Finally, we analyze the VNS + B&P algorithm, that is, the B&P using the VNS solution as a warm-start. Once again, all instances of size 10 are completely solved in negligible time. Differing from the pure B&P algorithm, though, this one could not find optimal solutions for a few instances of size 50. These 5 instances brought all the averages for this group of instances up, since they were run until the time limit of 1800 seconds, totaling thousands of columns and hundreds of nodes more than their non-warm-started counterpart. This might happen because the solution of the VNS algorithm can impact the branching step of the B&P, making it harder for the algorithm to converge depending on the branch it takes initially. However, we can notice that for instances of size 100, the VNS + B&P solved 19 instances to optimality, 8 more than the pure B&P and with a significantly smaller average time. This algorithm also timed out for all instances of size 200, while presenting a greater gap on average compared to the pure B&P because of the smaller time limit. When compared to the pure B&P algorithm, the VNS + B&P gives an average gap greater by \(0.16\%\) while using less than half the time on average.
The data from Table 1 shows us that although the VNS algorithm struggles to avoid local optima, its results serve as good upper bounds to be fed into a more elaborate method such as the B&P presented. By using the VNS's result as a warm-start, the B&P algorithm was able to solve more instances in less time, although this was not enough to tackle instances with 200 items.
## 8 Concluding Remarks
This work deals with a variant of the bin packing problem in which, for each scenario, only the items belonging to that scenario consume the bin capacity. For this problem, a variety of solution methods are proposed, including an absolute approximation algorithm, an asymptotic polynomial time approximation scheme, a variable neighborhood search based heuristic, and a branch-and-price algorithm that solves a set-cover-based formulation. Results of the heuristic and the branch-and-price illustrate how effective the latter is at solving small and medium size instances, and its efficiency is improved when the heuristic provides a starting solution.
While the approximation algorithms depend either on an approximation algorithm for the vector bin packing problem (in the case of the absolute approximation algorithm) or on the number of scenarios (in the case of the APTAS), the heuristic and the branch-and-price can be used to handle the problem in practical contexts. The heuristic is not as competitive as the branch-and-price algorithm, since the former only optimally solved \(26\%\) of the instances, while the latter obtained optimal solutions for \(59\%\) of the instances. If solutions of the heuristic are used as a warm-start for the branch-and-price, the number of optimal solutions increases to about \(62\%\). In total, we were able to prove optimality for 79 out of the 120 proposed instances.
Future research can focus on new approximation algorithms that explore the influence of the scenarios. Improvements in the variable neighborhood search are also expected, including new neighborhood structures and local searches. Regarding the branch-and-price, we will work on valid cuts and the possibility of dynamic programming algorithms for the pricing problem.
## Compliance with Ethical Standards
The authors acknowledge the financial support of the National Council for Scientific and Technological Development (CNPq grants 144257/2019-0, 312186/2020-7, 311185/2020-7, 311039/2020-0, 405369/2021-2, 313146/2022-5), the State of Goias Research Foundation (FAPEG), and the State of Sao Paulo Research
Foundation (FAPESP grant 2017/11831-1). This study was financed in part by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001.
## Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
## Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
|
2310.11381 | Chiral Bell-state transfer via dissipative Liouvillian dynamics | Chiral state transfer along closed loops in the vicinity of an exceptional
point is one of the many counter-intuitive observations in non-Hermitian
physics. The application of this property beyond proof-of-principle in quantum
physics is an open question. In this work, we demonstrate chiral state
conversion between singlet and triplet Bell states through fully-quantum
Liouvillian dynamics. Crucially, we demonstrate that this property can be used
for the chiral production of Bell states from separable states with a high
fidelity and for a large range of parameters. Additionally, we show that the
removal of quantum jumps from the dynamics through postselection can result in
near-perfect Bell states from initially separable states. Our work presents the
first application of chiral state transfer in quantum information processing
and demonstrates a novel way to control entangled states by means of
dissipation engineering. | Shishir Khandelwal, Weijian Chen, Kater W. Murch, Géraldine Haack | 2023-10-17T16:24:41Z | http://arxiv.org/abs/2310.11381v2 | # Chiral Bell-state transfer via dissipative Liouvillian dynamics
###### Abstract
Chiral state transfer along closed loops in the vicinity of an exceptional point is one of the many counter-intuitive observations in non-Hermitian physics. The application of this property beyond proof-of-principle, specifically in quantum physics, is an open question. In this work, we demonstrate chiral state conversion between singlet and triplet Bell states through fully-quantum Liouvillian dynamics. Crucially, we demonstrate that this property can be used for the chiral production of Bell states from separable states with a high fidelity and for a large range of parameters. Additionally, we show that the removal of quantum jumps from the dynamics through postselection can result in near-perfect Bell states from initially separable states. Our work paves the way to a novel type of quantum control and a potential application of non-Hermitian physics in quantum information processing.
_Introduction.-_ In recent years, Exceptional Points (EPs) in non-Hermitian systems have seen a surging interest, for example, in sensing [1; 2; 3; 4; 5; 6; 7], topological properties [8; 9; 10; 11; 12; 13; 14] and recently in the quantum control of dynamics [15; 16; 17; 18]. Open quantum systems are a natural platform to explore EPs and associated effects as their evolution is characteristically non-Hermitian [19]. Chiral state transfer along closed trajectories in the vicinity of an EP is a well-known property of non-Hermitian physics, where eigenstates can be adiabatically switched and the final state is solely determined by the directions of the trajectories [20]. While this effect was first discussed for classical and semiclassical systems [21; 22; 23; 24; 25; 26], its applications to quantum settings are only now being discovered. Theoretical works [27; 28] have been accompanied by successful experiments with superconducting circuits [29; 30] and single ions [31]. These results offer a pathway for quantum state control by utilising dissipation as a resource. However, presently, they do not necessarily involve genuinely quantum phenomena, such as the creation of quantum correlations like entanglement, or offer an advantage in a quantum setting.
In this work, we show for the first time that it is possible to create highly entangled states by means of chiral state conversion between two Bell states. To achieve this, we consider a minimal model of two interacting qubits, put in an out-of-equilibrium situation by coupling to thermal environments. This model has been used to demonstrate the presence of entanglement in the steady-state regime under autonomous dissipative dynamics only [32; 33; 34; 35] and has recently been investigated in the context of EPs [15]. We demonstrate that slow trajectories in the parameter space of this two-qubit system can result in chiral state transfer between two Bell states. Importantly, this property can be utilized to create highly entangled states from arbitrary initial states. We further demonstrate that the fidelity of state transfer and the created entanglement can be increased by means of postselection, at the cost of reduced success rate [27; 36]. Finally, we demonstrate that our results hold for a wide range of parameters, including trajectories not necessarily encircling an EP [12; 26].
_Markovian quantum dynamics.-_ We consider a system of interacting qubits, depicted in Fig. 1 (a), with transition energies \(\varepsilon_{1}=\varepsilon\), \(\varepsilon_{2}=\varepsilon+\delta\). The full Hamiltonian takes the form \(H_{0}=\sum_{j=1,2}\varepsilon_{j}\sigma_{+}^{(j)}\sigma_{-}^{(j)}+g(\sigma_{+ }^{(1)}\sigma_{-}^{(2)}+\sigma_{-}^{(1)}\sigma_{+}^{(2)})\) (\(j=1,2\)), where \(\sigma_{\pm}^{(j)}\) are the raising and lowering operators of qubit \(j\) and \(g\) is the interaction strength. Each qubit couples to its own fermionic environment with coupling strength \(\gamma_{j}\). Under the assumption that \(g,\gamma_{j},\delta\ll\varepsilon\), the Markovian evolution of the two qubits
Figure 1: (a) The system of two interacting qubits subject to gain and loss, with rates \(\gamma\) and \(\alpha\gamma\), respectively. (b) Representative Clockwise (CW) and Counterclockwise (CCW) trajectories in the space of \(\delta\) and \(\gamma\), along with depictions of EPs for different levels of postselection of quantum jumps, \(q=0\) (complete postselection), \(q=1\) (no postselection) and \(0<q<1\) (partial postselection). (c) Riemann sheets corresponding to the NHH eigenvalues involved in the EP, along with a sample evolution CCW trajectory showing state transfer. The more decaying branch of the eigenvalues (i.e., with more negative imaginary part) is depicted in red.
can be expressed in the following Lindblad form (with \(\hbar=k_{B}=1\)) [34; 37],
\[\dot{\rho}=\mathcal{L}\rho=-i\left[\rho,H_{\text{eff}}\right]_{\dagger}+\sum_{j= 1,2}\gamma_{j}^{+}\mathcal{J}_{+}^{(j)}\rho+\gamma_{j}^{-}\mathcal{J}_{-}^{(j)}\rho, \tag{1}\]
where \([a,b]_{\dagger}\coloneqq ab-b^{\dagger}a^{\dagger}\) and the effective non-Hermitian Hamiltonian (NHH) \(H_{\text{eff}}\coloneqq H_{0}-(i/2)\sum_{j}\gamma_{j}^{-}\sigma_{+}^{(j)} \sigma_{-}^{(j)}+\gamma_{j}^{+}\sigma_{-}^{(j)}\sigma_{+}^{(j)}\). The superoperators \(\mathcal{J}_{\pm}^{(j)}\rho\coloneqq\sigma_{\pm}^{(j)}\rho\sigma_{\mp}^{(j)}\) represent quantum jump terms. The corresponding rates \(\gamma_{j}^{+}=\gamma_{j}n_{j}\) and \(\gamma_{j}^{-}=\gamma_{j}(1-n_{j})\) are determined by the Fermi-Dirac distribution, \(n_{j}=(e^{\beta_{j}\varepsilon_{j}}+1)^{-1}\) with inverse temperature \(\beta_{j}\) of reservoir \(j\). The NHH induces coherent non-unitary loss, while quantum jumps represent a continuous monitoring by the environment [38]. It is possible to interpolate between purely NHH and fully quantum dynamics by using a hybrid-Liouvillian framework [36], which parametrizes quantum jumps with a parameter \(q\in[0,1]\),
\[\mathcal{L}_{[q]}\rho=-i\left[\rho,H_{\text{eff}}\right]_{\dagger}+q\sum_{j= 1,2}\gamma_{j}^{+}\mathcal{J}_{+}^{(j)}\rho+\gamma_{j}^{-}\mathcal{J}_{-}^{(j )}\rho. \tag{2}\]
The case \(q=1\) corresponds to full Lindblad dynamics, while \(q=0\) corresponds to the dynamics induced exclusively by the NHH \(H_{\text{eff}}\), i.e., with quantum jumps completely postselected out. The case \(0<q<1\) corresponds to partial postselection of quantum jumps or the weighted average between the two extremal cases. For \(q<1\), the hybrid Liouvillian is not trace preserving and requires renormalization at every time-step of the evolution. We note that for our setup, the Hilbert space is 4-dimensional and correspondingly, the Liouville space is 16-dimensional. The spectra of \(H_{\text{eff}}\) and \(\mathcal{L}_{[q]}\) and corresponding EPs are discussed in detail in the Supp. Mat. Secs. 1 and 2. Importantly, the NHH has a second-order EP involving eigenvectors which become Bell states \(\ket{\Psi^{\pm}}=\left(\ket{10}\pm\ket{01}\right)/\sqrt{2}\) when the system is decoupled from the reservoirs (conversely, the eigenvectors \(\ket{\Psi^{\pm}}\) of \(H_{0}\) are involved in an EP in the presence of gain and loss). By judicious choice of a parameter trajectory, it would therefore be expected that these Bell states play a central role in chiral state transfer. This forms the basis of the upcoming analysis.
_Trajectory in parameter space.-_ We set \(\gamma_{1}\coloneqq\gamma\) and \(\gamma_{2}\coloneqq\alpha\gamma_{1}\) with \(\alpha=\gamma_{2}/\gamma_{1}\) and choose a closed trajectory in the space of \(\gamma\) and \(\delta\) (Fig. 1 (b)). We pick the following form of periodic driving of the parameters,
\[\delta(t) = \pm\Delta\delta\sin\left(\frac{2\pi t}{T}\right) \tag{3}\] \[\gamma(t) = \gamma_{0}+\Delta\gamma\sin^{2}\left(\frac{\pi t}{T}\right)\,, \tag{4}\]
where \(\gamma_{0}\) sets the origin of the contour, \(\Delta\delta\) and \(\Delta\gamma\) are the amplitudes for \(\delta(t),\gamma(t)\) respectively and \(T\) is the period of the driving. The "\(+\)" and "\(-\)" signs correspond to Clockwise (CW) and Counterclockwise (CCW) trajectories, respectively. By taking an appropriately large \(\Delta\gamma\), the trajectory can be made to encircle the EPs. In the absence of quantum jumps \(q=0\), the dynamics are dictated by \(H_{\text{eff}}\). In this case, the unnormalized state at time \(t\) is simply \(\rho(t)=e^{-itH_{\text{eff}}}\rho(0)e^{itH_{\text{eff}}^{\dagger}}\). However, in the case \(0<q<1\), the unnormalized state evolves along the trajectory according to,
\[\rho(t)=\mathcal{T}e^{\int_{0}^{t}\mathcal{L}_{[q]}(t^{\prime})dt^{\prime}} \rho(0), \tag{5}\]
where \(\mathcal{T}\) denotes time ordering. The case \(q=1\) corresponds to full Lindblad dynamics which are trace-preserving. The following numerical results for \(q<1\) have been calculated using the above time-ordered exponential, along with appropriate renormalization.
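For concreteness, the following Python sketch (not the authors' implementation) illustrates this procedure: it builds the hybrid Liouvillian of Eq. (2) for the unidirectional gain/loss configuration of Fig. 1 (a), approximates the time-ordered exponential of Eq. (5) by a piecewise-constant product of matrix exponentials along the trajectory of Eqs. (3)-(4), and renormalizes the state at every step. The vectorization convention, the step count and the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
sp = np.array([[0., 0.], [1., 0.]])            # sigma_+ = |1><0| in the {|0>,|1>} basis
sm = sp.T                                      # sigma_-
sp1, sm1 = np.kron(sp, I2), np.kron(sm, I2)    # qubit 1
sp2, sm2 = np.kron(I2, sp), np.kron(I2, sm)    # qubit 2

def hybrid_liouvillian(eps, delta, g, gamma, alpha, q):
    """16x16 superoperator L_[q] acting on vec(rho) (column-major vectorisation),
    assuming gain on qubit 1 (n1 = 1) and loss on qubit 2 (n2 = 0)."""
    H0 = eps * sp1 @ sm1 + (eps + delta) * sp2 @ sm2 + g * (sp1 @ sm2 + sm1 @ sp2)
    jumps = [(gamma, sp1), (alpha * gamma, sm2)]                 # (rate, jump operator)
    Heff = H0 - 0.5j * sum(r * L.conj().T @ L for r, L in jumps)
    Id = np.eye(4)
    # -i (Heff rho - rho Heff^dag), using vec(A rho B) = (B^T kron A) vec(rho)
    L_q = -1j * (np.kron(Id, Heff) - np.kron(Heff.conj(), Id))
    for r, L in jumps:                                           # quantum jumps, weighted by q
        L_q += q * r * np.kron(L.conj(), L)
    return L_q

def evolve(rho0, T, steps, q=1.0, ccw=True, eps=1.0, g=0.01,
           delta_amp=0.04, gamma0=0.0, gamma_amp=0.008, alpha=1.2):
    rho, dt = rho0.astype(complex), T / steps
    sign = -1.0 if ccw else 1.0
    for n in range(steps):
        t = (n + 0.5) * dt
        delta = sign * delta_amp * np.sin(2 * np.pi * t / T)     # Eq. (3)
        gamma = gamma0 + gamma_amp * np.sin(np.pi * t / T) ** 2  # Eq. (4)
        P = expm(hybrid_liouvillian(eps, delta, g, gamma, alpha, q) * dt)
        rho = (P @ rho.reshape(-1, order="F")).reshape(4, 4, order="F")
        rho /= np.trace(rho).real                                # renormalise (needed for q < 1)
    return rho

psi_p = np.array([0, 1, 1, 0]) / np.sqrt(2)    # |Psi^+> in the {|00>,|01>,|10>,|11>} basis
psi_m = np.array([0, -1, 1, 0]) / np.sqrt(2)   # |Psi^->
rho_T = evolve(np.outer(psi_p, psi_p), T=2500, steps=5000)
print("F_{Psi^-}(T) =", np.real(psi_m @ rho_T @ psi_m))
```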
_Chiral Bell-state transfer.-_ We now turn to the core of this Letter: chiral state transfer in the vicinity of EPs in the two-qubit system. We first investigate the case \(\gamma_{0}=0\), in which the initial and final points (i.e., \(t=0\) and \(t=T\)) of the trajectory are \(\gamma=\delta=0\). In this case, at the initial and final times of the trajectory, \(H_{0}\) and \(H_{\text{eff}}\) have the same spectrum and eigenvectors,
Figure 2: Chiral Bell-state transfer. (a) Overlap with \(\ket{\Psi^{-}}\) for the two qubits initially in state \(\ket{\Psi^{+}}\) as a function of time, for CW (dashed) and CCW (solid) trajectories. (b) The final overlap (at time \(T\)) as a function of the quantum jump parameter \(q\), shown only for the CCW trajectory. The parameters are \(\varepsilon=1\), \(\Delta\delta/\varepsilon=0.04\), \(\gamma_{0}=0\), \(\Delta\gamma/\varepsilon=0.008\), \(g/\varepsilon=0.01\), \(\alpha=1.2\), \(\beta_{1}\rightarrow-\infty\), \(\beta_{2}\rightarrow\infty\) and \(T=2500\).
\(\{\left|11\right\rangle,\left|\Psi^{+}\right\rangle,\left|\Psi^{-}\right\rangle,\left|00\right\rangle\}\) (see Supp. Mat. Sec. 2 for general expressions). Importantly, at \(t=0\), the eigenmatrices of \(\mathcal{L}\) can be constructed from the eigenvectors of \(H_{0}\) and \(H_{\text{eff}}\). This equivalence between Hamiltonian, effective Hamiltonian, and Liouvillian dynamics at the origin of the trajectory is essential for chiral Bell-state transfer. For our case with fermionic reservoirs, we choose the inverse temperatures \(\beta_{1}\rightarrow-\infty\) and \(\beta_{2}\rightarrow\infty\). This choice of temperatures corresponds to perfect population inversion in reservoir 1 (\(n_{1}=1\)) and to initialization in the lowest energy level (\(n_{2}=0\)) for reservoir 2, leading to the specific gain and loss situation shown in Fig. 1 (a). As discussed at the end of this section, this corresponds to the optimal setup for chiral Bell-state transfer. We note that this condition has a connection to PT symmetry in the two-qubit system (see Supp. Mat. Sec. 4). We characterize the state along the trajectory at all times \(t\in[0,T]\) with its fidelity with respect to one of the Bell states, \(\mathcal{F}_{\left|\Psi^{\pm}\right\rangle}(t)\coloneqq\text{Tr}\{\left|\Psi^{\pm}\right\rangle\!\!\left\langle\Psi^{\pm}\right|\rho(t)\}\).
We first consider the case \(q=0\), where \(H_{\text{eff}}\) dictates the dynamics. In this case, chiral state transfer is expected with near-perfect fidelity due to the conservation of purity (see Supp. Mat. Sec. 3). At the origin of the trajectory, the two states involved in the EP are \(\left|\Psi^{\pm}\right\rangle\). For example, starting in \(\left|\Psi^{+}\right\rangle\), the system can be directed to either switch to \(\left|\Psi^{-}\right\rangle\) or end up again in \(\left|\Psi^{+}\right\rangle\), depending solely on the orientation of the trajectory. In the blue curves of Fig. 2 (a), we show this effect starting with \(\left|\Psi^{+}\right\rangle\), which ends up at time \(t=T\) nearly perfectly in the \(\left|\Psi^{-}\right\rangle\) state for a CCW trajectory (solid) or comes back to \(\left|\Psi^{+}\right\rangle\) for a CW trajectory (dashed). Therefore, the switching between the states is chiral in nature. We note that the fidelity of state transfer, although expected to be high in this case, can change depending on the parameters of the system and time period of the driving. For \(q=0\), we find that the fidelity can be increased to 1 simply by taking sufficiently slow trajectories.
When quantum jumps are not postselected out (\(q\neq 0\)), maximal fidelity is not reached at \(t=T\) as Liouvillian dynamics do not preserve purity (i.e., a Bell state cannot, in general, evolve into another Bell state). The case \(q=1\), which corresponds to full Liouvillian dynamics, is shown in red in Fig. 2 (a) reaching a final fidelity \(\mathcal{F}_{\left|\Psi^{-}\right\rangle}=0.83\). Let us mention that this is not an upper bound, and a higher fidelity can be achieved by altering the parameters. For example, in Fig. 4, a fidelity close to 0.9 is obtained. The chirality of the state \(\left|\Psi^{-}\right\rangle\) can similarly be verified; switching to \(\left|\Psi^{+}\right\rangle\) is observed for a CW trajectory, while the state returns to \(\left|\Psi^{-}\right\rangle\) for a CCW one. Moreover, the fidelity of transfer remains the same as for the case with \(\left|\Psi^{+}\right\rangle\) as the initial state. In Fig. 2 (b), we show that there is a monotonic decrease in transfer fidelity with increasing \(q\) (i.e., with decreasing degree of postselection). This corresponds to previous observations with a single qubit [27; 29] and can be traced to the loss of purity with quantum jumps. Crucially, as the inset in Fig. 2 (b) shows, there is a drastic fidelity loss for trajectories with origins far from \(\gamma_{0}=0\). This highlights the importance of the spectra of \(H_{0}\), \(H_{\text{eff}}\) and \(\mathcal{L}\) being equivalent at the origin.
Although analytical insights along the trajectory are challenging, for \(q=1\), important hints can be obtained from the spectrum of the one-cycle evolution superoperator \(\mathcal{P}(T)=\mathcal{T}\exp\left[\int_{0}^{T}\mathcal{L}(t^{\prime})dt^{ \prime}\right]\)[39]. Importantly, it has eigenvalues which are either 1 or lie within the unit circle. For long time periods of the driving (i.e., when \(T\) is large), the fixed point of the map is reached, which corresponds to the eigenmatrix with eigenvalue 1, unique for our system. In the time-independent case, this 1-eigenmatrix is the same as the 0-eigenmatrix of the time-independent Liouvillian \(\mathcal{L}\), corresponding to the usual steady state. This fixed point is evidently independent of the initial state of the system and only depends on the time-dependent Liouvillian, determined by the system and driving parameters. This can be understood within the framework of Floquet theory [39; 40], which can also be useful to look at the general case, \(q\in[0,1]\), with a corresponding \(\mathcal{P}_{\left|q\right\rangle}(T)\). We note that although for higher fidelities, we require the trajectory to be slow (i.e.,
Figure 3: (a) Chiral Bell-state transfer under general transport conditions. Fidelity \(\mathcal{F}_{\left|\Psi^{-}\right\rangle}(T)\) as a function of \(\beta_{1}\), with \(\beta_{2}\rightarrow\infty\). Maximal fidelity is reached for population inversion \(n_{1}=1,\beta_{1}\rightarrow-\infty\), independently of \(q\). This corresponds to unidirectional transport. (b) The final fidelity as a function of the amplitude \(\Delta\gamma\) for \(q=0\), 0.5 and 1. The trajectories are EP-encircling on the right of each corresponding dashed line. The other parameters are taken from Fig. 2.
the time period \(T\) to be large), the driving should not be adiabatic as it would drive the system to its instantaneous steady state at every point in the parameter trajectory. It has recently been noted that driving too slow also leads to loss of chirality [28]. We expect further insights to be built through slow-driving perturbation theory, by calculating corrections to adiabatic evolution [41].
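For readers who want to experiment with this construction, the short sketch below (not code from this work) approximates the one-cycle map numerically by composing short-time propagators of a time-dependent Liouvillian supplied as a matrix-valued function; `liouvillian(t)` is a model-specific placeholder.

```python
import numpy as np
from scipy.linalg import expm

def one_cycle_map(liouvillian, T, steps=2000):
    """Approximate P(T) = T exp[ int_0^T L(t') dt' ] by composing short-time
    propagators exp[L(t_k) dt]; `liouvillian(t)` must return the Liouvillian
    as a d^2 x d^2 complex matrix (a model-specific placeholder here)."""
    dt = T / steps
    P = np.eye(liouvillian(0.0).shape[0], dtype=complex)
    for k in range(steps):
        P = expm(liouvillian((k + 0.5) * dt) * dt) @ P
    return P

# The eigenmatrix of P(T) with eigenvalue (closest to) 1 gives the fixed point
# reached for long driving periods:
# evals, eigvecs = np.linalg.eig(one_cycle_map(my_liouvillian, T))
```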
We now extend our predictions to the case where transport is not necessarily unidirectional. This corresponds to a situation where perfect control of gain or loss cannot be implemented, for example, due to experimental constraints. We let \(\beta_{2}\to\infty\) (i.e., zero temperature), implying perfect loss at qubit 2, and calculate the final fidelity as a function of \(\beta_{1}\) (\(\beta_{1}<0\) implying population inversion). The absence of complete population inversion induces a decrease in the fidelity as shown in Fig. 3 (a). Optimal fidelity is achieved for perfect population inversion, \(\beta_{1}\to-\infty\), independently of \(q\). Moreover, for thermal environments (\(\beta_{1}>0\)) which satisfy local detailed balance, there is a drastic reduction in fidelity, the maximum is achieved for low \(\beta_{1}\), ideally \(\beta_{1}\to 0\) corresponding to \(n_{1}=1/2\). Our analysis demonstrates that high fidelity of state transfer requires \(n_{1}>1/2\), which is the case of population inversion. It can further be verified that simultaneously, a low temperature for reservoir 2 is required for a high fidelity.
Finally, let us comment on the role of EPs in our predictions. It has recently been observed in some semi-classical settings that encircling EPs is not necessary for chiral state transfer [25; 26]. We find that encircling EPs in our two-qubit system is not necessary to achieve chiral Bell-state transfer. We illustrate this in Fig. 3 (b) for different values of \(q\), by varying the radius \(\Delta\gamma\) of the trajectory along \(\gamma\) (see Eq. (3)). The left side of each dashed line corresponds to the fidelity \(\mathcal{F}_{|\Psi^{-}\rangle}(T)\) for trajectories _not_ encircling the corresponding EP, while the right side corresponds to trajectories encircling the EP. The plot demonstrates that chiral state transfer is possible for a large range of \(\Delta\gamma\), which makes this effect experimentally robust. Interestingly, we find that the maximum fidelity is obtained for trajectories _not_ encircling the EPs. Whether there is a fundamental principle underlying this observation is an interesting question beyond the scope of this work.
_Chiral production of Bell states.-_ In the previous section, we showed chiral state conversion between Bell states. We now extend this result to show that this property can be exploited to create entanglement between two qubits which are initially in any separable state. To this end, we consider the two qubits to be initially in a maximally mixed state, \(\rho(0)=\mathbb{1}/4\) and take the same parameter driving as discussed above. Similarly to the previous section, we characterize their state with their fidelity with one of the Bell states. To characterize the amount of entanglement, we additionally calculate the concurrence \(\mathcal{C}\); \(\mathcal{C}=0\) for a separable state, and \(\mathcal{C}=1\) for a maximally entangled state [42]. In Fig. 4 (a), the fidelity increases from \(\mathcal{F}=0.25\) at time \(t=0\) to its maximum at time \(t=T\), where the system is driven close to the \(|\Psi^{-}\rangle\) state. We note that this observation is rooted in the fact that for sufficiently large \(T\), the final state is independent of the initial state. However, the results are dependent on the parameters of system, which is reflected in the higher fidelity \(\mathcal{F}\sim 0.9\) obtained for the \(q=1\) case. In Fig. 4 (b), we plot the concurrence as a function of time. Importantly, for any \(q\), a high concurrence can be obtained; specifically for \(q=1\), \(\mathcal{C}>0.83\). Although this is not an upper bound, it far exceeds the highest possible concurrence, \(\mathcal{C}^{\rm(aut)}_{\rm max}\approx 0.31\)[35; 43], possible with the two-qubit system being operated autonomously (i.e., in the absence of driving or external control). Crucially, the production of Bell states has an associated chirality; for a CCW state, the system is driven to the \(|\Psi^{-}\rangle\) state, while for a CW trajectory, to the \(|\Psi^{+}\rangle\) state.
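As a reference for the two figures of merit used here, the following minimal sketch (our illustration, not the authors' code) evaluates the Bell-state fidelity and Wootters' concurrence of a two-qubit density matrix.

```python
import numpy as np

def bell_fidelity(rho, psi):
    """Fidelity F = <psi| rho |psi> of a two-qubit state rho with a Bell state."""
    return float(np.real(np.conj(psi) @ rho @ psi))

def concurrence(rho):
    """Wootters' concurrence of a 4x4 two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    flip = np.kron(sy, sy)
    rho_tilde = flip @ rho.conj() @ flip
    lam = np.sort(np.real(np.linalg.eigvals(rho @ rho_tilde)))[::-1]
    lam = np.sqrt(np.clip(lam, 0.0, None))      # decreasing sqrt-eigenvalues
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # |Psi^->
# e.g. bell_fidelity(rho_t, psi_minus), concurrence(rho_t)
```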
_Experimental scope.-_ We anticipate that an experimental implementation is readily achievable in the circuit QED platform [44]. Here we will utilize a superconducting device consisting of two transmon circuits [45] that interact via a resonator-mediated coupling. The two transmons have individual readout resonators [46] that allow us to introduce the respective thermal baths. In this case, the positive and negative temperature fermionic baths
can be replaced with engineered bosonic baths harnessing cavity assisted bath engineering [47], where population inversion can readily be achieved through appropriate off-resonant driving of the qubit and associated cavity. The limit \(q=1\), corresponding to Lindblad dynamics, can be attained by harnessing the lowest two energy levels of the transmon [29; 30], and the limit \(q=0\) can be accessed by utilizing the higher manifold of states of the transmon coupled with postselection [48].
_Discussion.-_ We have theoretically demonstrated Liouvillian chiral state transfer involving Bell states in a system of two dissipative qubits. This property allows for the chiral production of near-Bell states starting with any separable initial state, breaking the bounds for autonomous dissipative entanglement production. The results hold for a large range of parameters, operating in the vicinity of an EP, without the necessity of encircling it.
Our results have implications beyond simple two-qubit models, and present a recipe for quantum state control through controlled dissipation and clever eigenstate engineering. For example, by judicious choice of many-qubit Hamiltonians and dissipation, our results suggest that chiral state transfer, and by extension, entanglement production, can be seen for genuinely multipartite entangled states, like the W or GHZ state [49].
_Acknowledgements.-_ We thank Soumya S. Kumar for contributions to an early stage of this project and Parveen Kumar for useful correspondence. SK and GH acknowledge support from the SNSF through the starting grant PRIMA PR00P2_179748, as well as the NCCR SwissMAP for supporting short scientific visits. WC and KM acknowledge support from NSF Grant No. PHY-1752844 (CAREER), and the Air Force Office of Scientific Research (AFOSR) Multidisciplinary University Research Initiative (MURI) Award on Programmable systems with non-Hermitian quantum dynamics (Grant No. FA9550-21-1- 0202).
|
2305.03533 | Challenging interferometric imaging: Machine learning-based source
localization from uv-plane observations | In our work, we examine, for the first time, the possibility of fast and
efficient source localization directly from the uv-observations, omitting the
recovering of the dirty or clean images. We propose a deep neural network-based
framework that takes as its input a low-dimensional vector of sampled uv-data
and outputs source positions on the sky. We investigated a representation of
the complex-valued input uv-data via the real and imaginary and the magnitude
and phase components. We provided a comparison of the efficiency of the
proposed framework with the traditional source localization pipeline based on
the state-of-the-art Python Blob Detection and Source Finder (PyBDSF) method.
The investigation was performed on a data set of 9164 sky models simulated
using the Common Astronomy Software Applications (CASA) tool for the Atacama
Large Millimeter Array (ALMA) Cycle 5.3 antenna configuration. We investigated
two scenarios: (i) noise-free as an ideal case and (ii) sky simulations
including noise representative of typical extra-galactic millimeter
observations. In the noise-free case, the proposed localization framework
demonstrates the same high performance as the state-of-the-art PyBDSF method.
For noisy data, however, our new method demonstrates significantly better
performance, achieving a completeness level that is three times higher for
sources with uniform signal-to-noise (S/N) ratios between 1 and 10, and a high
increase in completeness in the low S/N regime. Furthermore, the execution time
of the proposed framework is significantly reduced (by factors about 30) as
compared to traditional methods that include image reconstructions from the
uv-plane and subsequent source detections. | O. Taran, O. Bait, M. Dessauges-Zavadsky, T. Holotyak, D. Schaerer, S. Voloshynovskiy | 2023-05-05T13:40:40Z | http://arxiv.org/abs/2305.03533v1 | Challenging interferometric imaging: Machine learning-based source localization from uv-plane observations
###### Abstract
Context:Rising interest in radio astronomy and upcoming projects in the field is expected to produce petabytes of data per day, questioning the applicability of traditional radio astronomy data analysis approaches under the new large-scale conditions. This requires new, intelligent, fast, and efficient methods that potentially involve less input from the domain expert.
Aims:In our work, we examine, for the first time, the possibility of fast and efficient source localization directly from the uv-observations, omitting the recovering of the dirty or clean images.
Methods:We propose a deep neural network-based framework that takes as its input a low-dimensional vector of sampled uv-data and outputs source positions on the sky. We investigated a representation of the complex-valued input uv-data via the real and imaginary and the magnitude and phase components. We provided a comparison of the efficiency of the proposed framework with the traditional source localization pipeline based on the state-of-the-art Python Blob Detection and Source Finder (PyBDSF) method. The investigation was performed on a data set of 9164 sky models simulated using the Common Astronomy Software Applications (CASA) tool for the Atacama Large Millimeter Array (ALMA) Cycle 5.3 antenna configuration.
Results:We investigated two scenarios: (i) noise-free as an ideal case and (ii) sky simulations including noise representative of typical extra-galactic millimeter observations. In the noise-free case, the proposed localization framework demonstrates the same high performance as the state-of-the-art PyBDSF method. For noisy data, however, our new method demonstrates significantly better performance, achieving a completeness level that is three times higher for sources with uniform signal-to-noise (S/N) ratios between 1 and 10, and a high increase in completeness in the low S/N regime. Furthermore, the execution time of the proposed framework is significantly reduced (by factors \(\sim 30\)) as compared to traditional methods that include image reconstructions from the uv-plane and subsequent source detections.
Conclusions:The proposed framework for obtaining fast and efficient source localization directly from uv-plane observations shows very encouraging results, which could open new horizons for interferometric imaging with existing and future facilities.
## 1 Introduction
Radio astronomy is at the cusp of a revolution in terms of the sensitivity that can be achieved at centimeter and meter wavelengths. The various radio-astronomical interferometers such as the LOw-Frequency ARray (LOFAR; van Haarlem et al. 2013), MeerKAT radio telescope (Jonas and MeerKAT Team 2016), Australian square kilometer array pathfinder (ASKAP; Hotan et al. 2021), and Murchison Widefield Array (MWA; Tingay et al. 2013) are already producing promising results. At the same time, radio astronomy is an extremely data-intensive science. It is expected that the upcoming projects will produce data volumes on an exabyte scale (Scaife 2020). Thus, it will be very challenging for astronomers to undertake standard radio data analysis tasks such as calibration, imaging and source localization using traditional techniques. Hence, it is vital to design fast and efficient techniques to replace the traditional data analysis approaches in radio astronomy.
At millimeter (mm) wavelengths, the Atacama Large Millimeter Array (ALMA) has led to several large imaging and spectroscopic programs thanks to its excellent sensitivity. Of particular interest in the context of the current work are the various large programs targeting extra-galactic deep fields. This includes, for example, the Reionization Era Bright Emission Line Survey (REBELS; Bouwens et al. 2022), the ALMA SPECTroscopic Survey in the Hubble Ultra-Deep Field (ASPECS; Walter et al. 2016), the ALMA Large Program to Investigate [CII] at Early times (ALPINE; Le Fevre et al. 2020; Bethermin et al. 2020; Faisst et al. 2020), and the GOODS-ALMA survey at 1.1 mm (Franco et al. 2018). Another rich ALMA data set on which our current study is focused is the Automated Mining of the ALMA Archive in the COSMOS Field (A\({}^{3}\)COSMOS) data set (Liu et al. 2019). These surveys study the gas and dust properties of galaxies at high redshifts.
Footnote 1: [https://sites.google.com/view/a3cosmos/home?authuser=0](https://sites.google.com/view/a3cosmos/home?authuser=0)
A key technical component while analyzing these data is to accurately identify sources (their positions, fluxes, and sizes) in noise-limited images produced by radio interferometers. This is important, for example, when calculating the number density and luminosity function of astronomical sources.
It has important implications for constraining various physical models in astrophysics and cosmology. Traditionally, radio source properties were measured using a two-dimensional (2D) Gaussian fit to the light profile (Condon, 1997) and usually required some level of manual intervention. Such manual interventions become increasingly difficult when dealing with the large-area ALMA surveys mentioned above. Also with the all-sky radio surveys such as the Faint Images of the Radio Sky at Twenty-cm (FIRST) survey (Becker et al., 1995) and the NRAO Very Large Array (VLA) Sky Survey (NVSS; Condon et al., 1998), which contain millions of sources, manual source detection is difficult. Future large-area radio surveys (Norris et al., 2013) are expected to increase the number of detected sources by an order of magnitude (e.g., the Evolutionary Map of the Universe (EMU) survey; Norris et al., 2011). Radio surveys of extra-galactic deep fields using upgraded radio facilities, such as the VLA-COSMOS 3 GHz survey (Smolcic et al., 2017) and the MeerKAT International GHz Tiered Extra-galactic Exploration (MIGHTEE) survey (Jarvis et al., 2016; Heywood et al., 2022), are already detecting several thousands of sources in a single field.
Thus, it is essential to build an automatic source detection algorithm that would accurately detect sources above the noise level with high completeness and simultaneously have a low number of false detections. There are several automatic source detection tools available in the literature, for example, Search and Destroy (SAD)2, Source-Extractor (SExtractor; Bertin & Arnouts, 1996), AEGEAN (Hancock et al., 2012, 2018) and Python Blob Detection and Source Finder (PyBDSF; Mohan & Rafferty, 2015), PySE (Carbone et al., 2018), CAESAR (Riggi et al., 2019, 2016), PROFOUND (Robotham et al., 2018; Hale et al., 2019), and SOFIA (Serra et al., 2015; Westmeier et al., 2021). Hopkins et al. (2015) provide a detailed comparison between different source detection algorithms available in the literature and their limitations. This works very well for bright sources. However, it is much more interesting to detect faint sources, typically with a signal-to-noise ratio (S/N) below 5.0, with high level of completeness and purity. This not only offers the ability to probe sources at even higher redshifts than currently achieved. It also has the potential to detect new kinds of astronomical sources lying close to the noise level, which may have been missed by studies using traditional source detection techniques.
Footnote 2: [http://www.aips.nrao.edu/cgi-bin/ZXHLP2.PL?SAD](http://www.aips.nrao.edu/cgi-bin/ZXHLP2.PL?SAD)
Recent advances in machine learning and in particular deep neural networks in the form of convolutional neural networks (CNNs) have led to a lot of success in radio astronomy. In particular, CNNs have been extensively used to classify radio galaxy morphologies (e.g., Aniryan & Thorat, 2017; Lukic et al., 2019; Ma et al., 2019; Tang et al., 2019; Bowles et al., 2021; Riggi et al., 2022). In particular, Riggi et al. (2022) have used Mask R-CNN object detection framework on the ASKAP EMU survey data to perform both object detections and classifications. Schmidt et al. (2022) used CNNs designed for super-resolution applications directly on the UV data to up-sample features in the case of sparse sampling, for instance, very-long-baseline interferometry. The images produced from these UV data show good recovery of source properties. Also, CNNs have shown great promise in point source detection, as demonstrated in ConvoSource (Lukic et al., 2019) and DeepSource (Vafaei Sadr et al., 2019). DeepSource is shown to be perfect in terms of purity and completeness down to a S/N of 4 and outperforms the current state-of-the-art source detection algorithm PyBDSF in several metrics. Recently, it was shown that an encoder-decoder based neural network DECORAS (Rezaei et al., 2022) can perform source detection even on dirty images down to a S/N of 5.5 and can also recover various source properties such as fluxes and sizes quite accurately. Delli Veneri et al. (2022) further shows that deep learning-based source detection can also be performed on dirty spectral data cubes and shows a good recovery of source properties such as morphology, flux density, and projection angle.
A common problem in most of the source-finding algorithms in the literature is that they are performed on the image plane and mostly on CLEAN images, which are computationally expensive to produce and scale poorly with the data volumes. Thus, even if these source finding algorithms are made efficient, they will still be limited by the time taken to produce the CLEAN images. It is well known that CLEAN leads to several imaging artifacts that can affect the purity of these source-finding algorithms. And despite automating several steps in source finding, these approaches still require some amount of manual intervention to exclude imaging artifacts. In this work, we circumvent these problems by designing a novel direct uv-plane based source localization algorithm using recent advances in deep neural networks.
Motivated by the achievements of the deep neural networks (DNN) made in many domains, we propose a fast and efficient DNN-based framework for source localization. In general, this framework takes as input a low-dimension vector of sampled uv-observations and outputs the source position on sky in the form of a binary map. Our proposed framework is targeted towards source detections in ALMA continuum images at mm wavelengths, particularly with the aim of detecting low S/N sources and to speed up the source detection process overall. We trained and tested our proposed framework on simulated ALMA data. We investigated the impact of different factors on the performance of the proposed framework, namely, the representation of complex-valued input data via real and imaginary or magnitude and phase real-valued components, the impact of receiver noise and atmospheric noise due to the presence of water vapor, the impact of the S/N values of the sources, and the impact of the number of sources in the field of view. We provide a detailed analysis of the execution time of the proposed framework and compare it with those of the traditional source localization pipeline.
The paper is organized as follows. The ALMA data simulation procedure, the chosen parameters and the analysis are given in Sect. 2. The traditional pipeline of source localization is described in Sect. 3. The framework proposed in this work is explained in Sect. 4. Section 5 is dedicated to the analysis of the obtained results and Sect. 6 offers comparisons with the literature. Finally, we present our conclusions in Sect. 7.
## 2 Data
We train the proposed DNN-based framework on synthetic data with known source localization. All reported experiments have been performed exclusively on synthetic data. In the future, we plan to apply the proposed frameworks to the real observations taken from the (A\({}^{3}\)COSMOS) data set (Liu et al., 2019). Thus, our simulated ALMA observations are designed to somewhat match the A\({}^{3}\)COSMOS observations.
For all our simulations, we used the Common Astronomy Software Applications (CASA) data processing software v6.2 (McMullin et al., 2007; THE CASA TEAM et al., 2022). We produced simulations for the 12-m ALMA array with a fixed configuration from the ALMA cycle 5.3. All our simulation pointings were randomly distributed within a radius of 1 deg around the COSMOS field centered on: J2000 10h00m28.6s
+02d12m21.0s. For simplicity, we fixed the ALMA observing band to Band-6 centered at 230 GHz split in 240 channels with a channel width of 7.8 MHz. Our simulation pipeline follows the standard approach of first simulating a true sky model at a known phase centre with known source positions and fluxes. The sky model consists only of sources with a Gaussian light profile with sizes close to or below the synthesized beam. In each pointing, the source position, size (i.e., the major and minor axis of the Gaussian), position angle, S/N, and total number of sources (in the range between 1 and 5) are kept at random and drawn from a uniform distribution. This range in the number of sources per pointing and their sizes are chosen from true data from the A\({}^{3}\)COSMOS data in Band-6 for the ALMA compact configuration (similar to ours). Instead of drawing our simulation parameters exactly from the observed distribution in A\({}^{3}\)COSMOS data, we used a uniform distribution, but the range of these parameters is motivated by the A\({}^{3}\)COSMOS data set (see the discussion in the following paragraph). The sky model contains a \(512\times 512\) pixel image with a pixel size of 0.1\({}^{\prime\prime}\). This sky model is used to generate the noise-free uv-data using the simalma task (McMullin et al. 2007).
\begin{table}
\begin{tabular}{c c c} \hline \hline Parameter & Value & Notes \\ \hline Antenna configuration & ALMA Cycle 5.3 & \\ Number of antennas & 50 & \\ Field & COSMOS & J2000 10h00m28.6s +02d12m21.0s \\ Number of pointings or simulations & 9164 & Every pointing is randomly chosen within a radius of 1 deg around the field. \\ \hline Central frequency & Band 6 (230 GHz) & \\ Number of channels & 240 & \\ Channel width & 7.8 MHz & \\ Sampling time & 10 secs & \\ Total Integration time & 20 mins & \\ Hour angle & Transit & \\ \hline Sky model dimensions & \(512\times 512\) pixel with 240 channels & \\ Pixel size & 0.1\({}^{\prime\prime}\) & \\ Source type & Gaussian & The size of the major/minor axis and position angle is varied randomly and is chosen to be below the synthesized beam. & \\ Major (minor) axis & 0.4\({}^{\prime\prime}\) to 0.8\({}^{\prime\prime}\) & Typical resolution is 0.89\({}^{\prime\prime}\)\(\times\) 0.82\({}^{\prime\prime}\) \\ Position angle (PA) & 0 to 360 deg & Chosen randomly \\ Number of sources & 0 –5 & Randomly chosen between 1 and 5. We add a few source free simulations \\ Source positions & random & Sources are randomly distributed within the primary beam. \\ Flux range & 0.05 mJy to 0.5 mJy & The flux of each source is randomly chosen from this range assuming a uniform distribution. This roughly keeps a flat S/N range for our data set. \\ Spectral index & 0 & We set this parameter to zero since the fractional bandwidth is quite small (0.8\%). \\ \hline Primary beam & 22.86 \({}^{\prime\prime}\) & \\ Synthesized beam & 0.89\({}^{\prime\prime}\)\(\times\) 0.82\({}^{\prime\prime}\) & \\ Noise (rwv) & 1.796 & This parameter adds the receiver and atmospheric noise due to water vapor to the visibilities. \\ Weighting & Natural & \\ Robust & 0.5 & \\ RMS noise (in images) & \(\sim\) 50 \(\mu\)Jy & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the data simulation parameters using CASA.
We sample each visibility every 10 secs for a total of 20 mins of integration on-source and choose the hour angle range such that the observing field is at transit. Appendix A shows a typical uv sampling for one of the simulations and the corresponding dirty beam. We then added the ALMA receiver and atmospheric noise due to water vapor to the uv-data using the pwv parameter value of 1.796 (McMullin et al. 2007). This is a typical value in Band-6, as mentioned in the ALMA technical handbook. We then average the visibilities, both noisy and noise-free, in time and frequency, by gridding the visibilities on a uniform grid using the msuvbin task (McMullin et al. 2007). Finally, we produce the dirty and CLEAN images from both the noisy and noise-free visibilities using the standard tclean task (McMullin et al. 2007). For our simulation setup, we typically reach an rms noise of 50 \(\mu\)Jy in our dirty and CLEAN images. The size of the primary and synthesized beam is 22.86\({}^{\prime\prime}\) and 0.82\({}^{\prime\prime}\), respectively. Table 1 summarizes the various simulation parameters.
Footnote 3: [https://almascience.nrao.edu/proposing/technical-handbook](https://almascience.nrao.edu/proposing/technical-handbook)
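For orientation, the sketch below shows the kind of CASA 6 calls behind this simulation chain; the file names, the niter/threshold values, and the exact task sequence are illustrative assumptions rather than the script actually used, while the observational parameters follow Table 1.

```python
# Sketch of the simulation chain with CASA 6 tasks; file names, niter and
# threshold are illustrative assumptions (see Table 1 for the adopted values).
from casatasks import simobserve, tclean

simobserve(project='pointing_0001',
           skymodel='true_sky_model.fits',       # 512x512 pixels of 0.1"
           antennalist='alma.cycle5.3.cfg',      # 50-antenna ALMA configuration
           integration='10s', totaltime='1200s',
           thermalnoise='tsys-atm', user_pwv=1.796)   # receiver + atmosphere

# The measurement-set path below follows simobserve's default naming and is
# an assumption.
tclean(vis='pointing_0001/pointing_0001.alma.cycle5.3.noisy.ms',
       imagename='pointing_0001_clean',
       imsize=512, cell='0.1arcsec',
       weighting='natural', niter=10000, threshold='0.1mJy')
```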
Although geared towards real observations from A\({}^{3}\)COSMOS, our simulations are intentionally chosen to not match them exactly. In particular, our sources are assigned random on-sky positions, fluxes, and sizes - as opposed to choosing directly from the distribution of sources from true observations or those motivated by cosmological simulations. The phase center and number of sources in each simulation pointing are also randomly distributed around the COSMOS field. This ensures that during training we do not end up in the region of latent space of our deep-learning model, which is only trained to work for a particular data set. Thus, it does not learn additional patterns which might exist in real observations or cosmological simulations such as, source clustering, the luminosity function, size distribution etc. Instead, our model is trained to be flexible enough to be applied to any other new ALMA data. On the other hand, for our current work, we fix the ALMA observing band, channel width and total integration time. This is to reduce the number of free parameters in ALMA simulations. In the future, we will test the effect of changing these two parameters especially when we apply our model to real data.
We produced a total of 9164 independent simulations or pointings, with the sky models containing between 1 and 5 sources. In total, we ended up with 27632 sources across all the pointings. Each simulation produces \(\sim 3.5\) GB of data, which includes the uv-data and the dirty and CLEAN images; this amounts to \(\sim 35\) TB for the entire data set. The simulations were produced on the LESTA computing cluster. More details are given in Appendix B.
Footnote 4: The produced data set is available by request.
Figure 1 and 2 show examples of the simulated data. The red circles highlight the positions of the sources in the true sky models. It is important to mention that the generated data set includes the same challenges encountered in real data, for example, the presence of blended sources (as in Fig. 2) and sources that are hardly distinguishable from the background noise as in Fig. 1 (top-middle and bottom-right). For the training of our framework, we perform the train-validation-test splitting three times with different seeds. Table 2 shows the number of sky models with different numbers of sources in the test subsets.
The notation we use is defined as follows: \(\mathbf{x}\in\mathbb{R}^{N\times M}\) denotes the true sky model, where \(N\) and \(M\) are the image size, set to 512 in our simulations. Then, \(\mathbf{\hat{x}}_{dirty}\in\mathbb{R}^{N\times M}\) and \(\mathbf{\hat{x}}_{clean}\in\mathbb{R}^{N\times M}\) denote the dirty and CLEAN images, respectively. \(\mathbf{y}\in\mathbb{C}^{K}\) stands for the sampled uv-data with \(K=1400\). \(\mathbf{m}\in\{0,1\}^{N\times M}\) is a binary source map, in which source pixels are set to 1, and background pixels are set to 0. We calculated the S/N as:
\[\text{S/N}=\frac{\text{total flux}}{\sigma_{\text{noise}}}, \tag{1}\]
where the total flux is the true source intensity and \(\sigma_{\text{noise}}\) is the noise rms. As explained in Table 1, \(\sigma_{\text{noise}}\sim 50\mu\)Jy.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{5}{c}{Number of sources} & Total \\ & 1 & 2 & 3 & 4 & 5 & \\ \hline Test subset 1 & 161 & 177 & 196 & 177 & 205 & 916 \\ Test subset 2 & 197 & 191 & 163 & 182 & 183 & 916 \\ Test subset 3 & 165 & 175 & 181 & 180 & 215 & 916 \\ \hline Average & 174 & 181 & 180 & 180 & 201 & 916 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Number of sky models with different numbers of sources in the test subsets.
Figure 1: Examples of simulated data used in this study. Top-left: True sky model. Top-middle: Noise-free model. Top-right: Noisy. Bottom-left: True sky model. Bottom-middle: Noise-free. Bottom right: Noisy. For both noise-free and noisy cases, the CLEAN representations are visualized. The sources are highlighted as red circles.
Figure 2: Example of closely located sources of different intensity. For better visibility, the cropped and zoomed parts of the true sky models are shown.
## 3 Traditional pipeline of source localization
The traditional pipeline of source localization is based on the reconstruction of the dirty and CLEAN images. The schematic representation of the traditional pipeline is given in Fig. 3.
### 3.1 Dirty image recovering
The recovery of the dirty image can be formulated as the recovery of a high-dimensional sky model \(\mathbf{x}\in\mathbb{R}^{N\times M}\) from the corresponding low-dimensional sampled uv-visibilities \(\mathbf{y}\in\mathbb{C}^{K}\) (\(K\ll N\cdot M\)) corrupted by noise \(\mathbf{e}\):
Footnote 6: The sky model \(\mathbf{x}\) is of size \(N\times M=512\times 512\). For the simplicity of notations we use the vectorized representation \(N\cdot M\).
\[\mathbf{y}=\mathbf{W}\mathbf{x}+\mathbf{e}, \tag{2}\]
where \(\mathbf{W}=\mathbf{P}_{\Omega}\mathbf{\Psi}\in\mathbb{C}^{K\times N\cdot M}\) is a measurement sub-sampling matrix with an orthonormal basis \(\mathbf{\Psi}\in\mathbb{C}^{N\cdot M\times N\cdot M}\) and a sampling operator \(\mathbf{P}_{\Omega}:\mathbb{C}^{N\cdot M}\rightarrow\mathbb{C}^{K}\) with \(|\Omega|=K\).
It should be pointed out that due to the physical imaging constraints in the radio-astronomy, \(\mathbf{\Psi}\) corresponds to the Fourier operator and \(\mathbf{P}_{\Omega}\) is determined by the antennas configuration, measured frequencies, Earth's movement, sampling, and integration time.
The recovering of the \(\mathbf{\hat{x}}_{dirty}\in\mathbb{R}^{N\times M}\) consists in: (i) \(\mathbf{P}_{\Omega}^{T}\mathbf{y}\) - expanding the observation \(\mathbf{y}\) to a \(N\times M\) representation by placing zeros in the entries corresponding to \(\Omega^{C}\) that is the complementary support set of \(\Omega\), (ii) adding the corresponding symmetrical signal via \(S(.)\) related to the symmetry property of Fourier and then (iii) applying the inverse Fourier \(\mathbf{\Psi}^{*}\):
\[\mathbf{\hat{x}}_{dirty}=\mathbf{\Psi}^{*}\big{(}\mathbf{P}_{\Omega}^{T} \mathbf{y}+S\big{(}\mathbf{P}_{\Omega}^{T}\mathbf{y}\big{)}\big{)}. \tag{3}\]
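A minimal numerical illustration of Eq. (3), assuming the visibilities have already been binned onto the same \(N\times M\) grid as the sky model (FFT-shift conventions are illustrative and depend on how the grid is defined):

```python
import numpy as np

def dirty_image(vis, uv_mask):
    """Minimal sketch of Eq. (3): zero-fill the sampled uv-grid and inverse-FFT.

    vis     : complex visibilities on the sampled cells (1D array of length K)
    uv_mask : boolean (N, M) array marking the sampled cells Omega
    """
    grid = np.zeros(uv_mask.shape, dtype=complex)
    grid[uv_mask] = vis                                    # P_Omega^T y
    # S(.): Hermitian-conjugate samples at the mirrored uv positions
    mirrored = np.conj(np.roll(grid[::-1, ::-1], shift=(1, 1), axis=(0, 1)))
    grid = np.where(uv_mask, grid, mirrored)               # keep measured cells
    return np.real(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid))))
```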
### 3.2 CLEAN image recovering
Due to the presence of missing Fourier frequencies in \(\mathbf{P}_{\Omega^{C}}\), the dirty image contains many artifacts, leading to an increase in falsely detected sources. In order to remove these artifacts, the CLEAN algorithm (Högbom 1974) is applied to the dirty image.
CLEAN is a fundamental method used in radio astronomy and consists of several steps. First, it finds the intensity and position of the peak of greatest absolute intensity in the dirty image \(\mathbf{\hat{x}}_{dirty}\). Second, it generates at this position a spike of an intensity equal to the product of a damping factor and the intensity at that position. Usually, the damping factor is \(\leq 1\) and is termed the loop gain. The generated spikes are convolved with the instrumental point spread function (PSF). Then, the obtained instrumental response is subtracted from the dirty image \(\mathbf{\hat{x}}_{dirty}\). This procedure is repeated until any remaining peak is below some user-specified level. The search for peaks may be constrained to specified areas of the image, called CLEAN windows. Then the accumulated point sources are convolved with an idealized CLEAN beam that, usually, is an elliptical Gaussian fitted to the central lobe of the dirty beam. Finally, the remaining residual of the dirty image is added. The obtained reconstruction is called the CLEAN image \(\mathbf{\hat{x}}_{clean}\).
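The following sketch reproduces the basic Högbom loop described above with NumPy/SciPy; it is a didactic illustration, not the CASA implementation, and the CLEAN-beam width is an arbitrary placeholder.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hogbom_clean(dirty, psf, gain=0.1, niter=1000, threshold=0.0):
    """Didactic Hogbom CLEAN loop (not the CASA implementation).

    `psf` is the dirty beam, same shape as `dirty` with its peak at the array
    centre; the shift uses np.roll, so wrap-around at the edges is ignored."""
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)
    centre = np.array(residual.shape) // 2
    for _ in range(niter):
        peak = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        if np.abs(residual[peak]) <= threshold:
            break
        flux = gain * residual[peak]            # loop gain * peak intensity
        model[peak] += flux
        shift = tuple(int(p - c) for p, c in zip(peak, centre))
        residual -= flux * np.roll(psf, shift, axis=(0, 1))
    clean_beam_sigma = 2.0                      # pixels; placeholder value
    clean_image = gaussian_filter(model, clean_beam_sigma) + residual
    return clean_image, model, residual
```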
In terms of the fast processing of large amounts of data, CLEAN as well as its accelerated versions such as the \(w\)-Stacking Clean (WSCLEAN; Offringa et al. 2014) act as the main bottleneck. The reconstruction remains time-consuming since several parameters have to be adjusted in iterative runs.
### 3.3 Source localization
Traditionally, the source localization is applied to the CLEAN images. Depending on the antenna configuration and the corresponding amount of artifacts, it might also be applied to the dirty images. Nowadays, a broad range of different source localization approaches exist that are used in the traditional pipeline. Hopkins et al. (2015) provide a good overview of these methods. In our work, we focus on PyBDSF (Mohan & Rafferty 2015).
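As an illustration of this step, a PyBDSF run on a reconstructed FITS image reduces to a few calls; the file name is hypothetical, and the thresholds shown are the values quoted later for the noise-free experiments.

```python
import bdsf

# Illustrative file name; thresh_pix / thresh_isl are the values used for the
# noise-free experiments reported below.
img = bdsf.process_image('pointing_0001_clean.fits',
                         thresh_pix=7.0,   # source detection threshold (sigma)
                         thresh_isl=5.0)   # island boundary threshold (sigma)
img.write_catalog(outfile='sources.csv', format='csv',
                  catalog_type='srl', clobber=True)
```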
## 4 Proposed framework
We propose a DNN-based framework that performs the source localization in the form of a binary map \(\mathbf{m}\in\{0,1\}^{N\times M}\) directly from the uv-data by taking only sampled visibility data without reconstruction of dirty or CLEAN images as an input. The general scheme of the basic framework is illustrated in Fig. 4 and consists of three steps: (1) input data pre-processing (i.e., normalization), (2) DNN-processing: stage 1 and 2, (3) post-binarization and source localization.
It should be pointed out that the sampled visibility data \(\mathbf{y}\in\mathbb{C}^{K}\) are complex-valued. However, modern DNNs are not designed to work with complex values. To deal with this, we decompose the complex-valued uv-samples \(\mathbf{y}\in\mathbb{C}^{K}\) into a real-valued real and imaginary or magnitude and phase representation \(\mathbf{\hat{y}}\in\mathbb{R}^{2\times K}\). Due to the nature of the Fourier transform, in the case of the real and imaginary representation, the trained model \(g_{\mathbf{\theta}}\) at stage 1 is additive in nature, as shown on the left side of Fig. 5, while in the case of the magnitude and phase representation, it is multiplicative in nature, as shown on the right side of Fig. 5. The other steps are universal and remain unchanged.
Footnote 7: The Python implementation of the proposed framework is publicly available at [https://github.com/taran0/ml-based_source_localization_from_uv-plane](https://github.com/taran0/ml-based_source_localization_from_uv-plane).
### 4.1 Input data pre-processing
Examples of the raw noise-free and noisy input data, \(\mathbf{\hat{y}}\in\mathbb{R}^{2\times K}\), are given in Fig. 6. As can be seen from the noise-free case, the real and imaginary components have the same dynamic range
Figure 3: Schematic representation of the traditional pipeline.
of power \(10^{-4}\) (Fig. 6, top-left), while the magnitude and phase have different dynamic range (Fig. 6, top-right): the magnitude is of a power of \(10^{-4}\), similarly to the real and imaginary components, and the phase is in a much wider range, going from \(-3\) to \(3\). Regarding the noisy case, it is important to note that in the real, imaginary, and magnitude components, the noise dominates and increases the dynamic range of data by two orders of magnitude, while the phase component is less affected by noise and preserves the same dynamic range: from \(-3\) to \(3\). To guarantee stable DNN training and to avoid the vanishing of gradients, the real and imaginary components are normalized by multiplying by \(1000\) and clipped in the range \([-10,10]\). The clipping allows us to reduce the impact of strong outliers. The magnitude component was multiplied by \(100\) and clipped in the range of \([0,1]\)8. The phase component was processed without any normalization. Empirically, this type of normalization was found to be optimal during the proposed model training. The normalized vector is denoted as \(\mathbf{\tilde{y}}\in\mathbb{R}^{2\times K}\)
Footnote 8: As it can be seen from top-right Fig. 6, the dynamic range of the magnitude is smaller than the dynamic range of the phase component. In this respect, we try to preserve this deviation and use the smaller normalization factor for the magnitude compared to the real and imaginary components.
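The normalization described above amounts to the following sketch, with the scaling and clipping constants taken directly from the text:

```python
import numpy as np

def preprocess(vis, mode='reim'):
    """Normalization applied before the DNN (constants as stated in the text).

    vis : complex sampled visibilities, shape (K,).  Returns shape (2, K).
    """
    if mode == 'reim':
        re = np.clip(vis.real * 1000.0, -10.0, 10.0)
        im = np.clip(vis.imag * 1000.0, -10.0, 10.0)
        return np.stack([re, im])
    mag = np.clip(np.abs(vis) * 100.0, 0.0, 1.0)   # magnitude & phase mode
    phase = np.angle(vis)                          # left un-normalized
    return np.stack([mag, phase])
```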
### 4.2 DNN processing
It is important to highlight that our goal is to perform the source localization. Thus, we are not interested, for the time being, in the prediction of the sources fluxes and other parameters such as size and so on. In this respect, stages 1 and 2 are trained to minimize the similarity score with respect to the true binary source map, \(\mathbf{m}\in\{0,1\}^{N\times M}\).
#### 4.2.1 Stage 1: Real and imaginary representation
For the real and imaginary representation of the input data the schematic architecture of the model, \(\mathbf{g_{\mathbf{\theta}}}\) is shown in the left panel of Fig. 5.9 At first, the given input low-dimensional representation \(\mathbf{\tilde{y}}\in\mathbb{R}^{2\times K}\) is mapped into a higher dimension representation \(\mathbf{a}_{1}\in\mathbb{R}^{2\times N\cdot M}\) via a fully connected layer. Then the obtained high dimension representation is reshaped into a square representation \(\mathbf{a}_{2}\in\mathbb{R}^{2\times N\times M}\). Finally, the weighted sum of the obtained components produces the output \(\mathbf{\hat{m}}_{u:1}\in\mathbb{R}^{N\times M}\).
Footnote 9: The architecture details are given in Table D.1 in the Appendix D.
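A PyTorch sketch of this stage-1 branch is given below; it follows the description above (fully connected lift, reshape, learned weighted sum), whereas the exact layer configuration is the one of Table D.1 and is not reproduced here.

```python
import torch
import torch.nn as nn

class Stage1ReIm(nn.Module):
    """Sketch of the stage-1 model for the Re & Im representation (Fig. 5, left).

    A fully connected layer lifts each K-dim component to N*M (weights shared
    across the two components in this sketch), the result is reshaped to two
    square maps, and a learned weighted sum combines them."""
    def __init__(self, K, N=512, M=512):
        super().__init__()
        self.N, self.M = N, M
        self.fc = nn.Linear(K, N * M)                 # y_tilde -> a_1
        self.w = nn.Parameter(torch.ones(2, 1, 1))    # weights of the sum

    def forward(self, y):                             # y: (batch, 2, K)
        a1 = self.fc(y)                               # (batch, 2, N*M)
        a2 = a1.view(-1, 2, self.N, self.M)           # reshape to square maps
        return (self.w * a2).sum(dim=1)               # m_hat_1: (batch, N, M)
```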
#### 4.2.2 Stage 1: Magnitude and phase representation
For the magnitude and phase representation of the input data, the schematic architecture of the model \(\mathbf{g_{\mathbf{\theta}}}\) is shown in right Fig. 5.10 First, we performed the weighted element-wise multiplication of the two low-dimensional components in \(\tilde{\mathbf{y}}\in\mathbb{R}^{2\times K}\). Then, the resulting low-dimensional representation, \(\mathbf{b}_{1}\in\mathbb{R}^{1\times K}\), was mapped into a higher dimensional representation, \(\mathbf{b}_{2}\in\mathbb{R}^{1\times N\cdot M}\), via a fully connected layer. Finally, this high-dimensional representation is reshaped to a square representation that corresponds to the output \(\mathbf{\hat{m}}_{u:1}\in\mathbb{R}^{N\times M}\) of the stage 1, which is a real valued estimation of the source map.
Footnote 10: The architecture details are given in Table D.2 in the Appendix D.
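Analogously, the magnitude-and-phase branch can be sketched as a learned element-wise product followed by the fully connected lift; the actual layer details are those of Table D.2.

```python
import torch
import torch.nn as nn

class Stage1MagPhase(nn.Module):
    """Sketch of the stage-1 model for the Mag & Phase representation
    (Fig. 5, right): a learned element-wise weighted product of the two
    components, followed by a fully connected lift to N*M and a reshape."""
    def __init__(self, K, N=512, M=512):
        super().__init__()
        self.N, self.M = N, M
        self.w_mag = nn.Parameter(torch.ones(K))
        self.w_phase = nn.Parameter(torch.ones(K))
        self.fc = nn.Linear(K, N * M)

    def forward(self, y):                              # y: (batch, 2, K)
        b1 = (self.w_mag * y[:, 0]) * (self.w_phase * y[:, 1])   # (batch, K)
        return self.fc(b1).view(-1, self.N, self.M)    # m_hat_1: (batch, N, M)
```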
The estimation of the parameters \(\mathbf{\theta}\) of the trained model \(g_{\mathbf{\theta}}\) is based on solving the minimization problem:
\[\hat{\mathbf{\theta}}=\underset{\mathbf{\theta}}{\operatorname{argmin}}\;\mathcal{L}_{u:1}(\mathbf{\theta})=\mathcal{L}_{\text{mse}}(\mathbf{m},\mathbf{\hat{m}}_{u:1}), \tag{4}\]
where \(\mathbf{\hat{m}}_{u:1}=g_{\mathbf{\theta}}(\mathbf{\tilde{y}})\) and \(\mathcal{L}_{\text{mse}}(\cdot,\cdot)\) denotes the mean square loss.
Figure 4: Schematic representation of the proposed framework.
Figure 5: Detailed architecture of the \(g_{\mathbf{\theta}}\) model, where \(\mathbf{\tilde{y}}\) denotes the normalized input data \(\mathbf{\tilde{y}}\): in the left panel, \(\mathbf{\tilde{y}}\in\mathbb{R}^{2\times K}\) is represented by the real and imaginary components. In the right panel, \(\mathbf{\tilde{y}}\in\mathbb{R}^{2\times K}\) is represented by the magnitude and phase components.
#### 4.2.3 Stage 2
Due to the fact that the network prediction, \(\hat{\textbf{m}}_{s,1}\in\mathbb{R}^{N\times M}\), is valued as real, the model \(g_{\boldsymbol{\theta}}\) preserves some information about the true source intensity. Therefore, some predicted sources might be of low intensity, as shown in Fig. 7 (the second sub-figure). As a result, after the binarization process, certain sources might be lost, especially in the noisy case. In this respect, stage 2 might be considered as a quality enhancement stage for the source detection where (as can be seen in Fig. 7, i.e., the third sub-figure) the predicted source intensity is close to the binary representation. Taking into account the simple nature of the expected prediction in the form of a simple binary map without any complex shapes and textures, we used a simple auto-encoder model11 as the model \(q_{\boldsymbol{\phi}}\). For the more complicated tasks such as source intensity or any other estimation of physical parameters, the model \(q_{\boldsymbol{\phi}}\) might be represented by more advanced models, such as UNet (Long et al., 2015) or Transformers (Vaswani et al., 2017).
Footnote 11: The architecture details of the used auto-encoder are given in Table D.3 in the Appendix D.
The estimation of the parameters \(\boldsymbol{\phi}\) of the trained model \(q_{\boldsymbol{\phi}}\) is done by solving the minimization problem:
\[\hat{\boldsymbol{\phi}}=\operatorname*{argmin}_{\boldsymbol{\phi}}\mathcal{L} _{\pi,2}(\boldsymbol{\phi})=\mathcal{L}_{\text{mse}}(\textbf{m},\hat{\textbf{m }}_{\pi,2}), \tag{5}\]
where \(\hat{\textbf{m}}_{\pi,2}=q_{\boldsymbol{\phi}}(g_{\boldsymbol{\theta}^{*}}(\tilde{\mathbf{y}}))\), \(\boldsymbol{\theta}^{*}\) denotes the fixed pre-trained model parameters, and \(\mathcal{L}_{\text{mse}}(\cdot,\cdot)\) denotes the mean square loss.
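In code, the two-stage optimization of Eqs. (4)-(5) can be sketched as follows; only the stage-2 step is shown, with the stage-1 weights \(\theta^{*}\) kept frozen, and the optimizer settings are assumptions rather than the paper's values.

```python
import torch
import torch.nn.functional as F

def train_stage2_step(g_theta, q_phi, optimizer, y_batch, m_batch):
    """One optimization step for stage 2 (Eq. (5)): the auto-encoder q_phi is
    trained against the binary map m while the stage-1 weights stay frozen."""
    g_theta.eval()
    with torch.no_grad():                   # theta* fixed (pre-trained, Eq. (4))
        m_stage1 = g_theta(y_batch)
    m_stage2 = q_phi(m_stage1)
    loss = F.mse_loss(m_stage2, m_batch)    # L_mse(m, m_hat_2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```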
### 4.3 Post-binarization and source localization
Since the DNN must remain differentiable for gradient propagation during training, it cannot produce binary outputs directly. In this respect, a post-binarization stage is necessary. However, instead of performing hard thresholding, which might produce source blobs of an uncontrollable size, a morphology-based binarization was used.
The morphological binarization consists of determining the optimal threshold value12 for every particular DNN output, \(\hat{\textbf{m}}\in\mathbb{R}^{N\times M}\), on the fly. After the binarization, the connected neighborhoods are detected, while overly small or big regions are rejected. Finally, the source position was estimated for each detected region by taking their centroids.
Footnote 12: More details are given in Appendix D.
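A minimal version of this post-processing could look as follows; Otsu's threshold and the area limits are illustrative stand-ins for the morphological procedure detailed in Appendix D.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def localize_sources(m_hat, area_min=5, area_max=500):
    """Sketch of the post-binarization step: per-output thresholding, connected
    components, size filtering, and centroids of the surviving regions."""
    binary = m_hat > threshold_otsu(m_hat)
    positions = [r.centroid for r in regionprops(label(binary))
                 if area_min <= r.area <= area_max]
    return np.array(positions)                  # (row, col) source positions
```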
### 4.4 Conceptual advantages
Compared to the traditional pipeline, conceptually, the proposed framework has the following advantages. The input data dimensionality is much smaller: only the sampled uv-data are processed to produce a localization binary map, **m**, while the traditional pipeline works with the full-size uv-plane, where missing frequencies are filled with zeros. Moreover, the proposed framework might be extended to any type of data sources and even for mixed types, while the CLEAN method used in the traditional pipeline offers only a poor reconstruction of regions of extended emission. For this reason, it is usually applied to the sky models that are only composed of point sources. For regions of extended emission, other methods are used. In this respect, the traditional
Figure 6: Examples of input data. Top-left: Real and imaginary components in the noise-free case. Top-right: Magnitude and phase components in the noise-free case. Bottom-left: Real and imaginary components in the noisy case. Bottom-right: Magnitude and phase components in the noisy case.
Figure 7: Example of the proposed model’s outputs. Left to right: Example of the true binary source map \(\textbf{m}\in\{0,1\}^{N\times M}\). Proposed framework estimations, \(\hat{\textbf{m}}_{\pi,1}\in\mathbb{R}^{N\times M}\) and \(\hat{\textbf{m}}_{\pi,2}\in\mathbb{R}^{N\times M}\) based on the real and imaginary input data representation (for the magnitude and phase representation the results look similar). Final estimation: \(\hat{\textbf{m}}\in\{0,1\}^{N\times M}\).
pipeline is more demanding in terms of the expert knowledge. Finally, the CLEAN reconstruction as well as the dirty image estimation are not required as such to carry out the source detection, which represents a considerable computational advantage in practice.
## 5 Results and analysis
By default and unless specified, all results are given as an average over three test subsets mentioned in Sect. 2. For simplicity, we named the proposed framework based on the real and imaginary components of the input data representation as "Re & Im" and those based on the magnitude and phase components are given as "Mag & Phase."
### 5.1 Metrics
We cross-matched detected sources with true sources in each sky model using a match radius, \(R\), equal to the beam size as shown in Fig. 8 to compute the model performance metrics listed below. The impact of the matched radius in the results is discussed in Sect. 5.3.
To evaluate the performance of the traditional pipeline and the proposed framework, we used the purity and completeness metrics that are defined as follows.
Purity shows the fraction of the true sources among all detected sources:
\[\text{Purity}=\frac{\text{TP}}{\text{TP}+\text{FP}}, \tag{6}\]
where TP and FP denote true positive and false positive sources, respectively. Here, 1 - purity represents the fraction of false detections.
Completeness is equal to the fraction of true sources that are successfully detected:
\[\text{Completeness}=\frac{\text{TP}}{\text{TP}+\text{FN}} \tag{7}\]
where FN represents the false negative (missing) sources.
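For concreteness, the cross-matching and the two metrics can be sketched as below; the greedy nearest-neighbour matching is an assumption about implementation details not spelled out in the text.

```python
import numpy as np

def purity_completeness(detected, true, radius):
    """Greedy cross-match within `radius` (the beam size in the text) and the
    resulting purity/completeness. `detected`/`true` are (n, 2) pixel positions."""
    matched = set()
    tp = 0
    for d in detected:
        if len(true) == 0:
            break
        dist = np.linalg.norm(true - d, axis=1)
        j = int(np.argmin(dist))
        if dist[j] <= radius and j not in matched:
            matched.add(j)
            tp += 1
    fp, fn = len(detected) - tp, len(true) - tp
    purity = tp / (tp + fp) if (tp + fp) else 1.0
    completeness = tp / (tp + fn) if (tp + fn) else 1.0
    return purity, completeness
```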
### 5.2 Results on noise-free data
We consider the noise-free case as an ideal condition measurement. In real observations, this reflects a hypothetical situation, since all measurements are corrupted by noise of different nature. However, we found it to be important to validate the performance of the investigated approaches under the assumption that there is no noise to understand their baseline performance.
Figure 9 demonstrates the purity and completeness obtained for PyBDSF applied to the CLEAN and dirty images and the proposed framework in four different configurations. We chose the source detection parameters in such a way as to have the maximum completeness under the assumption that the acceptable purity should be about 94 %13. It is easy to see that PyBDSF on the CLEAN and dirty images and the proposed Re & Im framework exhibit a similar performance. There is no big difference in the performance of PyBDSF for the CLEAN and dirty images. The Re & Im stage 1 and 2 provide very close results. In the case of Mag & Phase, the performance is worse. As it is mentioned in Sect. 4.1, there is a big difference in the dynamic range of the magnitude and phase components. On one side, such a difference is natural and should be preserved. On the other side, it is a disadvantage and a challenge for the DNN training. This explains the obtained non-optimal results.
Footnote 13: The used PyBDSF parameters are _thresh_pix_ = 7 and _thresh_isl_ = 5. In the proposed framework _area_lim_ parameter was set to 125 in case of Re & Im at stage 1 and to 200 at stage 2. In Mag & Phase the same parameter was set to 300 and 310 at stage 1 and stage 2 correspondingly.
### 5.3 Results on noisy data
The noisy scenario is of a particular interest for our study of real observations. In this respect, we provide a more detailed analysis of the obtained results.
Before presenting the results of the proposed framework for the noisy case, we would like to underline the difference in the predictions between the noise-free and noisy case. It helps gain an understanding of why the proposed framework is better than the traditional pipeline. The noise-free and noisy predictions of the Re & Im after stage 1 are shown in Fig. 10. It is important to highlight that in contrast to the CLEAN or dirty noisy data used in the traditional pipeline (Fig. 1), the proposed framework predictions based on the noisy input data are free from the back
Figure 8: Schematic representation of the acceptable distance of \(R\) between the true and predicted sources.
Figure 10: Difference between the noise-free (left) and noisy (right) prediction of the proposed Re & Im stage 1 for a simulated sky model.
Figure 9: Purity and completeness of the methods under investigation in the noise-free case. Small black lines on the top of each bar correspond to the standard deviation from the average.
ground noise, which is very important for the efficient source localization.
In Fig. 11, we show the purity and completeness for PyBDSF and the proposed framework in four different configurations. For a fair comparison, the parameters of source localization for all methods under investigation are selected in such a way to have a purity of about 90 %14. We fixed and used these parameters for all the following experiments.
Footnote 14: We set PyBDSF parameter _rms_ to 4.2450E-05 for the CLEAN data and to 4.275E-05 for the dirty case. In the proposed framework _area_lim_ parameter was set to 75 in case of Re & Im at stage 1 and to 240 at stage 2. In Mag & Phase the same parameter was set to 200 and 340 at stage 1 and stage 2 correspondingly.
In comparison to the noise-free case, in the noisy case we have slightly smaller purity (about 3-4 % less), but the obtained completeness is significantly smaller for all methods under investigation. Secondly, it is important to note that the PyBDSF completeness does not exceed 25 - 26 %, which is very small for practice. At the same time, it is interesting to note that the proposed Mag & Phase framework that has the worst performance in the noise-free case does, in fact, outperform the state-of-the-art PyBDSF on the noisy data. Its completeness is about 60 % after stage 1 and about 55 % after stage 2. The proposed Re & Im framework demonstrates the best completeness about 69 % after stage 1 and 74 % after stage 2. However, it should be pointed out that stage 1 is more stable, as shown by the lower standard deviation.
It is obvious that the performance of the source localization highly depends on the S/N value. In Fig. 12, we plot the completeness obtained for each method under investigation with respect to the S/N of the source. We can see that when S/N \(\leq\) 3 PyBDSF cannot detect many sources (the completeness does not exceed 2-3 %). Then, for S/N \(\sim\) 2, the proposed Mag & Phase reaches 10 % and for S/N \(\sim\) 3 it already has a 25 % completeness. The performance of the proposed Re & Im is even better: for S/N \(\sim\) 2 the completeness is about 20 % and for S/N \(\sim\) 3 it is about 50 % after stage 1 and about 55 % after stage 2. With the increasing S/N, the performance of all methods increases. However, for the maximum S/N the state-of-the-art PyBDSF can get the completeness only up to about 75 %, while the proposed Re & Im achieves 93 - 94 %. It is also important to note that the Re & Im stage 2 offers improvement for S/Ns of 2 - 5, as compared to the Re & Im stage 1. For the very low or high S/N, the performance at both stages is similar.
As mentioned in Sect. 2, one of the challenges of our data set is closely located sources that are difficult to distinguish, especially in the noisy case with increasing source densities. It is obvious that when we have more sources in the sky model, the probability to have close sources is higher. We analyze the performance of PyBDSF and the proposed Re & Im with respect to the source S/N specifically for different numbers of sources in the sky models. The derived results are shown in Fig. 13. In general, the obtained behavior is similar to that shown in Fig. 12. However, there are several interesting facts that we can take note of. First of all, the behavior of PyBDSF is stable and does not depend too much on the number of sources in the sky model. On the other hand, when the number of sources increases the efficiency of the proposed framework decreases, especially for the low S/N values. For example, for S/N \(\sim\) 2 in the case with only one source (Fig. 13 the left panel), the completeness is about 50 %, while in the case with five sources (Fig. 13 the right panel), the completeness drops down to about 15 - 20 %. With the S/N increase, the drop in efficiency slows down. This phenomenon is naturally connected with the increased entropy of multi-source spectra. Keeping the fixed number of samples in the uv-plane, the sky model with several sources is characterized by a lower entropy spectrum. Therefore, there is a fundamental accuracy limit of source detection under the restricted sparse sampling of the uv-plane. This is one of the directions for our future investigation. At the same time, it is important to point out that for the setup under investigation and the chosen antenna configuration (for mode details, see Sect. 2), the expected number of sources ranges from one to five. For the larger number of sources, the ambiguity in the under-sampled uv-data increases. For this reason, the configuration with the larger number of antennas should be used. Under the chosen setup, even though the completeness of Re & Im drops, its general efficiency is much better than PyBDSF.
Another important question that we study is the dependency of source localization accuracy on the distance to the true sources. In Fig. 8, we schematically show the locations of the true source and the corresponding predicted source. We consider the predicted source as true if it is inside a given radius, \(R\), around the true source. In the current experiments, we set an \(R=\alpha\)-beam size15 where \(\alpha\) is a scale factor. The results shown in Fig. 14 are obtained for the test subset 16. It is important to men
Figure 11: Purity and completeness of the methods under investigation in the noisy case, averaged over all S/N values. The small black lines on the top of each bar correspond to the standard deviation from the average.
Figure 12: Dependence of completeness on S/N of the sources.
tion that in all previous experiments \(\alpha=1\) or, in other words, the radius \(R\) equals to the beam size. In Fig. 14, we can observe some increase in the purity when \(\alpha>1\), but the saturation is achieved already for \(\alpha=1.2-1.5\). For the completeness, the saturation is achieved for \(\alpha=1\). In Fig. 15, we plot the mean distance in arcsec between the centers of true and detected sources. The gray bars show the relative number of detected sources. The relative means were divided by the total number of true sources for the given S/N. It should be pointed out that in the case of the Re & Im framework, the deviation from the mean value is quite stable and does not depend a great deal on the number of detected sources or S/N. At the same time, with the increase in the S/N and the number of detected sources, the mean value convergences to about 0.16\({}^{\prime\prime}\)after stage 1 and to about 0.19\({}^{\prime\prime}\)after stage 2. For PyBDSF, we can see quite small deviation from the mean values for S/N = 2 and a large deviation for S/N = 3. This can be explained by the very small number of detected sources and, as a consequence, the poor statistics. With the increase in the S/N and the number of detected sources, the deviation from the mean value decreases and the mean value convergences to about 0.15\({}^{\prime\prime}\). It is important to note that Re & Im framework reaches the convergence at S/N = 5, while for PyBDSF we can observe the convergence only after S/N = 8.
### Execution time
To investigate the question of time complexity, we measured the execution time of the source localization for the methods under investigation. The obtained CPU times in seconds are summarized in Table 3. For the proposed framework, the column labeled "reconstruction from uv" corresponds to the DNN processing, Sect. 4.2. The "source localization" column corresponds to the procedure explained in Sect. 4.3. It should be pointed out that the recovery of the dirty image takes about 14 sec, while the recovery of the CLEAN image takes about 17 sec. On the other hand, the proposed framework in its slowest case (Mag & Phase after two stages) estimates the real-valued source map in only 0.33 sec. The source localization by PyBDSF is about 4.3 times slower than the most efficient Re & Im stage 2.
Figure 14: Dependence on \(\alpha\), where \(\alpha=\frac{R}{\text{beam size}}\) and \(R\) is the distance between the true and predicted sources, in the noisy case, averaged over all S/N values.
Figure 13: Dependence of the efficiency of PyBDSF and the proposed Re & Im framework on the number of sources in the sky model. Left: one source. Middle: three sources. Right: five sources.
Figure 15: Mean distance (the y-axis) between the true and predicted sources in the noisy case. The coloured semi-transparent background shows the standard deviation from the mean value. The gray bars show the relative number of detected sources.
In terms of the total time complexity, PyBDSF on the CLEAN data (the best traditional pipeline results) is \(\sim 38\) times slower than the proposed Re & Im stage 2 (the best results), even on CPU execution.
## 6 Discussion
### Performance comparisons with the literature
In classical approaches to interferometric observations, fidelity and completeness are two important measures of the significance and "statistical importance" of source detections and characterizations, which are generally determined by empirical methods. For example, the fidelity is computed by comparing the number of sources detected with positive flux to those in the negative image (Fidelity \(=1-\frac{N_{\rm neg}}{N_{\rm pos}}\)) to empirically determine an S/N threshold above which individual sources are considered "reliably detected" (see, e.g., Aravena et al., 2016; Bethermin et al., 2020). Furthermore, completeness is generally determined using Monte Carlo source injections in the observational data (e.g., in the image plane).
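As a minimal illustration of this empirical definition, assuming the conventional form Fidelity \(=1-N_{\rm neg}/N_{\rm pos}\), with \(N_{\rm pos}\) and \(N_{\rm neg}\) the numbers of detections above the same threshold in the positive and negative images:

```python
def fidelity(n_positive_detections, n_negative_detections):
    """Empirical fidelity: compare detections in the positive image with those
    in the sign-inverted (negative) image at the same S/N threshold."""
    if n_positive_detections == 0:
        return 0.0
    return 1.0 - n_negative_detections / n_positive_detections

# e.g. 120 positive vs 6 negative detections above the chosen threshold
print(fidelity(120, 6))  # 0.95
```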
Since fidelity requires a flux measurement (which is not included in the proposed framework), it is not possible to compare the performance of our method for this quantity, as already mentioned above (Sect. 5.1). However, we have compared our results to those of Bethermin et al. (2020), who presented a detailed analysis of the completeness from extra-galactic ALMA continuum observations with properties similar to those of our sky simulations. Following Bethermin et al. (2020), we therefore determined the completeness as a function of the normalized injected flux:
\[f_{\rm norm}=\frac{\rm total\ flux}{\sigma_{\rm noise}}\cdot\frac{b_{min}\cdot b_{maj}}{\sqrt{b_{min}^{2}+s_{min}^{2}}\cdot\sqrt{b_{maj}^{2}+s_{maj}^{2}}}, \tag{8}\]
where \(b_{min}\) and \(b_{maj}\) correspond to the minor and major axes of the beam, and \(s_{min}\) and \(s_{maj}\) denote the minor and major axes of the source. This quantity resembles an effective, normalized S/N; it encapsulates, in particular, variations of source sizes and provides a simple functional description of the completeness, as shown by Bethermin et al. (2020).
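The sketch below is a direct transcription of Eq. (8); the input values in the example are purely illustrative.

```python
import numpy as np

def normalized_flux(total_flux, sigma_noise, b_min, b_maj, s_min, s_maj):
    """Normalized injected flux of Eq. (8): an effective S/N that folds the
    source size into the beam. All axes (b_*, s_*) in the same angular units."""
    size_term = (b_min * b_maj) / (np.sqrt(b_min**2 + s_min**2) *
                                   np.sqrt(b_maj**2 + s_maj**2))
    return (total_flux / sigma_noise) * size_term

# For a point-like source (s_min = s_maj = 0), f_norm reduces to the plain S/N
print(normalized_flux(0.5, 0.1, b_min=0.2, b_maj=0.3, s_min=0.0, s_maj=0.0))  # 5.0
```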
The results obtained on our simulated data are shown in Fig. 16. First, we note that the behavior of PyBDSF on our CLEAN and dirty images is similar to what is seen for the _find_peak_ source detector from _astropy_ (Robitaille et al., 2013) used by Bethermin et al. (2020), especially at \(f_{\rm norm}\lesssim 4\). Most importantly, for all normalized fluxes \(f_{\rm norm}\lesssim 5\), the proposed framework achieves significantly higher completeness than classical methods. Finally, for high normalized fluxes (and high S/N), the completeness obtained by the proposed framework and by Bethermin et al. (2020) is similar. From this comparison, we conclude that the proposed method is expected not only to significantly speed up source detection in interferometric imaging, but also to strongly improve it.
### Impact of the noise
As mentioned in Sect. 2, in the ALMA setup under investigation, there are two sources of noise. The first one is the ALMA receiver noise, which is fixed in our case, since we fixed the total integration time and observing band. The second one is the atmospheric noise related to water vapor. To investigate the sensitivity of the proposed framework to a change in the noise statistics, we vary the atmospheric noise through the pwv parameter. We chose ten different pwv values from 0.472 to 5.186. These values were taken from the ALMA technical handbook and represent the typical values observed at the ALMA site. For each pwv value, we simulated 100 test samples. Then we tested the proposed Re & Im framework without retraining and without any change in the test parameters. For a fair comparison, PyBDSF was also used without any change of parameters. The obtained purity and completeness are shown in Fig. 17.
Figure 16: Dependence of completeness on normalized S/N compared to Béthermin et al. (2020).
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Approach} & \multicolumn{3}{c}{Execution CPU time, sec} \\ & reconstruction from uv & source localization & total \\ \hline dirty + CLEAN + PyBDSF & 17.29 & 1.48 & 18.77 \\ dirty + PyBDSF & 13.97 & 1.51 & 15.48 \\ Re \& Im stage 1 & 0.09 & 0.18 & 0.27 \\ Re \& Im stage 1 and 2 & 0.15 & 0.34 & 0.49 \\ Mag \& Phase stage 1 & 0.25 & 0.22 & 0.47 \\ Mag \& Phase stage 1 and 2 & 0.33 & 0.51 & 0.84 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Execution time for the source localisation per sky model (noisy data).
It is important to notice that, on average, for the Re & Im framework, we observe the same trend as reported in Sect. 5.3; namely, stage 2 is slightly superior to stage 1. The performance of PyBDSF for noise levels with \(\mathrm{pwv}\) smaller than in the main simulations (i.e., \(\mathrm{pwv}\) = 1.796) is similar to the one reported in Sect. 5.3. With increasing noise, we observe a drastic decrease in purity from 98 - 100 % to 30 %, and down to 3 % for \(\mathrm{pwv}\) = 5.186. This shows the need for expert knowledge in adapting the parameters. For the proposed Re & Im framework, there is also a decrease in performance, but it is not as drastic. On average, the purity remains about 87 - 90 %, while the completeness decreases from about 60 - 65 % to 45 - 47 %, which is 10 % better than the results from PyBDSF.
In addition, we investigated an extreme case: the performance obtained on noise-free data (\(\mathrm{pwv}\) = 0) with the models trained on noisy data. The obtained results are shown in Fig. 17 (bottom right). In general, the behavior of the proposed Re & Im framework is quite good. One can observe a certain decrease in performance at stage 2, while the purity and completeness obtained at stage 1 are high. The completeness obtained by PyBDSF is smaller than in the case of noisy data. This can be explained by the choice of parameters, which assumes the presence of noise.
### Caveats
It should be pointed out that, despite the advantages discussed above, the proposed framework also has certain limitations. Although it requires less expert knowledge, the required amount of training data increases with data complexity, such as the complexity and variability of the source shapes, the number of sources in the field of view, and the proximity of sources with different S/N.
Moreover, given that the proposed framework shows high sensitivity to low-S/N sources, it might produce false sources in source-free sky models (e.g., pure background noise). Although such a scenario is usually not considered by state-of-the-art approaches, we tested the proposed Re & Im framework on 1000 source-free noisy sky models. After stage 1, false sources are detected in about 40 % of the sky models, while after stage 2 only in 27 % of cases. These results can be explained by the fact that source-free sky models were not included in the framework training.
### Future developments and applications
The proposed framework can be applied to real data, and will be tested with available data from the ALMA archive, such as data from A\({}^{3}\)COSMOS (Liu et al., 2019). This will, in particular, also allow us to examine the behavior on data taken under different conditions, accounting for all noise sources present in the system.
The next tasks to be tackled from uv-data alone will include source characterization, that is, measurements of fluxes, source morphologies, and other properties, as well as the treatment of extended sources, possibly with complex morphologies. New machine learning-based approaches on visibilities should also be
Figure 17: The purity and completeness of PyBDSF on CLEAN data and the proposed Re & Im framework for the different noise levels, averaged over all S/Ns. From top left to bottom right: \(\mathrm{pwv}\) = 0.472; \(\mathrm{pwv}\) = 0.55; \(\mathrm{pwv}\) = 0.658; \(\mathrm{pwv}\) = 0.72; \(\mathrm{pwv}\) = 0.913; \(\mathrm{pwv}\) = 1.05; \(\mathrm{pwv}\) = 1.262; \(\mathrm{pwv}\) = 2.748; \(\mathrm{pwv}\) = 3.9; \(\mathrm{pwv}\) = 5.186; \(\mathrm{pwv}\) = 0.
able to handle spectral lines and thus be applicable to an even broader range of astrophysical questions. In parallel, our framework should also be tested and generalized to cover a wide range of interferometric data taken with very different facilities and across a wide spectral range from the millimeter to the radio domain. If proven successful, machine learning based methods could have a strong impact on our future handling of interferometric data and help push the discovery space of upcoming observatories even further.
## 7 Conclusions
Radio astronomy is at a historic moment in its development. Innovative DNN-based methods may drastically change the way measurements are processed and, thus, may also improve our capacity to detect faint and complex signals. In this context, we have taken a new ML-based approach to solve the problem of source localization, directly working on the natural measurement sets (visibilities) and without reconstructing the dirty or CLEAN image. The proposed framework consists of two stages generating source localization maps (see Fig. 4).
To train the network and then validate and test the proposed framework, we used synthetic data generated with the CASA data processing software (McMullin et al., 2007), which is the software tool for interferometric data/observations with ALMA and similar observatories, and PyBDSF (Mohan and Rafferty, 2015) for the classical source detection. The sky simulations generated in this work were chosen to represent typical extra-galactic (sub)-millimeter continuum survey observations undertaken with ALMA. The simulations included both ideal noise-free and realistic noisy simulations. The comparisons between the proposed DNN-based approach and existing state-of-the-art source localization pipelines can be summarized as follows:
* In ideal noise-free cases, the proposed framework has overall the same purity and completeness as the traditional pipeline using PyBDSF source localization.
* For noisy data, the proposed DNN-based method fares significantly better than classical source localization algorithms. For example, the Re & Im framework reaches a completeness more than three times higher than the traditional PyBDSF-based pipeline at the same purity, for all sky models, when considering a uniform, random distribution of sources with S/N \(=1-10\) (Fig. 12).
* In the low-S/N regime (S/N \(\lesssim\) 5), the performance gain of the proposed method is very high: while the traditional pipeline achieves a completeness of 2 - 5 % for S/N \(<\) 3, the proposed framework detects sources with a completeness of \(\sim 20\) % (45 - 55 %) for S/N \(=\) 2 (3).
* The new source-detection method represents an important gain in execution time, with total execution times that are more than 30 times faster than the traditional pipeline that involves image reconstruction from the uv-plane and source detection (Table 3).
We have investigated the impact of different factors on the efficiency of the proposed framework, in particular:
* _Input data representation:_ While traditional DNNs are designed to work with real-valued data, we have tested representations of the complex-valued uv-data by real-valued real and imaginary or magnitude and phase components. The obtained results show that the real and imaginary components are better suited to the proposed framework, since they have the same dynamic range, in contrast to the magnitude-phase case.
* _Source density:_ Traditional source detection in the image plane demonstrates a stable behavior that does not depend much on the number of sources. However, in the proposed framework the completeness decreases with increasing numbers of sources (Fig. 13). Despite this, in tested conditions which represent typical ALMA extra-galactic continuum surveys, the proposed framework reaches a higher completeness than traditional source detection algorithms.
In short, we have developed a DNN-based method of source detection using only uv-plane observations. We have shown that it provides strong improvements in detecting sources in the low-to-intermediate-S/N regime. The new approach can already be applied to existing interferometric observations and it opens many new possibilities which will be explored in the near future. Machine learning-based methods have the potential to significantly alter our approach to interferometric data, as we enter the era of new facilities like the Square Kilometer Array19 (SKA).
Footnote 19: [https://www.skao.int/](https://www.skao.int/)
###### Acknowledgements.
We acknowledge the referee for their comments. O. Taran and O. Bail are supported by the _AstroSiglands_ Sinergia Project funded by the Swiss National Science Foundation. We also thank B. Magnelli and M. Bethermin, and the Nordic arc node [https://mordic-alma.se](https://mordic-alma.se).
|
2310.06825 | Mistral 7B | We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered
for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B
across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and
code generation. Our model leverages grouped-query attention (GQA) for faster
inference, coupled with sliding window attention (SWA) to effectively handle
sequences of arbitrary length with a reduced inference cost. We also provide a
model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses
the Llama 2 13B -- Chat model both on human and automated benchmarks. Our
models are released under the Apache 2.0 license. | Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed | 2023-10-10T17:54:58Z | http://arxiv.org/abs/2310.06825v1 | # Mistral 7B
###### Abstract
We introduce Mistral 7B, a 7-billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms the best open 13B model (Llama 2) across all evaluated benchmarks, and the best released 34B model (Llama 1) in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost. We also provide a model fine-tuned to follow instructions, Mistral 7B - Instruct, that surpasses Llama 2 13B - chat model both on human and automated benchmarks. Our models are released under the Apache 2.0 license.
**Code:**[https://github.com/mistralai/mistral-src](https://github.com/mistralai/mistral-src)
**Webpage:**[https://mistral.ai/news/announcing-mistral-7b/](https://mistral.ai/news/announcing-mistral-7b/)
## 1 Introduction
In the rapidly evolving domain of Natural Language Processing (NLP), the race towards higher model performance often necessitates an escalation in model size. However, this scaling tends to increase computational costs and inference latency, thereby raising barriers to deployment in practical, real-world scenarios. In this context, the search for balanced models delivering both high-level performance and efficiency becomes critically essential. Our model, Mistral 7B, demonstrates that a carefully designed language model can deliver high performance while maintaining an efficient inference. Mistral 7B outperforms the previous best 13B model (Llama 2, [26]) across all tested benchmarks, and surpasses the best 34B model (LLaMa 34B, [25]) in mathematics and code generation. Furthermore, Mistral 7B approaches the coding performance of Code-Llama 7B [20], without sacrificing performance on non-code related benchmarks.
Mistral 7B leverages grouped-query attention (GQA) [1], and sliding window attention (SWA) [6, 3]. GQA significantly accelerates the inference speed, and also reduces the memory requirement during decoding, allowing for higher batch sizes hence higher throughput, a crucial factor for real-time applications. In addition, SWA is designed to handle longer sequences more effectively at a reduced computational cost, thereby alleviating a common limitation in LLMs. These attention mechanisms collectively contribute to the enhanced performance and efficiency of Mistral 7B.
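As a rough illustration of why GQA reduces memory and speeds up decoding, the toy sketch below shares each key/value head among a group of query heads; the head counts in the example, the absence of masking, and all names are assumptions for illustration only and do not reflect the actual implementation.

```python
import numpy as np

def grouped_query_attention(q, k, v):
    """Toy grouped-query attention: n_q query heads share n_kv key/value heads
    (n_kv < n_q), shrinking the KV cache by n_q / n_kv while keeping one set of
    attention weights per query head. q: (n_q, T, d); k, v: (n_kv, T, d).
    Causal masking is omitted for brevity."""
    n_q, n_kv = q.shape[0], k.shape[0]
    group = n_q // n_kv
    d = q.shape[-1]
    out = np.empty_like(q)
    for h in range(n_q):
        kv = h // group                          # shared KV head used by this query head
        scores = q[h] @ k[kv].T / np.sqrt(d)
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)            # softmax over key positions
        out[h] = w @ v[kv]
    return out

T, d = 4, 8
q = np.random.randn(32, T, d)   # e.g. 32 query heads
k = np.random.randn(8, T, d)    # sharing 8 KV heads -> 4x smaller KV cache
v = np.random.randn(8, T, d)
print(grouped_query_attention(q, k, v).shape)  # (32, 4, 8)
```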
Mistral 7B is released under the Apache 2.0 license. This release is accompanied by a reference implementation1 facilitating easy deployment either locally or on cloud platforms such as AWS, GCP, or Azure using the vLLM [17] inference server and SkyPilot 2. Integration with Hugging Face 3 is also streamlined for easier integration. Moreover, Mistral 7B is crafted for ease of fine-tuning across a myriad of tasks. As a demonstration of its adaptability and superior performance, we present a chat model fine-tuned from Mistral 7B that significantly outperforms the Llama 2 13B - Chat model.
Footnote 1: [https://github.com/mistralai/mistral-src](https://github.com/mistralai/mistral-src)
Footnote 2: [https://github.com/skypilot-org/skypilot](https://github.com/skypilot-org/skypilot)
Footnote 3: [https://huggingface.co/mistralai](https://huggingface.co/mistralai)
Mistral 7B takes a significant step in balancing the goals of getting high performance while keeping large language models efficient. Through our work, our aim is to help the community create more affordable, efficient, and high-performing language models that can be used in a wide range of real-world applications.
## 2 Architectural details
Mistral 7B is based on a transformer architecture [27]. The main parameters of the architecture are summarized in Table 1. Compared to Llama, it introduces a few changes that we summarize below.
**Sliding Window Attention.** SWA exploits the stacked layers of a transformer to attend to information beyond the window size \(W\). The hidden state in position \(i\) of the layer \(k\), \(h_{i}\), attends to all hidden states from the previous layer with positions between \(i-W\) and \(i\). Recursively, \(h_{i}\) can access tokens from the input layer at a distance of up to \(W\times k\) tokens, as illustrated in Figure 1. At the last layer, using a window size of \(W=4096\), we have a theoretical attention span of approximately \(131K\) tokens. In practice, for a sequence length of 16K and \(W=4096\), changes made to FlashAttention [11] and xFormers [18] yield a 2x speed improvement.
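A minimal sketch of the sliding-window attention pattern just described (mask construction only; the efficient implementation lives in the FlashAttention and xFormers kernels):

```python
import numpy as np

def sliding_window_mask(seq_len, window):
    """Causal sliding-window mask: position i may attend to positions j with
    i - window <= j <= i. Stacking k such layers lets information propagate
    over roughly window * k tokens."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j >= i - window)

# A 6-token sequence with a window of 3: a banded lower-triangular pattern
print(sliding_window_mask(6, 3).astype(int))
```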
**Pre-fill and Chunking.** When generating a sequence, we need to predict tokens one-by-one, as each token is conditioned on the previous ones. However, the prompt is known in advance, and we can pre-fill the (\(k\), \(v\)) cache with the prompt. If the prompt is very large, we can chunk it into smaller pieces, and pre-fill the cache with each chunk. For this purpose, we can select the window size as our chunk size. For each chunk, we thus need to compute the attention over the cache and over the chunk. Figure 3 shows how the attention mask works over both the cache and the chunk.
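The rolling buffer cache of Figure 2 and the chunked pre-fill can be illustrated with the following toy sketch; the class and method names are ours and do not correspond to the reference implementation.

```python
class RollingKVCache:
    """Fixed-size rolling KV cache: keys/values for absolute position i are stored
    at slot i % window, overwriting entries older than `window`. Memory stays
    O(window) regardless of sequence length; a long prompt can be appended
    chunk by chunk during pre-fill."""

    def __init__(self, window):
        self.window = window
        self.slots = [None] * window   # each slot holds the (key, value) of one position
        self.next_pos = 0

    def append(self, key, value):
        self.slots[self.next_pos % self.window] = (key, value)
        self.next_pos += 1

    def visible(self):
        """Entries the next token may attend to, oldest first."""
        start = max(0, self.next_pos - self.window)
        return [self.slots[p % self.window] for p in range(start, self.next_pos)]

cache = RollingKVCache(window=4)
for pos in range(6):                      # positions 0 and 1 get overwritten
    cache.append(f"k{pos}", f"v{pos}")
print([kv[0] for kv in cache.visible()])  # ['k2', 'k3', 'k4', 'k5']
```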
## 3 Results
We compare Mistral 7B to Llama, and re-run all benchmarks with our own evaluation pipeline for fair comparison. We measure performance on a wide variety of tasks categorized as follow:
* **Commonsense Reasoning (0-shot):** Hellaswag [28], Winogrande [21], PIQA [4], SIQA [22], OpenbookQA [19], ARC-Easy, ARC-Challenge [9], CommonsenseQA [24]
* **World Knowledge (5-shot):** NaturalQuestions [16], TriviaQA [15]
* **Reading Comprehension (0-shot):** BoolQ [8], QuAC [7]
* **Math:** GSM8K [10] (8-shot) with maj@8 and MATH [13] (4-shot) with maj@4
* **Code:** Humaneval [5] (0-shot) and MBPP [2] (3-shot)
* **Popular aggregated results:** MMLU [12] (5-shot), BBH [23] (3-shot), and AGI Eval [29] (3-5-shot, English multiple-choice questions only)
Detailed results for Mistral 7B, Llama 2 7B/13B, and Code-Llama 7B are reported in Table 2. Figure 4 compares the performance of Mistral 7B with Llama 2 7B/13B, and Llama 1 34B4 in different categories. Mistral 7B surpasses Llama 2 13B across all metrics, and outperforms Llama 1 34B on most benchmarks. In particular, Mistral 7B displays a superior performance in code, mathematics, and reasoning benchmarks.
Figure 3: **Pre-fill and chunking.** During pre-fill of the cache, long sequences are chunked to limit memory usage. We process a sequence in three chunks, “The cat sat on”, “the mat and saw”, “the dog go to”. The figure shows what happens for the third chunk (“the dog go to”): it attends itself using a causal mask (rightmost block), attends the cache using a sliding window (center block), and does not attend to past tokens as they are outside of the sliding window (left block).
Figure 2: **Rolling buffer cache.** The cache has a fixed size of \(W=4\). Keys and values for position \(i\) are stored in position \(i\bmod W\) of the cache. When the position \(i\) is larger than \(W\), past values in the cache are overwritten. The hidden state corresponding to the latest generated tokens are colored in orange.
**Size and Efficiency.** We computed "equivalent model sizes" of the Llama 2 family, aiming to understand Mistral 7B models' efficiency in the cost-performance spectrum (see Figure 5). When evaluated on reasoning, comprehension, and STEM reasoning (specifically MMLU), Mistral 7B mirrored performance that one might expect from a Llama 2 model with more than 3x its size. On the Knowledge benchmarks, Mistral 7B's performance achieves a lower compression rate of 1.9x, which is likely due to its limited parameter count that restricts the amount of knowledge it can store.
**Evaluation Differences.** On some benchmarks, there are some differences between our evaluation protocol and the one reported in the Llama 2 paper: 1) on MBPP, we use the hand-verified subset 2) on TriviaQA, we do not provide Wikipedia contexts.
## 4 Instruction Finetuning
To evaluate the generalization capabilities of Mistral 7B, we fine-tuned it on instruction datasets publicly available on the Hugging Face repository. No proprietary data or training tricks were utilized: Mistral 7B - Instruct model is a simple and preliminary demonstration that the base model can easily be fine-tuned to achieve good performance. In Table 3, we observe that the resulting model, Mistral 7B - Instruct, exhibits superior performance compared to all 7B models on MT-Bench, and is comparable to 13B - Chat models. An independent human evaluation was conducted on [https://llmboxing.com/leaderboard](https://llmboxing.com/leaderboard).
In this evaluation, participants were provided with a set of questions along with anonymous responses from two models and were asked to select their preferred response, as illustrated in Figure 6. As of October 6, 2023, the outputs generated by Mistral 7B were preferred 5020 times, compared to 4143 times for Llama 2 13B.
\begin{table}
\begin{tabular}{l l c c c c c c c c c c c c} \hline \hline Model & Modality & MMLU & HellaSwag & Winogrande & PIQA & Arc-e & Arc-c & NQ & TriviaQA & HumanEval & MBPP & MATH & GSM8K \\ \hline LLaMA 2 7B & Pretrained & 44.4\% & 77.1\% & 69.5\% & 77.9\% & 68.7\% & 43.2\% & 24.7\% & 63.8\% & 11.6\% & 26.1\% & 3.9\% & 16.0\% \\ LLaMA 2 13B & Pretrained & 55.6\% & **80.7\%** & 72.9\% & 80.8\% & 75.2\% & 48.8\% & **29.0\%** & **69.6\%** & 18.9\% & 35.4\% & 6.0\% & 34.3\% \\ \hline Code-Llama 7B & Finetuned & 36.9\% & 62.9\% & 62.3\% & 72.8\% & 59.4\% & 34.5\% & 11.0\% & 34.9\% & **31.1\%** & **52.5\%** & 5.2\% & 20.8\% \\ \hline Mistral 7B & Pretrained & **60.1\%** & **81.3\%** & **75.3\%** & **83.0\%** & **80.0\%** & **55.5\%** & **28.8\%** & **69.9\%** & **30.5\%** & 47.5\% & **13.1\%** & **52.2\%** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Comparison of Mistral 7B with Llama.** Mistral 7B outperforms Llama 2 13B on all metrics, and approaches the code performance of Code-Llama 7B without sacrificing performance on non-code benchmarks.
Figure 4: **Performance of Mistral 7B and different Llama models on a wide range of benchmarks**. All models were re-evaluated on all metrics with our evaluation pipeline for accurate comparison. Mistral 7B significantly outperforms Llama 2 7B and Llama 2 13B on all benchmarks. It is also vastly superior to Llama 1 34B in mathematics, code generation, and reasoning benchmarks.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Model** & \begin{tabular}{c} **Chatbot Arena** \\ **ELO Rating** \\ \end{tabular} &
\begin{tabular}{c} **MT Bench** \\ \end{tabular} \\ \hline WizardLM 13B v1.2 & 1047 & 7.2 \\
**Mistral 7B Instruct** & **1031** & **6.84 +/- 0.07** \\ Llama 2 13B Chat & 1012 & 6.65 \\ Vicuna 13B & 1041 & 6.57 \\ Llama 2 7B Chat & 985 & 6.27 \\ Vicuna 7B & 997 & 6.17 \\ Alpaca 13B & 914 & 4.53 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Comparison of Chat models.** Mistral 7B – Instruct outperforms all 7B models on MT-Bench, and is comparable to 13B – Chat models.
## 5 Adding guardrails for front-facing applications
The ability to enforce guardrails when it comes to AI generation is important for front-facing applications. In this section, we highlight how to leverage system prompting to optionally enforce output constraints on top of our models. Additionally, we showcase the ability of Mistral 7B to perform fine-grained content moderation, which can be useful to enforce quality content in applications.
### System prompt to enforce guardrails
We introduce a system prompt (see below) to guide the model to generate answers within specified guardrails, similar to the work done with Llama 2. Using this prompt allows the user to move on the Pareto front of model utility / guardrails enforcement, as indicated in Table 4.
Always assist with care, respect, and truth. Respond with utmost utility yet securely. Avoid harmful, unethical, prejudiced, or negative content. Ensure replies promote fairness and positivity.
We use a set of 175 unsafe prompts for evaluating safety. With the recommended system prompt, the model properly declines to answer 100% of the harmful questions.
As an illustration, we provide in Table 5 the answers of both Mistral 7B - Instruct and Llama 2 Chat 13B to the question _How to kill a linux process_ with system prompts activated. We observe that Mistral 7B provides a correct response while Llama 2 declines to answer. Note that on this specific question, both models answer correctly when system prompts are deactivated.
### Content moderation with self-reflection
Mistral 7B - Instruct can be used as a content moderator: the model itself is able to accurately classify a user prompt or its generated answer as being either acceptable or falling into one of the following categories: Illegal activities such as terrorism, child abuse or fraud; Hateful, harassing or violent content such as discrimination, self-harm or bullying; Unqualified advice for instance in legal, medical or financial domains.
\begin{table}
\begin{tabular}{c c} \hline \hline
**Guardrails** & **MT Bench** \\ \hline No system prompt & 6.84 \(\pm\) 0.07 \\ Llama 2 system prompt & 6.38 \(\pm\) 0.07 \\ Mistral system prompt & 6.58 \(\pm\) 0.05 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **System prompts.** Mean official MT Bench score over 10 iterations with standard deviation for Mistral 7B – Instruct. For reference, Llama 2 13B – Chat reports official results of 6.65.
Figure 5: **Results on MMLU, commonsense reasoning, world knowledge and reading comprehension for Mistral 7B and Llama 2 (7B/13B/70B).** Mistral 7B largely outperforms Llama 2 13B on all evaluations, except on knowledge benchmarks, where it is on par (this is likely due to its limited parameter count, which limits the amount of knowledge it can compress).
To do so, we designed a self-reflection prompt that makes Mistral 7B classify a prompt or a generated answer. We evaluated self-reflection on our manually curated and balanced dataset of adversarial and standard prompts and got a precision of 99.4% for a recall of 95.6% (considering acceptable prompts as positives).
The use cases are vast, from moderating comments on social media or forums to brand monitoring on the internet. In particular, the end user is able to select afterwards which categories to effectively filter based on their particular use-case.
## 6 Conclusion
Our work on Mistral 7B demonstrates that language models may compress knowledge more than what was previously thought. This opens up interesting perspectives: the field has so far put the emphasis on scaling laws in 2 dimensions (directly associating model capabilities to training cost, as in [14]); the problem is rather 3 dimensional (model capabilities, training cost, inference cost), and much remains to be explored to obtain the best performance with the smallest possible model.
## Acknowledgements
We are grateful to CoreWeave for their 24/7 help in marshalling our cluster. We thank the CINECA/EuroHPC team, and in particular the operators of Leonardo, for their resources and help. We thank the maintainers of FlashAttention, vLLM, xFormers, Skypilot for their precious assistance in implementing new features and integrating their solutions into ours. A huge thanks to Tri Dao and Daniel Haziza for helping include Mistral related changes to FlashAttention and xFormers on a tight schedule. We thank the teams of Hugging Face, AWS, GCP, Azure ML for their intense help in making our model compatible everywhere.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Model** & **Answer** \\ \hline \hline & To kill a Linux process, you can use the ‘kill’ command followed by the process ID (PID) of the process you want to terminate. For example, to kill process with PID 1234, you would run the command ‘kill 1234’. It’s important to note that killing a process can have unintended consequences, so it’s generally a good idea to only kill processes that you are certain you want to terminate. Additionally, it’s generally a good idea to use the ‘kill’ command with caution, as it can be used to terminate critical system processes that are necessary for the operation of the system. \\ \hline \hline & As a helpful and respectful assistant, I cannot provide information on how to kill a Linux process. It is not appropriate or ethical to intentionally harm or terminate processes, as it can cause damage to the system and potentially result in data loss. Additionally, it is important to respect the autonomy and security of other users’ processes, as they may be critical to their work or personal activities. Instead, I suggest that you explore other options to address any issues or concerns you may have. \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Comparison between Mistral and Llama system prompts**. Mistral provides the right answer whereas Llama 2 declines to answer to the question _How to kill a Linux process_.
**Figure 6: Human evaluation of Mistral 7B – Instruct vs Llama 2 13B – Chat Example.** An example of human evaluation from llmboxing.com. The question asks for recommendations of books in quantum physics. Llama 2 13B – Chat recommends a general physics book, while Mistral 7B – Instruct recommends a more relevant book on quantum physics and describes its contents in more detail. |
2301.10906 | Facial Expression Recognition using Squeeze and Excitation-powered Swin
Transformers | The ability to recognize and interpret facial emotions is a critical
component of human communication, as it allows individuals to understand and
respond to emotions conveyed through facial expressions and vocal tones. The
recognition of facial emotions is a complex cognitive process that involves the
integration of visual and auditory information, as well as prior knowledge and
social cues. It plays a crucial role in social interaction, affective
processing, and empathy, and is an important aspect of many real-world
applications, including human-computer interaction, virtual assistants, and
mental health diagnosis and treatment. The development of accurate and
efficient models for facial emotion recognition is therefore of great
importance and has the potential to have a significant impact on various fields
of study.The field of Facial Emotion Recognition (FER) is of great significance
in the areas of computer vision and artificial intelligence, with vast
commercial and academic potential in fields such as security, advertising, and
entertainment. We propose a FER framework that employs Swin Vision Transformers
(SwinT) and squeeze and excitation block (SE) to address vision tasks. The
approach uses a transformer model with an attention mechanism, SE, and SAM to
improve the efficiency of the model, as transformers often require a large
amount of data. Our focus was to create an efficient FER model based on SwinT
architecture that can recognize facial emotions using minimal data. We trained
our model on a hybrid dataset and evaluated its performance on the AffectNet
dataset, achieving an F1-score of 0.5420, which surpassed the winner of the
Affective Behavior Analysis in the Wild (ABAW) Competition held at the European
Conference on Computer Vision (ECCV) 2022~\cite{Kollias}. | Arpita Vats, Aman Chadha | 2023-01-26T02:29:17Z | http://arxiv.org/abs/2301.10906v7 | # Facial Expression Recognition using Squeeze and Excitation-powered Swin Transformers
###### Abstract
The recognition of facial emotions is an essential aspect of human communication, allowing individuals to understand emotions conveyed by facial expressions and vocal tones. The field of Facial Emotion Recognition (FER) is of great significance in the areas of computer vision and artificial intelligence, with vast commercial and academic potential in fields such as security, advertising, and entertainment. We propose a FER framework that employs Swin Vision Transformers (SwinT) and squeeze and excitation block (SE) to address vision tasks. The approach uses a transformer model with an attention mechanism, SE, and SAM to improve the efficiency of the model, as transformers often require a large amount of data. Our focus was to create an efficient FER model based on SwinT architecture that can recognize facial emotions using minimal data. We trained our model on a hybrid dataset and evaluated its performance on the AffectNet dataset, achieving an F1-score of 0.5420, which surpassed the winner of the Affective Behavior Analysis in the Wild (ABAW) Competition held at the European Conference on Computer Vision (ECCV) 2022 [10].
## 1 Introduction
Facial Emotion Recognition (FER) is a major area of research in computer vision and artificial intelligence that focuses on the detection and interpretation of emotions expressed through facial expressions. FER technology uses computer algorithms to analyze images or videos of faces and identify emotions such as happiness, sadness, anger, fear, surprise, and disgust. FER has the potential to impact a wide range of applications, including psychology and neuroscience research, marketing, human-computer interaction, and security. In psychology and neuroscience, FER can help researchers better understand the emotions that drive human behavior. In marketing, FER can be used to gauge consumer emotions and preferences. In human-computer interaction, FER can be used to create more natural and intuitive interfaces that respond to human emotions. In security, FER can be used for authentication, surveillance, and emotional profiling. Face analysis involves recognizing the pose and expression of a human being independently of the environment they are immersed in, and ambiguous emotions are a central difficulty of the problem. Understanding human emotion also plays a vital role in emotional intelligence. Facial expression is one of the most natural, powerful, and universal signals for human beings to convey their emotional states and intentions [4, 16]. We seek to analyze how the Swin Transformer (Swin-T) performs on this task, comparing our model with state-of-the-art models on hybrid datasets, taking into account the lack of inductive bias characteristic of the Vision Transformer (ViT). ViT is a transformer-based architecture for computer vision tasks such as image classification, segmentation, and object detection. The paper [6] demonstrates that ViT outperforms existing state-of-the-art models on several benchmark datasets, and provides insights into how the architecture works and why it is effective. It uses self-attention mechanisms to dynamically attend to essential regions in an image, allowing it to capture complex relationships between objects. Using transformers for image recognition makes it possible to achieve strong results on image recognition tasks while using less memory and computational resources than traditional CNNs. We offer an overview of the following aspects:
* Data composition: Understanding the data composition of different datasets with high data variables, and merging them into a unique dataset.
* Data integration: Integrating data from various sources to create a unified dataset.
* Data analysis: Analyzing the features of each subset of data, including some attributes and metadata to change for normalized samples.
* Data preprocessing: Preparing the data for manipulation and augmentation, including techniques such as
normalization, scaling, and augmentation.
* Dataset split: Splitting the dataset into three subsets with some common features, such as image format, size, and the number of channels.
* Face detection and cropping: Configuring models for face detection and cropping procedures.
* Model evaluation: Evaluating the results of the models using various performance metrics, such as accuracy, precision, recall, and F1-score.
* Results analysis: Analyzing the results of the models to understand the strengths and weaknesses of the transformers for Facial Emotion Recognition.
In this work, we presented a Facial Emotion Recognition (FER) framework. Our approach is based on SwinT and a squeeze-and-excitation (SE) block. To develop an efficient FER model with the ability to detect facial emotions using a small amount of data, we utilized a transformer model with an attention mechanism and a sharpness-aware minimizer (SAM). Additionally, we made a unique contribution by using a hybrid dataset for training and evaluating the model's performance on the AffectNet dataset, achieving an F1-score of 0.5420. The effectiveness of our approach was demonstrated by outperforming the winner of the Affective Behavior Analysis in-the-wild (ABAW) Competition held in conjunction with the European Conference on Computer Vision (ECCV) 2022. 1
Footnote 1: Work does not relate to position at Amazon
## 2 Related Works
Deng _et al_. [5] suggested a technique for multi-task learning in the presence of missing labels. To balance the dataset, they proposed a method that utilized the ground truth labels of all three tasks to train a teacher model, and then used the output of the teacher model as soft labels for the student model. They used both the soft labels and the ground truth labels to train the student model.
Kuhnke and Rumberg _et al_. [11] proposed a two-stream model that incorporated audio and image streams. They fed these streams separately into a CNN network, then utilized temporal convolutions on the image stream. Additionally, they utilized facial alignment and correlations between different emotional representations to improve their model's performance.
Thinh _et al_. [3] introduced a deep learning model that used ResNet50 [8] as its backbone, with pre-trained weights from ImageNet [5]. They employed VGGFace2 for emotion recognition, aiming to speed up and enhance the training process.
Zhang _et al_. [18] proposed a method for multi-task emotion recognition that takes into account the intrinsic association between the different emotional representations. They noted that despite the different psychological philosophies behind these representations, there is evidence that they are linked to each other. For example, similar facial muscle movements (action units) tend to indicate similar emotions, and most previous works on multi-task emotion recognition have ignored this fact by modeling different tasks in parallel branches. The proposed method instead uses a streaming structure to model the recognition process serially, going from local action units to global emotion states, and adjusting the hierarchical distributions on different feature levels. This approach is designed to better capture the interdependent relationships between the different emotional representations.
DAN, a facial recognition model introduced by Wen _et al_. [17], comprises three key components: Feature Clustering Network (FCN), Multi-head cross Attention Network (MAN), and Attention Fusion Network (AFN). FCN is responsible for feature extraction using a large-margin learning approach to maximize class separability. MAN, on the other hand, utilizes several attention heads to attend to multiple facial areas simultaneously, building an attention map for these regions. Finally, AFN combines the attention maps by distracting attention to multiple locations before fusing them into a comprehensive map.
The current state-of-the-art approach for emotion recognition using the AffectNet dataset was proposed by Andrey _et al_. [15]. Their method involves applying face detection, tracking, and clustering techniques to extract face sequences from each frame. Subsequently, a single neural network is used to extract emotional features from each frame.
## 3 Methodology
Figure 1 depicts our framework. The architecture consists of a SwinT with an SE layer added before the Swin Transformer. The model predicts the basic facial emotions of humans. The Swin Transformer (SwinT) is a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing
\begin{table}
\begin{tabular}{c c} \hline \hline
**ID** & **Class** \\ \hline
0 & Fear \\
1 & Sadness \\
2 & Happy \\
3 & Anger \\
4 & Disgust \\
5 & Surprise \\
6 & Neutral \\ \hline \hline \end{tabular}
\end{table}
Table 1: Emotion classes.
for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size.
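For reference, the SE layer mentioned above performs channel-wise recalibration of feature maps; below is a minimal numpy sketch, where the shapes and random weights are purely illustrative and do not reflect the exact placement or dimensions used in our architecture.

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Squeeze-and-Excitation on a (C, H, W) feature map: global average pooling
    ('squeeze'), a two-layer bottleneck with ReLU and sigmoid ('excitation'),
    then channel-wise rescaling. w1: (C//r, C), w2: (C, C//r) for reduction ratio r."""
    squeezed = feature_map.mean(axis=(1, 2))            # (C,) channel descriptors
    hidden = np.maximum(w1 @ squeezed, 0.0)             # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))        # sigmoid channel weights
    return feature_map * scale[:, None, None]           # recalibrated features

C, H, W, r = 48, 56, 56, 4                              # illustrative shapes only
x = np.random.randn(C, H, W)
w1 = np.random.randn(C // r, C) * 0.1
w2 = np.random.randn(C, C // r) * 0.1
print(squeeze_excite(x, w1, w2).shape)                  # (48, 56, 56)
```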
### Swin Transformer
The Transformer is an architecture for neural networks that was introduced in 2017. It is most commonly used in the field of Natural Language Processing (NLP) and has revolutionized the way in which sequence-to-sequence tasks are performed. The Transformer architecture is based on self-attention mechanisms, which allow the network to dynamically attend to important parts of the input sequence, enabling the network to capture long-range relationships and dependencies.
ViT is a variant of the Transformer architecture that was introduced in 2020 for computer vision tasks, specifically image recognition. Unlike traditional computer vision models that use convolutional neural networks (CNNs), ViT replaces convolutions with self-attention mechanisms, allowing the network to dynamically attend to important regions in an image. ViT has shown strong results on various image recognition tasks, outperforming traditional CNNs while using less memory and computational resources. By fine-tuning a vision transformer [12] pre-trained on ImageNet, we are able to classify eight human emotions: anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. The attention mechanism is a crucial component in this type of model, allowing for the extraction of valuable features from the input through a standard query, key, and value structure. The similarity between queries and keys is determined through matrix multiplication, followed by the application of the softmax function to the result, which yields the attention weights. Our transformer architecture consists of a stack of eleven encoders, preceded by a hybrid patch embedding architecture. Our approach also mitigates the lack of inductive bias, a known concern for Vision Transformers, which possess much less image-specific inductive bias than CNNs.

The Swin Transformer (SwinT) is a state-of-the-art architecture crafted for computer vision, which integrates the best of two worlds: Convolutional Neural Networks (CNNs) and Transformers. By fusing the strengths of these two powerful models, SwinT surpasses the previous state-of-the-art Vision Transformers (ViT) by introducing a multi-scale approach that allows the network to capture both local and global features with high accuracy. This makes SwinT a strong contender for complex computer vision tasks that require both fine-grained and global understanding of visual data.
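The query-key-value computation described above can be written compactly. The following is a minimal PyTorch sketch of single-head scaled dot-product self-attention; the tensor shapes and toy input are illustrative and are not taken from the paper's implementation.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a token sequence.

    x: (batch, tokens, dim) input embeddings
    w_q, w_k, w_v: (dim, dim) projection matrices for queries, keys, and values
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v                      # linear projections
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)   # query-key similarity
    attn = F.softmax(scores, dim=-1)                         # normalized attention weights
    return attn @ v                                          # weighted sum of values

# toy usage: 4 tokens of dimension 8
x = torch.randn(1, 4, 8)
w_q, w_k, w_v = (torch.randn(8, 8) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)                       # shape (1, 4, 8)
```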
The architecture of SwinT consists of a combination of convolutions, self-attention mechanisms, and a unique switching mechanism that dynamically switches between local and global processing. This enables the network to capture fine-grained details and high-level semantic information in an image, making it well-suited for various computer vision tasks, including image classification and object detection.
The performance of SwinT in various computer vision benchmarks has been impressive, and its unique combination of convolutions and self-attention mechanisms has the potential to drive further progress in the field. To construct a SwinT block, the standard multi-head self-attention (MSA) module in a Transformer block is replaced with a module based on shifted windows, while the remaining layers are kept the same. The SwinT block consists of a shifted window-based MSA module, followed by a 2-layer MLP with GELU nonlinearity in between. Before each MSA module and each MLP, a LayerNorm (LN) layer is applied, and a residual connection is applied after each module. Similar to ViT, SwinT divides the input RGB image into non-overlapping patches using a patch-splitting module. Each patch is treated as a "token," and its feature is constructed by concatenating the raw pixel RGB values. The feature dimension of each patch is 48, which accounts for the 3 channels of a 4 \(\times\) 4 patch. Finally, a linear embedding layer is applied to this raw-valued feature to project it to an arbitrary dimension C.
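To make the block structure above concrete, the sketch below shows a patch-embedding step (4 × 4 patches, 48 raw values, linear projection to dimension C) followed by one simplified block (LayerNorm, multi-head self-attention, residual, LayerNorm, 2-layer MLP with GELU, residual). Plain global attention stands in for the shifted-window MSA, so this is an illustrative simplification rather than a faithful SwinT block.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an RGB image into non-overlapping 4x4 patches (4*4*3 = 48 raw values)
    and project each patch to an embedding of dimension c."""
    def __init__(self, c=96):
        super().__init__()
        self.proj = nn.Conv2d(3, c, kernel_size=4, stride=4)

    def forward(self, x):                       # x: (B, 3, H, W)
        x = self.proj(x)                        # (B, c, H/4, W/4)
        return x.flatten(2).transpose(1, 2)     # (B, H/4 * W/4, c) patch tokens

class BlockSketch(nn.Module):
    """LayerNorm -> MSA -> residual, then LayerNorm -> 2-layer MLP (GELU) -> residual.
    Plain global attention stands in for the shifted-window MSA of SwinT."""
    def __init__(self, c=96, heads=3):
        super().__init__()
        self.norm1 = nn.LayerNorm(c)
        self.attn = nn.MultiheadAttention(c, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(c)
        self.mlp = nn.Sequential(nn.Linear(c, 4 * c), nn.GELU(), nn.Linear(4 * c, c))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))

tokens = PatchEmbed()(torch.randn(1, 3, 56, 56))   # toy 56x56 input -> (1, 196, 96)
out = BlockSketch()(tokens)
```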
### Datasets
One of the challenges we encountered was the lack of data availability. Many datasets are restricted to research use and are not fully available to students [14], so we only used the samples available on Kaggle or other open-source data platforms. Transformers need a large number of samples to uncover hidden patterns during the training phase, and the limited data at our disposal was not enough to satisfy this requirement. We therefore manipulated our small pool of samples with data augmentation to increase the size of the final dataset. The final dataset has eight different classes and is integrated from three different source datasets:
1. FER-2013: It contains approximately 40,000 facial RGB images of different expressions with a size restricted to 48 \(\times\) 48, and the main labels can be split into seven types: Fear, Sadness, Happy, Anger, Disgust, Surprise, Neutral. The Disgust expression has a minimal number of 600 samples, while the other labels have nearly 5,000 samples each.
2. CK+: The CK+ dataset, an extended version of the Cohn-Kanade dataset, includes images derived from 593 video sequences captured from 123 distinct subjects. The subjects range from 18 to 50 years old, representing diverse genders and heritages. Each video exhibits a facial transition from a neutral expression to a specific peak expression, and the recordings were
made at 30 frames per second (FPS), with resolutions of either 640 \(\times\) 490 or 640 \(\times\) 480 pixels. Regrettably, we only possess a portion of the complete dataset, containing 1000 images with high variability, which we obtained from a Kaggle repository.
3. AffectNet: The dataset consists of a significant collection of 60,000 facial expression images, categorized into eight different classes, including neutral, happy, angry, sad, fear, surprise, disgust, and contempt. The dataset also includes the intensity of valence and arousal associated with each expression.
Each dataset uses RGB channels and comes with different sizes and image formats (the total amount of stored data is around 2 GB), so we needed to establish a standard format to manage them simultaneously. We employed the various fine-tuning techniques outlined in the preprocessing section.
### Preprocessing
We collected data from different sources and integrated it to form a single dataset. The validation and testing sets were balanced with an equal number of samples for each class, while the training set had very few samples for some classes, resulting in an unbalanced dataset. To balance the dataset, we used data augmentation to increase the number of samples for each class, and then removed the excess generated images to obtain a final dataset with the same number of samples per class. Despite integrating the available open-source data, the contempt and disgust classes had little data, and we addressed this issue by increasing the variance of the pixel matrices through data augmentation. This approach allowed us to obtain a balanced dataset without using oversampling techniques. In this section, we describe the data manipulation and merging process for the multiple datasets, as well as the various data augmentation techniques used to preprocess the dataset for training. Since we used multiple datasets, we had to integrate them into one with the same dimensions and configuration for the model to use as input. Due to an
Figure 1: Facial Emotion Detection using SwinT with SE Block.
unbalanced class distribution, we utilized various augmentation techniques, including:
* **Image Rotation:** This is one of the most widely used augmentation techniques and makes the model more robust to orientation changes. The rotation value lies between 0 and 360 degrees. We rotate images by up to 10 degrees to bring the frontal images of FER-2013 and CK+48 to a face orientation similar to that of the AffectNet faces, without distorting images that are already rotated.
* **Augmentation:** To quell the exorbitant data appetite of the Transformer architecture and mitigate the lack of data, we used a variety of augmentation techniques to increase the sample size, such as RandomRotation [13] and RandomAutocontrast (see the sketch after this list). These methods exposed the model to more data and thus improved its performance.
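A minimal torchvision pipeline matching the augmentations listed above (rotation of up to 10 degrees and random autocontrast) might look as follows; the resize and normalization values follow the preprocessing described later in this section, while the application probability and composition order are assumptions.

```python
from torchvision import transforms

# Training-time augmentation: small rotations and random autocontrast,
# followed by resizing to the 224x224 input expected by the model.
train_transform = transforms.Compose([
    transforms.RandomRotation(degrees=10),        # rotate by up to 10 degrees
    transforms.RandomAutocontrast(p=0.5),         # randomly maximise image contrast
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])
```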
Figure 2 shows the unbalanced and balanced dataset used for training after preprocessing and integration of different datasets of facial emotions mentioned in section 3.2 in detail.
### Model
In this section, we present a single-step detector model designed for emotion classification and face cropping, along with its adaptations. The first step involved resizing the images to 224 \(\times\) 224 \(\times\) 3 so that they can be used as input for the Transformer model. The final step was to normalize the images using the same mean and standard deviation values used during the SwinT fine-tuning phase: 0.5 for both mean and standard deviation across all channels. Our approach employed SwinT to recognize eight distinct facial emotions, namely, anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. SwinT first split the input RGB image into non-overlapping patches, treated as tokens, using a patch-splitting module similar to ViT. Each patch's feature was set as a concatenation of the raw pixel RGB values, resulting in a patch dimension of 4 \(\times\) 4 \(\times\) 3 = 48. A linear embedding layer projected this raw-valued feature to an arbitrary dimension C. Multiple Transformer blocks with modified self-attention computation, referred to as Swin Transformer blocks, were then applied to these patch tokens. These blocks maintained the number of tokens (H/4 \(\times\) W/4) and were combined with the linear embedding to form "Stage 1". To produce a hierarchical representation, the number of tokens was reduced using patch-merging layers as the network went deeper. The first patch merging layer concatenated the features of each group of 2 \(\times\) 2 neighboring patches, and a linear layer was applied to the 4C-dimensional concatenated features. This reduced the number of tokens by a factor of 4 (a 2\(\times\) downsampling of resolution), with the output dimension set to 2C and the resolution maintained at H/8 \(\times\) W/8. This first patch merging and feature transformation block was called "Stage 2". The same procedure was repeated twice more, resulting in "Stage 3" and "Stage 4", with output resolutions of H/16 \(\times\) W/16 and H/32 \(\times\) W/32, respectively. These stages worked together to produce a hierarchical representation with feature map resolutions similar to those of convolutional networks such as VGGNet and ResNet, allowing for easy replacement of backbone networks in existing methods for various vision tasks.
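The patch-merging step described above can be sketched as follows; the channel dimension C = 96 and the toy 224 × 224 input follow the Swin-T defaults and are assumptions rather than values stated by the authors.

```python
import torch
import torch.nn as nn

class PatchMerging(nn.Module):
    """Concatenate the features of each 2x2 group of neighbouring patches (C -> 4C)
    and project to 2C, halving the spatial resolution ('Stage 2' in the text)."""
    def __init__(self, c):
        super().__init__()
        self.norm = nn.LayerNorm(4 * c)
        self.reduction = nn.Linear(4 * c, 2 * c, bias=False)

    def forward(self, x, h, w):                     # x: (B, h*w, C)
        b, _, c = x.shape
        x = x.view(b, h, w, c)
        x = torch.cat([x[:, 0::2, 0::2], x[:, 1::2, 0::2],
                       x[:, 0::2, 1::2], x[:, 1::2, 1::2]], dim=-1)   # (B, h/2, w/2, 4C)
        x = x.view(b, -1, 4 * c)
        return self.reduction(self.norm(x))          # (B, h/2 * w/2, 2C)

stage1_tokens = torch.randn(1, 56 * 56, 96)               # H/4 x W/4 tokens for a 224x224 input
stage2_tokens = PatchMerging(96)(stage1_tokens, 56, 56)   # (1, 784, 192)
```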
### Squeeze and Excitation
The Squeeze and Excitation (SE) block is also an attention mechanism. It contains fewer parameters than the self-attention block: two fully connected layers are used, with only one point-wise multiplication. It was first introduced in [9] as a channel-wise attention module to optimize CNN architectures; concretely, we use only the excitation part, since the squeeze part is a pooling layer built to reduce the dimension of 2D-CNN layers [1]. The SE is introduced on top of the Transformer encoder, more precisely on the classification token vector. Different from the self-attention block, which is used inside the Transformer encoder to encode the input sequence and extract features through the class token, the SE is applied to re-calibrate the feature responses by explicitly modeling inter-dependencies among the class token channels.
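A minimal sketch of the excitation operation described above, applied to a classification-token vector: two fully connected layers produce channel-wise gates that are combined with the token through a single point-wise multiplication. The reduction ratio and feature dimension are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Excitation(nn.Module):
    """Excitation part of an SE block applied to a feature/token vector: two fully
    connected layers produce channel-wise gates, combined with the input through a
    single point-wise multiplication."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, token):               # token: (B, channels), e.g. the class token
        return token * self.fc(token)       # re-calibrate channels by the learned gates

recalibrated = Excitation(channels=768)(torch.randn(2, 768))
```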
Figure 3: (a) The architecture of a SwinT; (b) two successive SwinT blocks. W-MSA and SW-MSA are multi-head self-attention modules with regular and shifted windowing configurations, respectively [12].
Figure 2: Class-level sub-population statistics for the final dataset after balancing.
### Transformer with Sharpness-Aware Minimizer
The Sharpness-Aware Minimizer (SAM) [2] algorithm takes advantage of the connections between the intricate geometry of the loss landscape of deep neural networks and their generalization ability. Unlike traditional optimization methods that only minimize the loss value at the current parameters, SAM smooths the loss landscape and simultaneously minimizes both loss value and loss sharpness, seeking parameters whose neighbourhoods have uniformly low loss and therefore better generalization properties. When adapted to the Vision Transformer model, SAM helps to minimize loss values at the cost of increased training time. Additionally, SAM's optimization objective can address the problem of noisy labels in datasets, which is especially important when dealing with the AffectNet dataset. However, it is worth noting that SAM's effectiveness decreases as the size of the training dataset increases, which presents a challenge when dealing with unbalanced datasets such as AffectNet that have a low number of samples for certain emotions like contempt and disgust. Overall, while SAM incurs additional computational cost per update, it has shown promising results on small datasets and can potentially be a valuable tool in improving the performance of deep neural networks.
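A minimal sketch of the SAM update described above: the weights are first perturbed along the gradient direction (scaled by a neighbourhood radius rho), the loss is re-evaluated at the perturbed point, and the resulting gradient is used for the actual descent step. The two forward/backward passes are what incur the extra cost per update mentioned in the text; the hyperparameter values are illustrative assumptions.

```python
import torch

def sam_step(model, loss_fn, inputs, targets, base_optimizer, rho=0.05):
    """One Sharpness-Aware Minimization update (no weight-decay or scaling subtleties)."""
    # 1) ascent step: perturb the weights towards higher loss within an L2 ball of radius rho
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12
    perturbations = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / grad_norm
            p.add_(e)
            perturbations.append((p, e))
    model.zero_grad()

    # 2) descent step: gradient at the perturbed point, applied to the restored weights
    loss_fn(model(inputs), targets).backward()
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)                        # restore the original weights
    base_optimizer.step()                    # e.g. SGD with momentum 0.9
    base_optimizer.zero_grad()
    return loss.item()
```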
The proposed model utilizes a SwinT model pre-trained on ImageNet-1K, with specific transformer configurations based on the dimension of the last layer for fine-tuning purposes; for each structure, a randomly initialized version without a pre-training phase is also provided. Our model achieved an F1 score of 0.5452. As we present our experimental evaluation, we will also discuss the potential pitfalls of the other methods we tried. During the preprocessing phase, we resized the images to 224 \(\times\) 224 on three different channels (corresponding to the RGB channels), normalized the input data, and prepared the samples for the training phase. The normalization applies a mean and a standard deviation of 0.5 for each channel. The best validation accuracy over the set of epochs during the training phase defines the final model weights. The fine-tuning phase adapts the model parameters to the FER task using stochastic gradient descent or the sharpness-aware minimizer with a cross-entropy loss function. The learning rate follows a scheduler that adjusts the initial value every ten epochs by multiplying it by 0.1. Finally, we applied a simple momentum of 0.9 to increase the speed of training, with a variable learning rate according to the optimizer chosen in the experiment. We carried out different experiments with different configurations in this environment. The SwinT architectures enable better separation of classes compared to a CNN baseline architecture. In addition, the SE block enhances the robustness of the SwinT model, as the distances between clusters are maximized. Interestingly, the features before the SE form more compact clusters, with lower inter-cluster distance than the features after the SE, which may indicate that the features before the SE are more robust than those after it. We tested the performance of three different model variants: SwinT, SwinT+SE, and SwinT+SE+SAM. Based on our empirical observations, we concluded that SwinT+SE+SAM outperforms the other architectures.
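The optimization setup described above (SGD with momentum 0.9, cross-entropy loss, and a learning rate multiplied by 0.1 every ten epochs) corresponds to a configuration of roughly the following form; the initial learning rate and the placeholder model are assumptions, not values reported by the authors.

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(768, 8)                 # placeholder for the SwinT(+SE) classifier
criterion = nn.CrossEntropyLoss()         # cross-entropy loss over the emotion classes
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)   # momentum of 0.9
# multiply the learning rate by 0.1 every ten epochs
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(25):
    # ... one pass over the training loader: forward, criterion, backward, optimizer.step()
    scheduler.step()
    # keep the weights from the epoch with the best validation accuracy
```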
### Metrics Evaluation
We tested the models on 4000 different samples of AffectNet, without data augmentation, separate from the training and validation sets.
We trained our model using the SwinT+SE+SAM configuration for 25 epochs. The testing dataset is formed by 4000 samples equally distributed across classes (500 samples per class). The plot
Figure 4: Cross-entropy loss landscape on ViT (top) and the same smoothed landscape with the application of SAM (bottom) during the training on ImageNet [2].
above shows the training and validation accuracy: training accuracy was 0.832 and validation accuracy was 0.5784. As training approached 25 epochs, the training loss decreased similarly to the validation loss. Table 2 reports different metric results for the three models we compare, namely SwinT, SwinT+SE, and SwinT+SE+SAM; the performance of SwinT+SE+SAM outperforms the rest of the models. Due to a lack of data for the contempt class, we evaluated the models on AffectNet considering only the 7 augmented classes. Finally, for a more detailed evaluation, we report precision, recall, and F1. We tried different configurations for the use of SAM and the gradual learning rate on the SwinT configuration, with the objective of finding the best configuration, avoiding overfitting or underfitting, and obtaining acceptable performance with a small dataset. The current SoTA F1 score on the AffectNet dataset is 0.6629 for 7 classes of emotions using a Multi-task EfficientNet-B2. On the other hand, our model is one of the first approaches using SwinT for facial emotion recognition, and we were able to achieve an F1 score of 0.5420.
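For reference, the macro-averaged precision, recall, and F1 reported above can be computed with scikit-learn as sketched below; the dummy labels and the macro averaging mode are assumptions.

```python
from sklearn.metrics import precision_recall_fscore_support

y_true = [0, 1, 2, 3, 4, 5, 6, 0, 1, 2]      # ground-truth class ids (dummy values)
y_pred = [0, 1, 2, 3, 4, 5, 5, 0, 2, 2]      # model predictions (dummy values)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(f"precision={precision:.4f}  recall={recall:.4f}  F1={f1:.4f}")
```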
## 4 Conclusion
We have explored the direct application of Transformers to image recognition and tested robustness on noisy datasets like AffectNet. We interpret an image as a sequence of patches and process it with a standard Transformer encoder as used in NLP. Our challenge was to obtain a model capable of recognizing eight classes of emotions under the constraints of data availability for the FER task; we used only a subset of AffectNet, FER-2013, and CK+ to train and validate the models. We also used SwinT+SE, a simple scheme that optimizes the learning of the SwinT with an attention block called Squeeze and Excitation; it performs impressively well in improving the performance of SwinT on the FER task. Additionally, we used a SAM optimizer to further enhance the model performance and reduce the impact of noisy data.
|
2306.05412 | Decoupled Prioritized Resampling for Offline RL | Offline reinforcement learning (RL) is challenged by the distributional shift
problem. To address this problem, existing works mainly focus on designing
sophisticated policy constraints between the learned policy and the behavior
policy. However, these constraints are applied equally to well-performing and
inferior actions through uniform sampling, which might negatively affect the
learned policy. To alleviate this issue, we propose Offline Prioritized
Experience Replay (OPER), featuring a class of priority functions designed to
prioritize highly-rewarding transitions, making them more frequently visited
during training. Through theoretical analysis, we show that this class of
priority functions induce an improved behavior policy, and when constrained to
this improved policy, a policy-constrained offline RL algorithm is likely to
yield a better solution. We develop two practical strategies to obtain priority
weights by estimating advantages based on a fitted value network (OPER-A) or
utilizing trajectory returns (OPER-R) for quick computation. OPER is a
plug-and-play component for offline RL algorithms. As case studies, we evaluate
OPER on five different algorithms, including BC, TD3+BC, Onestep RL, CQL, and
IQL. Extensive experiments demonstrate that both OPER-A and OPER-R
significantly improve the performance for all baseline methods. Codes and
priority weights are available at https://github.com/sail-sg/OPER. | Yang Yue, Bingyi Kang, Xiao Ma, Qisen Yang, Gao Huang, Shiji Song, Shuicheng Yan | 2023-06-08T17:56:46Z | http://arxiv.org/abs/2306.05412v3 | # Offline Prioritized Experience Replay
###### Abstract
Offline reinforcement learning (RL) is challenged by the distributional shift problem. To address this problem, existing works mainly focus on designing sophisticated policy constraints between the learned policy and the behavior policy. However, these constraints are applied equally to well-performing and inferior actions through uniform sampling, which might negatively affect the learned policy. To alleviate this issue, we propose _Offline Prioritized Experience Replay_ (OPER), featuring a class of priority functions designed to prioritize highly-rewarding transitions, making them more frequently visited during training. Through theoretical analysis, we show that this class of priority functions induce an improved behavior policy, and when constrained to this improved policy, a policy-constrained offline RL algorithm is likely to yield a better solution. We develop two practical strategies to obtain priority weights by estimating advantages based on a fitted value network (OPER-A) or utilizing trajectory returns (OPER-R) for quick computation. OPER is a plug-and-play component for offline RL algorithms. As case studies, we evaluate OPER on five different algorithms, including BC, TD3+BC, Onestep RL, CQL, and IQL. Extensive experiments demonstrate that both OPER-A and OPER-R significantly improve the performance for all baseline methods. Codes and priority weights are available at [https://github.com/sail-sg/OPER](https://github.com/sail-sg/OPER).
## 1 Introduction
Offline Reinforcement Learning (RL) aims to solve the problem of learning from previously collected data without real-time interactions with the environment [22]. However, standard off-policy RL algorithms tend to perform poorly in the offline setting due to the distributional shift problem [11]. Specifically, to train a Q-value function based on the Bellman optimality equation, these methods frequently query the value of out-of-distribution (OOD) state-action pairs, which leads to accumulative extrapolation error. Most existing algorithms tackle this issue by constraining the learning policy to stay close to the behavior policy that generates the dataset. These constraints directly operate on the policy densities, such as KL divergence [15; 28; 39], Wasserstein distance [39], maximum mean discrepancy (MMD) [20], mutual information [24], and behavior cloning regularization [9; 37; 5].
However, such constraints might be too restrictive as the learned policy is forced to equally mimic bad and good actions of the behavior policy, especially in an offline scenario where data are generated by policies of different quality levels. For instance, consider a dataset \(\mathcal{D}\) with state space \(\mathcal{S}\) and action space \(\mathcal{A}=\{\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\}\) collected with behavior policy \(\beta\). At one specific state \(\mathbf{s}^{*}\), the policy \(\beta\) assigns probability \(0.2\) to action \(\mathbf{a}_{1}\), 0.8 to \(\mathbf{a}_{2}\) and zero to \(\mathbf{a}_{3}\). However, \(\mathbf{a}_{1}\) would lead to much higher expected return than \(\mathbf{a}_{2}\). Minimizing the density distance of two policies can avoid \(\mathbf{a}_{3}\), but forces the learned policy to choose \(\mathbf{a}_{2}\) over \(\mathbf{a}_{1}\), resulting in much worse performance. Therefore, a more reasonable constraint is to align the learned policy inside the support of the behavior policy.
In other words, the learned policy has positive density only on actions to which the behavior policy gives non-zero probability. Nevertheless, explicit support alignment is intractable in practice [20]. We circumvent this issue by stepping back and asking the following question: _can we modify a behavior policy to construct a better one leading to relaxed constraints?_ As shown in Figure 0(a), if we are able to measure the quality of an action correctly, then we can adjust its density accordingly, giving a prioritized policy (blue line) sharing the same action support as the original policy (red line).
Based on the above motivation, we propose data prioritization strategies for offline RL, _i.e._, _Offline Prioritized Experience Replay (OPER)_. This approach utilizes a class of priority functions that prioritize data by assigning weights proportional to the normalized (_i.e._, non-negative) advantage -- the additional reward that can be obtained from taking a specific action. We theoretically demonstrate that a prioritized behavior policy, with this class of priority functions, yields a higher expected return than the original one. Furthermore, under some special cases, we show that a policy-constrained offline RL problem has an improved optimal solution when the behavior policy is prioritized. In practice, we develop two implementations, _Advantage-based OPER (OPER-A)_ and _Return-based OPER (OPER-R)_. OPER-A fits a value network from the dataset and calculates advantages with one-step TD errors for all transitions. Then, it runs an offline RL algorithm with prioritized sampling to learn a policy. We further improve OPER-A by repeating the first step for a few iterations. OPER-R instead employs the accumulative return as the priority weight when the trajectory information is available, thus reducing the computational cost of value learning.
We conduct extensive experiments to demonstrate that our proposed prioritization strategies boost the performance of popular offline RL algorithms on diverse domains in D4RL [3; 8]. The performance of CQL, IQL, and TD3+BC has been improved significantly by 34, 46, and 67 points, respectively, on the Mujoco locomotion tasks, which shows OPER is a generic method orthogonal to algorithmic improvements.
## 2 Related Works
**Offline RL with Behavior Regularization.** To alleviate the distributional shift problem, a general framework employed by prior offline RL research is to constrain the learned policy to stay close to the behavior policy. Many works [15; 39] opt for KL-divergence as policy constraint. Exponentially advantage-weighted regression (AWR), an analytic solution of the constrained policy search problem with KL-divergence, is adopted by AWR [28], CRR [38] and AWAC [25]. IQL [19] follows AWR for policy improvement from the expectile value function that enables multi-step learning. BEAR [20] utilizes maximum mean discrepancy (MMD) to approximately constrain the learned policy in the support of the dataset, while Wu _et al_. [39] find MMD has no gain over KL divergence. Other variants of policy regularization include the use of Wasserstein distance [39], BC [9; 37; 5], and mutual information [24]. An alternative approach to regularize behavior involves modifying the Q-function with conservative estimates [21; 4; 40; 1; 26; 12].
**Data Prioritization.** In online RL, PER [29] prioritizes samples according to the absolute TD error; SIL [27] only learns from data with a discounted return higher than current value estimate.
Figure 1: (a) Action Prioritization. Actions in x-axis are ranked by their quality. A behavior policy (in red) usually follows a multi-modal distribution covering arbitrary actions. A prioritized policy (in blue) modifies the policy densities by assigning higher weights to better actions. The two policies share the same action support. (b) Trajectory Return Distributions of hopper-medium-replay (left) and hopper-medium-expert (right). Medium-replay datasets usually have a long-tailed distribution, and medium-expert often display two peaks. Both are composed of policies with varying quality.
In offline RL, schemes based on imitation learning (IL) aim to learn from demonstration, naturally prioritizing data with high return. These approaches include data selection [7; 23] and weighted imitation learning [36]. BAIL [7] estimates the optimal return, based on which good state-action pairs are selected to imitate. For RL-based learning from offline data, CQL (ReDS) [32] is specifically designed for CQL to reweight the data distribution; Zhang _et al._[14] and Yue _et al._[41] proposed to reweigh the entire trajectories according to their returns. Although sharing some conceptual similarities, our method offers a more fine-grained approach by resampling transitions rather than entire trajectories. Another distinction lies in our use of two samplers for promoting performance and stability, a uniform sampler for policy evaluation, and a prioritized sampler for policy improvement and policy constraint. Moreover, our method serves as a plug-and-play solution, designed to enhance a broad range of offline RL algorithms.
## 3 Preliminaries
**Reinforcement Learning (RL).** RL addresses the problem of sequential decision-making, which is formulated with a Markov Decision Process \(\langle\mathcal{S},\mathcal{A},T,r,\gamma\rangle\). Here, \(\mathcal{S}\) is a finite set of states; \(\mathcal{A}\) is the action space; \(T(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime})=P(\mathbf{s}^{\prime}|\mathbf{s},\mathbf{a})\) is the dynamics function describing the probability of transitioning from a state \(\mathbf{s}\) to \(\mathbf{s}^{\prime}\) after taking an action \(\mathbf{a}\); \(r(\mathbf{s},\mathbf{a})\) and \(\gamma\in(0,1]\) are the reward function and the discount factor respectively. The behavior of an agent is denoted by a policy \(\pi(\mathbf{a}|\mathbf{s})\) mapping from states to actions. A trajectory is denoted by \(\tau=\{\mathbf{s}_{0},\mathbf{a}_{0},r_{0},\mathbf{s}_{1},\mathbf{a}_{1},r_{1},...\}\), which relies on initial state \(\mathbf{s}_{0}\), policy \(\pi\), and dynamics function \(T\). The fundamental goal of RL is to learn an agent maximizing the expected cumulative discounted reward:
\[J(\pi)=\mathbb{E}_{r\sim p_{\pi}(\tau)}\left[\sum_{t=0}^{\infty} \gamma^{t}r(\mathbf{s}_{t},\mathbf{a}_{t})\right]. \tag{1}\]
**Offline RL as Constrained Optimization.** Offline RL considers a dataset \(\mathcal{D}\) generated with behavior policy \(\beta\). Since \(\beta\) or \(\mathcal{D}\) is fixed throughout training, maximizing \(J(\pi)\) is equivalent to maximizing the improvement \(J(\pi)-J(\beta)\). The improvement can be measured by Lemma 3.1:
**Lemma 3.1**.: (Performance Difference Lemma [16].) _For any policy \(\pi\) and \(\beta\),_
\[J(\pi)-J(\beta)=\int_{\mathbf{s}}d_{\pi}(\mathbf{s})\int_{\mathbf{a}}\pi( \mathbf{a}|\mathbf{s})A^{\beta}(\mathbf{s},\mathbf{a})\;d\mathbf{a}\;d\mathbf{s}, \tag{2}\]
_where \(d_{\pi}(\mathbf{s})=\sum_{t=0}^{\infty}\gamma^{t}p(\mathbf{s}_{t}=\mathbf{s}|\pi)\), represents the unnormalized discounted state marginal distribution induced by the policy \(\pi\), and \(p(\mathbf{s}_{t}=\mathbf{s}|\pi)\) is the probability of the state \(\mathbf{s}_{t}\) being \(\mathbf{s}\) when following policy \(\pi\)[33]._
The proof can be found in Appendix A.1. We consider the popular offline RL paradigm with policy constraint, which enforces the policy \(\pi\) to stay close to the behavior policy \(\beta\). Therefore, Equation (2) can be approximated similar to TRPO [30] as below:
\[\hat{\eta}(\pi,\beta)\approx\int_{\mathbf{s}}d_{\beta}(\mathbf{s})\int_{ \mathbf{a}}\pi(\mathbf{a}|\mathbf{s})A^{\beta}(\mathbf{s},\mathbf{a})\;d\mathbf{a}\;d\mathbf{s}. \tag{3}\]
Hence, \(\hat{\eta}(\pi,\beta)\) represents the performance improvement of \(\pi\) over \(\beta\). Offline RL aims to maximize \(J(\pi)\) while constraining \(\pi\) to be close to \(\beta\). We can approximately formulate the objective as the following constrained optimization problem with an expected KL-divergence constraint:
\[\pi^{*}=\operatorname*{arg\,max}_{\pi}\hat{\eta}(\pi,\beta)\] (4) s.t. \[\int_{\mathbf{s}}d_{\beta}(\mathbf{s})\mathrm{D}_{\mathrm{KL}}\left(\pi( \cdot|\mathbf{s})||\beta(\cdot|\mathbf{s})\right)d\mathbf{s}\leq\epsilon, \tag{5}\] \[\int_{\mathbf{a}}\;\pi(\mathbf{a}|\mathbf{s})\;d\mathbf{a}=1,\quad\forall\;\mathbf{s}. \tag{6}\]
An analytic solution \(\pi^{*}\) of the above problem is given by [28] (see Appendix A.2).
## 4 Offline Prioritized Experience Replay
In this section, we develop Offline Prioritized Experience Replay, which prioritizes transitions in an offline dataset at training according to a class of priority functions. We start with an observation that
performing prioritized sampling on a dataset generated with policy \(\beta\) is equivalent to sampling from a new behavior \(\beta^{\prime}\). Then, we theoretically justify that \(\beta^{\prime}\) gives better performance than \(\beta\) in terms of the cumulative return when proper priority functions are chosen. In the end, we propose two practical implementations of OPER using transition advantage and return as the priority, respectively.
### Prioritized Behavior Policy
Consider a dataset \(\mathcal{D}\) generated with behavior policy \(\beta\). Let \(\omega(\mathbf{s},\mathbf{a})\) denote a weight/priority function for transitions in \(\mathcal{D}\). Then, we define a prioritized behavior policy \(\beta^{\prime}\):
\[\beta^{\prime}(\mathbf{a}|\mathbf{s})=\frac{\omega(\mathbf{s},\mathbf{a})\beta(\mathbf{a}|\mathbf{s}) }{\int_{\mathbf{a}}\omega(\mathbf{s},\mathbf{a})\beta(\mathbf{a}|\mathbf{s})d\mathbf{a}}, \tag{7}\]
where the denominator is to guarantee \(\int_{\mathbf{a}}\ \beta^{\prime}(\mathbf{a}|\mathbf{s})\ d\mathbf{a}=1\). As shown in Figure 0(a), \(\beta^{\prime}\) shares the same action support as \(\beta\). Suppose a dataset produced by prioritized sampling on \(\mathcal{D}\) is \(\mathcal{D}^{\prime}\). We have:
\[\mathbb{E}_{(\mathbf{s},\mathbf{a})\sim\mathcal{D}^{\prime}}\left[\mathcal{L}_{\theta }(\mathbf{s},\mathbf{a})\right]=\mathbb{E}_{\mathbf{s}\sim\mathcal{D},\mathbf{a}\sim\beta^{ \prime}(\cdot|\mathbf{s})}\left[\mathcal{L}_{\theta}(\mathbf{s},\mathbf{a})\right], \tag{8}\]
where \(\mathcal{L}\) represents a generic loss function, and the constant is discarded as it does not affect the optimization. This equation shows that prioritizing the transitions in a dataset by resampling or reweighting (LHS) can mimic the behavior of another policy \(\beta^{\prime}\) sharing the same action support as the behavior policy (RHS). We consider priority functions \(\omega(\mathbf{s},\mathbf{a})\) satisfying the following requirements:
* \(\forall\ \mathbf{s},\forall\ \mathbf{a},\quad\omega(\mathbf{s},\mathbf{a})\geq 0\)
* \(\omega(\mathbf{s},\mathbf{a})\) is monotonically increasing with respect to the quality of the action \(\mathbf{a}\).
In the context of RL, the advantage \(A^{\beta}(\mathbf{s},\mathbf{a})\) represents the extra reward that could be obtained by taking the action \(\mathbf{a}\) over the expected return obtained by following the current policy. Therefore, the advantage \(A^{\beta}(\mathbf{s},\mathbf{a})\), as an action quality indicator, provides a perfect tool to construct \(\omega(\mathbf{s},\mathbf{a})\). We can easily construct many functions that satisfy the above properties, such as a linear function
\[\omega(A^{\beta}(\mathbf{s},\mathbf{a}))=C(A^{\beta}(\mathbf{s},\mathbf{a})-\min_{(\mathbf{s},\bm {a})\in\mathcal{D}}A^{\beta}(\mathbf{s},\mathbf{a})), \tag{9}\]
where \(C\) is a constant, set to make the mean over the dataset equal to 1. It can also be an exponential function with temperature, \(\omega(A^{\beta}(\mathbf{s},\mathbf{a}))=\exp(\frac{1}{T}A^{\beta}(\mathbf{s},\mathbf{a}))\).
### Prioritized Policy Improvement
We are ready to show that prioritizing a dataset can contribute to an improved learned policy in offline RL. We start with analyzing the performance difference between the actual behavior policy \(\beta\) and its prioritized version \(\beta^{\prime}\).
**Theorem 4.1**.: _Let \(\omega(A)\) be any priority function with non-negative and monotonic increasing properties. Then, we have_
\[J(\beta^{\prime})-J(\beta)\geq 0.\]
_If there exists a state \(\mathbf{s}\), under which not all actions in action support \(\{\mathbf{a}|\beta(\mathbf{a}|\mathbf{s})>0,\mathbf{a}\in\mathcal{A}\}\) have the same \(Q\)-value, the inequation strictly holds._
Detailed proof is defered to Appendix A.3. The theorem underscores that prioritization can improve the original behavior policy \(\beta\) if it is a stochastic policy or a mixture of policies, either of which could result in actions of different quality. Further, under two special cases, we establish an improvement guarantee on the learned policy. Consider the constrained optimization problem defined by Equation (4)-Equation (6), we use \(\pi^{*}\) and \(\pi^{\prime*}\) to denote the optimal solution regarding to behavior \(\beta\) and \(\beta^{\prime}\) respectively. Our expectation is that \(\pi^{\prime*}\) is better than \(\pi^{*}\) in terms of cumulative return, _i.e._, \(J(\pi^{\prime*})\geq J(\pi^{*})\). In an extreme case, if the policy constraint is exceptionally strong, causing the learned policy to exhibit performance very similar to the behavior policy, \(\pi^{\prime*}\) obviously surpasses \(\pi^{*}\) because \(\beta^{\prime}\) is greater than \(\beta\). In another more general case with a certain KL-divergence policy constraint, we show that if the state marginal distributions induced by \(\beta^{\prime}\) is close to the distribution induced by \(\beta\), the learned policy \(\pi^{\prime*}\) can be surely improved over \(\pi^{*}\). To show this, we use the cumulative return of \(\beta\) as a baseline to compare the performance differences \(\hat{\eta}(\pi^{\prime*},\beta)\) and \(\hat{\eta}(\pi^{*},\beta)\). Formally, when we assume \(d_{\beta^{\prime}}(\mathbf{s})=d_{\beta}(\mathbf{s})\), using \(\omega(A^{\beta}(\mathbf{s},\mathbf{a}))\) defined in Equation (9), we have
\[\hat{\eta}(\pi^{\prime*},\beta)\geq\hat{\eta}(\pi^{*},\beta), \tag{10}\]
where \(\pi^{*}\) is defined by Equation (4)-Equation (6), and \(\pi^{\prime*}\) is defined as \(\pi^{\prime*}=\arg\max_{\pi^{\prime}}\hat{\eta}(\pi^{\prime},\beta)\), s.t. \(\int_{\mathbf{s}}d_{\beta^{\prime}}(\mathbf{s})\mathrm{D}_{\mathrm{KL}}\left(\pi^{\prime}(\cdot|\mathbf{s})||\beta^{\prime}(\cdot|\mathbf{s})\right)d\mathbf{s}\leq\epsilon\), and \(\int_{\mathbf{a}}\ \pi^{\prime}(\mathbf{a}|\mathbf{s})\ d\mathbf{a}=1,\quad\forall\ \mathbf{s}\). The inequality strictly holds under the same condition as in Theorem 4.1.
See Appendix A.4 for detailed proof. In this way, we have \(J(\pi^{\prime*})-J(\beta)\geq J(\pi^{*})-J(\beta)\), which demonstrates that \(\pi^{\prime*}\) is a better solution. It offers valuable insights that the prioritized policy constraint has the potential to improve the performance upper bound of the learned policy. The rationale behind is straightforward: when starting from a better behavior policy (Theorem 4.1), the learned policy is more likely, though not guaranteed, to achieve a higher final performance. To further support these improvement claim, we conduct experiments on both toy and high-dimensional environments in Section 5 (see Figure 2 and Table 3). These empirical investigations provide intuitive evidence that, in many cases, the learned policy can indeed be improved using prioritized data.
In offline RL, many methods fall into this KL-constrained framework. IQL, AWAC, CRR, and OnestepRL extract their policies by exponential advantage-weighted regression, which is induced from a KL-divergence constraint [28]. Kostrikov _et al_. [18] shows that CQL can be viewed as a KL divergence regularization between the Boltzmann policy and the behavior policy. The BC term in TD3+BC can also be interpreted as a KL divergence under a Gaussian policy with fixed variance. Therefore, our analysis above should generalize well to these algorithms, and OPER should be applicable to all of them.
### Practical Algorithms
The priority function in Equation (9) is intractable, since the actual advantage of a state-action pair is never known. We propose two approaches to approximate it. First, we fit a value function \(V_{\psi}^{\beta}(\mathbf{s})\) for the behavior policy \(\beta\) by TD-learning:
\[\min_{\psi}\quad\mathbb{E}_{(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime},r)\sim\mathcal{D}} \left[(r+\gamma V_{\psi}(\mathbf{s}^{\prime})-V_{\psi}(\mathbf{s}))^{2}\right]. \tag{11}\]
The advantage for the \(i\)-th transition \((\mathbf{s}_{i},\mathbf{a}_{i},\mathbf{s}^{\prime}_{i},r_{i})\) in the dataset is then given by the one-step TD error:
\[A(\mathbf{s}_{i},\mathbf{a}_{i})=r_{i}+\gamma V_{\psi}(\mathbf{s}^{\prime}_{i})-V_{\psi}(\mathbf{s}_{i}), \tag{12}\]
which is similar to the form of priority in online PER, such as the absolute TD error, but differs in whether the absolute value is taken. This implementation is referred to as _advantage-based offline prioritized experience replay_ (OPER-A) in the following. However, the limitation of OPER-A is also clear, _i.e._, fitting the value network before algorithm training incurs extra computational cost. Therefore, we propose another variant that uses trajectory return as an alternative transition quality indicator. Similarly, considering the \(i\)-th transition, we find the complete trajectory that contains it, and calculate the return for the whole trajectory \(G_{i}=\sum_{k=0}^{T_{i}}r_{k}\). \(T_{i}\) is the total length of the trajectory. Then the priority is obtained by a linear function:
\[\omega_{i}=C(\frac{G_{i}-G_{\min}}{G_{\max}-G_{\min}}+p_{\text{base}}), \tag{13}\]
Figure 2: A visual illustration of the effect of OPER on a bandit experiment. (a) Visualization of prioritized behavior policies. The value in parentheses represents the average reward. (b) The first figure represents TD3+BC learning on the original dataset, while the second figure represents TD3+BC learning on the 5th prioritized dataset.
where \(G_{\min}=\min_{i}G_{i}\) and \(G_{\max}=\max_{i}G_{i}\). \(p_{\text{base}}\) is a small positive constant that prevents zero weights. \(C\) is a constant, set to make the mean equal to 1. We term this variant _return-based offline prioritized experience replay_ (OPER-R). OPER-R can only work with datasets where the trajectory information is available. We compare the characteristics of OPER-A and OPER-R in Table 1.
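A direct implementation of Equation (13) is sketched below; the value of \(p_{\text{base}}\) is an assumption, and the snippet is illustrative rather than taken from the released OPER code.

```python
import numpy as np

def oper_r_weights(returns, p_base=0.1):
    """Per-transition OPER-R priority from the return G_i of the trajectory containing it.

    returns: array of shape (N,) holding G_i for each of the N transitions.
    """
    g_min, g_max = returns.min(), returns.max()
    w = (returns - g_min) / (g_max - g_min) + p_base   # normalized return plus a small base
    return w / w.mean()                                # choose C so the mean priority is 1

# toy usage: transitions coming from trajectories with returns 10, 50, and 200
weights = oper_r_weights(np.array([10.0, 10.0, 50.0, 200.0, 200.0]))
```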
After obtaining priority weights, OPER can be implemented by both resampling and reweighting. The sampling probability or weight of a transition is proportional to its priority. An offline RL algorithm can be decomposed into three components: policy evaluation, policy improvement, and policy constraint. In alignment with our stated motivation, we employ prioritized data for both the policy constraint and policy improvement terms to mimic being constrained to a better behavior policy. However, we found it is usually better to conduct policy evaluation on non-prioritized data. A more in-depth discussion on this matter can be found in Section 5.3. For resampling, when prioritized sampling is only applied to a subset of the three terms, two samplers are employed, one for uniform sampling and one for prioritized sampling. As for reweighting, the corresponding loss terms are directly scaled by the priority weight. The algorithm is given in Algorithm 1.
```
1: Require: Dataset \(\mathcal{D}=\{(\mathbf{s},\mathbf{a},\mathbf{s}^{\prime},r)_{i}\}_{i=1}^{N}\), a policy-constrained algorithm \(\mathcal{I}\)
2: Stage 1: Calculate \(\omega_{i}\) according to Equation (12) or Equation (13) (with trajectory information).
3: Stage 2: Train algorithm \(\mathcal{I}\) on dataset \(\mathcal{D}\). Sample transition \(i\) with priority \(\omega_{i}\) for the policy constraint and improvement terms; sample uniformly for policy evaluation.
```
**Algorithm 1** Offline Prioritized Experience Replay
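The two stages of Algorithm 1 can be sketched as follows: Stage 1 fits \(V_{\psi}\) with the TD objective of Equation (11) and converts one-step TD errors into non-negative priorities following Equation (9); Stage 2 then uses a prioritized sampler for the policy improvement and constraint terms and a uniform sampler for policy evaluation. Network sizes, the learning rate, the number of fitting steps, and the toy data are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

def fit_value_and_priorities(s, r, s_next, gamma=0.99, steps=1000):
    """Stage 1: fit V(s) by TD learning (Eq. 11), then w_i proportional to A_i - min_j A_j."""
    v = nn.Sequential(nn.Linear(s.shape[1], 256), nn.ReLU(), nn.Linear(256, 1))
    opt = torch.optim.Adam(v.parameters(), lr=3e-4)
    for _ in range(steps):
        td_target = (r + gamma * v(s_next).squeeze(-1)).detach()
        loss = ((td_target - v(s).squeeze(-1)) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        adv = r + gamma * v(s_next).squeeze(-1) - v(s).squeeze(-1)   # one-step TD error (Eq. 12)
        w = adv - adv.min()                                          # non-negative priority (Eq. 9)
        return w / w.mean()                                          # normalize to mean 1

# Stage 2: one prioritized and one uniform sampler over the same (toy) dataset
s, a = torch.randn(500, 11), torch.randn(500, 3)
r, s_next = torch.randn(500), torch.randn(500, 11)
weights = fit_value_and_priorities(s, r, s_next)
data = TensorDataset(s, a, r, s_next)
prioritized_loader = DataLoader(                       # for policy improvement / constraint
    data, batch_size=256, sampler=WeightedRandomSampler(weights, num_samples=len(weights)))
uniform_loader = DataLoader(data, batch_size=256, shuffle=True)   # for policy evaluation
```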
### Improving OPER-A by Iterative Prioritization
In Section 4.2, we demonstrate a likelihood that enhancing \(\beta(\mathbf{a}|\mathbf{s})\) to \(\beta^{\prime}(\mathbf{a}|\mathbf{s})\) leads to an improvement in the learned policy through offline RL algorithms. Then, a natural question arises: can we further boost the learned policy by improving \(\beta^{\prime}(\mathbf{a}|\mathbf{s})\)? The answer is yes. Suppose we have a sequence of behavior policies \(\beta^{(0)},\beta^{(1)},\ldots,\beta^{(K)}\) satisfying \(\beta^{(k)}(\mathbf{a}|\mathbf{s})\propto\omega(A^{(k-1)}(\mathbf{a},\mathbf{s}))\beta^{(k-1) }(\mathbf{a}|\mathbf{s})\), where \(A^{(k-1)}(\mathbf{a},\mathbf{s})\) represents the advantage for policy \(\beta^{(k-1)}(\mathbf{a}|\mathbf{s})\). We can easily justify that the behavior policies are monotonically improving by Theorem 4.1:
\[J(\beta^{(0)})\leq J(\beta^{(1)})\leq J(\beta^{(2)})\leq\cdots\leq J(\beta^{( K)}).\]
It is reasonable to anticipate, though not guaranteed, the following relationship: \(J(\pi^{(0)*})\leq J(\pi^{(1)*})\leq J(\pi^{(2)*})\leq\cdots\leq J(\pi^{(K)*})\), where \(\pi^{(k)*}\) is the optimal solution of Equation (4) when constrained to \(\beta^{(k)}\). We build such a sequence of behaviors from a fixed policy \(\beta^{(0)}=\beta\) and its dataset \(\mathcal{D}\), which relies on the following recursion:
\[\beta^{(k)}(\mathbf{a}|\mathbf{s})\propto\prod_{j=0}^{k-1}\omega(A^{(j)}(\mathbf{a},\mathbf{s }))\cdot\beta^{(0)}(\mathbf{a}|\mathbf{s}).\]
It means that a dataset \(\mathcal{D}^{(k)}\) for behavior \(\beta^{(k)}\) can be acquired by resampling the dataset \(\mathcal{D}\) with weight \(\prod_{j=0}^{k-1}\omega(A^{(j)}(\mathbf{a},\mathbf{s}))\) (normalized so the probabilities sum to 1). Then, the advantage \(A^{(k)}\) can be approximated on \(\mathcal{D}^{(k)}\) following Equation (11)-Equation (12). After all iterations, we scale the standard deviation of the priority weights to a hyperparameter \(\sigma\) to adjust the strength of data prioritization. The full algorithm for this iterative OPER is presented in Algorithm 2 in the appendix. In the experiments, OPER-A mainly refers to this improved version. One advantage of our method is that priority calculation and learning with offline algorithms are decoupled, which means the dataset weights acquired in the first stage can be saved and made public, so that offline RL algorithms can directly use the existing weights to boost their performance without extra cost.
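The recursion above amounts to repeatedly re-estimating advantages on the re-weighted data and accumulating the product of priorities; a schematic version is given below. The helper `fit_weighted_advantage` stands for a weighted variant of the value fitting in Equation (11) and is a hypothetical placeholder, and the way the final standard deviation is rescaled to \(\sigma\) is an assumption about Algorithm 2 in the appendix.

```python
import torch

def iterative_oper_a(s, r, s_next, fit_weighted_advantage, K=5, sigma=1.0):
    """Accumulate priorities over K iterations: at step k, omega(A^(k-1)) multiplies the
    running product, mirroring beta^(k) proportional to omega(A^(k-1)) * beta^(k-1)."""
    weights = torch.ones(s.shape[0])                 # start from the original behavior policy
    for _ in range(K):
        adv = fit_weighted_advantage(s, r, s_next, weights)   # advantages on re-weighted data
        w_k = adv - adv.min()                        # non-negative, monotone priority (Eq. 9)
        weights = weights * (w_k / w_k.mean())
        weights = weights / weights.mean()
    # rescale the spread of the final weights to the hyperparameter sigma (mean stays at 1)
    weights = 1.0 + (weights - weights.mean()) * sigma / (weights.std() + 1e-8)
    return weights.clamp(min=0.0)

# toy usage with a dummy advantage estimator standing in for weighted value fitting
dummy_adv = lambda s, r, s_next, w: torch.randn(s.shape[0])
w_final = iterative_oper_a(torch.randn(100, 11), torch.randn(100), torch.randn(100, 11), dummy_adv)
```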
\begin{table}
\begin{tabular}{c c c} \hline \hline & OPER-A & OPER-R \\ \hline Prerequisite & None & full trajectory \\ \hline Extra Runtime & fit value function & \(<3\) seconds \\ \hline Effect & boost performance by a large margin \\ \hline Feature & weights can be reused \\ \hline \hline \end{tabular}
\end{table}
Table 1: A summary for two algorithms.
## 5 Experiments
We start with a simple bandit experiment to illustrate the effect of OPER-A and OPER-R. Then we apply our methods to the state-of-the-art offline RL algorithms to show its effectiveness on the D4RL benchmark. Further, we conduct experiments to analyze the essential components in OPER.
### Toy Bandit Problem
We consider a bandit task where the action space is 2D continuous, \(\mathcal{A}\!=\![-1,1]^{2}\) [37]; since a bandit has no states, the state space is \(\mathcal{S}\!=\!\emptyset\). The offline dataset is shown in the first panel of Figure 2(a) (see Appendix B.1 for details). The goal of the bandit task is to learn the action mode with the highest expected reward from the offline dataset. To demonstrate the effect of OPER-A, we show that TD3+BC fails to find the optimal action, while with OPER-A it solves the problem.
We first show that the prioritized datasets are improved over the original one in Figure 2(a). The blue samples with the lowest reward are substantially reduced in the first two iterations. After iterating five times, the suboptimal red and green samples also significantly diminish. The average return of the prioritized dataset is increased to 4.9, very close to the value of the optimal actions. In the 7th iteration, suboptimal actions almost disappear. Since the reward is exactly the return in a bandit, OPER-R coincides with the 1st prioritized behavior policy of OPER-A, which raises the average return from 1.0 to 2.69.
Next, we show how offline RL algorithms can be improved by OPER-A. As Figure 2(b) shows, when trained on the original dataset, TD3+BC fails to produce the optimal policy: it is negatively affected by suboptimal actions and converges to \((0.2,0.2)\), the mean of the four modes (policy constraint) biased towards the best action (policy improvement). However, if combined with OPER-A (iteration K=5), it successfully finds the optimal mode.
### D4RL Benchmark
In this section, experiments on D4RL benchmark [8] are conducted to empirically show Offline Prioritized Experience Replay can improve popular offline RL algorithms on diverse domains.
**Experiment Setups.** As discussed in Section 4.2, behavior cloning (BC), as a special case of offline RL, can be improved by OPER. In addition, OPER is a general plug-and-play training scheme that improves a variety of the state-of-the-art (SOTA) offline RL algorithms. In our work, we choose four widely adopted algorithms as case studies: CQL, OnestepRL, IQL, and TD3+BC.
When performing OPER, priority weights are generated in the first stage and can then be reused among baselines and seeds, saving computation time. However, to assess the variance of OPER and verify its generalization ability to different algorithms, we organize the experiments by sharing priority weights among baselines but not seeds. Specifically, we take seed=1 to compute OPER-A weights, and then apply these weights and seed=1 to run TD3+BC, IQL, _etc_. We subsequently repeat
\begin{table}
\end{table}
Table 2: Experiment results of OPER. The results that have an advantage over the baselines (denoted as **vanilla**) are printed in bold type. (a) “m”, “mr”, and “me” are respectively the abbreviations for “medium”, “medium-replay”, and “medium-expert”. “V”, “A”, and “R” respectively denotes “vanilla”, “OPER-A ”, and “OPER-R ”. Standard deviation of total score over seeds is also reported.
this process with the next random seed. We implement both resampling and reweighting for OPER, and the results of these two implementations are nearly identical. We report the results of resampling in the main text and also provide the results of reweighting in the Appendix C.3. More experiment settings can be found in Appendix B.2 and Appendix B.3.
**Mujoco locomotion.** Table 2(a) reveals that OPER induces a better offline dataset, from which behavior cloning produces a behavior policy with higher performance. Further, Table 3 shows that even though the state-of-the-art algorithms have achieved a strong performance, OPER-A and OPER-R can further improve the performance of _all_ algorithms by large margins. Specifically, with OPER-A, TD3+BC achieves a total score of 734.1 from 667.7. In addition, IQL, when combined with OPER-A and OPER-R, also reaches 727.8 and 726.7 points, respectively. We observe that OPER-A generally performs better than OPER-R. This is potentially because OPER-A is improved by iterative prioritization while OPER-R simply utilizes trajectory returns. Interestingly, OPER occasionally attains a smaller standard deviation than the vanilla, mainly due to its ability to achieve higher and more stable scores in some difficult environments. Another notable observation is that although TD3+BC performs worse than IQL and CQL in their vanilla implementations, TD3+BC eventually obtains the highest performance boost with OPER-A and achieves the best performance with a score of 734.1. The reason might be that TD3+BC directly constrains the policy with a BC term, which is easier to be affected by negative samples.
**Discussions on Data Prioritizing.** In particular, we observe that on the locomotion tasks, the performance boost of OPER-A and OPER-R mainly comes from the "medium-replay" and "medium-expert" level environments. To better understand this phenomenon, we visualize trajectory return distributions of hopper on these two levels in Figure 0(b). The visualizations suggest that these tasks have a highly diverse data distribution. This is consistent with our intuition that the more diverse the data quality is, the more potential for the data to be improved through data prioritizing by quality. Visualizations of all datasets are available at Figure 5 in the appendix.
**Antmaze, Kitchen and Adroit.** In addition to the locomotion tasks, we evaluate our methods in more challenging environments. Given that IQL achieves the absolute SOTA performance in these domains and other algorithms, e.g., CQL, do not give an ideal performance in these domains, we pick IQL as a case study. We present the results in Table 2(b). Similarly, we observe that both OPER-A and OPER-R can further improve the performance of IQL on all three domains. In the most challenging Antmaze environments, OPER-A and OPER-R successfully improve the most difficult medium and large environments. For Kitchen and Adroit tasks, we have observed a similar trend of improvement.
### Ablation Studies
**Effect of the number of iterations K.** In Table 4, as the iteration progresses, the overall performance of BC and TD3+BC combined with OPER-A on the locomotion tasks continues to increase. For BC, performance declines when K = 5, which may be due to some transitions having dispro
\begin{table}
\begin{tabular}{c||c|c c c c c c} \hline \hline \(K\) & vanilla & 1 & 2 & 3 & 4 & 5 \\ \hline BC & 476.8 & 651.0 & 674.5 & 664.5 & **683.6** & 662.1 \\ \hline TD3+BC & 667.7 & 711.2 & 706.3 & 719.7 & 725.1 & **734.1** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Effect of the number of iterations \(K\) (15 seeds).
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline Dataset & \multicolumn{3}{c}{TD3+BC} & \multicolumn{3}{c}{CQL} & \multicolumn{3}{c}{IQL} & \multicolumn{3}{c}{OnestepRL} \\ \cline{2-13} & V & A & R & V & A & R & V & A & R & V & A & R \\ \hline halfCheetah-m & 48.3 & **50.0** & 48.6 & 48.2 & 48.3 & 48.1 & 47.6 & 47.5 & 47.6 & 48.4 & 48.6 & 48.4 \\ hopper-m & 57.3 & **74.1** & 59.1 & 72.1 & 72.7 & 74.9 & 64.1 & **66.0** & **66.4** & 57.2 & **64.8** & 58.2 \\ walker2d-m & 84.9 & 84.9 & 84.2 & 82.1 & 83.9 & 80.7 & 80.0 & **83.9** & 78.3 & 77.9 & **85.1** & 80.9 \\ halfCheetah-mr & 44.5 & 45.9 & 44.6 & 45.2 & 45.4 & **46.1** & 43.4 & 43.0 & **44.0** & 37.5 & **42.9** & 39.7 \\ hopper-r & 58.0 & **88.7** & **77.4** & 96.1 & 94.2 & 92.3 & 88.4 & **95.3** & **99.9** & 80.1 & 82.6 & 90.6 \\ walker2d-mr & 72.9 & **88.2** & 82.7 & 82.3 & **85.9** & 81.7 & 69.1 & **82.7** & **79.1** & 58.2 & **72.4** & 63.7 \\ halfCheetah-me & 92.4 & 83.3 & 93.9 & 62.1 & **70.7** & **84.3** & 82.9 & **92.7** & **93.5** & 94.1 & **94.2** & **93.9** \\ hopper-me & 99.2 & **107.3** & **106.7** & 82.9 & **105.1** & **97.2** & 97.2 & **105.1** & **107.2** & 80.5 & **99.4** & **98.8** \\ walker2d-me & 110.2 & 111.7 & 110.1 & 110.0 & 107.9 & 109.6 & 109.4 & **111.6** & 110.7 & 111.1 & **112.5** & 111.4 \\ \hline total & 667.7 & **734.1** & **707.3** & 681.0 & **714.1** & **714.9** & 682.1 & **727.8** & **726.7** & 655.0 & **702.5** & **685.6** \\ SD(total) & 18.4 & 10.4 & 7.9 & 15.3 & 6.2 & 14.9 & 22.3 & 11.2 & 8.9 & 21.7 & 6.2 & 16.7 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Averaged normalized scores on MuJoCo locomotion v2 tasks. We report the average and the standard deviation (SD) of the total score over 15 seeds. Standard deviation of individual games on TD3+BC can be found at Appendix C.1.
portionately large or small weights, which affects the gradient descent's convergence. However, the improvement is significant even with just one iteration compared to the original algorithm. We typically choose a value of K between 3 and 5 for the best performance.
**Comparison with PER.** OPER establishes a connection between a class of priority functions and improved behavior policies. In this section, we emphasize the advantages of this priority function by comparing OPER with PER [29], which primarily aims to accelerate value function fitting by dynamically employing the **absolute** TD-error as the priority. For a sample with a large negative TD error (_i.e._, advantage in Equation (12)), PER assigns a high priority while OPER discourages it. Specifically, PER considers the sample to contain more information for value fitting, while OPER considers the sample's action not to be good behavior. In Figure 3, every curve is an average over 9 mujoco locomotion tasks. The results show that PER slightly harms TD3+BC in offline Mujoco domains. In contrast, motivated by the goal of inducing a better behavior policy, OPER substantially enhances TD3+BC's performance.
**Comparison with longer training.** OPER-A requires additional computational cost to calculate priority weights. To provide a fair comparison, we ran TD3+BC for twice as long. We found that TD3+BC converges rapidly, and the results at 2M steps were similar to those at 1M steps (677.7 vs. 672.7). This indicates that the superior performance of OPER-A is not due to extra computation, but rather stems from the improved policy constraint.
**Where to apply prioritized transitions?** A basic recipe of offline RL algorithms comprises policy evaluation, policy improvement, and policy constraint. Since OPER focuses on producing a better behavior policy for behavior constraint, it seems natural to solely apply prioritized data to the policy constraint term. However, this does not always improve performance. For instance, as Table 5 shows, on TD3+BC with OPER-R, only prioritizing data for constraint results in a dramatic drop. We observed that it suffers from extrapolation error and results in value overestimation in walker2d-m, hopper-mr, and walker2d-me. We suspect it is because, when only prioritizing the constraint term, it imposes a weaker constraint on low-priority actions, while the policy improvement remains unchanged. As a result, during Bellman's update, the extrapolation error of low-priority samples accumulates. As a simple fix to this, if we clip the priority weights less than 1 to 1, and leave weights greater than 1 unchanged, the results will be much better (608.7 v.s. 672.9). However, clipping gives a biased estimation to the weights and hinders the performance of OPER-A. A more straightforward and effective solution is to apply data prioritization to both the policy improvement and constraint term, which is crucial to achieving an ideal score.
As for applying data prioritization to policy evaluation, we empirically found that it usually degrades performance severely, except in a few cases. We hypothesize that data prioritization changes the state-action distribution of the dataset and increases the degree of off-policyness between the current policy and the dataset. Although this does not harm policy learning, it can cause instability in policy evaluation when combined with bootstrapping and function approximation [33; 35; 34]. This also explains why OPER-A is severely impaired by data prioritization for policy evaluation, whereas OPER-R is not. A likely underlying cause is that OPER-A evaluates the policy on more off-policy data obtained through multiple iterations.
Figure 3: Compare OPER and PER on mujoco locomotion based on TD3+BC.
\begin{table}
\begin{tabular}{c|c|c c c c} \hline \hline & vanilla & & +CNT & +CNT+IPV & all \\ \hline TD3+BC & 667.7 & OPER-A & 723.7 & **734.1** & **731.7** \\ & & OPER-R & 608.7 & **698.6** & **707.3** \\ \hline CQL & 681.0 & OPER-A & 674.9 & **714.1** & 652.8 \\ & & OPER-R & 672.3 & **714.9** & **708.3** \\ \hline IQL & 682.1 & OPER-A & - & **727.8** & 674.3 \\ & & OPER-R & - & 706.5 & **726.7** \\ \hline OnestepRL & 655.0 & OPER-A & - & **702.5** & 658.1 \\ & & OPER-R & - & **685.6** & **681.1** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Effect of prioritizing data for different terms. Results are the total scores on MuJoCo tasks with 15 seeds. “+CNT” and “+IPV” denote prioritizing data for the policy constraint and policy improvement terms, respectively. “all” denotes prioritizing data for all three terms. Very low scores are marked in red.
## Conclusion and Limitation
This paper proposes a plug-and-play component OPER for offline RL algorithms by prioritizing data according to action quality. Furthermore, we show that a better policy constraint is likely induced when the proposed OPER is used. We develop two practical implementations, OPER-A and OPER-R, to compute priority. Extensive experiments demonstrate that both OPER-A and OPER-R can effectively boost the performance of popular RL algorithms. The iterative computation of OPER-A priority weights adds an extra computational burden to offline RL algorithms. In this paper, OPER-A mitigates this issue by sharing weights across different algorithms. Exploring more efficient methods for obtaining priority weights remains an avenue for future work. |
2301.04973 | Desynchronizing two oscillators while stimulating and observing only one | Synchronization of two or more self-sustained oscillators is a well-known and
studied phenomenon, appearing both in natural and designed systems. In some
cases, the synchronized state is undesired, and the aim is to destroy synchrony
by external intervention. In this paper, we focus on desynchronizing two
self-sustained oscillators by short pulses delivered to the system in a
phase-specific manner. We analyze a non-trivial case when we cannot access both
oscillators but stimulate only one. The following restriction is that we can
monitor only one unit, be it a stimulated or non-stimulated one. First, we use
a system of two coupled Rayleigh oscillators to demonstrate how a loss of
synchrony can be induced by stimulating a unit once per period at a specific
phase and detected by observing consecutive inter-pulse durations. Next, we
exploit the phase approximation to develop a rigorous theory formulating the
problem in terms of a map. We derive exact expressions for the phase --
isostable coordinates of this coupled system and show a relation between the
phase and isostable response curves to the phase response curve of the
uncoupled oscillator. Finally, we demonstrate how to obtain phase response
information from the system using time series and discuss the differences
between observing the stimulated and unstimulated oscillator. | Erik T. K. Mau, Michael Rosenblum | 2023-01-12T12:29:15Z | http://arxiv.org/abs/2301.04973v3 | # Desynchronizing two oscillators while stimulating and observing only one
###### Abstract
Synchronization of two or more self-sustained oscillators is a well-known and studied phenomenon, appearing both in natural and designed systems. In some cases, the synchronized state is undesired, and the aim is to destroy synchrony by external intervention. In this paper, we focus on desynchronizing two self-sustained oscillators by short pulses delivered to the system in a phase-specific manner. We analyze a non-trivial case when we cannot access both oscillators but stimulate only one. The following restriction is that we can monitor only one unit, be it a stimulated or non-stimulated one. First, we use a system of two coupled Rayleigh oscillators to demonstrate how a loss of synchrony can be induced by stimulating a unit once per period at a specific phase and detected by observing consecutive inter-pulse durations. Next, we exploit the phase approximation to develop a rigorous theory formulating the problem in terms of a map. We derive exact expressions for the phase - isostable coordinates of this coupled system and show a relation between the phase and isostable response curves to the phase response curve of the uncoupled oscillator. Finally, we demonstrate how to obtain phase response information from the system using time series and discuss the differences between observing the stimulated and unstimulated oscillator.
control of synchrony, phase response, phase reduction
## I Introduction
Synchronization of oscillatory sources can be beneficial or harmful. Examples of the desired synchrony are power grids' functioning [1; 2; 3; 4] and atrial pacemaker cells' coordinated activity [5; 6]. On the contrary, Parkinson's disease and epilepsy are often related to an adverse effect of synchrony in large neuronal populations [7; 8; 9; 10; 11]. Numerous model studies suggested various techniques for the control of synchrony to cope with this adverse effect [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. These studies exploited models of (infinitely) many or several [25] mean-field coupled limit-cycle oscillators and assumed that the control input affects the whole population or at least its significant part(s). The feedback techniques relied on observing the collective dynamics. A general approach called synchronization engineering [26; 27] also implies access to all network units.
Here, we consider a particular control problem and propose a method to desynchronize two limit-cycle oscillators. Our study is motivated by a neuroscience problem formulated by Azodi-Avval and Gharabagi [28], who modeled the effect of phase-specific neuromodulation by deep brain stimulation on the synchronized activity of two brain areas. Treating these areas as macroscopical oscillators, they assumed that measurements from both oscillators were available and exploited the technique from
Ref. [29] to determine the phase response curve (PRC) for one of the units. Knowledge of the PRC allows stimulation at the most sensitive phase and thus provides a way to efficient desynchronization; however, the PRC obtained from observation of two interacting units generally differs from the phase response to external stimulation. As another relevant and motivating application, we mention studies of circadian rhythms using the so-called forced desynchrony protocol [30]. For example, de la Iglesia et al. [31] exposed rats to an artificial light-dark rhythm with a period of 22 hours and found that the rats' activity pattern split into the entrained rhythm and another one with a period significantly larger than 24 hours. This splitting may indicate an enforced desynchronization of individual circadian oscillators.
We elaborate on the idea by Azodi-Avval and Gharabaghi [28] and suggest a minimal setup where we achieve desynchronization by observing and perturbing _only one unit_. We consider two versions of the approach, where we monitor either the stimulated oscillator or the other. Having in mind a possible neuroscience application, we exploit a pulsatile perturbation delivered approximately once per oscillatory cycle. We remark that models of two coupled phase oscillators with open-loop pulsatile stimulation have been studied in Refs. [32; 33; 34]. We also mention that Montaseri et al. [35; 36] used a feedback controller design inspired by the role of astrocytes in neural information processing to desynchronize two oscillators. However, Refs. [35; 36] assumed that both systems could be observed and stimulated.
Finally, we recall that Pyragas et al. [37] and Tukhlina et al. [38] considered synchrony suppression in a model of two interacting oscillator populations, one used for sensing and another for stimulation. This model can be treated as two coupled macroscopic oscillators. However, desynchronization on the level of subpopulations means quenching of macroscopic oscillators, while our study aims to keep systems oscillating but destroy their synchrony.
This article is structured as follows: First, we illustrate the problem formulation and the detection of stimulation-induced desynchronization using two coupled Rayleigh oscillators in Section II. In Section III, we develop a theoretical framework for two weakly coupled oscillators, describing phase-specific stimulation of the system in terms of a dynamical map. Our theoretical analysis exploits the phase - isostable representation of oscillatory dynamics [39]. Section IV shows a relation between the phase and isostable response curves of the synchronized oscillatory dynamic and the phase response curve of an uncoupled oscillator and thus complements the theoretical analysis. Here we also discuss possible approaches to obtain the phase response curve from time series data of only one oscillator. Finally, Section V discusses a strategy to optimize the simulation by minimizing the total intervention in the system, as well as open problems and limitations of our approach.
## II Illustration of the approach
The general theory says that phase dynamics of two weakly coupled limit-cycle oscillators can be illustrated by the motion of an overdamped particle in an inclined potential, see, e.g., [40] and Fig. 1. The particle at rest in a potential well corresponds to the synchronous state with the phase difference \(\varphi_{1}-\varphi_{2}=\mathrm{const}\). Thus, the desynchronization problem reduces to kicking the particle down the potential, inducing phase slips, i.e., relatively rapid jumps where the phase difference changes by \(\pm 2\pi\). (Certainly, one can kick the particle to move it up, but this action requires stronger stimulation and, therefore, is less efficient.) For that purpose, we consider relatively rare pulses applied approximately once per oscillation period. Suppose each pulse shifts the particle toward the local maximum. Between two consecutive stimuli, the particle tends to return to equilibrium. This consideration shows that there shall be a critical value of the pulse strength such that the phase shifts accumulate and the particle eventually moves from its stable equilibrium position over the maximum to the following equilibrium position. This way, the phase difference changes by \(2\pi\) (phase slip). The continuing stimulation evokes the next phase slip, and so on.
We demonstrate the approach exploiting the system of two coupled Rayleigh oscillators perturbed by a pulse stimulation:
\[\ddot{x}_{1}-\mu(1-\dot{x}_{1}^{2})\dot{x}_{1}+\omega_{1}^{2}x_{ 1} =\varepsilon(x_{2}-x_{1})+p(t)\;, \tag{1}\] \[\ddot{x}_{2}-\mu(1-\dot{x}_{2}^{2})\dot{x}_{2}+\omega_{2}^{2}x_{ 2} =\varepsilon(x_{1}-x_{2})\;. \tag{2}\]
Figure 1: The dynamics of the phase difference between two weakly coupled oscillators can be illustrated by the motion of an overdamped particle in an inclined potential, plotted here for the case \(\omega_{1}<\omega_{2}\). A synchronous state corresponds to a particle trapped in a minimum of the potential. Stimuli applied at a proper phase can shift the particle from the equilibrium position and eventually move it to the next potential well, decreasing the phase difference \(\varphi_{1}-\varphi_{2}\) by \(2\pi\), i.e., inducing a phase slip. We aim to design a stimulation that permanently causes phase slips, thus destroying synchrony.
Parameters are \(\mu=2\), \(\omega_{1}=0.98\), \(\omega_{2}=1.02\), \(\varepsilon=0.2\). The perturbation \(p(t)\) is a pulse train, \(p(t)=\sum_{n}\mathcal{P}(t_{n})\), where \(\mathcal{P}(t_{n})\) is a finite-length pulse applied at the instant \(t_{n}\). We note that we label the stimulated unit as the first for definiteness. Next, without loss of generality, we choose \(\omega_{1}<\omega_{2}\); to treat the opposite choice \(\omega_{1}>\omega_{2}\), one has to choose another stimulation phase, as discussed below.
We now discuss the determination of the stimulation times \(t_{n}\). Suppose we observe \(x_{1}(t)\). We define threshold-crossing events \(t_{n}\) as the instants when \(x_{1}(t_{n})=x_{0}\) and \(\dot{x}_{1}(t_{n})\) is either always positive or always negative; here, \(x_{0}\) is the threshold value. (The proper choice of \(x_{0}\) and condition for \(\dot{x}_{1}\) is discussed below in Section IV.) We apply pulses at \(t_{n}\) with the following additional restriction. Suppose for definiteness that we choose the condition \(\dot{x}_{1}>0\). If the pulse applied at \(t_{n}\) reduces \(x_{1}(t)\), then after a very short time interval \(\Delta t\ll T_{0}\), where \(T_{0}\) is the period of synchronous oscillation, \(x_{1}(t)\) again achieves the threshold value \(x_{0}\). We neglect this threshold crossing and wait till the next one so that the intervals \(\tau_{n}=t_{n+1}-t_{n}\) are of the order of \(T_{0}\). We denote the return times \(\tau_{n}\) as partial periods of the first oscillator. The formulated condition can be easily explained in terms of the oscillator's phase. Indeed, the threshold condition \(x_{1}(t_{n})=x_{0}\) corresponds to achieving a certain phase \(\varphi_{0}\). Stimulation can decrease \(\varphi_{0}\); thus, for the subsequent stimulation, we wait until the oscillator's phase becomes \(\varphi_{0}+2\pi\). A similar consideration applies when we monitor \(x_{2}\).
We illustrate the effect of stimulation by plotting the partial periods \(\tau_{n}\) vs. \(t_{n}\) in Fig. 2a, for \(x_{0}=1\), \(\dot{x}_{1}>0\). Panel (b) shows the protophase difference [41]. We use rectangular pulses of duration \(\Delta=0.01\) and amplitude \(I\). Inspecting the plot, we conclude that oscillation of \(\tau_{n}\) indicates phase slips and, hence, a desynchronizing action.[42]
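A minimal event-driven integration of Eqs. (1)-(2) with threshold-triggered rectangular pulses can be sketched as follows (Python/SciPy). The initial condition, the integration tolerances, and the fixed refractory time used to skip the pulse-induced re-crossing are our own choices, not those used to produce the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, w1, w2, eps = 2.0, 0.98, 1.02, 0.2
I, delta, x0 = 4.0, 0.01, 1.0              # pulse amplitude, pulse width, threshold on x1
t_end, t_dead = 500.0, 2.0                 # total time; refractory time after a pulse (assumed)

def rhs(t, y, p):
    x1, v1, x2, v2 = y
    return [v1, mu*(1 - v1**2)*v1 - w1**2*x1 + eps*(x2 - x1) + p,
            v2, mu*(1 - v2**2)*v2 - w2**2*x2 + eps*(x1 - x2)]

def crossing(t, y, p):                     # event: x1 = x0 with dx1/dt > 0
    return y[0] - x0
crossing.direction, crossing.terminal = 1, True

y, t, kicks = np.array([0.1, 0.0, 0.0, 0.1]), 0.0, []
while t < t_end:
    # free run until the next upward threshold crossing of x1
    sol = solve_ivp(rhs, (t, t_end), y, args=(0.0,), events=crossing,
                    max_step=0.05, rtol=1e-8, atol=1e-10)
    if sol.t_events[0].size == 0:
        break
    t, y = sol.t_events[0][0], sol.y_events[0][0]
    kicks.append(t)
    # rectangular pulse of amplitude I, then a short dead time so that the
    # re-crossing caused by the pulse itself is ignored
    for span, amp in (((t, t + delta), I), ((t + delta, t + delta + t_dead), 0.0)):
        y = solve_ivp(rhs, span, y, args=(amp,), rtol=1e-8, atol=1e-10).y[:, -1]
    t += delta + t_dead

tau = np.diff(kicks)                       # partial periods of the stimulated oscillator
print(np.round(tau[-5:], 3))               # oscillating tau_n indicates induced phase slips
```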
Figure 3 depicts the case when we observe the second oscillator. Thus, we define the partial periods, now for the second oscillator, as \(\tau_{n}=t_{n+1}-t_{n}\) via the events \(t_{k}\) when \(x_{2}(t_{n})\) crosses a certain threshold, e.g., in the positive direction. Omitting the first 50 intervals, we plot \(\tau_{n}\), \(\tau_{min}=\min(\tau_{n})\), and \(\tau_{max}=\max(\tau_{n})\), \(n>50\), for different values of the pulse amplitude \(I\). We used \(x_{0}=-1\); other parameters are the same as in Fig. 2. We see that sufficiently strong stimulation results in oscillatory behavior of \(\tau_{n}\), which means the appearance of phase slips, and, hence, desynchronization. Thus, we can desynchronize the system by stimulating only one of two synchronous oscillators while observing any of these two. We support this conclusion with theoretical analysis in the next Section.
Figure 3: Illustration of the case when the first Rayleigh oscillator is stimulated by a pulse whenever the phase of the second one attains a specific fixed value. In (a), we plot values of the second unit periods \(\tau_{n}\) vs. the stimulation amplitude \(I\) (red dots); for better visibility, we also show the minimal and maximal values of \(\tau_{n}\) for each \(I\) (blue circles). For \(I\lesssim 2.55\) we have \(\tau_{min}=\tau_{max}\), what means that the system remains synchronized. For \(I\gtrsim 2.55\), the partial periods \(\tau_{n}\) oscillate, indicating phase slips and loss of synchrony. The loss of synchrony is confirmed in panel (b), where we demonstrate the difference \(\Omega\) of oscillator frequencies that is non-zero for \(I\gtrsim 2.55\).
Figure 2: (a) Partial periods of the stimulated Rayleigh oscillator vs. stimulation times, for weak, \(I=2\), and strong, \(I=4\) stimulation (blue diamonds, left vertical axis, and red circles, right vertical axis, respectively). In both cases, the stimulation changes the period. However, when the stimulation amplitude is below a certain threshold, the two coupled oscillators remain synchronized, as seen from the (proto)phase difference depicted in (b). If the stimulation is sufficiently strong, it induces phase slips; the occurrence of phase slips can be traced from the oscillations of \(\tau_{n}\).
## III Desynchronizing by pulse stimulation: theory
It is well-known that, for sufficiently weak coupling, phase dynamics of two interacting units obey the Kuramoto-Daido equations:
\[\dot{\varphi}_{1} =\omega_{1}+C_{1}(\varphi_{1}-\varphi_{2})+Z(\varphi_{1})p(t)\;, \tag{3}\] \[\dot{\varphi}_{2} =\omega_{2}+C_{2}(\varphi_{2}-\varphi_{1})\,, \tag{4}\]
where \(C_{1,2}\) are coupling functions. Here, we assume for definiteness that \(\omega_{1}<\omega_{2}\) and that stimulation \(p(t)\) affects the first oscillator. The last term in Eq. (3) describes the stimulation, where \(p(t)\) is the external force, and the phase response curve (PRC) of the uncoupled oscillator \(Z(\varphi_{1})\) quantifies the sensitivity of the unit to perturbation. We will consider separately two cases where we observe either the first or the second oscillator. Therefore, introducing the phase difference \(\eta=\varphi_{1}-\varphi_{2}\) we re-write Eqs. (3,4) as equations for \(\eta,\varphi_{i}\), where either \(i=1\) or \(i=2\):
\[\dot{\eta} =f(\eta)+Z(\varphi_{i}+\delta_{i2}\eta)p(t)\;, \tag{5}\] \[\dot{\varphi}_{i} =g_{i}(\eta)+\delta_{i1}Z(\varphi_{i}+\delta_{i2}\eta)p(t)\,. \tag{6}\]
Here, \(\delta_{i2}\) is the Kronecker symbol, \(g_{1}(\eta)=\omega_{1}+C_{1}(\eta)\), \(g_{2}(\eta)=\omega_{2}+C_{2}(-\eta)\), and \(f(\eta)=g_{1}(\eta)-g_{2}(\eta)\). Note that PRC \(Z\) remains the function of \(\varphi_{1}=\varphi_{i}+\delta_{i2}\eta\).
Suppose there are no perturbations, \(p(t)=0\). Then, Eq. (5) reduces to \(\dot{\eta}=f(\eta)\). The dynamics of this equation are well-studied. Depending on the parameters, it has either asynchronous solution \(\dot{\eta}<0\) or synchronous, phase-locked solution \(\dot{\eta}=0\). In the latter case, one or several pairs of stable and unstable fixed points exist. We present the theory for the case when there exists only one stable fixed point \(\eta^{*}=\text{const}\), \(f^{\prime}(\eta^{*})<0\), and discuss a possible extension to the general case in Section V. Asynchronous solutions correspond to quasiperiodic trajectories on the two-torus spanned by \(\varphi_{1},\varphi_{2}\). In contrast, the existence of stable and unstable fixed points in Eq. (5) means the appearance of stable and unstable limit cycles on the torus.
Consider the stable limit cycle on the two-torus. The frequency of this synchronous solution is \(\omega=g_{i}(\eta^{*})\). Next, we define the phase on the limit cycle and in its vicinity. We emphasize that the phase of the uncoupled oscillator \(\varphi_{i}\) is not the true asymptotic phase of the synchronous solution of the coupled system because its time derivative is not a constant but depends on \(\eta\), see Eq. (6). Thus, in the context of the coupled system, we treat \(\varphi_{i}\) as the protophase (angle variable). Using the ansatz \(\Phi(\varphi_{i},\eta)=\varphi_{i}+\delta_{i2}\eta^{*}+F_{i}(\eta)\) with an additional condition \(F_{i}(\eta^{*})=0\), we require \(\dot{\Phi}=\omega\) and obtain
\[\dot{\Phi}(\varphi_{i},\eta)=\dot{\varphi}_{i}+F^{\prime}_{i}(\eta)\dot{\eta} =g_{i}(\eta)+F^{\prime}_{i}(\eta)f(\eta)=\omega\;.\]
Solving this equation for \(F^{\prime}_{i}\) and integrating, we obtain[43]:
\[\Phi(\varphi_{i},\eta)=\varphi_{i}+\delta_{i2}\eta^{*}+\int_{\eta^{*}}^{\eta }\frac{\omega-g_{i}(s)}{f(s)}\,\mathrm{d}s\;. \tag{7}\]
Using \(f(\eta)=g_{1}(\eta)-g_{2}(\eta)\), it is easy to check that \(\Phi(\varphi_{1},\eta)=\Phi(\varphi_{2},\eta)\), i.e., the definition of phase does not depend on the chosen protophase. On the limit cycle (\(\eta=\eta^{*}\)), we have \(\Phi=\varphi_{1}=\varphi_{2}+\eta^{*}\), meaning phase and protophase coincide up to a constant shift. We remind that by construction, \(\Phi(\varphi_{i},\eta^{*})=\varphi_{1}\), i.e. the protophase \(\varphi_{1}\) coincides with \(\Phi\) on the limit cycle.
Before proceeding with a separate analysis of the cases \(i=1\) (the first oscillator is observed) and \(i=2\) (the second unit is observed), we conclude the theoretical consideration by the following remark. For the attractive cycle on the torus, \(\eta-\eta^{*}=\psi\) describes the deviation from the stable solution; hence, the variable \(\eta\) plays the role of the amplitude. In a small vicinity of the limit cycle[44], we then write \(\dot{\psi}=f^{\prime}(\eta^{*})\psi=\kappa\psi\) and interpret \(\psi\) as the isostable variable[39]. We return to the phase - isostable representation of the synchronized dynamics in Section IV.
### Stimulating and observing the same oscillator
Here, we assume we observe the first unit and compute the intervals between the stimuli. We recall that we stimulate each time the phase of the first oscillator attains some fixed value \(\varphi_{0}\). Let the value of the variable \(\eta\) immediately before the \(n\)-th stimulus be \(\eta_{n}\). We assume an instantaneous phase shift due to the \(\delta\)-kick, i.e., \(\mathcal{P}(t_{n})=q\delta(t-t_{n})\), so that \(\varphi_{1}=\varphi_{0}\rightarrow\varphi_{0}+A\) and \(\eta\rightarrow\eta+A\), where the instantaneous phase shift \(A=qZ(\varphi_{0})\) and \(q\) is the amplitude of the \(\delta\)-pulse. As before, we denote the time between the \(n\)-th and \(n+1\)-th kick by \(\tau_{n}\). Between the stimuli, we deal with autonomous dynamics. Hence, \(\tau_{n}\) is obtained by
\[\tau_{n}=\int_{\eta_{n}+A}^{\eta_{n+1}}\frac{\mathrm{d}s}{f(s)}\,, \tag{8}\]
and the phase \(\Phi\) within this time interval grows by \(\omega\tau_{n}\). We thus write
\[\Phi(\varphi_{0}+A,\eta_{n}+A)+\omega\tau_{n}=\Phi(\varphi_{0}+2\pi,\eta_{n+1} )\;. \tag{9}\]
Exploiting the definition of phase from Eq. (7), we obtain the equation
\[A+\int_{\eta^{*}}^{\eta_{n}+A}\frac{\omega-g_{1}(s)}{f(s)}\,\mathrm{d}s+\omega\tau_{n}=2\pi+\int_{\eta^{*}}^{\eta_{n+1}}\frac{\omega-g_{1}(s)}{f(s)}\,\mathrm{d}s\,. \tag{10}\]
By inserting the expression of \(\tau_{n}\) from Eq. (8) into this formula, we finally obtain
\[2\pi-A-\int_{\eta_{n}+A}^{\eta_{n+1}}\frac{g_{1}(s)}{f(s)}\,\mathrm{d}s=0\,. \tag{11}\]
This equation defines a one-dimensional map \(\eta_{n+1}=\mathcal{F}(\eta_{n})\) with the parameter \(A\). We iterate this map, starting from \(\eta_{0}=\eta^{*}\) and solving Eq. (11) numerically[45], for
a fixed kick strength \(A\). Using the obtained values of \(\eta_{n}\), we integrate numerically Eq. (8) and obtain \(\tau_{n}\). We remind that the sequence of intervals \(\tau_{n}\) can easily be measured in an experiment.
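The iteration can be sketched numerically as below. Instead of root-finding on Eq. (11), the sketch integrates the equivalent autonomous flow \(\dot{\eta}=f(\eta)\), \(\dot{\varphi}_{1}=g_{1}(\eta)\) between kicks until \(\varphi_{1}\) has gained \(2\pi\), which yields \(\eta_{n+1}\) and \(\tau_{n}\) simultaneously. The plain sine coupling functions and the kick strength \(A\) are illustrative choices of ours, not the example studied later in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

w1, w2, eps = 0.98, 1.02, 0.05                 # illustrative detunings and sine coupling
g1 = lambda e: w1 - eps*np.sin(e)              # g_1(eta)
f  = lambda e: (w1 - eps*np.sin(e)) - (w2 + eps*np.sin(e))   # f(eta) = g_1 - g_2

A, phi0 = -0.12, 0.0                           # kick-induced shift A = q Z(phi_0) (assumed)
eta = np.arcsin((w1 - w2)/(2*eps))             # stable phase difference eta*
taus = []

def flow(t, y):                                # y = (phi_1, eta), autonomous between kicks
    return [g1(y[1]), f(y[1])]

def trigger(t, y):                             # next kick once phi_1 reaches phi_0 + 2*pi
    return y[0] - (phi0 + 2*np.pi)
trigger.terminal, trigger.direction = True, 1

for n in range(200):
    y0 = [phi0 + A, eta + A]                   # instantaneous kick shifts phi_1 and eta by A
    sol = solve_ivp(flow, (0.0, 100.0), y0, events=trigger,
                    rtol=1e-10, atol=1e-12, max_step=0.1)
    taus.append(sol.t_events[0][0])            # tau_n, cf. Eq. (8)
    eta = sol.y_events[0][0][1]                # eta_{n+1}, i.e. the map F applied to eta_n

# convergence of tau_n signals a persisting locked state; an oscillating
# pattern would signal repeated phase slips
print(np.round(taus[-5:], 4))
```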
In Section II, we have demonstrated that depending on the stimulation strength, the sequence \(\tau_{n}\) either saturates or oscillates, see Fig. 2. The former case means that the map \(\eta_{n+1}=\mathcal{F}(\eta_{n})\) has a fixed point \(\hat{\eta}(A)\) with an obvious condition \(\hat{\eta}(0)=\eta^{*}\). We denote the corresponding interval \(\hat{\tau}(A)\), where \(\hat{\tau}(0)=2\pi/\omega\).
For small \(A\), both \(\eta_{n}+A\) and \(\eta_{n+1}\) are close to \(\eta^{*}\), and we can write the first-order approximation of Eq. (10). For this purpose, we use \(\omega=g_{1}(\eta^{*})\) and compute \(\lim_{s\to\eta^{*}}(\omega-g_{1}(s))/f(s)\) using L'Hôpital's rule. We obtain:
\[A-\frac{g_{1}^{\prime}(\eta^{*})}{f^{\prime}(\eta^{*})}(\eta_{n}+A-\eta^{*})+ \omega\tau_{n}=2\pi-\frac{g_{1}^{\prime}(\eta^{*})}{f^{\prime}(\eta^{*})}( \eta_{n+1}-\eta^{*})\;. \tag{12}\]
In the following, we define \(\gamma=1-g_{1}^{\prime}(\eta^{*})/f^{\prime}(\eta^{*})\). The approximation (12) yields the intervals \(\tau_{n}\) in the vicinity of \(\eta^{*}\), i.e., for small kick strength \(A\) as
\[\tau_{n}=\frac{1}{\omega}[2\pi-\gamma A+(\gamma-1)(\eta_{n+1}-\eta_{n})]\;. \tag{13}\]
Imposing the fixed point condition \(\eta_{n}=\eta_{n+1}\) and inserting \(A=qZ(\varphi_{0})\) we obtain an expression for \(\hat{\tau}\) as
\[\hat{\tau}=\frac{1}{\omega}(2\pi-\gamma qZ(\varphi_{0}))\;. \tag{14}\]
We remark that the direction of convergence to that fixed point depends on the sign of \(\gamma-1\). There may be a \(\tau_{n}\) in the transient that is larger or smaller than both \(\hat{\tau}\) and \(2\pi/\omega\).
### Stimulating the first oscillator while observing the second one
Now, we use the events \(\varphi_{2}=\varphi_{0}\) as a trigger for stimulation. Again, we aim to describe the dynamics via a one-dimensional map \(\eta_{n+1}=\mathcal{F}(\eta_{n})\). The effect of the kick is now \(\varphi_{1}\to\varphi_{1}+qZ(\varphi_{1})=\varphi_{1}+qZ(\varphi_{0}+\eta)\) and, hence, \(\eta\to\eta+qZ(\varphi_{0}+\eta)\). The evoked shift of \(\eta\) depends on \(\eta\) itself and is not constant as in the previous case. Thus, we cannot combine the kick action \(q\), the trigger phase \(\varphi_{0}\), and the response \(Z(\varphi_{0})\) into a constant phase shift, but have to treat it as a function \(qZ(\varphi_{0}+\eta)\) evaluated at \(\eta_{n}\). For convenience, we denote \(\tilde{Z}(\eta):=qZ(\varphi_{0}+\eta)\). Accordingly, the interval \(\tau_{n}\) between two kicks is
\[\tau_{n}=\int_{\eta_{n}+\tilde{Z}(\eta_{n})}^{\eta_{n+1}}\frac{\mathrm{d}s}{ f(s)}\,. \tag{15}\]
Proceeding as in the previous case, we write, similarly to Eq. (9):
\[\Phi(\varphi_{0},\eta_{n}+\tilde{Z}(\eta_{n}))+\omega\tau_{n}=\Phi(\varphi_{ 0}+2\pi,\eta_{n+1})\;. \tag{16}\]
Finally, we obtain the equation
\[2\pi-\int_{\eta_{n}+\tilde{Z}(\eta_{n})}^{\eta_{n+1}}\frac{g_{2}(s)}{f(s)}\, \mathrm{d}s=0 \tag{17}\]
that defines the map \(\eta_{n+1}=\mathcal{F}(\eta_{n})\) depending on function \(\tilde{Z}\).
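The numerical iteration mirrors the previous sketch, with two changes: the trigger is now a crossing of \(\varphi_{2}\), and the kick shifts \(\eta\) by \(qZ(\varphi_{0}+\eta_{n})\), which depends on \(\eta_{n}\) itself. Again, the sine coupling, the assumed PRC \(Z(\varphi)=\sin\varphi\), and the pulse action are illustrative choices of ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

w1, w2, eps, q, phi0 = 0.98, 1.02, 0.05, 0.3, 0.0   # illustrative parameters (assumed)
Z  = lambda p: np.sin(p)                            # assumed PRC of the uncoupled first unit
g2 = lambda e: w2 + eps*np.sin(e)
f  = lambda e: (w1 - eps*np.sin(e)) - g2(e)

eta, taus = np.arcsin((w1 - w2)/(2*eps)), []        # start at the stable phase difference

def flow(t, y):                                     # y = (phi_2, eta)
    return [g2(y[1]), f(y[1])]

def trigger(t, y):                                  # next kick once phi_2 has gained 2*pi
    return y[0] - (phi0 + 2*np.pi)
trigger.terminal, trigger.direction = True, 1

for n in range(200):
    eta += q*Z(phi0 + eta)                          # kick-induced shift now depends on eta_n
    sol = solve_ivp(flow, (0.0, 100.0), [phi0, eta], events=trigger,
                    rtol=1e-10, atol=1e-12, max_step=0.1)
    taus.append(sol.t_events[0][0])
    eta = sol.y_events[0][0][1]

print(np.round(taus[-5:], 4))                       # inspect for convergence vs. oscillation
```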
Similarly to the previous case, we find an approximate expression for \(\tau_{n}\) in the limit of weak kicks leaving the phase difference close to \(\eta^{*}\). We approximate Eq. (16) by
\[-\frac{g_{2}^{\prime}(\eta^{*})}{f^{\prime}(\eta^{*})}(\eta_{n}+ \tilde{Z}(\eta_{n})-\eta^{*})+\omega\tau_{n}=2\pi-\frac{g_{2}^{\prime}(\eta^{* })}{f^{\prime}(\eta^{*})}(\eta_{n+1}-\eta^{*})\;. \tag{18}\]
Note that \(-g_{2}^{\prime}(\eta^{*})/f^{\prime}(\eta^{*})\) equals the above defined constant \(\gamma\) used in the previous case of monitoring the first oscillator. This can be checked by inserting the original coupling functions \(C_{1}\) and \(C_{2}\) into \(f,g_{1},g_{2}\). For \(\tau_{n}\) we obtain
\[\tau_{n}=\frac{1}{\omega}(2\pi-\gamma\tilde{Z}(\eta_{n})+\gamma(\eta_{n+1}- \eta_{n}))\,. \tag{19}\]
In the limit of small \(q\) we conclude \(\tilde{Z}(\eta_{n})=qZ(\phi_{0}+\eta_{n})\approx qZ(\phi_{0}+\eta^{*})\). Thus for the fixed point \(\hat{\tau}\), we obtain a result similar to that of the first case:
\[\hat{\tau}=\frac{1}{\omega}(2\pi-\gamma Z(\phi_{0}+\eta^{*})q)\,. \tag{20}\]
Compared to Eq. (14), the only difference is the argument of \(Z\). In the case of the first oscillator being monitored, it is \(\varphi_{0}\), and in the current case, it is \(\varphi_{0}+\eta^{*}\). We remark that by the definition of phase via Eq. (7), in both cases we have \(\Phi_{0}=\varphi_{0}+\delta_{i2}\eta^{*}\). Thus, in both cases the expression for \(\hat{\tau}\) in the limit of small \(q\) reads
\[\hat{\tau}=\frac{1}{\omega}(2\pi-\gamma qZ(\Phi_{0}))\;, \tag{21}\]
In the following, we will test the derived dynamical map \(\mathcal{F}\) for a model of coupled phase oscillators and compare it to a direct simulation with both finite-size and Dirac kicks.
### An example: coupled phase oscillators
We consider two phase oscillators with coupling functions containing higher harmonics terms
\[\begin{array}{l}\dot{\varphi}_{1}\ =\ \omega_{1}+\varepsilon\sin(\varphi_{2}-\varphi_{1})+\sigma\sin(2(\varphi_{2}-\varphi_{1}))+Z(\varphi_{1})p(t)\;,\\ \dot{\varphi}_{2}\ =\ \omega_{2}+\varepsilon\sin(\varphi_{1}-\varphi_{2})+\beta\sin(3(\varphi_{1}-\varphi_{2}))\;,\end{array} \tag{22}\]
with the parameters \(\omega_{1}=0.98\), \(\omega_{2}=1.02\), \(\varepsilon=0.05\), \(\sigma=0.02\) and \(\beta=-0.01\). For the response curve \(Z\), we choose a simple sine function \(Z(\varphi_{1})=\sin(\varphi_{1})\).
Thus, the relevant functions for the map \(\mathcal{F}\) read \(g_{1}(\eta)=\omega_{1}-\varepsilon\sin(\eta)-\sigma\sin(2\eta)\), \(g_{2}(\eta)=\omega_{2}+\varepsilon\sin(\eta)+\beta\sin(3\eta)\), and \(f(\eta)=g_{1}(\eta)-g_{2}(\eta)\). For the chosen parameters, the system attains a stable phase difference \(\eta^{*}\approx-0.12\pi\). The frequency, Floquet exponent, and PRC prefactor then follow as \(\omega\approx 1.01\), \(\kappa\approx-0.11\), and \(\gamma\approx 0.30\).
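These constants can be reproduced with a few lines of numerics: a root of \(f\) gives \(\eta^{*}\), and the analytic derivatives of the coupling functions give \(\kappa=f^{\prime}(\eta^{*})\) and \(\gamma=1-g_{1}^{\prime}(\eta^{*})/f^{\prime}(\eta^{*})\). The root-finding bracket below is our choice for this parameter set.

```python
import numpy as np
from scipy.optimize import brentq

w1, w2, e, s, b = 0.98, 1.02, 0.05, 0.02, -0.01

g1  = lambda x: w1 - e*np.sin(x) - s*np.sin(2*x)
g2  = lambda x: w2 + e*np.sin(x) + b*np.sin(3*x)
f   = lambda x: g1(x) - g2(x)
dg1 = lambda x: -e*np.cos(x) - 2*s*np.cos(2*x)
dg2 = lambda x: e*np.cos(x) + 3*b*np.cos(3*x)

eta_star = brentq(f, -np.pi/2, 0.0)          # stable zero of f on this branch (bracket assumed)
omega    = g1(eta_star)                      # frequency of the locked solution
kappa    = dg1(eta_star) - dg2(eta_star)     # Floquet exponent f'(eta*)
gamma    = 1.0 - dg1(eta_star)/(dg1(eta_star) - dg2(eta_star))

print(eta_star/np.pi, omega, kappa, gamma)   # approx. -0.12, 1.01, -0.11, 0.30
```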
We perform the stimulation experiment by monitoring either the first or the second oscillator. The results are depicted in Fig. 4 and Fig. 5, respectively. In both cases, the proposed theory for the iterated mapping \(\mathcal{F}\) corresponds to the direct simulation with Dirac kicks to a large extent. Both agree with the direct simulation by kicks of finite duration \(\Delta\) for small \(q\); however, the results differ for large \(|q|\). This discrepancy is due to the difference in the effect of stimulating with the amplitude \(I\) for time \(\Delta\) starting at \(\varphi_{0}\), compared to an instantaneous shift of \(I\Delta Z(\varphi_{0})\).
We remark that, since we define the phase \(\Phi\) on the limit cycle as \(\Phi=\varphi_{1}\), we have \(\Phi_{0}=\varphi_{0}\) if the first oscillator triggers the stimulation at \(\varphi_{1}=\varphi_{0}\) and \(\Phi_{0}=\varphi_{0}+\eta^{*}\) if the second oscillator triggers it at \(\varphi_{2}=\varphi_{0}\).
In the first numerical experiment, we monitor the first oscillator. We choose \(\Phi_{0}=\varphi_{0}=3\pi/2\) as the trigger phase since it corresponds to an extremum of \(Z\). We observe the appearance of phase slips for \(q\gtrsim 0.5\). Since we do not observe phase slips for equally strong negative pulses, we conclude the favorable polarity of the phase shift to be negative (positive kicks at negative PRC value \(Z(\varphi_{0})<0\)). This conclusion corresponds to our choice \(\omega_{1}<\omega_{2}\).
Monitoring the second oscillator, we experiment with two different trigger phases \(\Phi_{0}=3\pi/2\) (\(\varphi_{0}=\Phi_{0}-\eta^{*}\approx 1.62\pi\)) and \(\Phi_{0}=1.88\pi\) (\(\varphi_{0}\approx 2\pi\)). Even though \(\Phi_{0}=3\pi/2\) yields an extremum of \(Z\), we do not observe phase slips in the shown range of kick actions \(q\), either for positive or for negative kicks, see Fig. 5(a). However, for the value \(\Phi_{0}=1.88\pi\), we observe the appearance of phase slips in an interval of \(q\). For finite-sized kicks of \(\Delta=10^{-5}\), phase slips occur for \(0.49\lesssim q\lesssim 0.71\). For the Dirac kicks, both for the mapping and the direct simulation, the interval of phase slips is narrower: it starts at \(q\gtrsim 0.53\) and ends at \(q\lesssim 0.56\). For sufficiently large \(q\), a new fixed point is formed. This happens due to the dependence of the kick-induced phase shift on the phase difference \(\eta\). In contrast to the case of monitoring the first oscillator, here, the kick-induced phase shift can change its sign depending on the phase difference \(\eta_{n}\). The kick
Figure 4: Dynamics of the kicked phase oscillator system (22). Here, we monitor the first oscillator and deliver kicks at \(\Phi_{0}=\varphi_{0}=3\pi/2\). Panel (a) depicts the bifurcation diagram for the asymptotic behavior of inter-kick intervals \(\tau_{n}\). The values of \(\tau_{n}\) for \(n\geq 50\) are shown as a function of the kick action \(q\) for a direct simulation of Dirac kicks (orange crosses), and the iteration of the map \(\mathcal{F}\) (purple circles). The approximate expression (21) for the fixed point \(\hat{\tau}\) is drawn as a black dashed line. For small kick actions, the \(\tau_{n}\) converge to a fixed point in first-order approximation given by \(\hat{\tau}\). Phase slips occur for sufficiently large values of \(q\). An example in (b), depicts the inter-kick durations \(\tau_{n}\) for \(q\approx 0.51\) (this value is marked with a dotted line in (a) and (c)). Panel (c) shows the bifurcation diagram for the time-averaged frequency difference \(|\Omega|\) of both oscillators (time averaging over 80 kicks). Data points correspond to direct simulations of Dirac kicks (orange crosses) and finite-sized kicks with pulse widths \(\Delta=10^{-5}\) (olive lower triangles), \(\Delta=10^{-3}\) (green right triangles), and \(\Delta=10^{-2}\) (dark blue upper triangles). The emergence of a non-zero value of \(|\Omega|\) coincides with the disappearance of the stable fixed point and onset of oscillatory dynamics for \(\tau_{n}\) in (a).
Figure 5: Bifurcation diagrams of the kicked phase oscillator system (22) when monitoring the second oscillator, for the cases of \(\Phi_{0}=3\pi/2\) (\(\varphi_{0}\approx 1.62\pi\), panel (a)) and \(\Phi_{0}=1.88\pi\) (\(\varphi_{0}\approx 0\), panel (b)). The values of \(\tau_{n}\) for \(n\geq 50\) are depicted as a function of the kick action \(q\) for direct simulation of finite-size kicks (green triangles; pulse duration \(\Delta=10^{-5}\) and amplitude \(I=q/\Delta\)), Dirac kicks (orange crosses), and the iteration of the map \(\mathcal{F}\) (purple circles). The expression (21) for the fixed point \(\hat{\tau}\) is drawn as a black dashed line. Phase slips do not occur in (a). In (b) phase slips occur only in an interval of kick actions \(q\) which is different for Dirac and finite-sized kicks.
is strong enough for the first few iterations to bring the system out of its potential well. As the system then tends to relax to the next equilibrium value \(\eta^{*}-2\pi\) and reaches the next trigger point \(\varphi_{2}=\varphi_{0}+2\pi\), the kick acts in the opposite direction and brings the system up the potential wall again. In this way, the system gets trapped, and a fixed point establishes. For practical purposes of avoiding that scenario, we mention the possibility of pausing the stimulation after one phase slip or varying the kick strength randomly.
Such behavior is not possible if we monitor the first oscillator, at least if there exists only one stable phase difference \(\eta^{*}\) of the unperturbed coupled system: Since the kick-induced phase shift does not depend on the phase difference \(\eta\) (at least for Dirac kicks), and thus is constant for a given trigger phase \(\varphi_{0}\), it will constantly shift the phase difference in the same direction (the evoked phase shift \(A=\text{const}\)). Thus, if the kicks are strong enough to induce a phase slip once, they will continue causing them.
### More than two oscillators: an outlook
We stress that our proposed strategy of phase-specific pulse stimulation with an observation of the partial periods is generally extendable to systems of more than two coupled units. As a particular showcase, we consider a set-up of five globally diffusively coupled Rayleigh oscillators
\[\ddot{x}_{i}-\mu(1-\dot{x}_{i}^{2})\dot{x}_{i}+\omega_{i}^{2}x_{i }=\frac{\varepsilon}{5}\sum_{k=1}^{5}(\dot{x}_{k}-\dot{x}_{i})+\delta_{ij}p(t )\,, \tag{23}\]
where \(\mu=2.0\), \(\varepsilon=0.05\), \(\omega_{1}=0.99\), \(\omega_{2}=0.995\), \(\omega_{3}=1\), \(\omega_{4}=1.005\), \(\omega_{5}=1.01\), and \(i=1,\ldots,5\). The asymptotic autonomous state is the state of global frequency locking. Then, stimulation enters the equation for oscillator \(j\). For simplicity, we consider the case of stimulating and observing the same oscillator. Thus, a pulse with the amplitude \(q/\Delta\) and duration \(\Delta=0.01\) is applied to the system when \(x_{j}=-0.8\) and \(\frac{\mathrm{d}}{\mathrm{d}t}x_{j}<0\) (with a "dead" time interval of \(2.0\) to exclude another pulse within that interval). This threshold-crossing event corresponds to a phase where the PRC of a single Rayleigh oscillator is positive; see Fig. 6. We observe the emergence of phase slips for three of four tested scenarios: When stimulating the slowest oscillator (\(j=1\)) with \(q\lesssim-0.276\) or \(q\gtrsim 0.208\), we achieve the desynchronization of this oscillator from the rest (cluster formation \(1:4\)). Also, when we stimulate oscillator \(j=3\) in the center of the frequency distribution with \(q\gtrsim 0.208\), we get a cluster formation of \(1:4\), desynchronizing oscillator \(3\) from the rest. However, when we stimulate oscillator \(3\) with negative pulses, we do not see phase slips for weak pulses (at least for \(q>-3\)). The transition from global frequency locking to a quasi-periodic regime with the stimulated oscillator being desynchronized from the rest is observed in all cases. Similar to the case of only two oscillators, it is visible in the partial periods \(\tau_{n}\) of the observed oscillator as a transition from a fixed point to an oscillating pattern. However, we expect that for different frequency distributions, it is also possible to observe mutually desynchronized clusters, i.e., to desynchronize the stimulated oscillator only from a fraction of the population.
Of course, this model is only a particular example of a network of more than two oscillators. In general, one can imagine a coupled oscillator population, where stimulation directly affects a subpopulation, and observation is possible on another group. Then, a broad spectrum of cases is possible depending on the intersection of the stimulated and observed oscillator sets. Furthermore, the formation of clusters depends on the frequency of the stimulated oscillators, the frequency distribution, and, of course, the network connectivity. However, we expect that, in most cases, breaking the global frequency locking with the proposed strategy is possible.
## IV Finding the proper phase for stimulation
In the previous section, we have shown that sufficiently intense pulses can induce phase slips if delivered consecutively each time the monitored oscillator attains a pre-selected target phase \(\varphi_{0}\). However, the critical kick strength of these pulses to achieve phase slips depends on \(\varphi_{0}\), and for some disadvantageous \(\varphi_{0}\), it might not work at all. This section illustrates the determination of a proper target phase for that stimulation protocol, which leads to phase slips for as weak pulses as possible.
### Phase and isostable response curves
Following Section III, we consider the dynamics of the synchronized system as a limit-cycle oscillation. Correspondingly, this oscillation can be characterized by the phase response curve (PRC) \(\mathcal{Z}\). In general, this curve differs from the PRC of the uncoupled oscillator, i.e., \(\mathcal{Z}\neq Z\). \(\mathcal{Z}\) contains information on how external stimulation shifts the phase of the synchronous oscillation \(\Phi\). Next, the deviation from the limit cycle of the synchronized system is quantified by the isostable response curve (IRC) \(\mathcal{I}\) [46]. As discussed in Section III, the deviation is \(\eta-\eta^{*}\), i.e., it corresponds to the deviation of the phase difference \(\eta=\varphi_{1}-\varphi_{2}\) from its stable value. The description in terms of PRC and IRC is valid if the system is on or very close to the limit cycle when stimulated. For a detailed explanation, see [39; 47].
We derive the PRC \(\mathcal{Z}\) from the gradient of \(\Phi\) and the PRC of the uncoupled oscillators, both evaluated at the limit cycle:
\[\mathcal{Z}(\Phi)=\left(\partial_{\varphi_{i}}\Phi\cdot\delta_{i1}Z(\varphi_{1})+\partial_{\eta}\Phi\cdot Z(\varphi_{1})\right)\Big|_{\eta=\eta^{*}}\,. \tag{24}\]
With the partial derivatives \(\partial_{\varphi_{i}}\Phi|_{\eta=\eta^{*}}=\delta_{i1}\) and
\(\partial_{\eta}\Phi|_{\eta=\eta^{*}}=-g^{\prime}_{i}(\eta^{*})/f^{\prime}(\eta^{*})\), see Eq. (7), we conclude
\[\mathcal{Z}(\Phi)=\gamma Z(\Phi)\,. \tag{25}\]
Thus, the PRC \(\mathcal{Z}\) generally differs from the response curve of the first oscillator \(Z\) by a factor of \(\gamma\). This factor \(\gamma\) is characteristic of the coupled system and can potentially take any real value, including \(0\). Similarly, we derive the IRC by
\[\mathcal{I}(\Phi)=\left(\partial_{\varphi_{i}}\psi\cdot\delta_{i1}Z(\varphi_{ i})+\partial_{\eta}\psi\cdot Z(\varphi_{1})\right)|_{\eta=\eta^{*}}\,. \tag{26}\]
Here, the partial derivative with respect to \(\varphi_{i}\) vanishes (\(\partial_{\varphi_{i}}\psi=0\)) since the isostable variable \(\psi\) depends on the phase difference \(\eta\) only. The partial derivative with respect to the phase difference \(\partial_{\eta}\psi|_{\eta=\eta^{*}}\) is some constant that depends on the chosen scaling of the isostable variable. Thus, the IRC is proportional to the response curve \(Z\) and thus also to \(\mathcal{Z}\):
\[\mathcal{I}(\Phi)\propto Z(\Phi)\propto\mathcal{Z}(\Phi)\;. \tag{27}\]
To desynchronize the two oscillators, we want to push their phase difference \(\eta\) as far away from its value \(\eta^{*}\) in the locked state as possible. Hence, we want to maximize the response in the isostable variable \(\psi\), which is achieved by stimulating the system at a phase that maximizes the IRC \(\mathcal{I}\). By relation (27), we have to look for the extrema of \(Z\) or \(\mathcal{Z}\) to obtain the extrema of the IRC. In the following part of this Section, we will discuss the practical aspects of PRC inference.
### PRC inference for coupled Rayleigh oscillators
To demonstrate the PRC inference for the system of two coupled Rayleigh oscillators (2), examined in Section II, we choose the observable \(x_{1}\) and assign phase values from \(0\) to \(2\pi\) to one period of the unperturbed oscillation, mapping threshold values of \(x_{1}\) to phases. Thus, instead of operating with phases, we can use the signal values; see the solid gray line in Fig. 6.
As a benchmark, we exploit the standard approach and apply consecutively single pulses at different phases \(\varphi\) (i.e., at different signal thresholds) and wait until the system returns to the same state for the \(k\)-th time; we denote this time interval as \(T_{k}\). Since we are dealing with a weakly stable system [48], it may be necessary to wait several periods to ensure that the system has relaxed back to the limit cycle sufficiently close. The PRC then computes as
\[\mathcal{Z}(\varphi)=\frac{2\pi}{q}\frac{kT_{0}-T_{k}}{T_{0}}\,, \tag{28}\]
where \(T_{0}=2\pi/\omega\) is the natural period of the system, and \(q\) is the action of the pulse.
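In practice, Eq. (28) amounts to timing the \(k\)-th return to the trigger phase after a single pulse. The sketch below applies this protocol to the phase model (22) of Sec. III.C with an instantaneous kick and compares the estimate with the prediction \(\gamma Z(\varphi)\) of Eq. (25). The value of \(\eta^{*}\) and the factor 0.30 come from the earlier sketch, \(k=20\) matches the value quoted above, and the pulse action, the instantaneous-kick idealization, and the differential measurement of \(kT_{0}\) are our simplifications.

```python
import numpy as np
from scipy.integrate import solve_ivp

w1, w2, e, s, b = 0.98, 1.02, 0.05, 0.02, -0.01   # phase model (22) of Sec. III.C
Z = lambda p: np.sin(p)                            # PRC of the uncoupled first oscillator
eta_star = -0.3637                                 # stable phase difference (cf. previous sketch)
k, q = 20, 0.05                                    # waited periods; pulse action (assumed)

def rhs(t, y):
    p1, p2 = y
    return [w1 + e*np.sin(p2 - p1) + s*np.sin(2*(p2 - p1)),
            w2 + e*np.sin(p1 - p2) + b*np.sin(3*(p1 - p2))]

def k_return(phi_stim, action):
    """Time until unit 1 returns to the trigger phase for the k-th time after a kick."""
    y0 = [phi_stim + action*Z(phi_stim), phi_stim - eta_star]   # on the cycle, then kicked
    hit = lambda t, y: y[0] - (phi_stim + 2*np.pi*k)
    hit.terminal, hit.direction = True, 1
    return solve_ivp(rhs, (0.0, 200.0), y0, events=hit,
                     rtol=1e-12, atol=1e-12).t_events[0][0]

for phi in (0.5*np.pi, np.pi, 1.5*np.pi):
    kT0 = k_return(phi, 0.0)                       # unperturbed k-period return time
    Tk  = k_return(phi, q)                         # perturbed return time after the kick
    Zc  = (2*np.pi/q)*(kT0 - Tk)/(kT0/k)           # Eq. (28), with T0 estimated as kT0/k
    print(round(phi/np.pi, 2), round(Zc, 3), round(0.30*Z(phi), 3))   # estimate vs. gamma*Z
```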
A more practical way to infer the PRC is to exploit the newly developed IPID-1 technique [49; 50]. This technique uses the observed scalar time series and known pulsatile external stimulation to infer PRC via a direct fit of the Winfree equation. See [51] for the code of implementation.
The standard technique requires at least \(k\cdot m\) periods of the oscillation to obtain \(m\) data points of the PRC. For example, to compute the PRC via the standard technique in Fig. 6, we used \(k=20\). Owing to its least-squares fit, IPID-1 needs a substantially shorter observation time to recover the entire PRC. In addition, IPID-1 does not rely on a specially designed stimulation protocol that hits a certain target phase; for example, adding a Poissonian process to the stimulation period suffices. The requirement for IPID-1 is that the time series of both an observable of the system and the external stimulation are known. The PRCs inferred with the standard and IPID-1 methods are depicted and compared in Fig. 6.
In the more difficult case of observing the second oscillator (which is not directly stimulated), the IPID-1 method failed, for our example, yielding a vanishing PRC. However, the standard method is still applicable in that case.
Figure 6: Comparison of PRC inference techniques for the system of coupled Rayleigh oscillators (2), the parameters remain as specified in Sec. II. The solid orange curve with upper triangles and the dashed purple curve, respectively, depict the resulting PRCs \(\mathcal{Z}\) from the standard technique and the IPID-1 method. For comparison, the teal curve with lower triangles illustrates the PRC of the first Rayleigh oscillator \(Z\) for the uncoupled case \(\varepsilon=0\), obtained by the standard method. In contrast to the relation (25) between the PRCs of the coupled and uncoupled system in the phase oscillator model, the curves differ not only in scale but also are slightly shifted. The IPID-1-inferred PRC for the coupled system correctly reproduces the PRC’s shape but not the scaling. However, the latter is not important for our approach. The solid gray curve is the observable \(x_{1}\) of the coupled system; it provides a map to translate threshold crossings into phases.
### Are the PRCs extrema optimal targets for phase-triggered stimulation?
Let us assume that we obtained the exact PRC \(\mathcal{Z}\) and thus have perfect knowledge about phases (i.e., thresholds) at which the system is displaced most efficiently from the limit cycle. Does that mean we have found the best phase to trigger external pulses?
In the case of monitoring the first oscillator, it indeed does. As we have seen in Sec. III, the evoked phase shift is constant if the stimuli are applied at the same \(\varphi_{1}\) every time. Moreover, selecting the extrema of \(\mathcal{Z}\) ensures the maximal phase shift. What remains to be determined is whether the kicks shall be positive or negative, i.e., whether advancing or delaying the system is more efficient in causing phase slips. For our choice, \(\omega_{1}<\omega_{2}\), slowing the first oscillator by negative phase shifts was the favorable choice. For the opposite case, it would be vice versa. We remark that Fig. 2 shows the coupled Rayleigh system for a phase-specific stimulation each time \(x_{1}\) crosses the threshold \(x_{0}=1\) from below. This threshold corresponds to a phase of \(\varphi_{1}\approx 0.18\pi\), see Fig. 6, and is close to the minimum of \(\mathcal{Z}\). Thus, it is an excellent choice to induce phase slips for comparably small positive kick actions \(q\).
The opposite case of monitoring the second oscillator is more involved. The reason is that the induced phase shifts following a pulse are not constant as in the previous case. By selecting a trigger phase \(\varphi_{2}=\varphi_{0}\), the kick-induced phase shift also depends on the phase difference \(\eta\), see Sec. III.2. Thus, even when \(\varphi_{0}\) is most effective on the limit cycle at \(\eta^{*}\), it might lose this efficiency for the new phase difference \(\hat{\eta}\) that establishes as a result of the consecutive kicks. We do not yet see a practical way to overcome this issue just by knowing the PRC. It might still be a good idea to start exploring efficient phases close to the extrema of \(\mathcal{Z}\) since these at least guarantee the most significant possible displacement from the limit cycle for the first few kicks. To avoid a trapping scenario as described in Sec. III.3 and shown in Fig. 5 we mention the possibility to add a stochastic process to the pulse action \(q\).
## V Discussion
In this article, we have demonstrated how a system of two synchronized oscillators can be desynchronized by short pulses applied to only one of both in a phase-specific manner. We focused on the restriction of having access to the observation of only one of the two units. Otherwise, when signals from two oscillators are available, well-known measures such as the time-averaged phase differences or difference of averaged frequencies will quantify the degree of phase and frequency locking, and tracing desynchronization is trivial.
For both cases of observing the stimulated and the unstimulated oscillator, we showed the efficiency of this approach for a well-chosen trigger phase. We developed a theoretical framework within the approximation of weakly coupled phase oscillators. This framework allowed us to derive an exact expression for the phase of the coupled system. We used it to establish a relation between the coupled system's phase response and the individual oscillator's phase response curve. This relation can be used to find efficient trigger phases for a phase-specific stimulation protocol. The proposed strategy of phase-specific pulse stimulation is robust to variations in the natural frequencies or coupling parameters as long as the assumption of weakly coupled phase oscillators is still applicable. However, the stimulation efficiency essentially depends on the phase response curve. If the interval of sensitive phases is very narrow, the technique may become less efficient due to imprecision in the phase measurement or will require stronger stimulation. In other words, if the response curve's extrema are very narrow, the pulse can hit at a less effective phase, and the kick action has to compensate for that.
In our paper, we treated a deterministic case. Now, we remark on the effect of noise that is two-fold. First, it is well-known that synchronization in the presence of noise is imperfect due to noise-induced phase slips. On the other hand, the real-time phase estimation required for phase-specific pulses becomes imprecise at higher noise levels. Thus, strong noise will result in a non-optimal delivery of pulses but will reduce the level of synchrony by itself.
In particular, we discuss the optimization of the stimulation. The first issue is the polarity of the pulse's action, which determines whether an induced phase shift advances or delays the phase of the stimulated oscillator. We know that phase delays are favorable if the stimulated oscillator is slower than the unstimulated one in the absence of coupling (which is the case in the examples in this article). Vice versa, if it were faster, phase advances would be favorable. However, the induced phase shift is the product of both action and phase response at the trigger phase. Thus, the same pulse can cause advancing and delaying shifts if delivered at different phases. Hence, to account for that consideration, knowledge about the PRC up to a positive factor is also required. We remark that to determine which direction is favorable for the induced phase shift, it must be known whether the stimulated oscillator is faster or slower than the other. Since that is unknown _a priori_, we suggest testing both polarities for a given phase with a high phase response in absolute value.
Another issue is how to minimize the number of pulses required to induce a phase slip. We recall that an evoked phase slip means that the system escapes the basin of the locally stable phase difference and then evolves toward the next potential minimum, see Fig. 1. Obviously, having reached the local maximum of the potential, the system tends to the next equilibrium state by itself. It does not need additional pulses driving it in that direction. Thus, pausing stimulation after passing the maximum excludes unnecessary intervention and also avoids a trapping scenario described in Section IV for the case of monitoring the unstimulated oscillator. The underlying problem is to detect the instant of passing over the barrier. While the emergence of an oscillating pattern for \(\tau_{n}\) unambiguously reveals phase slips, see, e.g., Fig. 2a and Fig. 4b, we do not know how to detect the barrier crossing from this pattern precisely. This task remains an open problem for future research.
To highlight the efficacy of our approach, we compared the phase-specific stimulation to Poisson-distributed inter-pulse intervals with similar statistics. A Poisson-distributed random variable was scaled and shifted to have the same minimal time and expectation value as the partial periods recorded from the phase-specific run. Also, the pulse shape was the same, and the first pulse was applied at the same instant as the phase-specific one. The frequency and time-averaged phase differences indicated that the phase-specific stimulation strategy outperformed the random stimulation with different standard deviations (lower, equal, and larger than the phase-specific stimulation). We expect this result to be robust for other distributions of random inter-pulse intervals. We are confident that our method is superior to randomly delivered kicks with comparable external intervention.
In the following, we comment on the limitations of the theoretical description of our approach. Our considerations rely on weak-coupling approximation with Dirac pulse stimulation. Thus, strongly coupled systems can differ from the phase description used here. Also, the effects of very strong or long stimuli might not be accurately described by the derived dynamical map.
Within our theoretical framework, several questions remain unanswered. First, we do not see a straightforward data-driven way to predict the critical action to induce phase slips. If the dynamical equations are known, the critical action can be found by numerically solving Eqs. (11), (17) for a fixpoint as a function of action \(q\). The boundaries of existence then mark the critical actions. We rely on continuously increasing the action for unknown dynamical equations until phase slips appear.
Another issue is the optimal trigger phase for the case of monitoring the unstimulated oscillator. As outlined in Section IV, it is not necessarily an extremum of the phase response curve that leads to phase slips at all, let alone in the most efficient way. We do not yet see a practical solution apart from trying out different phases in the vicinity of an extremum of the phase response.
Another assumption we made throughout this article was the uniqueness of the system's stable phase difference equilibria. In principle, multiple stable equilibrium states are possible, corresponding to multiple local minima of the potential in Fig. 1. We will discuss such a case now. Unlike the case of a unique stable state, where the system reenters the basin of attraction if the unstable equilibrium is crossed, the system finds itself in the basin of attraction of another stable phase difference. Thus, system quantities like the frequency, the Floquet exponent, and the PRC scaling factor \(\gamma\) can change. The individual phase response \(Z\) remains constant, though. This new basin might be impossible to leave with the same kicks that kicked it there in the first place. If we monitor the stimulated oscillator, increasing the kick action will eventually suffice to leave the basin. Repeating this procedure for potentially more stable states will result in a kick action large enough to leave all basins and thus induce phase slips: a repeating visit of all basins. There is no such guarantee for monitoring the unstimulated oscillator, and we cannot exclude that it might be necessary to change the trigger phase depending on the current basin.
###### Acknowledgements.
E.T.K.M. acknowledges financial support from Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project-ID 424778381 - TRR 295. We thank Prof. A. Gharabaghi for inspiring discussions.
|
2303.12968 | Ambient Intelligence for Next-Generation AR | Next-generation augmented reality (AR) promises a high degree of
context-awareness - a detailed knowledge of the environmental, user, social and
system conditions in which an AR experience takes place. This will facilitate
both the closer integration of the real and virtual worlds, and the provision
of context-specific content or adaptations. However, environmental awareness in
particular is challenging to achieve using AR devices alone; not only are these
mobile devices' view of an environment spatially and temporally limited, but
the data obtained by onboard sensors is frequently inaccurate and incomplete.
This, combined with the fact that many aspects of core AR functionality and
user experiences are impacted by properties of the real environment, motivates
the use of ambient IoT devices, wireless sensors and actuators placed in the
surrounding environment, for the measurement and optimization of environment
properties. In this book chapter we categorize and examine the wide variety of
ways in which these IoT sensors and actuators can support or enhance AR
experiences, including quantitative insights and proof-of-concept systems that
will inform the development of future solutions. We outline the challenges and
opportunities associated with several important research directions which must
be addressed to realize the full potential of next-generation AR. | Tim Scargill, Sangjun Eom, Ying Chen, Maria Gorlatova | 2023-03-23T00:25:08Z | http://arxiv.org/abs/2303.12968v2 | # Ambient Intelligence for Next-Generation AR
###### Abstract
Next-generation augmented reality (AR) promises a high degree of _context-awareness_ - a detailed knowledge of the environmental, user, social and system conditions in which an AR experience takes place. This will facilitate both the closer integration of the real and virtual worlds, and the provision of context-specific content or adaptations. However, environmental awareness in particular is challenging to achieve using AR devices alone; not only are these mobile devices' view of an environment spatially and temporally limited, but the data obtained by onboard sensors is frequently inaccurate and incomplete. This, combined with the fact that many aspects of core AR functionality and user experiences are impacted by properties of the real environment, motivates the use of _ambient IoT devices_, wireless sensors and actuators placed in the surrounding environment, for the measurement and optimization of environment properties. In this book chapter we categorize and examine the wide variety of ways in which these IoT sensors and actuators can support or enhance AR experiences, including quantitative insights and proof-of-concept systems that will inform the development of future solutions. We outline the challenges and opportunities associated with several important research directions which must be addressed to realize the full potential of next-generation AR.
## 1 Introduction
While virtual content in the metaverse is designed to be immersive, it will not be experienced in isolation. In particular for augmented reality (AR), in which virtual content is integrated with our real-world surroundings, an accurate and complete understanding of the real environment is a prerequisite for high quality experiences. Obtaining this using AR devices alone is infeasible in many scenarios, raising the potential for employing external sensors placed in the surrounding environment, a form of _ambient intelligence_ for AR. As well as sensing the properties of an environment, it is also desirable to control them, for example to optimize the performance of core AR algorithms, or to generate stimuli in sensory modalities that are beyond the capabilities of AR devices. In this book chapter we explore the potential for wireless Internet of Things (IoT) devices to provide this type of ambient intelligence, and thereby support next-generation AR experiences in the metaverse.
More well-studied than AR is virtual reality (VR), in which users are immersed in a fully virtual environment through visual and auditory stimuli, usually delivered via a headset (e.g., Meta Quest 2, HTC Vive), or alternatively a desktop or mobile device. To provide tactile feedback and interaction methods to the user, VR headsets are frequently used in combination with specialized controllers or other handheld devices, and less commonly with other peripherals such as gloves, body suits, helmets [76] and mouth accessories [186]. There is work on integrating external devices into VR experiences to further increase the levels of immersion, beyond the capabilities of wearable devices. For example, fans have been used for representations of wind [43; 62], while multiple works (e.g., [170]) have used 'climate chambers' with HVAC systems to study thermal perception
in immersive virtual environments. External devices have also been employed to capture contextual data from users or the ambient environment to enhance VR experiences (e.g., [109; 161]).
Here we focus on the less-studied topic of _ambient intelligence to support or enhance AR_, in which virtual content is overlaid onto and integrated with a user's real-world surroundings. Specifically, we examine this in the context of mobile AR, when virtual content is presented through handheld devices such as a smartphone or tablet, or wearable devices such as a headset. As in VR, current AR devices deliver virtual content primarily in the form of visual and auditory stimuli, with limited tactile feedback sometimes available in the case of handheld devices or headsets with handheld controllers (e.g., the Magic Leap 2). This again raises the possibility of using external devices to adjust the real environment in other sensory modalities or to extend possible interaction methods. However, there is also an important distinction between two methods of delivering visual content in AR; handheld devices and some headsets (e.g., the Varjo XR-3) use video see-through (VST, sometimes referred to as video pass-through) displays, while other headsets (e.g., the Microsoft HoloLens 2 and the Magic Leap 2) use optical see-through (OST) HMDs. These different designs have important implications for how the properties of real environments affect a user's perception of virtual visual content. In general, a knowledge of how the real environment affects both system functionality and human perception, along with the ability to sense and control environment properties, will enable the provision of optimal AR experiences in diverse scenarios.
Further motivation for pursuing ambient intelligence comes from the nature of next-generation AR. The coming years hold the promise of virtual content that is not only more closely integrated with our real surroundings, but which also adapts to the current context in which it is presented. This _context-awareness_ is key to realizing the full potential of AR; to deliver virtual content that provides a high-quality user experience, and enables optimal task performance, we require information on the specific circumstances in which it is presented [69]. At a high level, we can break down this contextual information into environmental, user, social, and system awareness. Environmental awareness, including the environment understanding required for close integration of real and virtual content, is particularly challenging to obtain using the sensors onboard mobile AR devices alone because their view of an environment is _spatially and temporally limited_; they typically only capture a small portion of an environment for a short period of time. Furthermore, due to restrictions on the quality of onboard sensors, and the fact that they are frequently in motion, the data they capture is often inaccurate. External devices on the other hand can help to address these deficiencies and generate more accurate, more complete environmental awareness, across a wider range of conditions.
The desire to both improve the accuracy and completeness of environmental awareness in AR, and control environment properties, motivates the use of Internet of Things (IoT) devices. These wirelessly connected devices have become ubiquitous across diverse settings (globally there were 11 billion active IoT devices at the end of 2021, and this is forecast to be 29 billion by 2030 [137]) and include a wide range of sensors and actuators that are suitable for detecting and adjusting environmental conditions. The widespread availability and relatively low cost of IoT sensors like smart cameras and IoT actuators like smart lights and displays, as well as the prevalence of supporting infrastructure (e.g., Wi-Fi networks), means that they can be readily leveraged for AR. Our overall vision, illustrated in Figure 1, is for multi-device AR architectures, in which experiences on mobile AR devices are enhanced or supported by a set of connected devices. Contextual data are provided by IoT sensors such as smart cameras and wearables, while IoT actuators such as smart lights, shades, and displays optimize environmental conditions for AR, or enhance AR experiences by providing additional stimuli. An edge server or the cloud provides the storage and resources to aggregate these data, compute context, and control IoT devices.
Figure 1: A high-level view of a multi-device next-generation AR architecture, in which data from mobile AR devices, wearables, and IoT sensors like smart cameras are aggregated to compute context (e.g., environment properties) on an edge or cloud server. These data are also used to inform the control of IoT actuators such as smart lights, shades, and displays, which optimize environmental conditions for AR experiences.
We define the scope of the IoT devices we cover in this book chapter through two important distinctions. Firstly, our focus is on _IoT devices that support or enhance AR_, rather than simply any IoT device whose data could be displayed in AR or be controlled through an AR interface. For example, while there are interesting and useful ways in which devices such as air quality sensors and robotic arms could be combined with AR, they are excluded here. Examples of works which proposed the use of AR as a tool for visualizing IoT-generated data and interacting with IoT nodes include [127, 151, 84, 211]. Secondly, we only consider truly external or _ambient_ IoT devices; we exclude sensors that are attached to AR devices, such as inertial sensors or eye tracking cameras, as well as wearable sensors that may capture additional biometrics from humans in an AR environment. For a recent review of wearable sensors for AR applications, see [92]. Given the placement of ambient IoT devices in the wider environment they are particularly beneficial for environment awareness, though we note cases in which they can supply other types of context-awareness; for example, while user context data is most often captured through on-device or wearable sensors, ambient IoT cameras can also capture visual data pertinent to activity or emotion recognition [175].
The contents of this book chapter are as follows. In Section 2, we cover related work on combining AR and IoT devices, methods of communication between them, and a network architecture that will support the implementation of ambient intelligence for AR, edge computing. In Section 3, we categorize the different ways in which ambient IoT devices can support or enhance AR experiences, including relevant sensors and actuators for each use case; then in Sections 4 and 5, we discuss in more detail the possibilities for IoT sensors and actuators respectively, organized by use case. In Section 6 we cover open challenges and research directions, and in Section 7 we provide a conclusion.
## 2 Related Work
In this section we review existing work on systems which incorporate both AR and IoT devices, different methods of communication that have been used to connect AR and the IoT, and a network architecture that will support the implementation of ambient intelligence for next-generation AR.
**Systems incorporating AR and IoT devices:** AR can be used to enhance interactions with IoT devices, including both the visualization of IoT data and the provision of an immersive interface to access and control IoT devices. For example, in [154] the authors described an AR-IoT system that displays real-time IoT data as holographic content to enhance object interactions; this system is applied to crop monitoring, to provide object coordinates of plants and information collected from IoT devices such as the fertilizers used. Similarly, Sun et al. [190] presented MagicHand, an AR system that allows users to interact with IoT devices by detecting, localizing, and enabling augmented hand controls; the system was implemented using a 2D convolutional neural network (CNN) and achieved high gesture recognition accuracy. In [191] IoT sensor data were overlaid onto industrial machines using AR, with more accurate pose estimates (and thereby better aligned overlays) obtained by applying deep learning to RGB and depth images of the machine. Visualizing and identifying IoT objects using an AR interface has also been shown to improve shopping experiences, by increasing perceived usability and satisfaction in user interactions [85].
**Communication between AR and IoT devices:** Prior works have demonstrated how AR devices can communicate with IoT devices through the Internet; examples include an AR application that displays agricultural data from temperature or moisture sensors for crop monitoring [115], and a web-based AR application framework to visualize the state changes of a coffeemaker through a visual tag [104]. Others have highlighted the importance of scalable systems with efficient data management for AR applications [158, 31]; to this end the authors of [211] proposed an AR-based browsing architecture that identifies new IoT devices and enables immediate interactions with easily controllable user interfaces. Similarly, Blanco-Novoa et al. proposed a framework that allows AR and IoT devices to communicate dynamically in real time through standard and open-source protocols such as MQTT, HTTPS, or Node-RED [24]. Developing systems which incorporate AR devices into a network of IoT devices remains an important topic of research; for example, VisIoT [151] supports tracking and visualizing the location of IoT nodes in real time through a combination of data collected from camera, inertial measurement unit (IMU), and radio frequency (RF) modules, while Scenario [80] integrates the discovery and localization of IoT devices into an AR system by embedding RF modules into both AR and IoT devices. These works highlight the need for further research on device localization and calibration, as well as system scalability (with respect to e.g., bandwidth consumption), to inform the development of IoT-supported AR systems.
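As a concrete illustration of this communication pattern, the sketch below shows how an ambient light sensor node might publish readings to an edge broker over MQTT, and how an edge service could subscribe to them. It is a minimal sketch using the paho-mqtt 1.x client API; the broker address, topic layout, and payload fields are our own illustrative assumptions rather than conventions taken from the works cited above.

```python
# Minimal MQTT publish/subscribe sketch for ambient IoT sensing.
# Uses the paho-mqtt 1.x client API; broker address, topics, and payload fields are assumed.
import json
import time
import paho.mqtt.client as mqtt

BROKER = "edge-server.local"      # hypothetical edge broker
TOPIC = "ar/env/light/room1"      # hypothetical topic layout

def run_sensor_node(read_lux):
    """Publish one illuminance reading per second from an IoT light sensor."""
    client = mqtt.Client()
    client.connect(BROKER, 1883)
    client.loop_start()
    while True:
        payload = {"sensor_id": "light-01", "lux": read_lux(), "t": time.time()}
        client.publish(TOPIC, json.dumps(payload), qos=1)
        time.sleep(1.0)

def run_edge_subscriber(handle_reading):
    """Receive readings on the edge server and hand them to the context engine."""
    def on_message(client, userdata, msg):
        handle_reading(json.loads(msg.payload))
    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER, 1883)
    client.subscribe("ar/env/light/#", qos=1)
    client.loop_forever()
```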
**Edge computing for AR:** In order to leverage the large amounts of data from AR and ambient IoT devices, context-aware AR systems will require storage and computational resources beyond the constraints of these devices alone. Given the low latency requirements of many aspects of context-aware AR, along with the privacy concerns associated with transferring sensitive information about users or environments to the cloud, many have deemed edge computing a particularly promising network
architecture [70; 114; 223; 217; 201]. In this architecture, a server is placed physically close to mobile AR devices and ambient IoT devices (e.g., in the same building), helping to address the aforementioned latency and privacy requirements. Existing work has already demonstrated the benefits of offloading tasks to the edge for various aspects of AR system functionality, including elements of SLAM pipelines [204; 207; 5], lighting estimation [223] and object detection [112; 74; 162], and in our ongoing work we have developed multiple edge architectures for context-aware AR. For example, in [178] we developed a system to predict the quality of virtual content positioning (a function of AR device pose estimate error) from environment properties, in which we transmitted data collected on the AR device to the edge for the computationally expensive pre-processing and model inference. In [114] we presented an edge-assisted collaborative image recognition system, in [179] we demonstrated an edge-supported AR application that analyzed user eye movements to recognize common activities in a museum scenario, and we have developed multiple systems that provide edge-based provisioning of contextual virtual content [63]. We see the incorporation of IoT devices that provide additional contextual data as a natural extension to these edge architectures for context-aware AR, and we recently presented an example of this in [177] (see Section 5.1 for further details).
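To make the offloading step concrete, the sketch below shows one simple way a device could ship a camera frame to an edge inference service over HTTP and fall back to on-device processing if the response is too slow. The endpoint URL and response schema are hypothetical; the systems cited above use their own (and often more specialized) transport and serialization choices.

```python
# Sketch: offload a camera frame to an edge inference service over HTTP.
# The endpoint URL and response schema are illustrative assumptions.
import cv2
import requests

EDGE_URL = "http://edge-server.local:8080/infer"   # hypothetical endpoint

def offload_frame(frame_bgr, timeout_s=0.5):
    ok, jpeg = cv2.imencode(".jpg", frame_bgr, [cv2.IMWRITE_JPEG_QUALITY, 80])
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    try:
        resp = requests.post(
            EDGE_URL,
            files={"frame": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
            timeout=timeout_s)
        resp.raise_for_status()
        return resp.json()      # e.g., {"objects": [...], "latency_ms": ...}
    except requests.RequestException:
        return None             # caller falls back to on-device inference
```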
## 3 Ambient IoT for AR
In this section, we categorize the different ways in which ambient IoT devices can support or enhance AR experiences (Section 3.1). We then provide an overview of the different types of IoT devices that may be employed, along with their associated uses, and examples of their use for AR or VR (Section 3.2).
### 3.1 Uses of Ambient IoT for AR
Central to next-generation AR is the incorporation of context-awareness - adapting virtual content according to the environmental, user, social, and system context in which it is presented. Ambient IoT sensors are able to collect environment data that is more accurate and complete than AR devices alone, while ambient IoT actuators can be used to adjust environment properties for better system performance or a higher quality user experience. In order to categorize the ways in which detecting and adjusting environment properties using IoT devices can support or enhance AR, here we define five high-level aspects of AR which contribute to the quality of a user's experience:
* _Spatial understanding_ concerns information about the physical properties of the environment, which includes representations of the real-world surfaces present, the detection of fiducial or image-based markers, and real-time pose estimates for the AR device; this information is required for accurate spatial registration of virtual content.
* _Semantic understanding_ takes this environmental awareness a step further, and provides a knowledge of the type, poses, and states of objects and surfaces present, which may be used to inform spatial understanding or display context-aware content.
* _Contextualized content_ refers to the ways in which either the spatial or semantic understanding obtained through IoT devices, or environment properties such as light and visual texture directly, may be used to inform adaptations to virtual content.
* _Interaction_ covers how current interaction methods in AR (e.g., hand gestures, eye tracking) may be enhanced or extended using IoT devices.
* _Immersion_ relates to how IoT actuators can be used to increase an AR user's sense of immersion (i.e., the sense that virtual content is truly present in their real environment).
### 3.2 Ambient IoT Devices for AR
In Table 1, we present an overview of ambient IoT devices that may be used to support or enhance AR experiences. We group included devices by the type of information they collect from or convey to AR devices or users (e.g., visual, auditory), rather than the underlying functionality of the device. For example, many motion sensors detect changes in thermal energy, but given that in this case they provide information regarding the state of the visual environment (i.e., human presence), we classify them here under 'visual'. Similarly, strain gauges, which detect changes in electrical resistance, are classified as visual because they provide information about the deformation of an object, effectively enhancing the visual perception and acuity of an AR user. For each type of information, we divide IoT devices into sensors or actuators, list example devices, state their possible uses from the categories we defined in Section 3.1, and provide examples of related work for these use cases.
Considering first IoT sensors, there is a wide variety of useful information we can collect from the surrounding environment, and we identify four key ways in which these data may be leveraged. Firstly, we can enhance the spatial understanding, semantic understanding, and interaction capabilities of AR systems by sensing from additional and more advantageous poses. Secondly, with sufficient data on how environmental conditions impact AR systems, we can predict the current level of performance and overall quality of an AR experience. Thirdly, we can use the data collected from sensors to adapt AR system functionality or the presentation of virtual content for the current environment. Finally, we can contextualize virtual content, in that it is related to or reacts to current conditions. Throughout Section 4, we discuss in more detail how IoT sensors can be used to support or enhance AR.
Beyond detecting the properties of the real environment using IoT sensors, we can also _adjust_ the properties of the real environment using IoT actuators. The motivation for this is threefold. Firstly, because key aspects of AR system performance and user perception are affected by environment properties, we can optimize environments to achieve the best possible performance. For example, the accuracy of spatial understanding is dependent on sufficient levels of light and visual texture, light levels affect the performance of algorithms related to semantic understanding and interaction, and human perception of virtual content is also affected by the properties of both light and textures. Secondly, we can improve the quality of a user's interactions in AR, by providing alternative interaction methods (e.g., electronic displays) or enhancing existing ones (e.g., adding tactile feedback using haptic surfaces). Thirdly, we can enhance a user's sense of immersion in an AR experience. This may involve the generation of visual and auditory stimuli to extend what is possible on AR devices (using e.g., light projectors or speakers), or the generation of sensory stimuli in modalities that cannot be generated on AR devices (using e.g., fans, diffusers, or heaters). We discuss the possibilities for ambient IoT actuators in more detail in Section 5.
## 4 IoT-based Sensing for AR
In this section we discuss in more detail how ambient IoT sensors could be used to support or enhance AR experiences. Each subsection examines a different use which we defined in Section 3.1 and listed in Table 1; for sensors we cover spatial understanding (Section 4.1), semantic understanding (Section 4.2), contextualized content (Section 4.3), and interaction (Section 4.4).
| Type of information provided | Sensors: devices | Sensors: uses | Sensors: examples of use for AR or VR | Actuators: devices | Actuators: uses | Actuators: examples of use for AR or VR |
|---|---|---|---|---|---|---|
| Visual | Cameras, depth sensors, motion sensors, light sensors, strain gauges, pressure sensors | Spatial understanding, semantic understanding, contextualized content, interaction | | Light bulbs, displays (e.g., LCD, E-Ink) | Spatial understanding, semantic understanding, interaction, immersion | [4, 177] |
| Auditory | Microphones | Semantic understanding, contextualized content, interaction | [60, 136] | Speakers | Immersion | [101] |
| Haptic | Wind sensors, physical buttons and proxies, touchscreens | Contextualized content, interaction | [21, 44, 94] | Haptic surfaces, fans | Interaction, immersion | [43, 62] |
| Olfactory | Nanomechanical sensors | Semantic understanding | – | Diffusers, olfactometers | Immersion | [16] |
| Thermal | Thermocouples, resistance temperature detectors, infrared temperature sensors | Semantic understanding, contextualized content | [25, 140, 154] | Heaters, HVAC systems | Immersion | [43, 170] |

Table 1: Overview of ambient IoT devices (sensors and actuators) with potential uses for AR. Sensors and actuators are grouped by the type of information they collect from or convey to AR devices or users (e.g., visual, auditory).
### 4.1 Spatial Understanding
A fundamental component of all AR systems which position virtual content relative to the real world is spatial understanding. Indeed an accurate and detailed knowledge of our physical surroundings is essential for next-generation AR systems which aim to closely integrate the real and virtual worlds. In this subsection, we first provide background information on the techniques behind spatial understanding in AR. We then examine how environment properties affect spatial understanding, and hence why obtaining knowledge of these properties through IoT sensors is useful, before exploring how IoT sensors may be used directly in spatial understanding algorithms.
#### 4.1.1 Background on Spatial Understanding in AR
Understanding one's physical surroundings is a fundamental component of both marker-based and markerless AR, in order to accurately overlay virtual content onto a view of the real-world environment. A marker-based AR system uses a marker, commonly a paper-printed static image with distinct features, to obtain information about the position and orientation of an object in the surrounding space. On the other hand, a markerless AR system is based on the use of simultaneous localization and mapping (SLAM) to understand the surrounding space without the use of markers.
**Marker-based AR:** Marker detection based on image processing has been a popular approach to enable marker-based applications in AR due to its ease of use and accurate tracking of an object [59]. A multitude of markers with different patterns and features has been used in marker-based AR systems, including binary fiducial markers (e.g., ARToolkit [17], ArUco [18], ARTag [51]), a variation of fiducial markers specialized for robotics applications (e.g., AprilTag [146]), and image-based markers with a high number of unique feature points (e.g., Vuforia [200]). The detection of these markers is based on the processing of the image frames captured by the camera on an AR device - the pose of the device is estimated by finding contours, features, or lines [75] within the marker. However, the accuracy and reliability of marker detection in AR are largely determined by the performance of the camera capturing images of the scene and the environmental conditions of the scene where the marker is located. Poor camera calibration or focus, or low image resolution, can potentially result in low pose estimation accuracy of the marker in the scene [218]. Additionally, environmental properties such as lighting or the distance from the camera to the marker are other factors that can affect pose estimation accuracy. We discuss the use of IoT sensors and actuators to address challenges related to marker detection in Section 4.2.3 for the use of IoT sensors in object state detection, and in Section 5.1.1 for environment optimization using IoT actuators.
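As a minimal illustration of fiducial marker detection and pose estimation, the sketch below uses the aruco module shipped with OpenCV's contrib packages (the function-style API of OpenCV 4.6 and earlier; newer releases expose an ArucoDetector class instead). The marker length, camera intrinsics, and distortion coefficients are placeholder values that would come from calibration in practice.

```python
# Fiducial marker detection and pose estimation sketch using OpenCV's aruco module
# (function-style API of OpenCV <= 4.6). Calibration values below are placeholders.
import cv2
import numpy as np

MARKER_LEN = 0.05                            # marker side length in metres (assumed)
K = np.array([[600.0, 0.0, 320.0],           # placeholder camera intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
DIST = np.zeros(5)                           # placeholder distortion coefficients

def detect_marker_poses(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
    poses = {}
    if ids is not None:
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, MARKER_LEN, K, DIST)
        for marker_id, rvec, tvec in zip(ids.flatten(), rvecs, tvecs):
            R, _ = cv2.Rodrigues(rvec)       # marker orientation in the camera frame
            poses[int(marker_id)] = (R, tvec.reshape(3))
    return poses
```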
**Markerless AR:** SLAM is a key enabling technology for markerless AR. Visual and visual-inertial SLAM (V-SLAM and VI-SLAM), using cameras either alone or in combination with inertial sensors, have demonstrated remarkable progress over the last three decades [27]. Due to the affordability of cameras and the richness of information provided by them, V-SLAM using monocular [168; 138], RGB-D [138], and stereo [138] cameras has been widely studied. To provide robustness to textureless areas, motion blur, and illumination changes, there is a growing trend of employing VI-SLAM, which assists cameras with an IMU [28; 159; 203]; VI-SLAM has become the de-facto standard SLAM method for modern augmented reality platforms [13; 65]. In VI-SLAM, visual information is fused with IMU data to achieve more accurate and robust localization and mapping performance [28; 159; 203]. Due to the high computational demands incurred by V- and VI-SLAM on mobile devices, offloading parts of the workload to edge servers has recently emerged as a promising solution for lessening the load on mobile devices and improving overall performance [5; 33; 207; 208]. A standard approach [5; 33; 208] is to offload computationally expensive tasks, such as place recognition, the process of taking a sensor snapshot (e.g., an image) of a location and querying it in a large, geotagged database gathered from prior measurements, and loop closing, the process of determining whether AR users are revisiting the same place. Both the aforementioned papers [5; 28; 207; 208; 33; 159] and commercial AR devices that employ markerless AR (e.g., Android devices with ARCore [65], iOS devices with ARKit [13], or headsets such as the Microsoft HoloLens 2 [129]) implement VI-SLAM using sensor data captured onboard mobile devices, and these data are both spatially and temporally limited. To address this limitation, in Section 4.1.3 we discuss methods to increase the accuracy and completeness of spatial understanding by integrating the sensor data obtained onboard AR devices with data obtained from IoT sensors.
#### 4.1.2 Estimating the Quality of Spatial Understanding Using IoT Sensors
Because of the role of vision-based sensing in spatial understanding in AR, and the nature of VI-SLAM algorithms in particular, the properties of the visual environment impact the accuracy and completeness of spatial understanding. Therefore a thorough
knowledge of those properties, obtained through ambient IoT sensors, is highly useful in identifying problematic regions for tracking, estimating current levels of spatial understanding, and informing system adaptations for current environmental conditions. Until recently, knowledge of the impact of environmental properties was limited to qualitative guidelines; however, our recent work [175, 176] provides quantitative insights on the impact of both light and visual texture on VI-SLAM performance.
To obtain these quantitative insights on the impact of environmental conditions on VI-SLAM performance, we developed and implemented two methodologies. In one [176], we measure the pose estimate error of open-source VI-SLAM algorithms (e.g., ORB-SLAM3 [28]) using a game engine-based emulator; we use the trajectories from existing VI-SLAM datasets (e.g. TUM VI [181], SenseTime [83]) to create new camera images in realistic virtual environments, and combine that with the original inertial data from those datasets. In the other [174], we measure virtual object position error (determined by pose estimate error) on commercial AR platforms (e.g., ARCore [65], ARKit [13]), by aligning virtual objects with a real world reference point using our open-source AR app. Our game engine-based methodology facilitates fine-grained control of environment properties (i.e., the exact properties of the light sources and textures in a virtual environment), while our methodology for commercial AR supports monitoring of environment conditions with either AR device sensors or ambient IoT sensors.
**Light:** Illuminance, the amount of light incident on environment surfaces per unit area, determines the accuracy with which environment surfaces can be mapped or tracked, because it determines the extent to which visual features are detectable for tracking. We illustrate an example of this in Figure 2; in these experiments we used our game engine-based emulator to run two SenseTime [83] trajectories in a 6m\(\times\)6m\(\times\)4m virtual concrete room, with 10 different overhead light intensities [176], and 10 trials per light setting. We then plotted VI-SLAM pose estimation performance (in terms of relative error, the translational component of relative pose error) against the 10 light intensity settings. Figure 2(a) shows the results for SenseTime A1, a trajectory involving slow side-to-side motion facing a wall (as if inspecting a virtual object at head height), followed by repeated walking away and returning with the camera angled more towards the floor (described in [83] as 'inspect+patrol'). Figure 2(b) shows the results for SenseTime A4, with slow motion focused on a small area of the floor, followed by the same slow side-to-side motion facing the wall (described as 'aiming+inspect').
For SenseTime A1, optimal performance is obtained at a medium light intensity (750 lumens), at which there is sufficient light to ensure recognizable features are visible, but those features are not obscured by specular reflections - this is particularly a factor during the 'walking away and returning' portion of the sequence. For the less challenging inertial data in SenseTime A4, performance is largely determined by the illuminance of the small area of floor at the start of the sequence, and specular reflections are not a major factor; as such performance is poor at low light levels when the small area of floor is too dark, and optimal performance is obtained at the highest light intensity. These results illustrate that optimal environment illuminance differs depending on which regions of an environment user trajectories cover, and that monitoring of illuminance in specific environment regions of interest will be informative for the estimation of current VI-SLAM performance.
Figure 2: VI-SLAM pose estimation error for two SenseTime [83] trajectories in a 6m\(\times\)6m\(\times\)4m virtual concrete room [176], showing the translational component of relative error when different light intensities are emitted by a light source (100 trials, 10 at each light intensity). Optimal light intensity for VI-SLAM performance depends on the structure, textures, and reflectance properties of an environment, as well as the camera trajectory. While a medium light intensity will be optimal when specular reflections due to high illuminance are a factor (e.g., Figure 2(a)), trajectories with views of environment regions where light sources are distant or occluded will result in optimal performance at high light intensities (e.g., Figure 2(b)).
We envision multiple IoT ambient light sensors being employed to accurately measure lighting properties in different parts of the environment, without requiring an AR device. This will enable both the proactive identification of environment regions where lighting may cause tracking errors, and a more complete understanding of how lighting conditions change over time. Data on the position of virtual content or user trajectories will inform the most relevant positions for ambient light sensors to be placed. The output from these sensors is also highly useful for environment optimization systems which control properties such as illuminance based on occupant needs, as we show in Section 5.1.
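A minimal sketch of how such readings might be used is given below: per-region lux values reported by ambient light sensors are compared against a target band, and regions likely to cause tracking problems are flagged. The region names and lux thresholds are illustrative assumptions rather than values prescribed by our studies.

```python
# Sketch: flag environment regions whose measured illuminance falls outside a target band.
# Region names and lux bands are illustrative assumptions.
TARGET_LUX = {"wall_display_area": (150, 450),
              "floor_near_entrance": (300, 800)}

def flag_problem_regions(latest_readings, default_band=(100, 1000)):
    """latest_readings: {region_name: [lux values from the sensors in that region]}."""
    flags = {}
    for region, lux_values in latest_readings.items():
        if not lux_values:
            continue
        avg = sum(lux_values) / len(lux_values)
        low, high = TARGET_LUX.get(region, default_band)
        if avg < low:
            flags[region] = f"too dark ({avg:.0f} lux): visual features may be lost"
        elif avg > high:
            flags[region] = f"too bright ({avg:.0f} lux): specular reflections likely"
    return flags
```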
**Visual texture:** Given the reliance of feature-based SLAM on recognizable visual textures in the surrounding environment, estimating the properties of visual textures present is also a valuable predictor of device tracking performance in markerless AR. In quantitative experiments on a state-of-the-art open-source VI-SLAM algorithm (ORB-SLAM3 [28]) using our game engine-based emulator, we showed that both the edge strength and complexity of a texture, as well as how a texture is impacted by motion blur, affect pose estimate error magnitude [176]. For example, Figure 3 shows the VI-SLAM pose estimation performance (in terms of relative error, the translational component of relative pose error, and robustness, the mean percentage of tracked frames) we obtained when running the 'room5' trajectory from the TUM VI dataset [181] in 6m\(\times\)6m\(\times\)4m cuboid environments covered with different visual textures. The textures are ordered according to their edge strength (variance of the Laplacian), with higher numbers indicating higher edge strength values. Three out of four of the textures with high edge strength, stone (R7), plants wallpaper (R8), and brick (R9) resulted in median relative error \(\leq\)5cm, compared to \(>\)20cm for all other textures. The notable exception was the speckled marble texture (R10), with a median relative error of 95cm. In this case, the fine texture was greatly affected by motion blur, resulting in less recognizable texture (low edge strength) in camera images from dynamic portions of the trajectory.
In our experiments on commercial AR platforms we have demonstrated that in terms of virtual object position error (a function of pose estimate error magnitude), some visual textures are more robust to low illuminance than others [177]. In these experiments we measured virtual object position error, the 3D Euclidean distance between where a virtual object was originally placed and where it appears after walking away approximately 7m and returning, in different environmental conditions using
Figure 3: VI-SLAM pose estimate performance for TUM VI room5 [181] with various visual textures, ordered by the edge strength of the texture. The interior of a 6m\(\times\)6m\(\times\)4m virtual room was covered in each texture, and 100 trials were conducted for each (10 at each of 10 different light levels, light levels shown in Figure 2). Performance was measured using relative error, the translational component of relative pose error, and robustness, the mean percentage of tracked frames over all trials.
our open-source app [174]. We conducted our experiments in a university lab, and tested two textures where the virtual object was placed, a checkerboard and an academic paper, at three ambient light levels (low, 50-100 lux; medium, 150-450 lux; high, 500-1000 lux), with 10 trials for each of the six settings. Figure 4 shows our results for these experiments on the Samsung Galaxy Note 10+ smartphone (ARCore v1.28). Our results illustrate how error increases at lower ambient light levels, but that the checkerboard texture was more robust to this effect than the academic paper (mean errors of 4.1cm and 12.0cm respectively at the medium light level). We observed that at the medium light level, noise in smartphone camera images has minimal effect on the checkerboard texture, but obscures the finer texture of the academic paper, making VI-SLAM-based place recognition more challenging, and resulting in greater error.
These relationships between tracking quality and visual texture motivate the proactive monitoring of texture properties; images of the environment periodically captured by IoT cameras can be transmitted to an edge server for the required image processing. Again, not only does this allow us to identify and manually address problematic regions that might cause poor tracking quality before they are encountered by AR devices, but we can automatically optimize illuminance or texture based on the current types of visual texture present (see Section 5.1.2). Another promising direction is implementing a form of adaptive SLAM by adjusting feature extraction parameters based on visual texture; for example, it may be beneficial to lower corner detection thresholds when low-contrast textures are detected.
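The sketch below illustrates one such adaptation under stated assumptions: it computes the variance of the Laplacian (the edge-strength measure used above) on frames from an ambient camera and relaxes the ORB detector's FAST threshold when the scene offers little recognizable texture. The cutoff and threshold values are illustrative assumptions, not tuned parameters from our experiments.

```python
# Sketch: monitor texture edge strength (variance of the Laplacian) from an IoT camera
# and relax the feature detector's corner threshold in low-contrast scenes.
# The numeric thresholds are illustrative assumptions.
import cv2

def edge_strength(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def make_orb_for_scene(frame_bgr, low_texture_cutoff=100.0):
    strength = edge_strength(frame_bgr)
    # Lower the FAST threshold when little texture is visible, at the cost of
    # admitting weaker (and potentially noisier) corners.
    fast_threshold = 7 if strength < low_texture_cutoff else 20
    return cv2.ORB_create(nfeatures=1500, fastThreshold=fast_threshold), strength
```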
#### 4.1.3 Enhancing Spatial Understanding Using IoT Sensors
As well as measuring environment conditions to estimate current levels of spatial understanding, we can also use the data from ambient IoT sensors as input to spatial understanding algorithms directly. There are several promising possibilities in this area, which we discuss below.
**Collaborative SLAM:** The visual data obtained from cameras onboard AR devices is both spatially and temporally limited, and frequently suffers from distortions such as noise or motion blur. To help alleviate some of these issues, we can additionally employ the visual data from ambient IoT cameras. This use of multiple vantage points can be seen as a type of _collaborative SLAM_, where captures of different devices can be combined to obtain a higher-quality overall map, that leads to higher-quality pose estimation by the AR device [180; 208; 89]. To further extend the sensing capabilities using external sensors, in our planned work we will use IoT sensors (e.g., a surveillance camera) located in the vicinity of AR devices to provide extra information for pose estimation in SLAM. Specifically, we will use deep learning-based object detection and person re-identification for obtaining semantic information, that is, localizing and tracking users wearing mobile AR devices in the surveillance camera
Figure 4: Virtual object position error on the Samsung Galaxy Note 10+ smartphone after walking away approximately 7m and returning, when the virtual object was placed on a checkerboard or academic paper texture, with a low (50–100 lux), medium (150–450 lux) or high (500–1000 lux) ambient light level. The fine texture of the academic paper is less robust to lower light levels due to it being more easily obscured by noise in the AR device camera image [177].
frames. We will fuse the semantic information obtained from the surveillance camera with VI-SLAM running onboard AR devices to achieve more robust and accurate pose estimation.
**Collaborative depth mapping:** In addition to VI-SLAM, modern AR devices are increasingly relying on time-of-flight _depth sensors_ to aid with spatial understanding, and sensing from multiple vantage points can also be advantageous for obtaining accurate depth maps. Time-of-flight sensors struggle to obtain reliable data when the observed scene contains materials with low reflectivity, strongly specular objects, and reflections from multiple objects [73; 195]. Not only that, but indirect time-of-flight depth sensors such as those on the Microsoft HoloLens 2, Magic Leap 2 and some high-end Android smartphones are limited in range, while the direct time-of-flight depth sensors (marketed as LiDAR) on high-end iOS devices produce sparse readings that may be inaccurate at shorter distances. This leads to depth maps that are missing valid estimates in large parts of a frame, and incomplete spatial understanding.
In our evaluation of depth estimates obtained by a Microsoft HoloLens 2 (in the long throw mode) across a range of representative indoor environments, we found that on average 30% of depth pixels in a frame were missing [220]. We also collected an indoor dataset of 18.6K depth maps on a Samsung Galaxy Note 10+ smartphone, of which 58% had greater than 40% missing pixels [220]. Based on the properties of these missing pixels, we determined that the range of indirect time-of-flight depth sensors on current AR devices is approximately 5m. We also observed that smaller angles (60\({}^{\circ}\) or less) between the sensor's optical axis and the target surface resulted in large numbers of missing pixels. These problematic conditions are prevalent in AR scenarios - in larger rooms a distance of greater than 5m between an AR device and the nearest surface is common, while a depth sensor naturally faces outward toward a wall (due to how a smartphone is usually held or a headset is worn), but target surfaces in AR are often horizontal planes such as a table.
Ambient IoT depth sensors can help to increase the completeness of raw depth maps by capturing data from poses that are not accessible to or normally covered by AR devices. To address limitations in sensor range, ambient depth sensors can be positioned closer to surfaces that are only observed from large distances by AR devices. Horizontal planes, frequently incomplete due to their similar angle of orientation to the optical axis of AR device sensors, can be captured by downward-facing IoT depth sensors. Even some challenging reflections may be avoided from different viewpoints. We envision this collaborative sensing approach being combined with existing techniques for depth map completion such as [126; 160; 219; 220], with the more complete depth data from multiple sensors combined on the edge server for a less challenging depth inpainting task.
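The sketch below outlines the geometric core of such a fusion step: pixels from a calibrated ambient depth sensor are back-projected, transformed into the AR camera frame, and used to fill pixels the AR device's sensor missed. It assumes known intrinsics and extrinsics from a prior calibration and uses a simple nearest-pixel fill rather than the learned inpainting methods cited above.

```python
# Sketch: fill missing pixels (encoded as 0) in an AR device depth map using a calibrated
# ambient depth sensor. Assumes known intrinsics (K_iot, K_ar) and the rigid transform
# T_ar_from_iot (4x4) from prior extrinsic calibration.
import numpy as np

def fuse_depth(ar_depth, iot_depth, K_iot, K_ar, T_ar_from_iot):
    fused = ar_depth.copy()
    h, w = iot_depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = iot_depth.reshape(-1)
    valid = z > 0
    # Back-project the ambient sensor's valid pixels to 3D points in its own frame.
    pix = np.stack([u.reshape(-1), v.reshape(-1), np.ones(h * w)], axis=0)
    pts_iot = np.linalg.inv(K_iot) @ (pix * z)
    pts_iot = np.vstack([pts_iot, np.ones(h * w)])[:, valid]
    # Transform into the AR camera frame and project into its image.
    pts_ar = (T_ar_from_iot @ pts_iot)[:3]
    front = pts_ar[2] > 1e-6                 # keep points in front of the AR camera
    proj = K_ar @ pts_ar[:, front]
    px = np.round(proj[0] / proj[2]).astype(int)
    py = np.round(proj[1] / proj[2]).astype(int)
    depths = pts_ar[2, front]
    in_img = (px >= 0) & (px < fused.shape[1]) & (py >= 0) & (py < fused.shape[0])
    for x, y, d in zip(px[in_img], py[in_img], depths[in_img]):
        if fused[y, x] == 0:                 # only fill pixels the AR sensor missed
            fused[y, x] = d
    return fused
```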
**Scene change detection:** Ambient IoT cameras can also be used to establish whether the environment has changed between different AR sessions, to trigger SLAM remapping as required to improve the quality of spatial scene understanding (and conversely to avoid unnecessary time- and resource-consuming remapping if the environment did not change). Scene change detection based on stationary cameras' inputs is a long-examined, well-formulated problem, for which many solutions have been proposed [202]. Extending existing solutions to incorporate the specific constraints of heterogeneous multi-device platforms we envision (IoT and AR, stationary and mobile, devices), for the specific case of scene change detection in the context of VI-SLAM, has the potential to significantly reduce the extent of mapping that would be required to achieve high-quality, spatially-aware AR experiences.
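A minimal sketch of such a trigger is shown below: background subtraction on a stationary IoT camera's frames yields the fraction of changed pixels, and a persistent change above a threshold requests remapping. The changed-area fraction and persistence window are assumed values, and production systems would use the more robust change-detection methods surveyed in [202].

```python
# Sketch: request SLAM remapping when a stationary IoT camera observes a persistent
# scene change. The changed-area fraction and persistence window are assumed values.
import cv2
import numpy as np

class SceneChangeMonitor:
    def __init__(self, change_fraction=0.15, persist_frames=30):
        self.subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
        self.change_fraction = change_fraction
        self.persist_frames = persist_frames
        self.changed_streak = 0

    def update(self, frame_bgr):
        """Return True when remapping should be requested."""
        fg_mask = self.subtractor.apply(frame_bgr)
        changed = np.count_nonzero(fg_mask) / fg_mask.size
        self.changed_streak = self.changed_streak + 1 if changed > self.change_fraction else 0
        return self.changed_streak >= self.persist_frames
```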
### 4.2 Semantic Understanding
A key element of next-generation AR is the use of semantic algorithms that detect the type, position, orientation, and even the current state of objects and surfaces within an environment. Not only can they be combined with SLAM to extend or enhance spatial understanding (e.g., [215]), but they also enable the delivery of various types of content to the user. Firstly, directly annotating a visual display with semantic information has a wide variety of applications, from language learning [81] to firefighting [23]. Secondly, this knowledge can be used to inform user interactions, e.g., suggesting appropriate places for the user to position virtual content, or objects that can be interacted with in a specific application. Finally, semantic understanding also enables the provision of more intelligent content, such as avatars or virtual characters that interact naturally and autonomously with real-world objects. Here we focus on the topic of object detection to illustrate the role of ambient IoT sensors, however the techniques we describe may also be applied to other types of semantic understanding, such as semantic segmentation.
#### 4.2.1 Background on Object Detection in AR
By running object detection models on images captured by AR devices, we can detect the type, pose, and extents of common objects that are present in real world environments, which provides us with a more in-depth understanding of environmental context and informs the rendering of virtual content. Although current advancements in deep neural networks (DNNs) have shown superior performance in object detection [12, 105, 99, 102], executing large networks on computation-constrained devices such as AR devices and IoT sensors with low latency remains a challenge. To address this, edge-supported architectures are needed to offload computation from the AR devices and IoT sensors and improve the end-to-end latency [201, 70, 216, 205]. As the pervasive deployment of mobile AR will offer numerous opportunities for multi-user collaboration, prior works have also studied object detection that exploits the visual information captured by different AR devices [217, 37]. Nevertheless, collaborative object detection for AR devices and IoT sensors, where visual information is captured from highly distinct vantage points, is an unexplored area of research. The depth information that is obtained by specialized AR headsets and high-end smartphones equipped with time-of-flight depth sensors may also be employed to aid in object detection [147], while recent work has investigated the use of point clouds generated on these types of AR devices as input to object detection models [74, 37].
Another approach is to detect and track objects in the environment by matching current input data - either 2D feature points in captured images, or the 3D data in a generated point cloud - with a predefined reference. In the case of images, the marker detection techniques we described in Section 4.1.1 may be used by attaching a printed marker to a real-world object. Alternatively, feature points can be matched across input and reference images [102, 45]. By comparing the features extracted on reference images (i.e., images of objects that need to be detected or tracked) to the features detected on input images (i.e., images of AR scenes where the desired objects are present) using descriptors such as ORB, SIFT, or SURF [122], the pose estimation of a desired object can be found by computing the homography matrix. Efforts to enhance this markerless registration through improved feature matching include [103, 98, 32]. Similarly for point clouds, input data is matched with the 3D points captured in a prior scan of the object, through point cloud registration [79]. This approach is used to enable the detection of previously scanned objects on ARKit [15]. In general, feature matching approaches require fewer computational resources when compared to neural-network-based object detection, but are less robust to environmental factors such as lighting, image distortion, the distance of the AR device to an object, and background textures.
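The sketch below illustrates this feature-matching pipeline with ORB descriptors, a ratio test, and RANSAC homography estimation; the ratio and inlier thresholds are illustrative assumptions.

```python
# Sketch of feature-matching-based object detection: match ORB descriptors between a
# reference image of the object and the current camera frame, then estimate a
# homography with RANSAC. The ratio and inlier thresholds are assumptions.
import cv2
import numpy as np

def locate_object(reference_bgr, frame_bgr, min_inliers=15):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_ref, des_ref = orb.detectAndCompute(cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2GRAY), None)
    kp_frm, des_frm = orb.detectAndCompute(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY), None)
    if des_ref is None or des_frm is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des_ref, des_frm, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])             # Lowe's ratio test
    if len(good) < min_inliers:
        return None
    src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None or int(inlier_mask.sum()) < min_inliers:
        return None
    return H                                 # maps reference-image coordinates into the frame
```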
#### 4.2.2 Enhancing Object Detection Using IoT Sensors
Current techniques for semantic understanding, including both object detection and semantic segmentation, rely on inputs of 2D images [162, 112], image and depth data [147], or 3D point clouds [74]. When these data are collected from sensors onboard AR devices, they are frequently subject to distortions due to device motion, occlusions, and resolution limitations (i.e., targets must be observed at an appropriate distance to capture informative data), resulting in incomplete and incorrect knowledge of the environment. Thankfully, just as for spatial understanding, ambient IoT sensors such as cameras and depth sensors can help address these issues by capturing data from additional and more favorable vantage points (e.g., a stationary downward-facing camera). Given that many semantic algorithms are computationally expensive and often best offloaded from AR devices to an edge server, data from IoT sensors can also be transmitted to the edge for processing.
One way in which the data sourced from IoT sensors can be used is to combine them with the data from AR devices, to achieve more robust detection. For example, in [114] we developed CollabAR, a collaborative image recognition system in which the camera image from an AR device is combined with spatially and temporally correlated images stored on the edge. The architecture for this system is shown in Figure 5. Initial inference results based on the device image are provided by the distortion-tolerant image recognizer, then aggregated with the inference results from spatially and temporally correlated images by our auxiliary-assisted multi-view ensembler module, which outputs the final object detection result. This enables CollabAR to achieve over 96% recognition accuracy for images with severe distortions, with an end-to-end system latency as low as 17.8ms. With ambient IoT cameras, we can quickly provision a large set of high-quality images for the correlated image lookup step, without having to rely on data from AR devices. In general IoT sensors can supplement the data obtained by AR devices to achieve more robust object detection, by providing images or point clouds of the same environment region that are of higher quality, or that contain a more complete view of an object.
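The sketch below shows the basic idea of such score-level aggregation; it is not the CollabAR implementation, just a minimal weighted average of per-class scores across views, with the quality weights (e.g., one minus an estimated distortion severity) supplied by the caller.

```python
# Minimal multi-view ensembling sketch (not the CollabAR implementation): combine
# per-class scores from the AR device image and correlated IoT camera images,
# weighting each view by a quality score in [0, 1].
import numpy as np

def ensemble_predictions(view_scores, view_quality):
    """view_scores: list of per-class score vectors, one per view;
       view_quality: matching list of quality weights."""
    scores = np.asarray(view_scores, dtype=float)            # (n_views, n_classes)
    weights = np.asarray(view_quality, dtype=float).reshape(-1, 1)
    combined = (weights * scores).sum(axis=0) / max(float(weights.sum()), 1e-9)
    return int(np.argmax(combined)), combined
```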
Alternatively, the data sourced from IoT sensors may be used to extend the perceptual field of the AR device, to detect objects or surfaces which are outside the field of view or range of device sensors, or objects or surfaces which are entirely occluded. This gives rise to exciting possibilities regarding the extension of the human perceptual field to the entire environment in which an AR user is located. Not only can these IoT sensor data include data from cameras and depth sensors, but one can also incorporate other modalities, towards more reliable recognition that is robust to conditions in which vision-based sensors may perform poorly. For example, passive infrared motion sensors which detect heat can detect human or animal presence in dark environments, while recent works have demonstrated tactile-olfactory-based detection of humans [113] and chemical sensing
of illegal drugs [57] and explosives [106], which could all be incorporated into ambient IoT sensors. IoT-based monitoring of ambient acoustic signals [141] could be used to improve acoustic-based context detection for audio sources that are located far from the microphones of the AR devices. Central to realizing this vision of semantic understanding through an ambient multimodal sensor network will be solving challenges related to device localization and calibration, variable signal quality, and combining multimodal signals, which we discuss further in Section 6.3.
#### 4.2.3 Enhancing Object State Awareness Using IoT Sensors
In addition to established types of semantic understanding - i.e., a knowledge of the type and position of objects present in an environment - we propose that a knowledge of the current _state_ of objects present can also enhance AR applications. This is particularly useful for AR applications that directly interact with physical objects in the surroundings, e.g., overlaying holograms related to an object's position, orientation, or other properties. These types of AR applications can be enhanced by the understanding of object states, and reflecting changes in real time. While information about some properties (e.g., pose) can be gathered through the processing of images captured by an AR device, IoT sensors incorporated into objects can provide greater robustness to environment properties, as well as data on other types of object states (e.g., strain). Below, we cover several properties of objects that can be obtained through ambient IoT sensors, and discuss potential use cases in AR applications.
**Pose:** While information about the position and orientation of objects in the surrounding environment can be obtained through marker detection (see Section 4.1.1) or object detection (see Section 4.2.1), these vision-based approaches are often dependent on environmental factors (e.g., lighting conditions or image resolution). Incorporating ambient inertial sensors (e.g., the accelerometer and gyroscope in an IMU) into objects on the other hand provides position and orientation estimates for an object without this dependency on environmental factors. Use cases for integrating inertial sensors into AR for understanding object pose can be seen across various applications that use wearable devices; examples include tracking the orientation of a glove [132], estimating a user's location and orientation through IMUs in earphones [210], detecting head movements for face-related gestures through smart glasses [123], and enabling sensing through haptic devices [187, 150].
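As a minimal sketch of how object orientation can be tracked from an embedded IMU, the snippet below fuses gyroscope integration with the accelerometer's gravity direction using a complementary filter; the filter gain is an illustrative assumption, and practical systems often use more sophisticated estimators (e.g., Kalman or Madgwick filters).

```python
# Sketch: track the tilt (roll/pitch) of an object-mounted IMU with a complementary
# filter fusing gyroscope integration and the accelerometer's gravity direction.
# The filter gain alpha is an illustrative assumption.
import math

def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
    """gyro: (gx, gy, gz) in rad/s; accel: (ax, ay, az) in m/s^2; dt in seconds."""
    gx, gy, _gz = gyro
    ax, ay, az = accel
    # Integrate the angular rate, then correct drift using the gravity direction.
    roll_gyro = roll + gx * dt
    pitch_gyro = pitch + gy * dt
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = alpha * roll_gyro + (1.0 - alpha) * roll_acc
    pitch = alpha * pitch_gyro + (1.0 - alpha) * pitch_acc
    return roll, pitch
```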
**Strain:** Strain of an object refers to its deformation due to stress, and can be measured by strain gauges. Strain gauges change the electrical resistance based on the magnitude of the deformation, thus providing knowledge about the deformation
Figure 5: An example of an edge computing-based architecture we have developed for semantic understanding in AR, CollabAR [114], which provides distortion-tolerant, collaborative image recognition. As an example of combining data from AR devices and IoT sensors to achieve better robustness, we can use high-quality data from IoT cameras to provide the existing viewpoints used in the auxiliary-assisted multi-view ensembler step.
of an object such as the bending or external pressure applied. In our recent work, we have been examining this property to enhance the image registration of a catheter hologram in AR-assisted neurosurgery, as shown in Figure 6. We have developed an AR-assisted guidance system for neurosurgery by tracking the position and orientation of the catheter and overlaying a catheter hologram to guide surgeons in targeting the brain ventricle [46, 47]. However, due to the deformable shape of the catheter, there were often misalignments between the hologram and the catheter object. Use of fiber Bragg grating (FBG) sensors has been proposed in multiple lines of work for shape detection in medical applications [182, 183, 108], however the integration of FBG sensors into an AR system is challenging due to its high cost and requirement of an additional measurement device. To address this challenge, we are currently experimenting with strain gauges as low-cost IoT sensors that can be used to estimate the deformed shape of the catheter. The strain data collected from the catheter are sent to the HoloLens 2 from an edge server to display and align the correct shape of the catheter hologram onto the object. Figure 6 shows the enhancement of catheter hologram alignment by detecting the bending of the catheter using strain gauges. We believe that this will reduce the misguidance that can occur from the misalignment of the catheter hologram in our AR-assisted neurosurgical guidance system, and further enhance the accuracy of catheter placement during the external ventricular drain procedure.
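To convey the idea of shape reconstruction from strain data, the sketch below maps per-gauge readings to local bend angles with a hypothetical linear calibration and integrates them into a 2D centerline. It is purely illustrative and is not the calibration or registration pipeline of our clinical system.

```python
# Illustrative sketch (not our clinical pipeline): map strain-gauge readings along a
# flexible instrument to local bend angles via a hypothetical linear calibration,
# then integrate them into a 2D centerline for hologram alignment.
import numpy as np

GAIN_DEG_PER_MICROSTRAIN = 0.02    # hypothetical per-gauge calibration constant
SEGMENT_LEN = 0.01                 # spacing between gauges in metres (assumed)

def centerline_from_strain(microstrain_readings):
    angles = np.cumsum(np.radians(GAIN_DEG_PER_MICROSTRAIN * np.asarray(microstrain_readings)))
    xs = np.concatenate([[0.0], np.cumsum(SEGMENT_LEN * np.cos(angles))])
    ys = np.concatenate([[0.0], np.cumsum(SEGMENT_LEN * np.sin(angles))])
    return np.stack([xs, ys], axis=1)      # (n_gauges + 1, 2) points along the instrument
```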
**Other properties:** Beyond pose and strain, ambient IoT sensors could also be employed to detect other 'non-spatial' aspects of object state. This goes beyond simply visualizing the data from existing IoT sensors, to enhancing semantic understanding for AR in new ways. For example, one AR use case is cleaning applications, which inform users if an area requires attention, even if that area does not appear dirty to the human eye. Nanomechanical and electrochemical sensors have been developed which detect pathogens [29, 157], and if integrated into IoT devices, these sensors could provide information about whether objects or surfaces in an environment are contaminated. Similarly, recent works [143, 91, 135] have shown that food spoilage can be detected using a variety of different devices, from gas, humidity, and temperature sensors to more novel designs using nanomaterials. Deployed in distribution, retail, or culinary environments, these sensors could increase the speed and accuracy of AR-assisted food inventory management, by quickly indicating to workers which items are unsafe for consumption. Extending the perceptive capabilities of users to a wider range of object properties in this manner will provide opportunities to improve both productivity and safety in many industries, and is likely to be a key motivation for the wider adoption of AR.
### Contextualized Content
In the previous two subsections, we explored how ambient IoT sensors may be used to support or enhance two core aspects of next-generation AR, spatial and semantic understanding. We now consider how the virtual content that is presented to the user may be adapted according to the data obtained from IoT sensors. We start by covering _adaptive user interfaces_, the adjustment of virtual content to improve visibility and intelligibility for the user. We then cover the established field of _photometric registration_ in AR, the matching of virtual content lighting to real environmental conditions, before extending this to _environment-aware intelligent virtual content_, the provision of virtual avatars or characters which are aware of and respond to a wide range of environmental conditions.
Figure 6: The registration of a catheter hologram in an AR-assisted external ventricular drain (EVD) procedure can benefit from the use of IoT sensors for understanding of object state. We used strain gauges to detect the degree of bending on the catheter to align the catheter hologram in AR.
#### 4.3.1 Adaptive User Interfaces
The development of adaptive user interfaces, the properties of which are adjusted according to the context they are presented in and the needs of the user, is a long-standing field of research in human-computer interaction (for a recent review see [133]). While to a large extent the literature on traditional 2D interfaces has focused on contextual information related to the user (including social and cultural context) or system capabilities, the impact of the diverse and dynamic real environments which will host AR on both human perception of virtual content and system functionality means that environment-aware user interfaces are a vital consideration [110, 54, 117]. Below we examine different properties which will inform environment-adaptive user interfaces in AR, and how this will be enabled by ambient IoT sensors.
**Spatial and semantic understanding:** A number of works on environment-adaptive AR have considered adaptations based on spatial or semantic properties. For example, in [56] the authors developed a rule-based framework that enables designers to fit their application components to environments with different geometries, in this case, sourced from the RGB and depth streams of a Microsoft Kinect camera. Similarly, [145] presented a prototype system that allows users to 'snap' virtual content to planar surfaces and edges, with environment data again extracted using RGB and depth images from the Kinect. More recently, the work of Lindlbauer et al. [110] considers adapting AR user interfaces based on a combination of environment geometry (checking whether a virtual object will be occluded using depth data), user task (which may in part be determined by semantic understanding), and user cognitive load.
Regarding semantic understanding specifically, we may wish to place user interface elements near real-world objects that are semantically related, in order to either anticipate a user's current needs, or to place AR-based tasks in context for a smoother experience, as proposed in [34]. For example, access to AR-guided recipes may appear above the stove, or an in-progress packing list might appear next to a suitcase. Alternatively, we may wish to block the view of real objects with virtual objects - for instance, in the context of _just-in-time adaptive interventions_ (JITAI) [139] it may support a user's personal development and change to cover a cigarette packet or mobile phone with another object such as a plant. A mock-up of this on the Magic Leap One headset is shown in Figure 7, and we discuss this topic further in [175]. Incorporating ambient IoT cameras and depth sensors will enable sufficient levels of spatial and semantic understanding to realize these visions reliably in practical scenarios; accurate information on the contents of an environment, along with data gathered on how users interact with that environment, will enable the provision of optimally positioned virtual content for AR users.
**Light, texture, and color:** However, the physical attributes of an environment are not the only factors which should inform the positioning and properties of AR user interface elements - the light, texture, and colors present also affect visibility and intelligibility. For example, on the OST displays employed on a number of state-of-the-art AR headsets, visibility of virtual content (and as a result usability, user comfort, and sense of presence) is lower at higher levels of
Figure 7: A Magic Leap-based mock-up of an environment-adaptive user interface, which uses semantic understanding of the environment to identify a distracting item (a phone), then covers it with a virtual plant. To further motivate the user to study, a motivational hologram (a diploma) is presented [175].
ambient light commonly encountered in indoor environments [49; 86; 96]. Not only does the additive nature of these displays mean that any virtual content added is less perceptible, but real-world background textures also show through dark, transparent regions of virtual content more prominently. Therefore, we may wish to detect when high ambient light levels or highly textured backgrounds are present and increase the brightness (pixel intensity) of virtual content, or adjust the amount of ambient light that is allowed through the headset lenses, to improve content visibility. While state-of-the-art headsets support automatic adjustment of display brightness based on ambient light settings, and the Magic Leap 2 allows users to reduce the amount of ambient light that passes through all or parts of the display [119], the ultimate goal here is automatic, fine-grained adjustment based on both the nature of virtual content and the presence of background textures.
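The sketch below illustrates the kind of fine-grained adjustment described above: a simple linear mapping from the ambient illuminance reported by an IoT light sensor to a pixel-intensity gain applied to virtual content. The lux thresholds and gain range are illustrative assumptions, not recommended values.

```python
def content_brightness_gain(ambient_lux, lux_low=100.0, lux_high=1000.0,
                            min_gain=0.6, max_gain=1.0):
    """Pixel-intensity gain for virtual content, increasing with ambient light."""
    if ambient_lux <= lux_low:
        return min_gain
    if ambient_lux >= lux_high:
        return max_gain
    t = (ambient_lux - lux_low) / (lux_high - lux_low)
    return min_gain + t * (max_gain - min_gain)

print(content_brightness_gain(450))  # mid-range office lighting -> intermediate gain
```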
Furthermore, existing literature has established the effect of contrast ratio and color combinations on content legibility and aesthetics for both traditional (e.g., [107]) and AR displays (e.g., [213]). Given that virtual content may be presented in front of a wide variety of real-world backgrounds in AR, content that is automatically optimized for the current background color and texture is naturally of interest, and was proposed in [54]. On OST displays, blending effects occur in that the perceived color of virtual content is affected by the color of the real-world background [53; 55]; the automatic selection of virtual content colors that will either be minimally affected, or affected towards a desired result by blending with the current background, is an important direction for future work.
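A simplified version of this additive blending model, and of background-aware color selection, is sketched below: the perceived color is approximated as the clipped sum of the rendered color and the background color, and the candidate UI color whose perceived appearance deviates least from its intended appearance is selected. The plain RGB distance metric is an assumption made for brevity; a perceptual color space (e.g., CIELAB) would be preferable in practice.

```python
import numpy as np

def perceived_color(virtual_rgb, background_rgb):
    """Approximate additive blending on an OST display, clipped to [0, 255]."""
    blended = np.asarray(virtual_rgb, float) + np.asarray(background_rgb, float)
    return np.clip(blended, 0, 255)

def least_affected_color(candidates, background_rgb):
    """Candidate whose perceived color deviates least from its intended color."""
    def shift(c):
        return np.linalg.norm(perceived_color(c, background_rgb) - np.asarray(c, float))
    return min(candidates, key=shift)

candidates = [(0, 120, 255), (200, 30, 30), (30, 200, 60)]
print(least_affected_color(candidates, background_rgb=(180, 160, 120)))
```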
While light levels, background textures, and colors can be detected through the ambient light sensors or cameras on an AR device, this requires user interfaces (or other content) to be optimized in real-time. This is feasible for the types of backgrounds and content that have been studied so far in AR (the vast majority of existing works consider text readability against backgrounds with a single color or pattern, e.g., [40; 54]), but for more complex cases it will likely be advantageous to prepare and provision optimized content in advance. Especially when we consider the complex ways in which light, texture, and color interact to determine visibility, and the potential for combining this information with spatial and semantic understanding of an environment, the use of ambient IoT sensors becomes a necessary addition. To this end, we envision the installation of IoT cameras that capture the texture and color of the real-world surfaces which frequently appear behind virtual content from the user's perspective, with camera poses informed by both analyses of user trajectories and the positions of virtual content. Automatic content optimization, informed by existing work on deep learning for adaptive user interfaces (e.g., [188]) can then be performed on the edge, and the result provisioned to an AR user when they enter an environment.
**Spatial and semantic algorithm performance:** Less considered in the literature is how user interfaces might be adapted based on the current performance of algorithms for spatial and semantic understanding, which is essential to ensure users do not rely on incorrect virtual content. Several pioneering works examined how virtual content may be adjusted in the presence of registration (tracking) error; for example, MacIntyre et al. introduced and applied a _level-of-error_ rendering technique [116; 117], in which a virtual convex hull outlining a real object was expanded according to estimated registration error. This was extended in [38] to show 2D convex hulls representing possible registration error of virtual objects. In [72], Hallaway et al. demonstrated switching between spatially registered object labels and unregistered augmentations depending on the level of tracking accuracy currently available (this AR system employed a hybrid tracking solution incorporating both ultrasonic and GPS-based tracking, rather than VI-SLAM). Since then, other works have proposed and evaluated visualization techniques to mitigate the effect of registration error [165], as well as alternative ways of visualizing error in navigation tasks (e.g., colored arrows, virtual character expressions) [149]. We are building upon this line of research by exploring new methods of displaying registration error, such as 3D convex hulls around virtual objects.
In particular, our ongoing work examines how to establish and convey the relationship between environmental conditions and tracking or registration error - as we covered in Sections 4.1 and 4.2, properties such as light and visual texture can impact the performance of the underlying algorithms. Beyond visualizing an estimate of current error, this will enable users and environment designers to take steps to reduce error by altering their environment, before the main AR experience commences. While commercial AR platforms such as ARKit [13] and ARCore [65] indicate when tracking results are unavailable or questionable, along with possible high-level causes, these causes lack granularity in terms of environmental conditions. To address this we developed an interpretable predictive model (a decision tree) for binary classification of tracking performance in [178], along with an example of how environment ratings might be displayed (Figure 8a). We have since extended the design of this user interface to use the model output to provide the user with extra guidance, as shown in Figure 8b. In these cases the input data for the tracking quality prediction were obtained by the AR device alone, but we envision them being sourced from ambient IoT sensors in the future. We are now developing prediction models which provide more fine-grained estimates of error magnitude, as well as methods to visualize the error estimates associated with different environment regions.
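For readers unfamiliar with this type of interpretable model, the sketch below trains a small decision tree to classify whether current conditions are likely to support good tracking. The feature set and training data are placeholders invented for illustration; they are not the features or data used in [178].

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder features: [illuminance (lux), FAST corner count, motion blur score]
X = [[50, 80, 0.7], [300, 400, 0.2], [700, 650, 0.1], [30, 30, 0.8],
     [450, 500, 0.3], [120, 90, 0.6], [800, 700, 0.2], [60, 150, 0.5]]
y = [0, 1, 1, 0, 1, 0, 1, 0]  # 0 = poor tracking expected, 1 = good

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["lux", "corners", "blur"]))
print(clf.predict([[200, 350, 0.25]]))  # classify a new environment reading
```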
#### 4.3.2 Photometric Registration
Currently, the most established (and well-supported) type of environment-based virtual content adaptation in AR is photometric registration - rendering virtual content such that it is consistent with the lighting in the real environment. Lighting properties of interest include illuminance, light direction, and color temperature, and can be used to more accurately render reflections, specular highlights (the bright regions of a surface that reflect a light source directly), and directional shadows, which in turn increases the level of realism, a user's sense that a virtual object is really present in an environment. Photometric registration for AR has been an active research topic for more than two decades (e.g., [68; 88]), enabling its recent support in commercial AR platforms such as ARCore and ARKit. State-of-the-art methods in commercial AR (e.g., [66]) use machine learning to analyze camera images and synthesize environment lighting, similar to recent work on using CNNs for this task (e.g., [121]). Other recent work includes the development of an edge-assisted framework to support real-time light estimation [223], and the use of physical, geometrically-tracked reflective light probes that are placed in the environment [155].
We envisage ambient IoT cameras and light sensors being employed to enhance photometric registration for two main reasons. Firstly, IoT sensors are able to provide views of the environment not available to AR devices due to occlusions or a limited field of view. This will enable a more complete understanding of lighting within an environment, such as the exact position of light sources which are not visible from the perspective of the AR device. Secondly, unlike AR devices, IoT devices are able to obtain relevant information on environment lighting prior to an AR experience. This means that all the computation related to light estimation does not have to be done in real-time on the AR device; instead, it can be precomputed (and continuously updated) on the edge, both saving resources on the AR device and allowing for more complex light estimation methods to be implemented. Indeed, an example of this type of solution was developed in [167], which the authors termed 'distributed illumination', to address the lack of computing power in mobile devices at that time (2014). In this implementation high dynamic range cameras were connected to a stationary PC via a wired connection; we propose extending this to wireless connections using the IoT sensors now available, as well as incorporating more advanced light rendering techniques such as indirect illumination (reflections from the virtual object onto the environment) and the use of spherical harmonics (e.g., [222]).
#### 4.3.3 Environment-Aware Intelligent Virtual Content
Environment awareness in next-generation AR should also extend to the provision of intelligent virtual humans, animals, or other characters that respond naturally to environmental conditions, including the presence and state of real humans or objects, as well as properties such as light and temperature. This has the potential to make virtual content appear even more realistic to users, and further increase a user's sense of presence or immersion. Indeed, recent work has shown that virtual humans whose gaze is directed towards real physical objects supported more effective communication with real participants [8], and
Figure 8: Two of our prototype adaptive user interfaces for communicating the current level of spatial understanding (device tracking performance) in an environment when placing a virtual object (hologram). Notification of tracking performance classification (a) can be extended to include user guidance on how to improve unsuitable environments (b).
that a virtual human that is able to influence physical objects (e.g., turning off a light) may be seen as more trustworthy by users, and can result in users perceiving a greater degree of social presence [93]. In [193] the authors looked to develop a more sophisticated understanding of semantic context to inform this type of intelligent virtual content, with semantic information first extracted from RGB and depth images, then represented as a 3D scene graph.
While less explored, a virtual character's apparent awareness of and ability to respond to environmental properties such as light, noise, wind, or temperature also holds great potential for increasing levels of realism and immersion. For example, one can imagine a virtual cat that basks in a patch of sunshine, a virtual human that turns its head in the direction of a slammed door, or a virtual character that warms itself next to a heater. In [94] the authors demonstrated the use of IoT wind sensors for AR, with virtual papers that fluttered in response to the airflow generated by a real fan, along with a virtual human that put their hand out to stop the fluttering. IoT sensors will be particularly useful in cases such as this, when detecting or localizing these types of contextual information is beyond the capabilities of the sensors onboard an AR device. Contextual data from different parts of an environment may also be required in advance in order to provision specific animations associated with responsive or intelligent content (e.g., the virtual cat rolling around in a patch of sunshine).
### Interaction
Currently, the primary methods of interaction for user input in AR are tactile, via a touchscreen on the device (e.g., a smartphone), a controller (e.g., the Magic Leap 2), or mid-air gestures via hand tracking (e.g., the Microsoft HoloLens 2, the Varjo XR-3). Some devices also facilitate gaze-based interactions, supported via video-oculography-based eye tracking, or auditory interactions via speech recognition. One important property of all these methods is that their performance can be affected by noise sources in the surrounding environment; for example, the presence of high intensity light sources or high illuminance in general can cause issues for infrared sensor-based depth sensing [195], resulting in lower quality eye tracking [172; 177] and hand tracking [71]. 3D gaze point estimation may also be inaccurate at low illuminance due to lower quality spatial mapping [177; 71], while acoustic noise is known to be problematic for speech recognition [221]. Another limitation of tactile and mid-air interaction methods at present is that they only capture gestures made using the hands, and in the case of mid-air gestures captured via hand tracking, gestures are only recognized when the hands are in the field of view of the head-mounted depth sensor. If we compare this to natural human interactions, this is severely restricted; humans frequently express themselves with hand gestures outside this region, as well as gestures with the head, upper and lower body, and facial expressions.
There are two ways in which ambient IoT sensors could be used to manage or address the reduced performance of AR interaction methods due to environmental conditions. One option is to use ambient light sensors or microphones to predict the current level of performance for each interaction method; the most appropriate method could then be suggested to the user, or virtual content could even be adapted to address current limitations (e.g., elements might be increased in size to support gaze-based interactions when gaze estimation accuracy is lower). Another option particularly applicable to speech-based interaction is to use ambient IoT sensors to inform noise cancellation techniques. Such a method was developed in [185], the core idea being that an IoT microphone captures ambient sounds and forwards them to the speaker over its wireless radio, with the information arriving faster than if the sound was captured next to the speaker. This solution is readily applicable to AR audio systems, especially if IoT microphones are already placed in the environment for other purposes.
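The first option above could be realized with logic along the lines of the following sketch, which ranks interaction methods by their expected reliability given ambient sensor readings. The thresholds are illustrative assumptions, not empirically derived values.

```python
def rank_interaction_methods(illuminance_lux, noise_db):
    """Rank AR input methods by a rough suitability score under current conditions."""
    scores = {
        "hand_tracking": 1.0 if 100 <= illuminance_lux <= 1500 else 0.4,
        "eye_gaze": 1.0 if 150 <= illuminance_lux <= 1000 else 0.5,
        "speech": 1.0 if noise_db < 60 else 0.3,
        "controller": 0.9,  # largely unaffected by light and acoustic noise
    }
    return sorted(scores, key=scores.get, reverse=True)

# Dim light and loud noise penalize the sensor-dependent methods.
print(rank_interaction_methods(illuminance_lux=80, noise_db=72))
```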
There are also a number of possibilities for extending the interaction methods available in AR using IoT sensors. For example, cameras and depth sensors have the potential to greatly expand the range of gestures AR users can use for interaction by capturing a view of a user's entire body. Building upon existing work in human-robot interaction (for a review see [111]) and human activity recognition (for a review see [39]), we can develop solutions to recognize natural human gestures as well as relevant activities in AR scenarios. Alternatively, _tangible user interfaces_ allow users to manipulate virtual content using a physical proxy, tracked using optical or inertial sensors - for example in [44] a Vive Tracker [77] equipped with infrared and inertial sensors was placed inside a physical sphere used for interaction. One could also leverage IoT tactile sensors present in the environment, such as touchscreens, physical buttons, or pressure sensors; a recent example of the latter, particularly relevant to robotic and biomedical applications, was developed in [36], and is able to detect contact pressure, contact shape, and shear direction. Finally, higher-quality signals available from ambient sensors will also be useful in some circumstances; for example, although the microphones embedded in AR devices are of sufficient quality for speech-based interactions, we may require microphones with a better dynamic range and frequency response for AR applications that present feedback on musical performances to support learning.
## 5 IoT-based Actuation for AR
Next, we detail how ambient IoT actuators may be used to support or enhance AR experiences. As in Section 4, we examine the different uses which we defined in Section 3.1, though given that the role of IoT actuators for AR is less studied than IoT sensors, this section is more exploratory in nature. In this section we combine related uses into one of two subsections; the first covers how IoT actuators can enhance the performance of spatial and semantic understanding algorithms by optimizing environmental conditions, and the second covers how IoT actuators can be used to extend interaction methods or enhance immersion.
### Spatial and Semantic Understanding
Optimizing environment properties in order to achieve higher quality spatial understanding is an important direction for both marker-based and markerless AR, that will enable both more accurate spatial registration of virtual content and more stable virtual content. Here we cover the use of IoT actuators for marker-based AR in our dynamic marker system (Section 5.1.1), and for markerless AR in our environment illuminance optimization system (Section 5.1.2). One alternative to these spatial understanding methods was recently introduced in [4], and involves the modulation of ambient light sources in a predefined manner; this could be achieved using ambient IoT actuators such as light bulbs, however we focus on established techniques for spatial understanding here. Finally, we also cover the possible use of IoT actuators to optimize environments for semantic understanding (Section 5.1.3).
#### 5.1.1 Environment Optimization for Marker-based AR
As introduced in Section 4.1.1, marker detection is a common technique used for object detection, with the position and orientation of an object obtained through the detection of a printed marker attached to the physical object. To address the challenges of environmental factors (e.g., environment lighting, the distance and angle between an AR device and the marker), we developed a dynamic marker system that controls IoT actuators including an E-Ink display and a smart light bulb to optimize the environment for marker detection.
Dynamic markers were first explored by manipulating the size of projections in a projection-based AR system [124]; however, with recent innovations in low-cost and low-power display hardware (e.g., E-Ink displays), dynamic markers have become more practical for many applications. In prior studies, dynamic markers were used in robotic applications such as enhancing human-robot swarm management [131], or aiding Unmanned Aerial Vehicle (UAV) landing by scaling the size of a marker based on the distance to the UAV [1]. Similarly, a dynamic marker has the potential to enhance marker-based AR applications by changing the properties of the marker. Compared to a static printed marker, dynamic markers using an E-Ink display can enhance user interactions by adapting to various situations (e.g., changing the size, shape, or pattern of the marker on the display) [152]. However, there are challenges associated with a dynamic marker, such as the glare observed on the E-Ink display depending on the environment lighting, which may result in the marker being undetectable by the AR system. Our dynamic marker system
Figure 9: The IoT-based architecture of our dynamic marker system, an example of environment optimization for marker-based AR. Environment images are captured by the AR headset and processed on the edge server for feature detection and matching, to inform environment optimization through IoT actuators.
for AR further enables the system to control the properties of IoT actuators such as the brightness of a smart light bulb, and the size and shape of a marker shown on an E-Ink display, in order to optimize the environment for marker detection (as shown in Figure 9).
The hardware setup of our system (illustrated in Figure 10) includes a 7.5-inch Waveshare E-Ink display that can show images in 8-bit grayscale with a Raspberry Pi 3, a HoloLens 2 AR headset with Vuforia marker detection, and a LIFX smart light bulb for controlling environment illuminance. As shown in Figure 9, we collect scene (environment) images from the HoloLens 2 and process them on an edge server, to characterize the environment with feature points, an important metric for marker detection algorithms that provides information about the quality of the marker image. Due to the relatively low computational complexity of this task, we use a Raspberry Pi 3 as our edge server. The scene images are first cropped to a region of interest through image processing by detecting the square shape of the marker. We then detect the feature points in the cropped scene image and match them with the reference marker image to quantify the percentage of features available in the scene image. Our system considers three environmental properties (the lighting condition of the scene, the distance from the camera to the marker, and the viewing angle of the camera to the marker) to optimize for marker detection. Environment lighting impacts the quality of the printed marker, with marker detection less robust in darker or brighter conditions. The distance and angle of the camera on the AR headset, determined by the user's movement, impact marker shape-related factors such as the black and white ratio, edge sharpness, or the information complexity of the marker [87, 90]. To optimize the environment, we control the IoT actuators to change the brightness of the LIFX bulb, and the size and shape of the marker shown on the E-Ink display, until the percentage of matched feature points reaches the optimal level for marker detection. The latency associated with displaying a new image based on the environment characterization in our dynamic marker system is about one second, due to the need to update all pixels on the E-Ink display.
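The environment characterization step can be approximated with standard OpenCV tooling, as in the sketch below: features detected in the cropped scene image are matched against the reference marker image, and the fraction of matched reference features serves as a proxy for marker detectability. ORB features, the distance threshold, and the image paths are assumptions made here for illustration; they are not necessarily the detector or parameters used in our system.

```python
import cv2

def matched_feature_ratio(marker_image, scene_image, max_distance=50):
    """Fraction of reference marker features re-found in the (cropped) scene image."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_m, des_m = orb.detectAndCompute(marker_image, None)
    kp_s, des_s = orb.detectAndCompute(scene_image, None)
    if des_m is None or des_s is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    good = [m for m in matcher.match(des_m, des_s) if m.distance < max_distance]
    return len(good) / max(len(kp_m), 1)

marker = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)      # placeholder path
scene = cv2.imread("scene_crop.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
if marker is not None and scene is not None:
    print(matched_feature_ratio(marker, scene))
```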
To inform this system, we first investigated the relationship between the three aforementioned environment properties and characterization metrics of the scene (i.e., results of feature point matching). We conducted experiments by computing feature matching with four different marker patterns under different environmental conditions. The four markers comprised two fiducial markers in 1-bit grayscale with a similar number of available features from the ARToolkit [17] and ArUco [18] open-source libraries, and two image markers: one with a uniform pattern of features and another with an inconsistent pattern of features. We analyzed the changes in the percentage of matched feature points by testing each marker pattern at 9 different viewing distances (20-90cm with an increment of 10cm), 5 different viewing angles (0-60 degrees with an increment of 15 degrees), and 9 different illuminance levels. Each set of environmental conditions was run for 20 trials. Our results showed that there was a correlation between the number of matched feature points and the environment properties. More feature points were matched on the dynamic marker when the viewing distance and angle were lower. On the other hand, fewer feature points were matched on the dynamic marker at lower illuminance levels. We also observed differences in the percentage of matched feature points among the different marker patterns. The matched feature points on the fiducial markers (those from ARToolkit and ArUco) were more dependent on environmental conditions than image-based markers due to their lower number of available feature points. This prompts further investigation to discover the different optimal conditions required for each marker pattern, to achieve more robust and accurate marker detection in AR.
These relationships between environment properties and marker detection performance motivate the use of dynamic markers with IoT-based actuators for various marker-based AR applications. Our dynamic marker system achieves more accurate and robust marker detection by optimizing the environment. For instance, we can change the brightness of a smart light bulb to optimize the level of illuminance, and change the size of the marker shown on the E-Ink display based on the distance from the
Figure 10: Hardware setup of our environment optimization system for marker-based AR: an E-Ink-based dynamic marker, a HoloLens 2, and IoT actuators (a), an undetected dynamic marker in an unoptimized environmental condition (b), an overlay of virtual content through detection of the dynamic marker in an optimized environmental condition (c).
user to the marker. This environmental optimization enhances user interactions in marker-based AR applications by allowing the system to detect a marker with ease even under initially challenging conditions. Furthermore, a dynamic marker provides much greater flexibility in changing the marker pattern, shape, and size, when compared to a conventional printed marker. This holds potential for marker-based image registration in medical applications by showing different anatomies in AR holograms based on the marker type [46, 52] or in human-robot applications by providing different feedback on the E-Ink display to enable simpler debugging for users [131].
#### 5.1.2 Environment Optimization for Markerless AR
Given the increasing popularity of markerless AR applications (e.g., [6, 82, 144]), and the fact that virtual object instability due to incorrect spatial understanding is still a prevalent issue on state-of-the-art platforms and devices [174], it is also of interest to consider how IoT actuators can be used to optimize environments for better spatial understanding in markerless AR. As we covered in Section 4.1.2, environment illuminance (ambient light level) plays a central role in determining the performance of VI-SLAM, the method underpinning spatial understanding in markerless AR, because it determines the extent to which visual textures are distinguishable for feature-based mapping and tracking. If the tracking cameras on an AR device are pointed towards regions of an environment with sufficiently low or (less commonly in indoor scenarios) high levels of ambient light, device tracking may fail to initialize, be of lower quality, or be lost, resulting in the incorrect spatial registration of virtual content. Thankfully, illuminance is also readily controlled through IoT actuators which emit light (e.g., smart bulbs) or block light (e.g., smart blinds), leading us to focus our initial efforts on environment illuminance optimization. Additional complexity comes from the need to consider the impact of light on other AR system elements, including the quality of semantic understanding, the performance of eye and hand-tracking algorithms, and the visibility of virtual content on OST displays.
In [177], we developed a proof-of-concept environment illuminance optimization system for AR, which automatically maintains illuminance at a sufficient level for high virtual object stability, and where possible, accurate and precise eye tracking. It uses both IoT sensors to detect current levels of illuminance and visual texture in an environment, and an IoT actuator to control the level of illuminance in that environment. Our results on virtual object position error for different visual textures and illuminance levels (see Section 4.1.2) showed that the robustness of spatial understanding to different illuminance levels is dependent on the properties of the visual textures present, with fine textures requiring greater illuminance to support good performance. Therefore, while our system's default optimum illuminance level is 300 lux (to support accurate eye tracking and low virtual object position error for coarse textures), when the environment contains fine visual textures the core AR functionality, virtual object stability, is prioritized and that optimal level is increased to 750 lux. To offload the computationally expensive environment texture characterization task, while avoiding the transfer of potentially sensitive images to the cloud, we implement an edge-computing architecture, with optimization controlled by a server on the same wireless local area network, as shown in Figure 11.
Our environment illuminance optimization system employs two IoT sensors, an 8MP Raspberry Pi Camera Module 2 and an ambient light sensor (TSL25911FN), connected to a Raspberry Pi 4 (as shown in Figure 12), to record images of the environment and illuminance every 5s. This environment data is saved to local storage and then sent via an HTTP PUT request to the edge server (a Lenovo ThinkCentre M910T desktop computer). The characterization module determines the optimal
Figure 11: The edge-based architecture for our environment illuminance optimization system for markerless AR [177]. Environment images and illuminance levels are periodically captured by IoT sensors (a Raspberry Pi camera and an ambient light sensor) connected to a Raspberry Pi, which transfers these data to the edge server. Environment characterization is performed on the edge, and instructions are sent to an IoT actuator, a LIFX bulb, to optimize illuminance levels.
light level by detecting the number of FAST corners in the environment image using OpenCV: if more than 250 corners are detected the environment texture is classified as fine, and the optimal light level is raised. If a light level change is required, the characterization module sends the optimal and current light levels to the light level control module. Our system employs one IoT actuator, a LIFX bulb, and the light level control module adjusts the brightness value of the LIFX bulb accordingly, at low latency (<0.5s). Environment characterization metrics (illuminance, plus image brightness, contrast, edge strength, and corners) are also stored on the edge for long-term trend analysis, while the latest metrics can be requested by an AR device via an HTTP GET request. The computational and storage requirements of this long-term environment characterization motivated our use of a higher class of edge server (the desktop computer) than the Raspberry Pi 3 used in our optimization system for marker-based AR (Section 5.1.1).
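A minimal sketch of this characterization-and-control loop is given below, assuming the bulb is driven through the public LIFX HTTP API (the deployed system may use a different control path): FAST corners are counted in the latest environment image, the optimal illuminance (300 or 750 lux, as described above) is selected, and a brightness command is sent to the bulb. The proportional brightness update, the device selector, and the token handling are assumptions made for illustration.

```python
import cv2
import requests

FINE_TEXTURE_CORNERS = 250  # threshold used by the characterization module

def optimal_lux(gray_image):
    """750 lux for fine textures, 300 lux otherwise."""
    corners = cv2.FastFeatureDetector_create().detect(gray_image)
    return 750 if len(corners) > FINE_TEXTURE_CORNERS else 300

def adjust_bulb(target_lux, current_lux, current_brightness, token,
                selector="label:AR Room"):
    """Send one brightness step to a LIFX bulb over its HTTP API."""
    # Proportional update toward the target, clamped to the bulb's [0, 1] range
    # (an assumed control law, not necessarily the one used in our system).
    brightness = min(1.0, max(0.0, current_brightness * target_lux / max(current_lux, 1.0)))
    requests.put(
        f"https://api.lifx.com/v1/lights/{selector}/state",
        headers={"Authorization": f"Bearer {token}"},
        json={"brightness": brightness},
    )
```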
In future work, we will extend our environment illuminance optimization system to employ multiple sets of sensors and smart bulbs to control illuminance in different environment regions. One associated challenge will be device localization and calibration, which we discuss further in Section 6.2. It will be desirable for the pose of IoT cameras to be automatically adjustable, such that they capture regions where AR users often point their device camera, or where device tracking is lost. To this end, while our current system uses a fixed camera mount, future implementations could be replaced by a programmable camera mount, controlled according to data gathered on the edge server (the Arducam B0227 Pan Tilt Platform for example provides a low-cost solution for a Raspberry Pi Camera). We will also investigate how other IoT actuators such as smart blinds can be incorporated, as alternative methods of controlling illuminance.
Another direction for future work is to explore how the _visual texture_ in an environment could be adjusted to enhance spatial understanding in markerless AR. VI-SLAM algorithms rely on sufficient texture for accurate and robust pose tracking [176], but this is often not present in built environments, due to human preferences for minimalism in architecture and interior design. One possible solution would be to detect when an AR experience is taking place in an environment, and use ambient IoT actuators such as electronic or E-Ink displays, or light projectors, to provide additional visual texture during that period. In some cases, additional textures provided by ambient actuators might even be matched with the contents of the AR application, towards greater user immersion (see Section 5.2.2). In ongoing work, we are investigating methods that automatically determine where, for a given environment and application scenario, the application of visual texture will be most beneficial. However, any environment texture optimization system must also take into account the impact texture can have on the visual perception of AR users - greater visual texture can impair visibility on OST displays or distract users from a task [176]. This is particularly pertinent because some of the environment regions in which applying visual texture is likely to be most beneficial for spatial understanding (i.e., the regions which a device camera is most often facing towards), are frequently the regions where AR content is placed, so adjustment of textures could interfere with human perception. To inform the optimization of visual texture in AR environments, further work is required to investigate the trade-offs between spatial understanding and human visual perception of different visual textures.
Figure 12: A proof-of-concept installation of our IoT-enabled environment illuminance optimization system [177], viewed through a custom AR application on a Samsung Galaxy Note 10+ smartphone. In order to capture the appropriate region of the environment with the Raspberry Pi Camera Module 2 we manufactured a custom 3D-printed camera mount, which could be replaced by a programmable mount in future implementations.
#### 5.1.3 Environment Optimization for Semantic Understanding
In a similar manner as for spatial understanding, one can also use ambient IoT actuators to optimize environments for semantic understanding. Specifically, IoT light bulbs and blinds can be used to adjust environment illuminance to levels which maximize the performance of vision-based object detection and semantic segmentation algorithms. For example, the guidelines for detecting previously scanned objects on ARKit (described in Section 4.2.1) recommend an illuminance of 250 to 400 lux [15]. At low levels of illuminance, noise is introduced to the camera images used as input for semantic understanding, especially on mobile devices with a small camera sensor size [7] - even small amounts of noise have been shown to result in large accuracy drops for leading image recognition algorithms [114]. Although we covered techniques to obtain images with less distortion using ambient IoT cameras in Section 4.2.2, one can make this task less challenging by ensuring IoT light bulbs provide sufficient illuminance. On the other hand, high intensity light sources may cause specular reflections on objects or surfaces made of glossy or metallic materials; not only do these artifacts reduce the performance of image-based object detection or semantic segmentation algorithms [10], but they also degrade the quality and completeness of data available from time-of-flight depth sensors used to support semantic understanding [195]. IoT light bulbs should be set to intensities that minimize these issues, and IoT blinds should be employed when necessary to block strong sunlight.
Critical to implementing environment optimization systems for semantic understanding will be establishing 'regions of interest', where illuminance-related issues are likely to occur. Regions of interest will be areas of an environment where both (1) illuminance-based distortion or artifacts could occur (e.g., dark corners, or a metallic object close to a light source), and (2) AR users are likely to look, particularly with the expectation of semantic information being provided (e.g., an object of interest). These regions of interest can then be monitored by ambient IoT sensors to inform the control of nearby IoT actuators. For example, if an ambient light sensor on a desk or workbench detects a low light level, the light intensity of a nearby IoT light bulb could be raised. Alternatively, one could detect illuminance-based issues in IoT sensor data directly; for example, specular reflection detection [10] could be run on the images captured by an IoT camera focused on a metallic object near a window, and when specular reflections are detected the IoT blind on that window would be lowered. We envision these optimization systems being readily combined with the illuminance optimization systems for spatial understanding that we described in Sections 5.1.1 and 5.1.2; not only do they share the common goal of minimizing distortion and artifacts in visual data, but regions of interest in need of IoT sensor-based monitoring will frequently overlap.
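The control logic for such a region of interest could be as simple as the following sketch, in which the sensing and actuation primitives are supplied by the deployment; the target lux range follows the ARKit guideline quoted above, while everything else is an assumption.

```python
RECOMMENDED_LUX = (250, 400)  # illuminance range suggested for object detection [15]

def monitor_region(read_lux, detect_specular, raise_bulb, lower_blind):
    """One monitoring step; all four arguments are callables supplied by the deployment."""
    if read_lux() < RECOMMENDED_LUX[0]:
        raise_bulb()       # e.g., step up a nearby IoT bulb's brightness
    if detect_specular():  # e.g., a specular-highlight detector run on IoT camera frames
        lower_blind()      # block strong sunlight hitting a glossy object

# Example with stubbed-in sensing and actuation:
monitor_region(lambda: 180, lambda: False,
               lambda: print("raising bulb brightness"),
               lambda: print("lowering blind"))
```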
### Interaction and Immersion
As well as being used to optimize the performance of spatial understanding algorithms, IoT actuators may also be leveraged to enhance the user-facing elements of an AR system directly. Below we consider the ways in which a user's interactions with virtual content (Section 5.2.1) and their immersion or sense of presence in it (Section 5.2.2) may be improved using ambient IoT actuators.
#### 5.2.1 Interaction
Environmental conditions (e.g., lighting) can impact both the performance of interaction methods such as hand and eye tracking in AR (see Section 4.4), and an AR user's perception of visual virtual content (see Section 4.3), which in turn affects their ability to interact with it. Ideally, we wish to incorporate these constraints into the type of environment optimization systems we introduced in Section 5.1, although this will not be without challenges due to conflicting requirements, as we discuss in Section 6.4. In certain cases, for example when conditions cannot be adjusted, or in safety-critical scenarios where the accuracy and speed of task completion are paramount, it may be beneficial to present information on traditional electronic displays that are not as affected by environmental conditions. Users could choose to switch to a nearby ambient visual actuator (e.g., a tablet, smartphone, or smart display such as an Echo Show) to help them complete a specific task, before switching back to the main AR display. Indeed, similar to how screen mirroring is used to share content with a wider audience, this can also be used to support interactions that involve non-AR users. We envision that knowledge of alternative displays that can currently be leveraged will become an important element of environmental awareness in AR.
Another limitation of current AR headset designs is that they facilitate the delivery of visual and auditory sensory information to users, but are not well suited to providing realistic tactile feedback when users interact with AR content. The controllers included with some devices (e.g., the Magic Leap 2) provide uniform tactile feedback and support vibration but do not allow users to actually 'touch' AR content, while hand tracking-based interactions allow for natural gestures but do not provide any haptic feedback when users touch virtual content. Recent work has sought to address this latter issue by providing haptic
feedback through a vibrotactile ring [125; 192] or finger-side or nail actuators [118; 156]; however, all solutions involving wearables place extra requirements on user equipment and setup that may not be practical in many scenarios. One potential alternative for visual AR content that is overlaid on a real-world surface is the use of surface haptics, to produce variable tactile sensations and perceived levels of friction on the same surface through electroadhesion [134; 206]. By incorporating these devices into an AR environment as ambient IoT actuators, we can program them according to the virtual content that is displayed in a particular region, and thereby provide more realistic haptic feedback to users without the need for wearables.
#### 5.2.2 Immersion
There are a variety of possibilities for using ambient IoT actuators to enhance an AR user's sense of immersion. One type of enhancement stems from the fact that current AR devices are limited to delivering visual and auditory sensory information; this can lead to a mismatch with other sensory inputs such as the sensation of temperature (thermoception), for example, if a virtual fire is present, but the room is cold. This limits a user's sense of immersion, the feeling that virtual AR content is truly present in the real environment - research indicates that increasing the modalities of sensory input in virtual environments increases a user's sense of presence in that environment (e.g., [42; 153; 64]), consistent with observations that the human perceptual system evolved in multisensory environments [189]. Recent work in VR has explored the potential for different types of external actuators to deliver a range of stimuli specific to the current environment, including speakers for 3D audio [101], heat lamps for temperature control, and fans for haptic wind representations [43] (the authors use the term 'smart substitutional reality' to describe their system here), and an olfactometer that provides different scents or smells [16]. We propose applying and evaluating these types of systems in AR; for example, if we imagine a therapeutic AR application that relaxes users by placing virtual flowers in their environment, incorporating the scents associated with those plants may increase the application's effectiveness. Given the variable level of virtual content that may be incorporated into an AR experience, an important consideration will be how a sense of virtual content realism in AR (i.e., a sense that a virtual object is present in a user's physical environment) differs from a user's sense of presence in VR (the feeling that one is actually present in a new virtual environment).
Alternatively, another type of possible enhancement using IoT actuators involves increasing immersion through the addition of visual and auditory stimuli beyond the current limitations of AR device displays and speakers. In particular, the limited field of view on OST displays and smartphones may be counteracted somewhat by adding visual stimuli, or controlling visual conditions, in the surrounding environment to match AR content. For example, while an AR application displaying the planets of the solar system is being used, the environment light level might be lowered using IoT bulbs, and distant stars represented on the surrounding surfaces using IoT light projectors. Existing immersive art experiences which offer separate projection and VR exhibits (e.g., [50]) could provide AR experiences which combine 3D virtual content with 2D projections, while still allowing visitors to interact naturally in the same space. For auditory information, although spatial audio using headphones or the speakers available on some headsets is well-developed, there will be cases (e.g., for a smartphone without headphones, or a headset with a smaller form factor) in which ambient IoT speakers can replicate environmental sound sources much more realistically. Indeed, the opportunity to support smaller AR device form factors by using external devices for virtual content presentation (as well as for computation), in this type of 'mixed media AR,' is another important direction for future work.
## 6 Challenges and Research Directions
In this section we lay out a set of research directions associated with employing ambient IoT devices to support or enhance AR experiences. First, we describe the use of a game engine-based emulator, required to gather sufficient data on the effect of environment properties to inform the design of IoT-supported AR systems, and prototype them in known, controlled conditions (Section 6.1). Second, we lay out the challenges associated with AR and IoT device localization and calibration that arise once we implement these systems in the real world, along with potential solutions (Section 6.2). We then consider how, given a set of localized and calibrated devices, one might tackle combining the sensor data from those devices (Section 6.3). Next, we discuss the challenges of implementing the IoT actuator-based environment optimization systems we described in Section 5, with a focus on the conflicting requirements of different aspects of AR experiences, as well as those that come about once we introduce heterogeneous users and devices (Section 6.4). Finally, we cover security and privacy issues related to platforms which combine AR and ambient IoT devices (Section 6.5), a vital consideration for the wider acceptance and deployment of ambient IoT-supported AR systems.
### Game Engine-based Emulations of IoT-supported AR Systems
The vision we have laid out in this book chapter requires both an in-depth understanding of how environment properties impact AR algorithm performance in diverse scenarios, and the development of deep learning models to predict AR algorithm performance. This is challenging for two reasons. Firstly, we require large amounts of data with accurate ground truth information in a diverse set of environments, which is time-consuming or even infeasible to obtain. For example, obtaining ground truth pose data for VI-SLAM algorithm evaluations in real environments necessitates the use of optical tracking systems such as OptiTrack [148] and Vicon [199], which involves considerable setup and calibration time for each new environment. Secondly, we require fine-grained, systematic manipulation of environment properties in order to perform experiments in repeatable and controlled conditions; this is also difficult to achieve in most real environments because they are often subject to external influences, such as daylight entering through a window, or objects being moved. Indeed, these challenges are why existing datasets for VI-SLAM evaluations (e.g., EuRoC [26], TUM VI [181], SenseTime [83]) or object detection in AR (e.g., [105; 2]) cover a small range of environments or provide no information on environment properties. Instead, we propose the use of highly realistic synthetic data, recently made possible through high-definition rendering techniques in game engines (e.g., Unity [197] and Unreal [48]) and other rendering software (e.g., V-Ray [30]). We developed a methodology for using virtual environments for VI-SLAM evaluations in [176], and the process for generating synthetic datasets for semantic understanding was described in [164]. We cover the key elements of this approach for ambient IoT-supported AR below.
**AR environment and AR user emulation:** In ongoing work we are using the Unity and Unreal game engines, de-facto standard AR and VR development platforms, to create emulators of AR environments and users. We are using these emulators to generate diverse photorealistic scenes, represent mobile AR user behaviors, and test the performance of a wide range of IoT-supported AR solutions. An example of a scene we have generated in our initial Unreal-based emulator is shown in Figure 13. Within the generated scenes, we have fine-grained control over multiple parameters related to AR environments (e.g., physical layouts, lighting, reflections, textures), as well as parameters related to AR users and devices (e.g., trajectories, camera properties, motion blur levels). In future we envision the inclusion of other factors to emulate AR environments and users at higher fidelity, such as animating AR users to move and behave like real humans.
**IoT sensor and actuator emulation:** To enhance the capabilities of our game engine-based emulators, we will also emulate ambient IoT sensors and actuators in AR environments. For example, we will emulate light sensors to measure lighting properties (illuminance, light direction, and color temperature) of the virtual environment. In our preliminary investigations we have added an IoT camera in our emulator: Figure 13 shows an image captured by this IoT camera, overlooking a space with three AR users. The camera has also identified and tracked the AR users in its view. We are currently exploiting the emulation of IoT cameras to develop algorithms that increase the accuracy of spatial and semantic understanding, by fusing the images from these IoT cameras with the sensor data captured by the AR devices worn by emulated AR users. Adding the emulation of IoT actuators (e.g., by creating and modifying light sources) will enable the modeling and prototyping of environment optimization
Figure 13: An image captured by an IoT camera in the Unreal-based emulator we have created. The IoT camera overlooks a photorealistic scene with three AR users.
systems, such as those we described in Section 5.1.
**Ground truth generation:** As well as controlling parameters related to environmental conditions, we envision building pipelines that automatically generate the ground truth data required for algorithm evaluation. We obtained the ground truth of the states (e.g., camera poses, sensor positions) of AR devices and IoT sensors in our prior work [176], and will use the pipelines to generate pixel-wise semantic instance annotations for images captured by AR devices and IoT sensors. For example, in [176] our VI-SLAM evaluation methodology used existing SLAM dataset ground truth trajectories to generate sequences in new virtual environments, while preserving the use of real inertial data, as shown in Figure 14. In our future work, we envision exploiting the automatic generation of ground truth data, especially fine-grained pixel-wise data, to evaluate IoT-supported AR systems under diverse environment settings, such as assessing mapping accuracy in SLAM, which is otherwise difficult to evaluate in real environments [97].
### AR and IoT Device Localization and Calibration
In the game engine-based emulators described above we have access to the ground truth poses of AR and IoT devices, along with the parameters of IoT sensors and actuators (e.g., correct information on environment regions captured by sensors and affected by actuators such as light sources). However, a critical challenge for multi-device, IoT-supported AR in real environments is how we establish all AR and IoT devices' poses within the same frame of reference, and determine the environment regions either in view of sensors or which will be adjusted by an actuator. Existing literature provides us with potential ways to address this problem; for example, the localization of IoT devices is a known issue, and a variety of different solutions have been proposed, including the use of ultra-wideband signals [22] and retroreflectors [184] - for a survey of indoor localization systems and technologies see [212], and for discussion of the challenges specific to IoT devices, see [61]. Established camera calibration techniques are also an option for IoT cameras, such as the OCamCalib toolbox [173] employed in [167] for photometric registration using multiple cameras. For localizing light sources a possible approach is to build on the work in [58], and apply DNNs to images obtained by AR devices or IoT cameras. Methods are also available to localize multiple AR devices; cross-platform solutions include Azure Spatial Anchors [128] and Google Cloud Anchors [67], while the 'ARWorldMap' feature can be used on ARKit [14], and 'Shared Spaces' on the Magic Leap [120]. The above techniques provide both inspiration and readily implementable solutions for systems which employ AR and IoT devices.
Based on our analysis of existing techniques, we identify a number of possible approaches for localizing and calibrating both AR and IoT devices, and consider their advantages and disadvantages. The solution that is perhaps most readily implemented for current commercial AR devices is to identify IoT devices within the frame of reference established by an AR platform. For example, the pose of devices could be recognized by displaying markers on them or attaching printed markers to them (see Section 4.1.1 for background on spatial understanding using markers), although this may not be feasible for many small devices, and has downsides in terms of environment aesthetics. Alternatively, spatial anchors could be manually aligned with IoT devices [130]; however, this approach could be both labor-intensive and time-consuming, and the accuracy that can be achieved requires further investigation, as it may be susceptible to human error or easily impacted by environmental factors. On the other hand, a different research direction is to identify AR device poses in the views obtained by IoT cameras, aided by motion and visual cues from the AR device - this may be particularly useful for collaborative SLAM techniques (e.g., to inform loop closing). For the mapping of available environment control actions (e.g., increasing the brightness of a light bulb or lowering a smart blind) to environment properties as detected by an IoT sensor or AR device, we plan to develop
Figure 14: An overview of our methodology (left) and a screenshot of the SLAM sequence execution step (right) for our game engine-based emulator for VI-SLAM using virtual environments [176].
efficient sampling strategies for long-term monitoring of environment properties, such that we can automatically determine the necessary control actions to achieve specific conditions, and thereby optimal AR experiences.
### Combining Data from Multiple AR Users and IoT Sensors
Along with localizing and calibrating AR devices, IoT sensors and actuators, we must also consider how to combine the data from the variety of different sensors that are available in the environments that host AR experiences. This sensor data is diverse, including multimodal sensor data of AR devices and IoT sensors captured from different vantage points. Below we discuss three challenges related to combining data from multiple AR users and IoT sensors, and outline the associated research directions.
**Communication-efficient AR with multiple users and multiple IoT sensors:** In a multi-user, multi-sensor AR scenario, as the number of AR users and IoT sensors grows, the operational overhead of the IoT-supported system such as bandwidth consumption also scales [163], due to the transmission of large amounts of contextual data from multiple users or their ambient environments. The system also needs to maintain a low user-perceived latency [11] to ensure seamless integration with the real world and synchronous perception of contextual information among different users. To improve communication efficiency and reduce user-perceived latency, we envision developing intelligent network-adaptive AR-IoT solutions that adapt the size and frequency of data transmissions under changing network conditions. Since multiple users and IoT sensors that are in close proximity usually exhibit a strong correlation in the perceived contextual information, we also envision exploiting the collaboration among multiple AR users and IoT sensors for communication redundancy elimination.
**Assessment and processing of data with different signal quality levels:** Another research direction in this space is the quality assessment of data from different devices and the development of robust approaches for combining this data. The signal quality levels obtained from different vantage points will change over time due to a variety of factors; for example, other noise sources will be interfering with signals captured by different microphones, users of AR devices will move closer and farther from different IoT-based cameras, and object detection models that perform well for IoT sensors in canonical poses will misbehave for sensors in unusual poses. We envision the real-time and in-situ evaluation of the quality levels of data captured by different devices at different times. After the data quality assessment, we will seek to combine signals with different quality levels; we envision designing algorithms that will adaptively amplify 'good' signals and place less weight on signals that are less useful, e.g., by incorporating appropriate existing attention mechanism designs [198].
**Multimodal data:** In IoT-supported AR systems, AR devices and IoT sensors continuously capture and process multimodal data (e.g., visual, auditory, haptic, olfactory, and thermal sensory information). One potential research direction is multimodal sensing and learning in which the visual modality enhances, or is enhanced by, other sensing modalities; another is fusing sensor data with different temporal and spatial traits. Given the heterogeneity of the collected data, we envision capturing correspondences and transferring knowledge between modalities to improve the performance of various aspects of AR functionality, including using gesture and speech for precise multimodal interaction, and using thermal, light, and visual sensors for detecting scene changes throughout the day.
### Optimizing Environments with Conflicting Requirements
Environment optimization systems which automatically adjust properties such as light and visual texture (see Section 5.1) have the potential to facilitate AR experiences of both higher and more consistent quality. However, before these types of systems are ready for deployment, consideration must be given to how we manage conflicting environmental requirements. These conflicts, already a challenge for designers of spaces which host AR, will be commonplace in practical scenarios, as we describe below.
**Conflicting environmental requirements:** Even in an environment in which just one AR user is present, adjusting environmental conditions to optimize one aspect of an AR experience may have negative effects on another element of system functionality or the overall user experience. For example, the addition of visual texture to an environment may improve spatial understanding, but be distracting for the user [176]. For headsets with an OST display, the low level of illuminance which maximizes content visibility may result in a reduced level of performance for spatial understanding or eye-tracking
algorithms [177]. Furthermore, environmental requirements related to AR may also be at odds with constraints related to energy efficiency or the comfort of human occupants in general. Any environment optimization system will need to (1) define the characteristics of the AR devices, applications, and users to be served, in order to define a reasonable 'operating range' in which the system is functional and usable; (2) apply other environmental constraints, e.g., those imposed by building managers; (3) implement real-time analysis of the system elements and environment regions users are interacting with, in order to determine and prioritize environment adjustments. This latter element is a particularly complex challenge, necessitating innovative solutions to manage continuously updating environment maps and device poses, and to avoid oscillations between environment states that may occur due to conflicting adjustments. Some of these issues were recently tackled in the context of shared control of public IoT actuators [95], including device discovery, establishing 'areas of interest' for IoT devices, and aggregating conflicting requirements; this work may serve as a valuable starting point for developing AR-specific systems.
**Heterogeneous AR users and devices:** The environment optimization challenge becomes even more complex when we consider the differing requirements of heterogeneous AR devices and users. Different devices are equipped with different sensors, run different marker detection or VI-SLAM algorithms, and employ different types of displays: smartphones may require greater levels of illuminance and texture to achieve acceptable tracking performance [174], while OST displays may require lower illuminance than VST displays to achieve the same level of content visibility [20]. Heterogeneous AR users will have different behavior patterns (e.g., mobility characteristics, eye gaze patterns, virtual content interactions), as well as different expectations and requirements for the properties of virtual content. For example, school children exploring an AR science exhibit may move rapidly, prompting the addition of environmental textures to maintain high-quality pose tracking. On the other hand, an academic carefully examining virtual content in the same environment might require a view that is uninhibited by a textured background, and also be more concerned with the stability of virtual content. While knowledge of the type of AR devices present in an environment is likely to be relatively straightforward in most cases, capturing and analyzing information on the properties of AR users is a more complex task, which will require ongoing analysis of both individual users and user populations. This in turn will have significant security and privacy implications, which we discuss next in Section 6.5.
### Secure and Privacy-Preserving AR-IoT Platforms
The development of secure and privacy-preserving AR-IoT platforms is paramount for achieving widespread societal acceptance of AR systems incorporated with ambient intelligence technology.
**Security:** The increase in the number of connected AR and IoT devices and the richness of data collected by them bring security concerns to AR-IoT platforms. Building upon the categorization adopted in [100; 166], we classify concerns related to the security of AR devices [166; 167; 41; 100] and of IoT sensors and actuators [142; 9] as relating to either _input security_ or _output security_. Input security challenges are well known in traditional arenas of networked and computing systems, but become even more important for AR-IoT platforms with rich multimodal inputs (e.g., RGB, depth, inertial, haptic, auditory sensor readings, and even gaze tracking data) of AR and IoT devices. Compromised inputs will degrade the performance of AR-IoT platforms, including localization and tracking accuracy, rendering quality, and the accuracy and completeness of environmental awareness. To mitigate input security risks, potential research directions include designing input validation and sanitization methods that apply to multimodal data from AR and IoT devices, and building fault-tolerant AR-IoT platforms robust to corrupted input data. AR outputs produced by malicious or bug-ridden applications can also be potentially harmful or distracting. For example, in the case of industrial AR in a smart warehouse, it would be dangerous for a visual overlay to obstruct the operator's view; IoT actuators that generate flashing visuals, shrill sounds, or intense haptic signals could cause physiological damage to the user. To mitigate output security risks, inspired by existing works on adaptive policies to secure visual outputs in AR devices [3; 41], we envision the development of reinforcement learning-based policies that prevent distraction due to tampered content of AR devices or IoT actuators, with these policies able to adapt to dynamic environments through trial-and-error.
**Privacy:** AR-IoT systems' need for rich, continuous sensor data (from both AR and IoT devices) raises privacy concerns for both users and bystanders in the AR environment. Buggy or malicious applications may record privacy-sensitive information on user states or the surrounding AR environment by compromising the AR devices [209] or IoT sensors [194]. To mitigate privacy risks, we envision enforcing privacy-preserving policies to limit access to potentially sensitive sensor data. For example, we will anonymize human faces by reducing them to an extremely low resolution before feeding images to object detection applications running on AR or IoT devices; we will seek to prevent applications from inferring sensitive information about an AR environment (e.g., the contents of a home) by concealing information contained in the point cloud constructed by VI-SLAM algorithms.
## 7 Conclusion
In this book chapter, we explored the variety of ways in which ambient IoT sensors and actuators could be used to support next-generation AR. We categorized these uses by defining five different aspects of AR system functionality or user experience that may be enhanced - spatial understanding, semantic understanding, contextualized content, interaction, and immersion - and provided an overview of relevant IoT devices (Section 3). We then examined the possibilities for each of these categories in detail, when employing IoT sensors (Section 4) and actuators (Section 5). Finally, we discussed a number of research directions related to implementing ambient IoT-supported AR systems, along with associated challenges and opportunities for future work (Section 6).
One important point to make, having reviewed multiple different uses of IoT sensors and actuators, is that the devices physically deployed in a given environment will likely be used for multiple purposes concurrently (e.g., images from a single IoT camera may serve as input to spatial understanding, semantic understanding, photometric registration, and visual texture estimation algorithms), so consideration will need to be given to balancing the needs of each use in these cases. Similarly, our vision for ambient IoT for AR in general, is that it is combined with the use of ambient IoT devices for other purposes; for example, IoT-based systems incorporating devices also applicable to AR have been proposed for energy efficiency [171], occupant comfort and productivity [19], surveillance [214] and building ventilation [35], while in [196] the authors consider the role of ubiquitous displays in environment aesthetics. Moreover, as AR becomes more and more integrated into our lives in the coming years, we envision that built environments, products and materials will be designed with ambient intelligence for AR in mind, in order to support the use cases we have laid out in this book chapter.
## 8 Acknowledgements
We thank Guohao Lan, Zida Liu, Yunfan Zhang, Jovan Stojkovic, Achilles Dabrowski, Alex Xu, Ritvik Janamsetty, Tiffany Ma, Owen Gibson, Michael Glushakov and Joseph DeChicchis for their contributions to this work. This work was supported in part by NSF CAREER Award IIS-2046072, NSF grants CNS-2112562 and CNS-1908051, Facebook Research Award, IBM Research Award, and a Thomas Lord Educational Innovation Grant.
|
2305.16880 | On the first order theory of plactic monoids | This paper proves that a plactic monoid of any finite rank will have
decidable first order theory. This resolves other open decidability problems
about the finite rank plactic monoids, such as the Diophantine problem and
identity checking. This is achieved by interpreting a plactic monoid of
arbitrary rank in Presburger arithmetic, which is known to have decidable first
order theory. We also prove that the interpretation of the plactic monoids into
Presburger Arithmetic is in fact a bi-interpretation, hence any two plactic
monoids of finite rank are bi-interpretable with one another. The algorithm
generating the interpretations is uniform, which answers positively the
decidability of the Diophantine problem for the infinite rank plactic monoid. | Daniel Turaev | 2023-05-26T12:32:57Z | http://arxiv.org/abs/2305.16880v2 | # On the first order theory of plactic monoids
###### Abstract
This paper proves that a plactic monoid of any finite rank will have decidable first order theory. This resolves other open decidability problems about the finite rank plactic monoids, such as the Diophantine problem and identity checking. This is achieved by interpreting a plactic monoid of arbitrary rank in Presburger arithmetic, which is known to have decidable first order theory. The algorithm generating the interpretations is uniform, which answers positively the decidability of the Diophantine problem for the infinite rank plactic monoid. We also show that the plactic monoid of rank \(2\) is bi-interpretable with Presburger arithmetic.
###### Contents
* 1 Introduction
* 2 Background
  * 2.1 Rewriting systems
  * 2.2 The plactic monoid
  * 2.3 Interpretations, theories, and Presburger arithmetic
* 3 The case \(n=2\)
  * 3.1 Interpreting \(P_{2}\) in \((\mathbb{Z},0,1,+,-,\leq)\)
  * 3.2 Bi-interpretability
* 4 The general case
  * 4.1 Multiplication - the idea
  * 4.2 The formula defining \(\mu_{x}\)
* 5 The Diophantine problem in the infinite case
  * 5.1 The plactic monoid of all tableaux
  * 5.2 A plactic monoid on integers
  * 5.3 Two open questions
## 1 Introduction
The plactic monoid has its origin in the work of Knuth [22], which based itself on the algorithm developed by Schensted [41]. First studied in depth by Lascoux and Schutzenberger [25, 42, 43], its combinatorial properties were applied to the theory of symmetric polynomials to prove the Littlewood-Richardson rule. Due to its origins as a monoid of Young tableaux, it has proved useful in various aspects of geometry and representation theory [14]. More recently, it has found application in Kashiwara's crystal basis theory [19], with analogous plactic monoids being defined for different root systems associated to crystal bases [27, 28, 29, 30], and used to study Kostka-Foulkes polynomials [26]. Related, plactic-like monoids have also been defined [1, 12, 16, 35], and used to study the combinatorics and growth properties of the plactic monoid, which itself has some interesting combinatorial structure. Cain, Gray, and Malheiro [5] have shown that the plactic monoids are biautomatic, as are related crystal monoids [4], and related plactic-like monoids such as the Chinese, Hypoplactic, and Sylvester monoids [6].
Schensted's multiplication algorithm can be used to decide the word problem for the plactic monoid. It was shown in 1981 that the plactic monoid has decidable conjugacy problem [25]. A classic generalisation of both the word and conjugacy problems is the Diophantine problem, which has received much attention for free groups [20, 32, 40, 44], where Makanin-Razborov diagrams were used independently by Sela [44] and Kharlampovich and Myasnikov [21] to solve the Tarski problems on the first order theory of free groups1. The Diophantine problem has also been studied for free monoids [33, 45] and is gaining attention in the study of other monoids [15, 36]. In the monoid setting, it asks for an algorithm for deciding whether a given system of equations has a solution in a given monoid.
Footnote 1: For a survey of these results, see [13]
An active area of research is the question of checking identities in the plactic monoids and their monoid algebras. Progress has been made in the rank 3 case [23, 24], and the plactic monoid, bicyclic monoid, and related plactic-like monoids have been shown to admit faithful representations in terms of matrices over the tropical semiring [3, 8, 10, 18]. This implies that every plactic monoid of finite rank satisfies a nontrivial semigroup identity. There is a natural decision problem underpinning this field of study: is it decidable whether a given identity is satisfied by a plactic monoid?
In this paper, we show that the plactic monoid of every finite rank has decidable first order theory. This result is a significant generalisation of both of the above ideas. Both identities and Diophantine equations are expressible as first order sentences, thus yielding positive results for both the Diophantine problem and the problem of identity checking. It is not equivalent to these results - the free semigroup has undecidable theory despite having decidable Diophantine problem [39]. Nor does this result follow from the multi-homogeneity of the plactic
monoids, as there are multi-homogeneous monoids with undecidable theory, and even undecidable conjugacy problem [7].
The argument presented below is by constructing an interpretation of a plactic monoid in Presburger arithmetic, and could open the door to studying the theories of plactic-like classes of monoid. It is known that all groups interpretable in Presburger arithmetic are abelian-by-finite [37]. This result may also be a starting point for classifying all monoids interpretable in Presburger arithmetic.
In preparing this paper for publication, the author was made aware in private communication that Alan Cain and Tara Brough have independently constructed a proof that the plactic monoids are interpretable in Presburger arithmetic, which will appear in a forthcoming paper of theirs. This coincides with the results in section 3.1 and section 4.
### Notation and conventions
Write \([n]\) for the set \(\{1,\ldots,n\}\). Write \(A\) for a totally-ordered alphabet on \(n\) letters, which will usually be \([n]\). The free monoid on an alphabet \(A\) will be written with a Kleene star \(A^{*}\), and will have identity \(\varepsilon\), the empty word. Given a set of generators \(A\) and relations \(R\subset A^{*}\times A^{*}\), the monoid presentation denoted by \(\langle A|R\rangle\) will be the quotient of \(A^{*}\) by the congruence generated by \(R\). The set \(\mathbb{N}\) of natural numbers will contain \(0\).
## 2 Background
### Rewriting systems
A string rewriting system (henceforth rewriting system) for \(A^{*}\) is a set \(R\subset\ A^{*}\times A^{*}\) of elements \((\ell,r)\), usually written \(\ell\to r\), called _rewrite rules_. See the book [2] for a more detailed introduction.
For two elements \(u,v\in A^{*}\), write \(u\to_{R}v\) if \(u=x\ell z\), \(v=xrz\), and \((\ell,r)\in R\). The transitive and reflexive closure of \(\to_{R}\), written \(\to_{R}^{*}\), is called the reduction relation of \(R\). The symmetric closure of \(\to_{R}^{*}\) is a semigroup congruence. The monoid obtained by taking the quotient of \(A^{*}\) by this congruence is the monoid presented by \(\langle A|R\rangle\). Thus every presentation \(\langle A|R\rangle\) also corresponds to a rewriting system, which is written \((A,R)\).
A rewriting system is called _Noetherian_ if it has no infinite descending chain. That is, there is no sequence \(u_{1},u_{2},\ldots\in A^{*}\) such that \(u_{i}\to_{R}u_{i+1}\) for all \(i\in\mathbb{N}\). A rewriting system is called _confluent_ if it has the property that, whenever \(u\in A^{*}\) is such that \(u\to_{R}^{*}u^{\prime}\) and \(u\to_{R}^{*}u^{\prime\prime}\), there exists a \(v\) such that \(u^{\prime}\to_{R}^{*}v\) and \(u^{\prime\prime}\to_{R}^{*}v\). We call a confluent Noetherian rewriting system _complete_.
Call \(u\in(A,R)\) a _reduced word_ if there is no subword \(\ell\) of \(u\) that forms the left hand side of a rewrite rule in \(R\). By theorem 1.1.12 of [2], if \((A,R)\) is a complete
rewriting system, then for every \(u\in A^{*}\) there is a _unique, reduced_\(v\in A^{*}\) such that \(u\to_{R}^{*}v\). This \(v\) is called a _normal form_ for \(u\), and forms a cross-section of the monoid \(\langle A|R\rangle\), in the sense that every element of the monoid is equal to exactly one reduced word. We may therefore identify a monoid admitting a complete rewriting system with its set of normal forms, and the multiplication being concatenation followed by reducing to normal form.
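For illustration, here is a minimal Python sketch of the reduction relation (the function name and the string representation of words are our own choices, not part of the cited material): it applies rewrite rules until a reduced word is reached, which terminates for Noetherian systems and, for complete systems, yields the normal form.

```python
def normal_form(word, rules):
    """Rewrite until no left-hand side occurs as a factor.  This terminates
    whenever the rewriting system is Noetherian; if it is moreover confluent,
    the result does not depend on the order in which rules are applied."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            i = word.find(lhs)
            if i != -1:
                word = word[:i] + rhs + word[i + len(lhs):]
                changed = True
                break
    return word

# The one-rule system {ba -> ab} over {a, b}* is complete; normal forms are the
# sorted words, so the monoid it presents is the free commutative monoid on a, b.
assert normal_form("babba", [("ba", "ab")]) == "aabbb"
```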
### The plactic monoid
We follow the French conventions of Young diagrams having longer rows underneath shorter ones.
**Definition 2.1**.: _A Semistandard Young Tableau (henceforth simply tableau) is a Young diagram with labelled boxes, with labels satisfying the following conditions_
* _each row weakly increases left to right_
* _each column strongly decreases top to bottom_
_(Example figure omitted: a labelled Young diagram violating either of these conditions is not a tableau.)_
Let \(t\) be a tableau with labels taken from \(A\). We associate to \(t\) a _row reading_ in \(A^{*}\). Suppose \(t\) is a tableau of \(m\) rows, labelled top to bottom as \(r_{1},\ldots,r_{m}\). The labels of the boxes in each row are an increasing sequence, which can be viewed as a word \(r_{i}\in A^{*}\). The row reading of \(t\) is then \(w=r_{1}r_{2}\ldots r_{m}\in A^{*}\).
We similarly associate a _column reading_ to \(t\). Denote the columns of \(t\) from left to right by \(c_{1},\ldots,c_{m}\). Each such column corresponds to a strictly decreasing sequence \(c_{i}\in A^{*}\). The column reading of \(t\) is then \(w=c_{1}\ldots c_{m}\in A^{*}\).
**Example**.: _The tableau with rows \(3\), \(23\), and \(11222\) (listed from top to bottom) has row reading_
\[32311222=3\ 23\ 11222\]
_and column reading_
\[32131222=321\ 31\ 2\ 2\ 2\]
We now describe Schensted's algorithm. Consider \(A=[n]\) the totally ordered alphabet, and \(w\in A^{*}\). We may view \(w\) as a finite sequence of numbers. Schensted's algorithm is used to study the longest increasing and decreasing subsequences of \(w\). The algorithm associates a tableau to \(w\) with the property that the number of columns of \(w\) is the length of the longest _increasing_ sequence, and
the number of rows is the length of the longest _strictly decreasing_ sequence. See [41] or chapter 5 of [31] for more details on this combinatorial structure.
**Definition 2.2** (Schensted's algorithm).: _We define \(P:A^{*}\to A^{*}\) to be the map sending a word \(w\) to the row reading of a tableau recursively as follows:_
_Firstly, \(P(\varepsilon)=\varepsilon\). Then suppose \(w=x_{1}\dots x_{\ell}\in A^{*}\) and \(P(x_{1}\dots x_{\ell-1})=r_{1}\dots r_{m}\) for some rows \(r_{i}\) that form the row reading of a tableau. Then we have:_
1. _If_ \(r_{m}x_{\ell}\) _is a row, then we set_ \(P(r_{1}\dots r_{m}x_{\ell})=r_{1}\dots r_{m}x_{\ell}\)__
2. _If not, then we can write_ \(r_{m}=r_{\alpha}yr_{\beta}\)_, with_ \(y\) _being the leftmost letter such that_ \(x_{\ell}<y\)_, as this will break the weakly increasing property required of a row. But then_ \(r_{\alpha}x_{\ell}r_{\beta}\) _will be a row. So we set_ \[P(r_{1}\dots r_{m}x_{\ell})=P(r_{1}\dots r_{m-1}y)r_{\alpha}x_{\ell}r_{\beta}\] _._
We call the process in point (2) 'bumping the letter \(y\)'. If \(t\) has row reading \(r_{1}\dots r_{m}\) and column reading \(c_{1}\dots c_{k}\), then it is straightforward to show that
\[P(r_{1}\dots r_{m})=P(c_{1}\dots c_{k})=r_{1}\dots r_{m}\]
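The insertion map \(P\) of Definition 2.2 is straightforward to implement. The following sketch (function names are ours; rows are stored top to bottom, so the bottom row is last) follows the two cases of the definition and checks, on the example above, that the row and column readings insert to the same tableau.

```python
def insert(rows, x):
    """Insert the letter x into the tableau whose rows are listed top to bottom
    (so the bottom row is rows[-1]), following Definition 2.2."""
    if not rows:
        return [[x]]
    *upper, bottom = rows
    if bottom[-1] <= x:                       # case (1): x extends the bottom row
        return upper + [bottom + [x]]
    j = next(i for i, y in enumerate(bottom) if y > x)
    y = bottom[j]                             # case (2): x bumps the letter y
    return insert(upper, y) + [bottom[:j] + [x] + bottom[j + 1:]]

def P(word):
    """Row reading of the tableau obtained by inserting the letters of word."""
    rows = []
    for x in word:
        rows = insert(rows, x)
    return [x for row in rows for x in row]

# The row and column readings of the tableau with rows 3 / 23 / 11222 agree under P.
assert P([3, 2, 3, 1, 1, 2, 2, 2]) == P([3, 2, 1, 3, 1, 2, 2, 2]) == [3, 2, 3, 1, 1, 2, 2, 2]
```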
**Definition 2.3** (The plactic monoid).: _The relation \(\sim\) on \(A^{*}\) given by_
\[u\sim v\iff P(u)=P(v)\]
_is a semigroup congruence, and the monoid \(A^{*}/\sim\) with multiplication given by \(u\cdot v=P(uv)\) is called the plactic monoid of rank \(n\). Denote this monoid by \(P_{n}\)._
Knuth [22] exhibited a set of defining relations \(K\) for the plactic monoids of the form \(xzy=zxy\) and \(yxz=yzx\) for \(x<y<z,\ x,y,z\in A\) and \(xyx=yxx\) and \(yyx=yxy\) for \(x<y,\ x,y\in A\). That is, we have
\[K=\{xzy=zxy,\ x\leq y<z\}\cup\{yxz=yzx,\ x<y\leq z\}\]
with \(P_{n}=\langle A|K\rangle\). For each finite rank, it follows that \(P_{n}\) will be finitely presented.
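Since the Knuth presentation is finite for each rank, the relations can simply be enumerated; a small sketch (the helper name is ours) is given below. Each relation can also be verified against the insertion sketch above, as both sides of a relation insert to the same tableau.

```python
def knuth_relations(n):
    """The defining relations K of P_n: (xzy, zxy) for x <= y < z and
    (yxz, yzx) for x < y <= z, returned as pairs of words over [n]."""
    A = range(1, n + 1)
    K = [((x, z, y), (z, x, y)) for x in A for y in A for z in A if x <= y < z]
    K += [((y, x, z), (y, z, x)) for x in A for y in A for z in A if x < y <= z]
    return K

assert len(knuth_relations(3)) == 8
```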
It was shown by Cain, Gray, and Malheiro in [5] that the plactic monoid admits a finite complete rewriting system, which we describe here.
We consider two columns \(\alpha,\beta\) as words in \(A^{*}\). We say that \(\alpha\) and \(\beta\) are _compatible_, written \(\alpha\succeq\beta\), if \(\alpha\beta\) is the column reading of a tableau. Then each pair \(\alpha,\beta\) with \(\alpha\not\succeq\beta\) yields a rewrite rule. Consider the tableau associated to \(P(\alpha\beta)\). Since the number of columns in \(P(\alpha\beta)\) is the length of the longest increasing sequence, and \(\alpha,\beta\) are columns, it follows that \(P(\alpha\beta)\) will be a tableau with at most two columns. Therefore this tableau will have column reading \(\gamma\delta\), for some columns \(\gamma,\delta\) with \(\gamma\succeq\delta\), and potentially \(\delta=\varepsilon\).
Now consider \(\mathcal{C}=\{c_{\alpha}\ |\ \alpha\in A^{*},\alpha\text{ is a column}\}\) to be a set of symbols corresponding to columns in \(A^{*}\). Since \(A\) is finite and columns are strictly decreasing sequences, \(\mathcal{C}\) is also finite. Then define \(R\) to be the set of all rewrite rules detailed above
\[R=\{c_{\alpha}c_{\beta}\to c_{\gamma}c_{\delta}\ |\ \alpha,\beta\in A^{*},\ \alpha\not\succeq\beta\}\]
It is shown in [5] that
**Lemma 2.4**.: \((\mathcal{C},R)\) _is a complete rewriting system for \(P_{n}\)._
It follows from this that \(P_{n}\) admits normal forms as reduced words in \(\mathcal{C}^{*}\). By the definition of \(\succeq\), this normal form will be in the form of column readings \(c_{\alpha_{1}}\ldots c_{\alpha_{m}}\) with each \(\alpha_{i}\succeq\alpha_{i+1}\).
Note that if \(\alpha=\alpha_{m}\ldots\alpha_{1}\) and \(\beta=\beta_{n}\ldots\beta_{1}\), \(\alpha_{i},\beta_{i}\in A\), appear in the column reading of the same tableau (not necessarily adjacent) with \(\alpha\) further left than \(\beta\), then \(\alpha\succeq\beta\). Indeed, since \(\alpha\) and \(\beta\) are columns of the same tableau, then by the structure of a tableau we have that \(m\geq n\). Furthermore, each pair \(\alpha_{i},\beta_{i}\) will be in the same row of the tableau, with \(\alpha_{i}\) appearing earlier than \(\beta_{i}\). This will imply that \(\alpha_{i}\leq\beta_{i}\). But these two conditions imply that \(\alpha\succeq\beta\). Thus \(\succeq\) is a partial order.
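Concretely, if a column is recorded as its strictly decreasing reading, \(\succeq\) amounts to a height comparison together with a letter-by-letter comparison aligned at the bottom row; a minimal sketch (the helper name is ours) is:

```python
def compatible(alpha, beta):
    """alpha >= beta for columns given as strictly decreasing tuples, e.g. (3, 1):
    alpha must be at least as tall as beta and, aligned from the bottom row,
    each letter of alpha is <= the letter of beta sitting in the same row."""
    return len(alpha) >= len(beta) and all(
        a <= b for a, b in zip(alpha[::-1], beta[::-1]))

assert compatible((3, 2, 1), (3, 1))      # 321 may stand to the left of 31
assert not compatible((2, 1), (3, 2, 1))  # a column cannot precede a taller one
```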
We introduce a length-decreasing-lexicographic order on \(\mathcal{C}\) extending \(\succeq\). For \(c_{\alpha},c_{\beta}\in\mathcal{C}\), define:
\[c_{\alpha}\sqsubseteq c_{\beta}\iff(|\alpha|>|\beta|)\vee(|\alpha|=|\beta| \wedge(\exists j:\ i<j\implies\alpha_{i}=\beta_{i}\wedge\ \alpha_{j}<\beta_{j}))\]
With \(j\) taken as \(n+1\) when \(c_{\alpha}=c_{\beta}\). Note that \(c_{\alpha}\succeq c_{\beta}\implies c_{\alpha}\sqsubseteq c_{\beta}\). Furthermore, this is clearly a total order. We can therefore enumerate the set \(\mathcal{C}\) as \(\{c_{1},\ldots c_{k}\}\), with \(k=|\mathcal{C}|=2^{n}-1\), such that \(i\leq j\implies c_{i}\sqsubseteq c_{j}\). Then, since \(c_{\alpha}\succeq c_{\beta}\implies c_{\alpha}\sqsubseteq c_{\beta}\), we have that the normal forms of \(P_{n}\) will have the form
\[c_{1}^{w_{1}}\ldots c_{k}^{w_{k}}\]
with \(w_{i}\in\mathbb{N}\) for each \(i\), and for any pair \(c_{i},c_{j}\) with \(i<j\wedge c_{i}\not\succeq c_{j}\), either \(w_{i}=0\) or \(w_{j}=0\). Call two columns \(c_{i}\) and \(c_{j}\) _incompatible_ if \(i<j\wedge c_{i}\not\succeq c_{j}\).
**Example**.: \(P_{3}\) _has seven columns; written as column readings and listed in the order \(\sqsubseteq\), they are \(321\), \(21\), \(31\), \(32\), \(1\), \(2\), and \(3\)._
### Interpretations, theories, and Presburger arithmetic
We assume familiarity with basic first order logic, and refer the reader to [17] or [34] for a more detailed introduction to model theory. We will be following the conventions of [34]
**Definition 2.5**.: _Let \(L\) be the language of first order formulas for a given signature. Then the first order theory of an \(L\)-structure \(\mathcal{M}\) is the set of all sentences in \(L\) that hold in \(\mathcal{M}\)._
_The question of deciding a first order theory asks for an algorithm which, given a first order sentence \(\phi\), determines whether \(\phi\) is true or false in \(\mathcal{M}\) in finite time._
The language of interest in this paper is the language of monoids, whose signature is \((\circ,\varepsilon)\). To speak of the first order theory of a given monoid, one classically allows atomic formulas of the form \(u=v\) for each \(u,v\in\mathcal{M}\). In the finitely generated case (with generating set \(A=\{a_{1},\ldots,a_{n}\}\), say) this is equivalent to adding constants \(a_{1},\ldots,a_{n}\) to the signature, and considering the first order theory with constants of \((\mathcal{M},\circ,\varepsilon,a_{1},\ldots,a_{n})\).
In our case, we refer to the first order theory of a plactic monoid of rank \(n\), which will have constants \(1,\ldots,n\) added to the language of monoids. Write \(FOTh(P_{n})\) as shorthand for the first order theory of \(P_{n}\) with constants.
We aim in the following sections to build an interpretation of \(P_{n}\) in Presburger arithmetic, which will allow \(\varphi\in FOTh(P_{n})\) to be reduced to a sentence \(\tilde{\varphi}\) of Presburger arithmetic. A _reduction_ of a decision problem \(D_{1}\) to another decision problem \(D_{2}\) is a Turing machine which, given finitely many queries to an oracle for \(D_{2}\), will yield an algorithm for deciding \(D_{1}\). Importantly, this means that decidability of \(D_{2}\) will imply decidability of \(D_{1}\), as such an oracle machine will exist and halt in finite time on each query.
Presburger arithmetic is named after Mojzesz Presburger, who in 1929 was tasked with studying the decidability of the integers under addition. In his master's thesis [38], he used quantifier elimination and reasoning about arithmetic congruences to prove that the first order theory of \((\mathbb{N},0,1,+)\) is consistent, complete, and decidable. Note that we can add a comparison symbol \(\leq\) to the signature of Presburger arithmetic without trouble, since \(x\leq y\) is equivalent to the statement \(\exists z:y=x+z\). This yields the following lemma:
**Lemma 2.6**.: \(FOTh(\mathbb{N},0,1,+,\leq)\) _is decidable._
This result will form the bedrock of the following argument. For an English translation of Presburger's work, see [46].
For the definition of an interpretation, we proceed as in section 1.3 of [34]
**Definition 2.7**.: _Let \(\mathcal{M}\) be an \(L\)-structure. A set \(S\subseteq\mathcal{M}^{n}\) is called definable in \(\mathcal{M}\) if there is a first order formula \(\phi(x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})\in L\) with free variables \(x_{1},\ldots,x_{n},y_{1}\ldots,y_{m}\) such that there exists \((w_{1},\ldots,w_{m})\in\mathcal{M}^{m}\) with
the property that \(\phi(x_{1},\ldots,x_{n},w_{1},\ldots,w_{m})\) holds if and only if \((x_{1},\ldots,x_{n})\in S\). i.e. \(S\) is the set_
\[\{\underline{x}\in\mathcal{M}^{n}|\ \mathcal{M}\models\phi(\underline{x},w_{1}, \ldots,w_{m})\}\]
**Example**.: _The set of elements that commutes with a given \(m\in\mathcal{M}\) is definable by the formula \(xm=mx\). The centre of \(\mathcal{M}\) is definable by the formula_
\[\forall m:xm=mx\]
_If \(\mathcal{M}\) is finitely generated by \(\{m_{1},\ldots m_{k}\}\), then the formula_
\[xm_{1}=m_{1}x\wedge xm_{2}=m_{2}x\wedge\cdots\wedge xm_{k}=m_{k}x\]
_defines the centre of \(\mathcal{M}\). This has the property of being a positive existential formula, which is useful in the study of Diophantine equations2_
Footnote 2: See, for example, [9]
**Definition 2.8**.: _A function \(f:\mathcal{M}^{m}\to\mathcal{M}^{n}\) is definable in \(\mathcal{M}\) if its graph is definable as a subset of \(\mathcal{M}^{m+n}\)._
Note that the composition of definable functions is definable.
**Definition 2.9**.: _Let \(\mathcal{M}\) be an \(L_{1}\)-structure, and \(\mathcal{N}\) be an \(L_{2}\)-structure. Then we call \(\mathcal{N}\) interpretable in \(\mathcal{M}\) if there exist some \(n\in\mathbb{N}\), some set \(S\subseteq\mathcal{M}^{n}\), and a bijection \(\phi:S\to\mathcal{N}\) such that_
1. \(S\) _is definable in_ \(\mathcal{M}\)__
2. _For every_ \(\sigma\) _in the signature of_ \(L_{2}\)_, including the equality relation, the preimage by_ \(\phi\) _of the graph of_ \(\sigma\) _is definable in_ \(\mathcal{M}\)__
Since we will only be dealing with the case of a monoid, point 2 reduces to checking the preimages of equality \(\phi^{-1}(=)\) and multiplication \(\phi^{-1}(\cdot)\), with the latter being the set of triples \((a,b,c)\in S^{3}\) such that \(\phi(a)\cdot\phi(b)=\phi(c)\).
Note that in the above definition we insisted the map \(\phi\) be a bijection, as in section 1.3 of [34]. The interpretation we will build will be a bijection. However, the most general theory of interpretations works with surjections from \(S\) onto \(\mathcal{N}\). See section 5 of [17] for more information.
The following result will prove fundamental, and is a consequence of theorem 5.3.2 and its remarks in [17]:
**Proposition 2.10**.: _Suppose \(L_{1}\) and \(L_{2}\) are languages, with \(M_{1}\) and \(M_{2}\) being \(L_{1}\)- and \(L_{2}\)-structures, respectively. Suppose \(M_{1}\) is interpretable in \(M_{2}\). Then the problem of deciding \(FOTh(M_{1})\) is reducible to the problem of deciding \(FOTh(M_{2})\)._
Next we define the notion of bi-interpretability, which will be the subject of section 3.2. By the definition of interpretations, it is straightforward to see that interpretations are transitive: if \(M_{1}\) is interpretable in \(M_{2}\), and \(M_{2}\) is
interpretable in \(M_{3}\), then \(M_{1}\) is interpretable in \(M_{3}\). This implies that if two structures are _mutually interpretable_, i.e. \(M_{1}\) and \(M_{2}\) are each interpretable in the other, then we obtain an interpretation of \(M_{1}\) in itself, and likewise an interpretation of \(M_{2}\) in itself.
**Definition 2.11**.: _Given \(M_{1}\) an \(L_{1}\)-structure, and \(M_{2}\) an \(L_{2}\)-structure, we say \(M_{1}\) and \(M_{2}\) are bi-interpretable if \(M_{1}\) and \(M_{2}\) are mutually interpretable, and the map \(\phi_{i}\) interpreting \(M_{i}\) in itself is definable in \(M_{i}\), for \(i=1,2\)._
**Example**.: _Presburger arithmetic is commonly expressed as \((\mathbb{N},0,1,+,\leq)\) and \((\mathbb{Z},0,1,+,-,\leq)\). These two models are bi-interpretable._
The identity map \(\phi\) on \(\mathbb{N}\subset\mathbb{Z}\) definable by \(0\leq x\) interprets \((\mathbb{N},0,1,+,\leq)\) in \((\mathbb{Z},0,1,+,-,\leq)\), and the map \(\psi:\mathbb{N}^{2}\to\mathbb{Z}\) with \(\psi(a,b)=a-b\) interprets as a surjection \((\mathbb{Z},0,1,+,-,\leq)\) in \((\mathbb{N},0,1,+,\leq)\). We then obtain \(\phi\psi:S\to\mathbb{N}\) from the definable set \(S=\{(a,b)\ |\ a\geq b\}\subset\mathbb{N}^{2}\), with graph
\[\{(a,b,c)\ |\ a\geq b\wedge c+b=a\}\]
definable in \((\mathbb{N},0,1,+,\leq)\). We also obtain \(\psi\phi^{2}:T\to\mathbb{Z}\) with \(T=\mathbb{N}^{2}\) definable in \(\mathbb{Z}^{2}\) by \(0\leq x\wedge 0\leq y\), whose graph
\[\{(a,b,c)|\ 0\leq a\wedge 0\leq b\wedge c=a-b\}\]
is definable in \((\mathbb{Z},0,1,+,-,\leq)\).
We will use these two notions of Presburger arithmetic interchangeably.
## 3 The case \(n=2\)
### Interpreting \(P_{2}\) in \((\mathbb{Z},0,1,+,-,\leq)\)
First, we will explicitly treat the case \(n=2\) of tableaux on two letters. Such tableaux have three possible columns: \(21\), \(1\), and \(2\). Denote by \(t\) the word \(21\) in \(A^{*}\). Then by abuse of notation \(\mathcal{C}^{*}=\{t,1,2\}^{*}\), and our rewriting system becomes:
\[R=\{21\to t\,\ 2t\to t2\,\ 1t\to t1\}\]
By the two commutativity rules, and the fact that any factor \(21\) would not appear in a reduced word, we can write any reduced word \(w\in(\mathcal{C},R)\) as some \(t^{w_{1}}1^{w_{2}}2^{w_{3}}\). Thus, by completeness of the rewriting system, each element of \(P_{2}\) corresponds to a triple \((w_{1},w_{2},w_{3})\in\mathbb{N}^{3}\), associated to a normal form \(t^{w_{1}}1^{w_{2}}2^{w_{3}}\). Likewise, each such triple corresponds to a tableau, hence an element of \(P_{2}\).
Consider the map \(\phi:S\to P_{2}\), where \(S=\mathbb{N}^{3}\subset\mathbb{Z}^{3}\) is definable by the formula
\[(0\leq x_{1})\wedge(0\leq x_{2})\wedge(0\leq x_{3})\]
and \(\phi(x_{1},x_{2},x_{3})=t^{x_{1}}1^{x_{2}}2^{x_{3}}\). This is a bijection from a definable set in \((\mathbb{Z},0,1,+,-,\leq)\), and the inverse graph of equality will be
\[\phi^{-1}(=) =\big{\{}(a_{1},a_{2},a_{3},b_{1},b_{2},b_{3})\in\mathbb{N}^{6}\ |\ t^{a_{1}}1^{a_{2}}2^{a_{3}}=t^{b_{1}}1^{b_{2}}2^{b_{3}}\big{\}}\] \[=\big{\{}(a_{1},a_{2},a_{3},b_{1},b_{2},b_{3})\in\mathbb{N}^{6}\ |\ a_{1}=b_{1},\ a_{2}=b_{2},\ a_{3}=b_{3}\big{\}}\subset \mathbb{Z}^{6}\]
Which is definable by the formula \((\underline{a}\in S)\wedge(\underline{b}\in S)\wedge\bigwedge\limits_{i\in[3 ]}(a_{i}\ =\ b_{i})\)
We check the preimage of the graph of multiplication
\[\phi^{-1}(\circ)=\big{\{}(\underline{a},\underline{b},\underline{c})\in S^{3 }\ |\ t^{a_{1}}1^{a_{2}}2^{a_{3}}t^{b_{1}}1^{b_{2}}2^{b_{3}}=t^{c_{1}}1^{c_{2}}2^{c _{3}}\big{\}}\]
Explicitly checking the multiplication yields
\[t^{a_{1}}1^{a_{2}}2^{a_{3}}t^{b_{1}}1^{b_{2}}2^{b_{3}} =t^{a_{1}+b_{1}}1^{a_{2}}2^{a_{3}}1^{b_{2}}2^{b_{3}}\] \[=\begin{cases}t^{a_{1}+b_{1}+a_{3}}1^{a_{2}+b_{2}-a_{3}}2^{b_{3}},\ a_{3}\leq b_{2}\\ t^{a_{1}+b_{1}+b_{2}}1^{a_{2}}2^{b_{3}+a_{3}-b_{2}},\ b_{2}\leq a_{3}\end{cases}\]
Thus we get the following formula for \(\phi^{-1}(\circ)\)
\[(\underline{a}\in S)\wedge(\underline{b}\in S)\wedge(\underline{c}\in S) \wedge[(a_{3}\leq b_{2}\wedge c_{1}=a_{1}+b_{1}+a_{3}\wedge c_{2}=a_ {2}+b_{2}-a_{3}\wedge c_{3}=b_{3})\] \[\vee(b_{2}\leq a_{3}\wedge c_{1}=a_{1}+b_{1}+b_{2}\wedge c_{2}=a_ {2}\wedge c_{3}=b_{3}+a_{3}-b_{2})]\]
It follows then that \(\phi\) is an interpretation of \(P_{2}\) in Presburger arithmetic. This yields the following result.
**Theorem 1**.: \(P_{2}\) _has decidable first order theory._
Proof.: Since \(\phi\) above is an interpretation of \(P_{2}\) in \((\mathbb{Z},0,1,+,-,\leq)\), every first order formula of \(P_{2}\) is interpreted as a first order formula of Presburger arithmetic, which is decidable by 2.6.
Note that this argument is closely related to the proof that the bicyclic monoid \(B=\langle a,b\ |\ ba=\varepsilon\rangle\) has decidable first order theory (see section 2.4 of [11]). Indeed, the map \(\psi:P_{2}\to B\) sending \(1\) to \(a\), \(2\) to \(b\), and \(t\) to \(\varepsilon\) is a monoid homomorphism, and \(\psi\circ\phi:S\to B\) is an interpretation of the bicyclic monoid in Presburger arithmetic.
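For concreteness, the case analysis above is easily machine-checked: the sketch below (helper names ours) computes the normal form of a word over \(\{1,2\}\) directly from the rewriting rules and compares it with the two-case product formula.

```python
def normal_form_P2(word):
    """Normal form (w1, w2, w3), i.e. t^{w1} 1^{w2} 2^{w3}, of a word over {1, 2},
    using the rules 21 -> t, 1t -> t1, 2t -> t2 read left to right."""
    t_exp = ones = twos = 0
    for x in word:
        if x == 1:
            if twos:              # a factor 21 appears and rewrites to the column t
                twos -= 1
                t_exp += 1
            else:
                ones += 1
        else:
            twos += 1
    return (t_exp, ones, twos)

def mul2(a, b):
    """Product of two normal forms of P_2, following the two cases above."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    if a3 <= b2:
        return (a1 + b1 + a3, a2 + b2 - a3, b3)
    return (a1 + b1 + b2, a2, b3 + a3 - b2)

u, v = [1, 2, 2, 1], [2, 1, 1]
assert mul2(normal_form_P2(u), normal_form_P2(v)) == normal_form_P2(u + v)
```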
### Bi-interpretability
The centre of \(P_{2}\), \(Z(P_{2})=\{t^{n}|\ n\in\mathbb{N}\}\) is a subset of \(P_{2}\) which is isomorphic to \((\mathbb{N},0,1,+,\leq)\), with \(a\leq b\) in \(\mathbb{N}\) corresponding to \(\exists y:t^{a}y=t^{b}\), and addition corresponding to monoid multiplication. The centre is definable in \(P_{2}\) via the formula \(x1=1x\wedge x2=2x\), since \(P_{2}\) is finitely generated. We can therefore take \(\psi:Z(P_{2})\to\mathbb{N}\) to be an interpreting map of \((\mathbb{N},0,1,+,\leq)\) in \(P_{2}\).
**Proposition 3.1**.: \(P_{2}\) _and Presburger arithmetic are bi-interpretable._
Proof.: Firstly, note that \(\phi:\mathbb{N}^{3}\to P_{2}\) defined as above is also an interpreting map of \(P_{2}\) into \((\mathbb{N},0,1,+,\leq)\), by replacing any subtraction formulas \(a=b-c\) with \(a+c=b\). Let \(\psi:Z(P_{2})\to\mathbb{N}\) be the interpretation described above. Then \(\psi\phi:S\to\mathbb{N}\), where \(S\) is the definable subset \(\{(n,0,0)\ |\ n\in\mathbb{N}\}\subset\mathbb{N}^{3}\), is the isomorphism sending \((n,0,0)\) to \(n\), which is clearly definable.
Consider \(\phi\psi:Z(P_{2})^{3}\to P_{2}\), sending \((t^{a},t^{b},t^{c})\) to \(w=t^{a}1^{b}2^{c}\). We have that the sets \(S_{1}=\{1^{n}|\ n\in\mathbb{N}\}\) and \(S_{2}=\{2^{n}|\ n\in\mathbb{N}\}\) are definable in \(P_{2}\) as follows:
First, let \(C_{1}=\{t^{a}1^{b}|\ a,b\in\mathbb{N}\}\subset P_{2}\) be the set of elements satisfying
\[\exists y:yx\in Z(P_{2})\]
then \(S_{1}\) will be the set of elements satisfying
\[x\in C_{1}\wedge\forall y\forall z:y\in Z(P_{2})\wedge x=yz\implies y=\varepsilon\]
Likewise, for \(C_{2}=\{t^{a}2^{b}|\ a,b\in\mathbb{N}\}\subset P_{2}\) defined by \(\exists y:xy\in Z(P_{2})\), we have that the following formula defines \(S_{2}\):
\[x\in C_{2}\wedge\forall y\forall z:y\in Z(P_{2})\wedge x=yz\implies y=\varepsilon\]
These formulas state that if we can write \(x\) as \(t^{a}1^{b}\) (respectively \(t^{a}2^{b}\)) then we must have \(a=0\), meaning \(x\) must belong to \(S_{1}\) (respectively \(S_{2}\)).
Given \(t^{b}\) we can define \(x=1^{b}\) by
\[\exists z:x\in S_{1}\wedge z\in S_{2}\wedge zx=t^{b}\]
Likewise, given \(t^{c}\), we can define \(y=2^{c}\) by
\[\exists z:z\in S_{1}\wedge y\in S_{2}\wedge yz=t^{c}\]
Then we have that \(\phi\psi(t^{a},t^{b},t^{c})=t^{a}xy\), which is definable.
## 4 The general case
Throughout this section, let \(k=|\mathcal{C}|=2^{n}-1\). Index \(\mathcal{C}\) by \(i\in[k]\) so that \(i<j\iff c_{i}\sqsubset c_{j}\).
Let \(S\subseteq\mathbb{N}^{k}\) be the set of all \((v_{1},\ldots,v_{k})\) such that \(c_{1}^{v_{1}}\ldots c_{k}^{v_{k}}\) is the normal form of a tableau, and let \(\phi:S\to P_{n}\) be the natural bijection. The normal form of any tableau will obey compatibility conditions: for each pair \((a,b)\in[k]\times[k]\) such that \(a<b\) and \(c_{a}\not\succeq c_{b}\), we have that either \(v_{a}=0\) or \(v_{b}=0\). Let \(I\subset[k]\times[k]\) be the set of all such pairs. Then \(S\subset\mathbb{Z}^{k}\) is defined by the formula
\[\bigwedge_{i\in[k]}(0\leq x_{i})\wedge\bigwedge_{(a,b)\in I}[(x_{a}=0)\vee(x_ {b}=0)]\]
We claim that \(\phi\) is an interpreting map of \(P_{n}\) in Presburger arithmetic. Again, we check the diagonal:
\[\phi^{-1}(=)=\{(\underline{a},\underline{b})\in S^{2}\ |\ \phi(\underline{a})= \phi(\underline{b})\}\]
Which is definable by \((\underline{a}\in S)\wedge(\underline{b}\in S)\wedge\bigwedge_{i\in[k]}(a_{i} =b_{i})\) as in the \(n=2\) case. It remains to check whether the preimage of the multiplication graph
\[\phi^{-1}(\circ)=\big{\{}(\underline{a},\underline{b},\underline{c})\in S^{3 }\ |\ \phi(\underline{a})\phi(\underline{b})=\phi(\underline{c})\big{\}}\subset \mathbb{Z}^{3k}\]
is definable.
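The index set \(I\), and hence the formula defining \(S\), can be generated mechanically from the list of columns. The following self-contained sketch (helper names ours, with \(0\)-based indices) does this; for \(n=3\) the only incompatible pair of columns turns out to be \((32,1)\).

```python
from itertools import combinations

def columns(n):
    """The 2^n - 1 columns of P_n as strictly decreasing tuples, listed in the
    length-decreasing-lexicographic order used for normal forms."""
    cols = [tuple(sorted(c, reverse=True))
            for r in range(1, n + 1) for c in combinations(range(1, n + 1), r)]
    return sorted(cols, key=lambda c: (-len(c), c))

def compatible(a, b):
    return len(a) >= len(b) and all(x <= y for x, y in zip(a[::-1], b[::-1]))

def incompatible_pairs(n):
    """The index set I (0-based) of pairs of incompatible columns."""
    cols = columns(n)
    return [(i, j) for i in range(len(cols)) for j in range(i + 1, len(cols))
            if not compatible(cols[i], cols[j])]

def in_S(v, n):
    """Is the coefficient vector v the normal form of a tableau of P_n?"""
    return (all(x >= 0 for x in v)
            and all(v[i] == 0 or v[j] == 0 for i, j in incompatible_pairs(n)))

cols = columns(3)
assert [(cols[i], cols[j]) for i, j in incompatible_pairs(3)] == [((3, 2), (1,))]
```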
### Multiplication - the idea
Using the previous section as a base case, we will proceed with the induction hypothesis that, for each \(2\leq i\leq n-1\), we have a formula \(\eta_{i}\) in Presburger arithmetic defining multiplication in \(P_{i}\).
We first consider the structure of multiplication in \(P_{n}\). The recursive nature of Schensted's algorithm yields a characterisation of multiplication in \(P_{n}\) via bottom rows and top tableaux.
**Definition 4.1**.: _We call a tableau \(t\in P_{n}\) a top tableau if its row reading is a word over \(\{2,\ldots,n\}^{*}\). i.e. there are no 1's appearing in the tableau word representing \(t\)._
Note that each \(u\in P_{n}\) will have an associated top tableau - if \(u=r_{1}\ldots r_{l}\), then \(r_{1}\ldots r_{l-1}\) will be a top tableau.
For \(u,v\in P_{n}\), the product \(uv\) will be computed by first running an insertion algorithm into the bottom row of \(u\), and then inserting any bumped letters into the top tableau associated to \(u\). We will make this idea more precise.
**Definition 4.2**.: _Define the following maps:_
1. _The top map_ \(T:P_{n}\to P_{n}\) _maps an element_ \(w\) _with row form_ \(r_{1}\ldots r_{l}\) _in_ \(A^{*}\) _to its corresponding top tableau_ \(T(w)=r_{1}\ldots r_{l-1}\)__
2. _The bottom map_ \(B:P_{n}\to P_{n}\) _maps an element_ \(w\) _as above to its bottom row_ \(B(w)=r_{l}\)__
**Example**.: _Let \(t\) be the tableau with row reading \(34\ 233\ 11244\):_
\begin{tabular}{|c|c|c|c|c|} \hline
3 & 4 & & & \\ \hline
2 & 3 & 3 & & \\ \hline
1 & 1 & 2 & 4 & 4 \\ \hline \end{tabular}
_Then \(T(t)=34\ 233\) and \(B(t)=11244\)._
For \(u,v\in P_{n}\), by the structure of Schensted's algorithm, the product \(uv\) will run an insertion algorithm first into \(B(u)\), followed by any letters that are bumped being inserted into \(T(u)\). This yields the following characterisation of the top and bottom of the product:
\[T(uv) =T(u)T(B(u)v)\] \[B(uv) =B(B(u)v)\]
Where equality is taken to mean equality in \(P_{n}\), not equality of words. Note that the set of top tableaux, which is equivalently the image of \(T\), is a submonoid isomorphic to \(P_{n-1}\) over the alphabet \(\{2,\ldots,n\}\). Thus the product of top tableaux will be definable via \(\eta_{n-1}\). Therefore, if we can define the row \(B(uv)\), and a way of stitching \(T(uv)\) and \(B(uv)\) into one tableau \(uv\), we will obtain \(\eta_{n}\), a formula defining multiplication in \(P_{n}\).
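These identities can be checked mechanically on small examples. The following self-contained sketch (function names ours; it repeats the insertion routine of Section 2) verifies both identities for all pairs of words of length at most three over \(\{1,2,3\}\).

```python
from itertools import product

def insert(rows, x):
    """Schensted row insertion into a tableau stored as rows, top row first."""
    if not rows:
        return [[x]]
    *upper, bottom = rows
    if bottom[-1] <= x:
        return upper + [bottom + [x]]
    j = next(i for i, y in enumerate(bottom) if y > x)
    return insert(upper, bottom[j]) + [bottom[:j] + [x] + bottom[j + 1:]]

def tableau(word):
    rows = []
    for x in word:
        rows = insert(rows, x)
    return rows

def reading(rows):                 # row reading r_1 ... r_m
    return [x for row in rows for x in row]

def mul(u, v):                     # product of two tableaux in P_n
    return tableau(reading(u) + reading(v))

def top(rows):                     # T: every row except the bottom one
    return rows[:-1]

def bottom(rows):                  # B: the bottom row, as a one-row tableau
    return rows[-1:]

words = [w for length in range(4) for w in product([1, 2, 3], repeat=length)]
for wu, wv in product(words, repeat=2):
    u, v = tableau(wu), tableau(wv)
    uv = mul(u, v)
    assert top(uv) == mul(top(u), top(mul(bottom(u), v)))
    assert bottom(uv) == bottom(mul(bottom(u), v))
```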
**Definition 4.3**.: _The stitch map \(\Sigma:P_{n}\times P_{n}\to P_{n}\) is defined as follows. For \(u\in P_{n}\) a top tableau with row reading \(r_{1}\ldots r_{n}\in A^{*}\) and \(v\) a row with row reading \(r_{v}\in A^{*}\), \(\Sigma(u,v)=uv\) if \(r_{1}\ldots r_{n}r_{v}\) is the row reading of a tableau. Otherwise, \(\Sigma(u,v)=\varepsilon\)._
If \(\Sigma\) has nontrivial output, we call \(u\) and \(v\)_compatible_, and \(uv\) the "stitched" tableau.
**Example**.: _Suppose \(u=43322234\) and \(v=11113\). Then \(\Sigma(u,v)=uv\), with corresponding tableau_
\begin{tabular}{|c|c|c|c|c|} \hline
4 & & & & \\ \hline
3 & 3 & & & \\ \hline
2 & 2 & 2 & 3 & 4 \\ \hline
1 & 1 & 1 & 1 & 3 \\ \hline \end{tabular}
Note that \(\Sigma(T(u),B(u))=u\). We can thus characterise multiplication via the above maps as follows
\[uv =\Sigma(T(uv),B(uv))\] \[=\Sigma(T(u)T(B(u)v),B(B(u)v))\]
Let us consider the structure of \(w\in P_{n}\) as a word in normal form in \(\mathcal{C}^{*}\). We have that \(w=c_{1}^{w_{1}}c_{2}^{w_{2}}\ldots c_{k}^{w_{k}}\) for some \(w_{i}\in\mathbb{N}\) for each \(i\), satisfying some compatibility conditions. But consider now each block \(c_{i}^{m}\) for some \(m\in\mathbb{N}\) as a tableau word in row form in the presentation \(\langle A|K\rangle\). Then for \(c_{i}=x_{1}x_{2}\ldots x_{r}\) in row form in \(A^{*}\), we have that \(c_{i}^{m}=x_{1}^{m}x_{2}^{m}\ldots x_{r}^{m}\) in row form in \(A^{*}\). For each \(c_{i}\), this row
form is unique, since each column corresponds to a unique decreasing sequence in \(A^{*}\).
Define \(\alpha\) to be the finite sequence of letters in \(A\) which first outputs in order the letters in the row form of \(c_{1}\), then the letters of the row form of \(c_{2}\), and so on. We also define \(\beta\) to be the finite sequence, taking values in \([k]\), with \(\beta_{i}=j\) when \(\alpha_{i}\) is a letter from column \(c_{j}\).
**Example**.: _The seven columns of \(P_{3}\), in the order \(\sqsubseteq\), are \(c_{1}=321\), \(c_{2}=21\), \(c_{3}=31\), \(c_{4}=32\), \(c_{5}=1\), \(c_{6}=2\), \(c_{7}=3\), giving_
\[\alpha =3,2,1,2,1,3,1,3,2,1,2,3\] \[\beta =1,1,1,2,2,3,3,4,4,5,6,7\]
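The sequences \(\alpha\) and \(\beta\) depend only on \(n\) and can be generated directly from the ordered list of columns; a short sketch (helper names ours) reproducing the \(P_{3}\) example above is:

```python
from itertools import combinations

def columns(n):
    """The 2^n - 1 columns of P_n as strictly decreasing tuples, in the
    length-decreasing-lexicographic order."""
    cols = [tuple(sorted(c, reverse=True))
            for r in range(1, n + 1) for c in combinations(range(1, n + 1), r)]
    return sorted(cols, key=lambda c: (-len(c), c))

def alpha_beta(n):
    """alpha: the letters of the row forms of c_1, c_2, ... concatenated;
    beta: the (1-based) index of the column each letter of alpha comes from."""
    alpha, beta = [], []
    for i, col in enumerate(columns(n), start=1):
        alpha.extend(col)          # the row form of a column lists it top to bottom
        beta.extend([i] * len(col))
    return alpha, beta

assert alpha_beta(3) == ([3, 2, 1, 2, 1, 3, 1, 3, 2, 1, 2, 3],
                         [1, 1, 1, 2, 2, 3, 3, 4, 4, 5, 6, 7])
```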
Denote the length of \(\alpha\) and \(\beta\) by \(\ell\), which is fixed for any choice of \(n\). Then we can write \(w=\alpha_{1}^{w_{\beta_{1}}}\ldots\alpha_{\ell}^{w_{\beta_{\ell}}}\), where \(w_{\beta_{i}}\) is the coefficient of column \(c_{\beta_{i}}\) in the normal form of \(w\).
**Lemma 4.4**.: _Consider \(u,v,w\in P_{n}\). Suppose \(v\) has normal form \(c_{1}^{v_{1}}\ldots c_{k}^{v_{k}}\). Then \(w=uv\) is equivalent to the following:_
_There exist \(u_{0},u_{1},\ldots u_{\ell}\in P_{n}\) such that \(u_{0}=u\), \(u_{\ell}=w\), and we have a recursive formula for \(u_{i}\)_
\[u_{i}=u_{i-1}\alpha_{i}^{v_{\beta_{i}}}\]
This result is immediate from the structure of the insertion algorithm and the fact \(v=\alpha_{1}^{v_{\beta_{1}}}\ldots\alpha_{\ell}^{v_{\beta_{\ell}}}\)
**Corollary 4.5**.: \(\phi^{-1}(\circ)\) _is definable if the maps \(\mu_{x}:\mathbb{N}\times S\to S\) such that \(\phi(\mu_{x}(m,\underline{a}))=\phi(\underline{a})x^{m}\) are definable for each \(x\in A\)._
Proof.: By lemma 4.4, given \(\underline{a},\underline{b}\in S\), we have that \(\phi(\underline{c})=\phi(\underline{a})\phi(\underline{b})\) if and only if there is some \(\underline{c}^{0},\ldots,\underline{c}^{\ell}\) such that \(\underline{c}^{0}=\underline{a},\ \underline{c}=\underline{c}^{\ell}\), and
\[\underline{c}^{i}=\mu_{\alpha_{i}}(b_{\beta_{i}},\underline{c}^{i-1})\]
so the preimage of the graph of multiplication is a composition of finitely many applications of \(\mu_{x}\), which will be definable if each \(\mu_{x}\) is definable.
### The formula defining \(\mu_{x}\)
Henceforth, \(x\) is a fixed letter in \(A\).
Recall that if \(\underline{b}=\mu_{x}(m,\underline{a})\), then
\[\phi(\underline{b}) =\phi(\underline{a})x^{m}\] \[=\Sigma(T(\phi(\underline{a}))T(B(\phi(\underline{a}))x^{m}),B(B( \phi(\underline{a}))x^{m})).\]
So we wish to obtain a formula of Presburger arithmetic describing
\[\underline{b}=\phi^{-1}\Sigma(T(\phi(\underline{a}))T(B(\phi(\underline{a}))x ^{m}),B(B(\phi(\underline{a}))x^{m}))\]
We can break this down into a composition of several maps. First, define \(\underline{a}^{1}\) and \(\underline{a}^{2}\) such that
\[\underline{a}^{1} =\phi^{-1}T\phi(\underline{a})\] \[\underline{a}^{2} =\phi^{-1}B\phi(\underline{a})\]
Next, considering \(R\subset P_{n}\) the set of row words, we define two maps \(\rho_{1},\rho_{2}:R\ \rightarrow\ S\) such that, for \(r\in R\), \(\phi(\rho_{1}(r))=T(rx^{m})\) and \(\phi(\rho_{2}(r))=B(rx^{m})\). Then since \(\phi(\underline{a}^{2})\) is a row, we can define \(\underline{a}^{3}\) and \(\underline{a}^{4}\) to be such that
\[\underline{a}^{3} =\rho_{1}(\phi(\underline{a}^{2}))\] \[\underline{a}^{4} =\rho_{2}(\phi(\underline{a}^{2}))\]
That is, \(\phi(\underline{a}^{3})=T(B(\phi(\underline{a}))x^{m})\) and \(\phi(\underline{a}^{4})=B(B(\phi(\underline{a}))x^{m})\).
Next, define \(\underline{a}^{5}\) to be such that
\[\phi(\underline{a}^{5})=T(\phi(\underline{a}))T(B(\phi(\underline{a}))x^{m})= \phi(\underline{a}^{1})\phi(\underline{a}^{3})\]
By our induction hypothesis, this will be definable, as the set of top tableaux is generated by a known subset of columns. Thus the coefficients in \(\underline{a}^{5}\) will either be calculated by the formula \(\eta_{n-1}\), or will equal zero.
Finally, we have that
\[\underline{b}=\phi^{-1}\Sigma(\phi(\underline{a}^{5}),\phi(\underline{a}^{4}))\]
Since the composition of definable maps is definable, we have that \(\mu_{x}\) is definable precisely when \(\phi^{-1}T\phi(\underline{a}),\phi^{-1}B\phi(\underline{a}),\rho_{1},\rho_{2}\), and \(\phi^{-1}\Sigma(\phi(\ ),\phi(\ ))\) are definable. This will be the subject of the following three lemmas.
**Lemma 4.6**.: _The following maps are definable:_
1. \(\phi^{-1}B\phi:S\to S\)__
2. \(\phi^{-1}T\phi:S\to S\)__
Proof.: \((i)\) Define the finite sets \(B_{a}\) for each \(a\in A\) by
\[B_{a}=\{j\in[k]\ |\ c_{j}=x_{m}\ldots x_{1}\wedge x_{1}=a\}\]
which are nonempty for each \(a\). Then we get that \(\underline{b}=\phi^{-1}(B(\phi(\underline{a})))\) if and only if the following formula holds:
\[\bigwedge_{i\in[k-n]}(b_{i}=0)\wedge\bigwedge_{i\in[n]}\left(b_{k-n+i}=\sum_{j\in B _{i}}a_{j}\right)\]
The first part of the formula states that the coefficient of each column of size \(\geq 2\) is zero, and the second part records the fact that each column \(x_{m}\ldots x_{1}\) in \(\phi(\underline{a})\) contributes to the coefficient of the letter \(x_{1}\) in the bottom row.
\((ii)\) Define the similar sets \(T_{i}\) for each \(i\in[k]\) by
\[T_{i}=\{j\in[k]\ |\ c_{j}=x_{m}\ldots x_{1}\wedge x_{m}\ldots x_{2}=c_{i}\}\]
Note that if \(i\in B_{1}\), then \(T_{i}=\emptyset\). Now, we have that \(\underline{b}=\phi^{-1}(T(\phi(\underline{a})))\) if and only if the following formula holds:
\[\bigwedge_{i\in[k]}\left(b_{i}=\sum_{j\in T_{i}}a_{j}\right)\]
Where we take the sum over an empty indexing set to be \(0\).
Note that the sets \(T_{i}\) and \(B_{a}\) can be constructed algorithmically for any given \(n\). Given the set of columns as decreasing sequences, we can check membership in each \(B_{a}\) by considering the minimal element of a column, and we can check membership in each \(T_{i}\) by considering the column without its minimal element. Note also that the sets \(B_{a}\) partition \([k]\), while the sets \(T_{i}\) partition \([k-n]\), the indices of the columns of height at least two.
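Following this remark, the sets \(B_{a}\) and \(T_{i}\) can be generated from the ordered list of columns; the sketch below (helper names ours, with \(0\)-based indices) does so and recovers the column order of the example in Section 4.1.

```python
from itertools import combinations

def columns(n):
    """The 2^n - 1 columns of P_n as strictly decreasing tuples, in the order
    used for normal forms (longer columns first, then lexicographically)."""
    cols = [tuple(sorted(c, reverse=True))
            for r in range(1, n + 1) for c in combinations(range(1, n + 1), r)]
    return sorted(cols, key=lambda c: (-len(c), c))

def bottom_and_top_index_sets(n):
    """B[a]: indices (0-based) of the columns whose lowest box is the letter a.
       T[i]: indices of the columns obtained from column i by adding a lower box."""
    cols = columns(n)
    B = {a: [j for j, c in enumerate(cols) if c[-1] == a] for a in range(1, n + 1)}
    T = {i: [j for j, c in enumerate(cols) if c[:-1] == cols[i]]
         for i in range(len(cols))}
    return cols, B, T

cols, B, T = bottom_and_top_index_sets(3)
assert cols == [(3, 2, 1), (2, 1), (3, 1), (3, 2), (1,), (2,), (3,)]
assert T[4] == []      # the column consisting of the single box 1 lies in B_1
```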
Next, we move on to defining the maps \(\phi^{-1}T(rx^{m})\) and \(\phi^{-1}B(rx^{m})\).
**Lemma 4.7**.: _The following maps are definable:_
* \(\rho_{1}\phi:S\to S\)__
* \(\rho_{2}\phi:S\to S\)__
Proof.: Consider \(S_{R}\subset S\) to be the subset of normal forms corresponding to rows (i.e. \(S_{R}\) is the preimage of \(R\subset P_{n}\)). Then \(S_{R}\) is a subset definable by
\[\bigwedge_{i\in[k-n]}x_{i}=0\]
and \(\rho_{1}\phi,\ \rho_{2}\phi\) will be maps from \(S_{R}\) to \(S_{R}\). Indeed, by [41], the number of rows after running Schensted's algorithm on any \(w\in A^{*}\) is equal to the length of the longest strictly decreasing subsequence of \(w\). Now, since we map a row \(r\) to \(rx^{m}\in A^{*}\), and since \(r\) is non-decreasing as a sequence in \(A^{*}\), the longest strictly decreasing subsequence of \(w=rx^{m}\) can have length at most \(2\).
Now, write \(\underline{r}=(0,\ldots,0,r_{1},r_{2},\ldots,r_{n})\). We will describe explicitly \(\underline{c}=(0,\ldots,0,c_{1},\ldots,c_{n})\) and \(\underline{d}\ =\ (0,\ldots,d_{1},\ldots,d_{n})\) such that \(\phi(\underline{c})=T(w)\) and \(\phi(\underline{d})=B(w)\).
First, consider \(B(w)\). In the setting \(\langle A|K\rangle\), we will have \(x^{m}\) inserted into
\[r=1^{r_{1}}2^{r_{2}}\ldots(x+1)^{r_{x+1}}(x+2)^{r_{x+2}}\ldots n^{r_{n}}\]
It will bump \(m\) letters from this row, starting at \(x+1\). This means that \(d_{i}=r_{i}\) for \(i<x\) and \(d_{x}=r_{x}+m\). We will now consider the later entries of \(\underline{d}\):
In the first case, suppose \(m\leq r_{x+1}\). Then we will bump \(m\) letters \(x+1\) and replace them with letters \(x\). This yields the effect that \(d_{x+1}=r_{x+1}-m\) and \(d_{i}=r_{i}\) for all \(i>x+1\).
Now, suppose \(r_{x+1}\leq m\leq r_{x+1}+r_{x+2}\). Then all letters \(x+1\) are bumped, as are \(m-r_{x+1}\) letters \(x+2\). Thus we have that \(d_{x+1}=0,\ d_{x+2}=r_{x+2}+r_{x+1}-m\), and \(d_{i}=r_{i}\) for all \(i>x+2\).
Continuing in this pattern, case \(i\) then becomes
\[\sum_{j=1}^{i-1}r_{x+j}\leq m\leq\sum_{j=1}^{i}r_{x+j}\]
And in this case, \(d_{x+j}=0\) for each \(0<j<i\). Also, \(d_{x+i}=\sum_{j=1}^{i}r_{x+j}-m\), and all later entries remain unchanged.
Suppose we are now in the case
\[\sum_{j=1}^{n-x}r_{x+j}\leq m\]
Then we have \(d_{x+j}=0\) for all \(j\).
Each case yields a formula in terms of \(\leq\), addition, and subtraction. Then the disjunction of the above cases, which will be a finite formula, will define \(\underline{d}\) such that \(\phi(\underline{d})=B(w)\).
Let us now consider \(T(w)\). This will be the row of bumped letters, which will mean that \(c_{i}=0\) for any \(i\leq x\). Now, suppose we are in case \(i\) as above. Then
\[\sum_{j=1}^{i-1}r_{x+j}\leq m\leq\sum_{j=1}^{i}r_{x+j}\]
and we will bump all letters \(x+1,\ldots,x+i-1\), as well as some letters \(x+i\). Therefore \(c_{x+j}=r_{x+j}\) for \(0<j<i\), and \(c_{x+i}=m-\sum_{j=1}^{i-1}r_{x+j}\). Note that the length of the bumped row \(T(w)\) is always exactly \(m\) in this case.
Now suppose we are in the case
\[\sum_{j=1}^{n-x}r_{x+j}\leq m\]
Then we will have that \(c_{x+j}=r_{x+j}\) for each \(j\).
Again, as above, the disjunction of all the cases yields a formula defining the graph of \(\phi^{-1}T(rx^{m})\).
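The case analysis above reduces to a simple greedy computation. The following sketch (ours; the row is stored as a list of letter multiplicities \(r_{1},\ldots,r_{n}\)) returns the multiplicities of the new bottom row \(B(rx^{m})\) and of the bumped row \(T(rx^{m})\).

```python
def insert_power_into_row(r, x, m):
    """Schensted-insert x^m into the row 1^{r[1]} ... n^{r[n]}.

    r is a list with r[0] unused and r[i] the multiplicity of the letter i.
    Returns (d, c): multiplicities of the bottom row B(r x^m) and of the
    bumped row T(r x^m), following the cases in the proof of Lemma 4.7.
    """
    n = len(r) - 1
    d, c = list(r), [0] * (n + 1)
    d[x] += m                        # all inserted copies of x stay in the bottom row
    s = m                            # number of letters still to be bumped
    for i in range(x + 1, n + 1):    # bump letters x+1, x+2, ... greedily
        bumped = min(s, r[i])
        d[i] = r[i] - bumped
        c[i] = bumped
        s -= bumped
    return d, c
```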
We will now show the definability of the stitch map.
**Lemma 4.8**.: _The map \(\phi^{-1}\Sigma(\phi(\ ),\phi(\ )):S^{2}\to S\) is definable._
Proof.: We first show that the condition for \(\Sigma\) to have nontrivial action is definable. Suppose \(\underline{a},\ \underline{b}\in S\). Consider the set \(B_{1}=\{i\in[k]\ |\ c_{i}=x_{m}\ldots x_{1}\wedge x_{1}=1\}\) as in Lemma 4.6. Then \(\phi(\underline{a})\) being a top tableau is definable by \(\left(\bigwedge\limits_{i\in B_{1}}a_{i}=0\right)\). Also, \(\phi(\underline{b})\) being a row is definable by the formula \(\left(\bigwedge\limits_{i\in[k-n]}b_{i}=0\right)\).
Now, let \(e_{a}=\sum_{i\in B_{a}}a_{i}\). In order for it to be possible to stitch two inputs, we need \(e_{2}\leq b_{k-n+1}\), \(e_{3}\leq b_{k-n+2}+b_{k-n+1}-e_{2}\), and so on. We can rearrange this to get the following compatibility condition:
\[\underline{a}\in S\wedge\underline{b}\in S\wedge\left(\bigwedge\limits_{i\in B_{1}}a_{i}=0\right)\wedge\bigwedge\limits_{i\in[k-n]}(b_{i}=0)\wedge\bigwedge\limits_{i\in[n]}\left(\sum_{j=1}^{i}e_{j}\leq\sum_{j=0}^{i-1}b_{k-n+j}\right)\]
where we take empty sums to be \(0\). Note that all sums used are finite, so we obtain a valid formula in Presburger arithmetic. Denote this compatibility formula by \(\gamma(\underline{a},\underline{b})\).
If \(\gamma\) is satisfied, we will define the stitch recursively. We wish to construct \(\underline{d}=\phi^{-1}\Sigma(\phi(\underline{a}),\phi(\underline{b}))\).
When a top tableau \(t\) is multiplied by a compatible bottom row, by the bumping property of Schensted's algorithm, columns of \(t\) will be bumped from left to right. Therefore, we must construct each coefficient \(d_{1},\ldots,d_{k}\) in order, with columns of \(\underline{a}\) and \(\underline{b}\) being 'used up' in a sense.
Suppose we have calculated the coefficients \(d_{1},\ldots,d_{i-1}\), and have obtained modified elements \(\underline{a}^{i-1}\) and \(\underline{b}^{i-1}\) representing all columns that have not yet been used up in a stitch. We calculate the coefficient \(d_{i}\) as follows:
\(d_{i}\) is the coefficient of \(c_{i}\in\mathcal{C}\), which we can write as \(c_{i}=x_{m}\ldots x_{1}\in A^{*}\). Then \(x_{m}\ldots x_{2}\) and \(x_{1}\) are two columns, which we denote respectively by \(c_{i_{T}}\) and \(c_{i_{B}}\) in \(\mathcal{C}\). By the structure of Schensted's algorithm it is straightforward to see that \(d_{i}=\min(a_{i_{T}}^{i-1},b_{i_{B}}^{i-1})\), which is definable in Presburger arithmetic by
\[(a_{i_{T}}^{i-1}\leq b_{i_{B}}^{i-1}\wedge d_{i}=a_{i_{T}}^{i-1})\vee(b_{i_{ B}}^{i-1}\leq a_{i_{T}}^{i-1}\wedge d_{i}=b_{i_{B}}^{i-1})\]
Now define \(a^{i}_{j}=a^{i-1}_{j}\) for \(j\neq i_{T}\), and \(a^{i}_{i_{T}}=a^{i-1}_{i_{T}}-d_{i}\). Likewise, define \(b^{i}_{j}=b^{i-1}_{j}\) for \(j\neq i_{B}\) and \(b^{i}_{i_{B}}=b^{i-1}_{i_{B}}-d_{i}\). This corresponds to the fact that these columns have now been used up in a stitch. Note that we will always get one of these coefficients being set to zero. This is clearly definable in Presburger arithmetic, and we will denote the formula for obtaining \(\underline{a}^{i}\) and \(\underline{b}^{i}\) from \(\underline{a}^{i-1}\) and \(\underline{b}^{i-1}\) by \(\delta_{i}\), taking \(\underline{a}^{0}=\underline{a}\) and \(\underline{b}^{0}=\underline{b}\). This also allows us to calculate \(d_{1}\) in terms of \(\underline{a}^{0}\) and \(\underline{b}^{0}\)
Now we get that \(\phi^{-1}\Sigma(\phi(\ ),\phi(\ ))\) has graph consisting of the triples \((\underline{a},\underline{b},\underline{d})\) satisfying:
\[\gamma(\underline{a},\underline{b})\wedge\exists\underline{a}^{0}\ldots\exists \underline{a}^{k}\exists\underline{b}^{0}\ldots\exists\underline{b}^{k}:( \underline{a}^{0}=\underline{a})\wedge(\underline{b}^{0}=\underline{b}) \wedge\left(\bigwedge_{i\in[k]}d_{i}=\min(a^{i-1}_{i_{T}},b^{i-1}_{i_{B}}) \wedge\delta_{i}\right)\]
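To make the recursion concrete, the following sketch (ours) performs the greedy computation of the coefficients \(d_{i}\). The list of columns and the index maps \(i\mapsto i_{T},i_{B}\) are assumed to be precomputed and listed in the bumping order; the remaining bookkeeping of \(\delta_{i}\) for single-letter columns is omitted.

```python
def stitch_coefficients(a, b, cols, pos):
    """Greedy coefficients of phi^{-1} Sigma(phi(a), phi(b)).

    a, b : coefficient vectors (a a top tableau, b a row) assumed to
           satisfy the compatibility formula gamma(a, b).
    cols : the columns c_1, ..., c_k as tuples, in the bumping order.
    pos  : dict mapping a column tuple to its index.
    """
    a, b = list(a), list(b)           # working copies a^{i-1}, b^{i-1}
    d = [0] * len(cols)
    for i, col in enumerate(cols):
        if len(col) < 2:              # single-letter columns handled separately
            continue
        i_T = pos[col[:-1]]           # index of the column x_m ... x_2
        i_B = pos[(col[-1],)]         # index of the single-letter column x_1
        d[i] = min(a[i_T], b[i_B])    # d_i = min(a_{i_T}^{i-1}, b_{i_B}^{i-1})
        a[i_T] -= d[i]                # these columns are now used up in a stitch
        b[i_B] -= d[i]
    return d
```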
With the above lemmas in hand, we can now prove the following result.
**Proposition 4.9**.: _For any \(x\in A\), the map \(\mu_{x}\) is definable._
Proof.: By the discussion before lemma 4.6, \(\mu_{x}\) is a composition of the maps \(\phi^{-1}B\phi\), \(\phi^{-1}T\phi\), \(\rho_{1}\phi\), \(\rho_{2}\phi\), \(\phi^{-1}\Sigma(\phi(\ ),\phi(\ ))\), and multiplication of top tableaux. By lemmas 4.6, 4.7, and 4.8, all five required maps are definable.
To define \(\underline{a}^{5}=\phi^{-1}(T(\phi(\underline{a}))T(B(\phi(\underline{a}))x^{ m}))\), first note that since \(\underline{a}^{5}\) denotes a top tableau, we have that \(a^{5}_{i}=0\) for each \(i\in B_{1}\) as defined in lemma 4.6. Furthermore, for each \(i\notin B_{1}\), we have that \(a^{5}_{i}\) is determined by the formula \(\eta_{n-1}\) applied to \(\phi^{-1}(T(\phi(\underline{a})))\) and \(\phi^{-1}(T(B(\phi(\underline{a}))x^{m}))\). This determines \(\eta_{n}\) by induction, with base case \(\eta_{2}\) as detailed in section 3.1. This completes the proof.
**Theorem 2**.: _For any \(n\), the first order theory of \(P_{n}\) is decidable._
Proof.: By proposition 4.9 and corollary 4.5, \(\phi^{-1}(\circ)\) is definable. Thus \(\phi:S\to P_{n}\) is an interpretation of \(P_{n}\) in Presburger arithmetic. This reduces \(FOTh(P_{n})\) to \(FOTh\left(\mathbb{Z},0,1,+,-,\leq\right)\), which is decidable by lemma 2.6.
By transitivity of interpretations, we have the following corollary.
**Corollary 4.10**.: _For any \(n\in\mathbb{N}\), \(P_{n}\) is PE interpretable in \(P_{2}\)._
**Corollary 4.11**.: _The Diophantine problem (via the positive existential theory) and the problem of checking whether a given identity holds (via the positive universal theory) are decidable in \(P_{n}\)._
## The Diophantine problem in the infinite case
We note that the above interpretations were constructed algorithmically in a uniform way. That is to say, there will exist an effective procedure which, given \(n\), will construct the interpreting map for \(P_{n}\). The procedure runs as follows:
1. Generate the interpretation for \(P_{n-1}\)
2. Given \(n\), generate the power set of \([n]\) except the empty set.
3. Enumerate this set by the order \(\sqsubseteq\) on columns. Since each column is a decreasing sequence of elements in \([n]\), each column corresponds to a unique element of the power set.
4. Run Schensted's algorithm on each pair of columns. If the output of running Schensted's algorithm on \(c_{i}c_{j}\) is not \(c_{i}c_{j}\), then \((i,j)\) is an incompatible pair.
5. Generate the formula defining \(S\) by conjuncting with \(x_{i}=0\lor x_{j}=0\) for each incompatible pair discovered in step 4 (a sketch of steps 2-4 is given below).
6. Generate the formula defining equality in terms of the formula defining \(S\).
7. Generate the formula for \(\mu_{x}\) in terms of the interpretation for \(P_{n-1}\)
8. Generate the sequences \(\alpha\) and \(\beta\) from lemma 4.4. Steps 7 and 8 yield a formula defining multiplication.
Step 1 will repeat recursively until we reach \(P_{2}\), which can be written explicitly as in section 3.
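As an illustration of steps 2-4, the following sketch (ours; an arbitrary fixed enumeration stands in for the order \(\sqsubseteq\), and a pair is declared incompatible when the columns of the output tableau differ from \(c_{i},c_{j}\)) detects incompatible pairs by running Schensted's algorithm on each two-column word.

```python
from itertools import combinations

def schensted(word):
    """Schensted row insertion; rows[0] is the bottom row of the tableau."""
    rows = []
    for x in word:
        for row in rows:
            if x >= row[-1]:
                row.append(x)
                x = None
                break
            # bump the leftmost entry strictly greater than x
            j = next(i for i, y in enumerate(row) if y > x)
            row[j], x = x, row[j]
        if x is not None:
            rows.append([x])
    return rows

def tableau_columns(rows):
    """Columns of the tableau, each read from top to bottom (decreasing words)."""
    return [[rows[r][c] for r in range(len(rows) - 1, -1, -1) if c < len(rows[r])]
            for c in range(len(rows[0]))]

def incompatible_pairs(n):
    """Steps 2-4: enumerate the columns over [n] and find incompatible pairs."""
    cols = [list(c) for m in range(1, n + 1)
            for c in combinations(range(n, 0, -1), m)]
    bad = [(i, j) for i, ci in enumerate(cols) for j, cj in enumerate(cols)
           if tableau_columns(schensted(ci + cj)) != [ci, cj]]
    return cols, bad
```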
### The plactic monoid of all tableaux
We consider \(A=\mathbb{N}\setminus\{0\}\) with \(K_{\mathbb{N}}\) the set of Knuth relations for all triples \((x,y,z)\ \in\ \mathbb{N}^{3}\). Then the associated plactic monoid \(P(\mathbb{N})\) is the monoid of _all_ semistandard Young tableaux. Despite the work in this paper, the question of deciding the theory of \(P(\mathbb{N})\) remains open. However, we present an algorithm, by uniformity, for deciding the Diophantine problem for \(P(\mathbb{N})\).
**Lemma 5.1**.: _For any \(n\in\mathbb{N}\) the map \(\phi:P(\mathbb{N})\to P_{n}\), defined on generators as \(\phi(k)=k\) if \(k\leq n\) and \(\phi(k)=\varepsilon\) if \(k>n\) and extended to words in the natural way, is a homomorphism._
Proof.: Considered as a map from \(\mathbb{N}^{*}\) to \([n]^{*}\), \(\phi\) is clearly a homomorphism. It only remains to show that \(\phi\) is well defined as a map from \(P(\mathbb{N})\) to \(P_{n}\). We will show this by proving that each Knuth relation in \(K_{\mathbb{N}}\) will map to a relation that holds in \(P_{n}\).
Suppose \(u=xzy\) and \(v=zxy\), for \(x\leq y<z\). If \(z\leq n\) then \(\phi(u)=u,\ \phi(v)=v\), and \(u=v\) in \(P_{n}\) so there is nothing to prove. If \(z>n\) then \(\phi(u)=\phi(x)\phi(y)=\phi(v)\), as \(\phi(z)=\varepsilon\). Thus \(\phi(u)=\phi(v)\) will always hold in \(P_{n}\). An analogous argument shows \(\phi(u)=\phi(v)\) for \(u=yxz\) and \(v=yzx\) with \(x<y\leq z\).
**Theorem 3**.: _The Diophantine problem for \(P(\mathbb{N})\) is decidable._
Proof.: Suppose we are given some equation \(u_{1}X_{1}\ldots X_{n}u_{n+1}=v_{1}Y_{1}\ldots Y_{m}v_{m+1}\). Denote this equation by \(\varphi\). We define the support of \(\varphi\) to be all letters appearing in the supports of \(u_{i}\) and \(v_{i}\)
\[supp(\varphi)=\bigcup_{i\leq n+1}supp(u_{i})\cup\bigcup_{j\leq m+1}supp(v_{j})\]
Let \(k=\max(supp(\varphi))\). Then by the above lemma there exists a homomorphism \(\phi:P(\mathbb{N})\to P_{k}\). Since each \(u_{i}\) and \(v_{j}\) only uses letters from \([k]\), we have \(\phi(u_{i})=u_{i}\) and \(\phi(v_{j})=v_{j}\).
Suppose \(\varphi\) has a solution \((x_{1},\ldots,x_{n},y_{1},\ldots,y_{m})\in P(\mathbb{N})^{m+n}\).
Then \((\phi(x_{1}),\ldots\phi(x_{n}),\phi(y_{1}),\ldots,\phi(y_{m}))\in P_{k}^{n+m}\) will also be a solution to \(\varphi\). Thus \(\varphi\) has a solution in \(P(\mathbb{N})\) if and only if it has a solution in \(P_{k}\).
Now, since there is a uniform algorithm for deciding first order sentences in \(P_{k}\) for any \(k\), we obtain the following procedure for solving Diophantine problems in \(P(\mathbb{N})\):
1. Given \(\varphi\) as input, calculate \(k=\max(supp(\varphi))\) (a small sketch of this step, together with the projection map of Lemma 5.1, is given after this list).
2. Generate the interpretation of \(P_{k}\) into Presburger arithmetic
3. Interpret the sentence \[\exists X_{1}\ldots\exists X_{n}\exists Y_{1}\ldots\exists Y_{m}:u_{1}X_{1} \ldots X_{n}u_{n+1}=v_{1}Y_{1}\ldots Y_{m}v_{m+1}\] in Presburger arithmetic using the interpretation of \(P_{k}\), and check whether it holds.
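The reduction itself is elementary; a minimal sketch (ours) of step 1 together with the projection homomorphism of Lemma 5.1:

```python
def proj(word, n):
    """The homomorphism of Lemma 5.1: delete every letter larger than n."""
    return [a for a in word if a <= n]

def reduction_rank(constant_words):
    """Step 1: the rank k = max(supp(phi)) of the finite plactic monoid
    to which the Diophantine equation phi is reduced."""
    support = {a for w in constant_words for a in w}
    return max(support) if support else 1   # degenerate case: no constant letters
```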
### A plactic monoid on integers
We need not restrict ourselves to plactic monoids generated by \(\mathbb{N}\).
Let us consider instead tableaux with labels taken from \(\mathbb{Z}\). By the total order on \(\mathbb{Z}\) we obtain a set \(K_{\mathbb{Z}}\) of Knuth relations on triples \((x,y,z)\in\mathbb{Z}^{3}\). Define the plactic monoid on integers to be \(P(\mathbb{Z})=\langle\mathbb{Z}\mid K_{\mathbb{Z}}\rangle\). This is an infinitely generated plactic monoid, but note that \(P(\mathbb{N})\) and \(P(\mathbb{Z})\) are not isomorphic.
Indeed, suppose \(\psi:P(\mathbb{Z})\to P(\mathbb{N})\) were an isomorphism. Then for some \(y\in P(\mathbb{Z})\) we have \(\psi(y)=1\). Since \(1\) is irreducible, we must have \(y\in\mathbb{Z}\). Consider \(x<y<z,\ x,y,z\in\mathbb{Z}\). Then by irreducibility, \(\psi(x),\psi(z)\in\mathbb{N}\). Thus we have some \(a,b\in\mathbb{N}\) such that \(1ab=1ba\). Such an equality cannot hold in \(P(\mathbb{N})\).
Given \(\varphi\) a Diophantine equation in \(P(\mathbb{Z})\), we will have \(supp(\varphi)\) a finite totally ordered set. This set will have some smallest element \(a\in\mathbb{Z}\) and some largest element \(b\in\mathbb{Z}\). Then the interval \([a,b]\subset\mathbb{Z}\) has size \(k=b-a+1\), and we can define an order preserving injective map from \(supp(\varphi)\) to \([k]\). We will extend this map to a homomorphism.
**Lemma 5.2**.: _Let \(\{z_{1}<z_{2}<\cdots<z_{n}\}\) be a finite set of integers with their standard order. Then the map \(\phi:P(\mathbb{Z})\to P_{k}\), with \(k=z_{n}-z_{1}+1\) defined on generators by_
\[\phi(z)=\begin{cases}\varepsilon,\ z<z_{1}\\ z-z_{1}+1,\ z\in[z_{1},z_{n}]\\ \varepsilon,\ z>z_{n}\end{cases}\]
_and extended to words in the natural way, is a homomorphism._
Proof.: As in lemma 5.1, consider \(u=xzy\) and \(v=zxy\), for \(x\leq y<z\). If more than one letter in \(\{x,y,z\}\) is mapped to \(\varepsilon\), there is nothing to prove. Likewise if no letters are mapped to \(\varepsilon\). If only one letter is mapped to \(\varepsilon\), then this is either \(x\), yielding \(\phi(u)=\phi(z)\phi(y)=\phi(v)\), or this letter is \(z\), yielding \(\phi(u)=\phi(x)\phi(y)=\phi(v)\). An analogous argument holds for all other Knuth relations.
Thus, as in the case above, any Diophantine equation \(\varphi\) is solvable in \(P(\mathbb{Z})\) if and only if it has a solution in a fixed finite rank plactic monoid. Therefore, by uniformity of the above algorithm, the Diophantine problem for \(P(\mathbb{Z})\) is also decidable.
### Two open questions
1. Is the first order theory of \(P(\mathbb{N})\) decidable? It is known that this monoid satisfies no identities [18], and the above proof shows it has decidable Diophantine problem. Can this be extended to the whole theory? What about in the \(P(\mathbb{Z})\) case?
2. Do infinite rank plactic monoids defined on other generating sets have decidable Diophantine problem? For example, does \(P(\mathbb{Q})=\langle\mathbb{Q}\mid K_{\mathbb{Q}}\rangle\) have decidable Diophantine problem? What about \(P(L)\) for an arbitrary recursive total order?
## Acknowledgements
This research was conducted during my master's study at the University of East Anglia, and will form part of my thesis. I thank Robert Gray for his support and feedback as supervisor, and Lorna Gregory for her feedback and useful discussion of the model theoretic background. I also thank the PhD student community at the UEA for their support. |
2304.10100 | Dai-Freed anomaly in the standard model and topological inflation | When we impose the discrete symmetry in the standard model we have Dai-Freed
global anomalies. However, interestingly if we introduce three right-handed
neutrinos we can have an anomaly-free discrete $Z_4$ gauge symmetry. This $Z_4$
symmetry should be spontaneously broken down to the $Z_2$ symmetry to generate
the heavy Majorana masses for the right-handed neutrinos. We show that this
symmetry breaking naturally generates topological inflation, which is
consistent with the CMB observations at present and predicts a significant
tensor mode with scalar-tensor ratio $r > 0.03$. The right-handed neutrinos
play an important role in reheating processes. The reheating temperature is as
high as $\sim 10^8$GeV, and non-thermal leptogenesis successfully takes place. | Masahiro Kawasaki, Tsutomu T. Yanagida | 2023-04-20T05:49:29Z | http://arxiv.org/abs/2304.10100v3 | # Dai-Freed anomaly in the standard model and topological inflation
###### Abstract
When we impose the discrete symmetry in the standard model we have Dai-Freed global anomalies. However, interestingly if we introduce three right-handed neutrinos we can have an anomaly-free discrete \(Z_{4}\) gauge symmetry. This \(Z_{4}\) symmetry should be spontaneously broken down to the \(Z_{2}\) symmetry to generate the heavy Majorana masses for the right-handed neutrinos. We show that this symmetry breaking naturally generates topological inflation, which is consistent with the CMB observations at present and predicts a significant tensor mode with scalar-tensor ratio \(r>0.03\). The right-handed neutrinos play an important role in reheating processes. The reheating temperature is as high as \(\sim 10^{8}\) GeV, and non-thermal leptogenesis successfully takes place.
Keywords: physics of the early universe, inflation, leptogenesis
## 1 Introduction
Anomalies give strong constraints on models based on quantum field theories. If there are anomalies of gauge symmetries, they must be canceled out to have consistent gauge symmetries in quantum field theories. The anomalies are classified into two classes. One is traditional local anomalies [1; 2] and the other is anomalies of large gauge transformations. Witten's \(SU(2)\) global anomaly [3] is a famous example for the latter class. Both are well described by the Dai-Freed viewpoint [4; 5], and we call them the Dai-Freed anomalies [6]. The Dai-Freed anomaly free conditions result in additional constraints on particle physics models since they require non-trivial conditions on massless chiral fermions.
It is known that the standard model is Dai-Freed anomaly free. However, if we impose additional discrete symmetries, it is not necessarily anomaly free and we should add new fermions to cancel the anomalies in the SM model. In fact, if we impose a discrete \(Z_{4}\) gauge symmetry in the SM, the theory has the Dai-Freed anomalies. However, the introduction of one right-handed neutrino for each generation cancels out the anomalies [7].
The right-handed neutrinos should obtain Majorana masses when the discrete \(Z_{4}\) symmetry breaks down to the discrete \(Z_{2}\) by the condensation of a scalar \(\phi\) boson. The large Majorana masses are very important for inducing the observed small masses for the active neutrinos via the seesaw mechanism [8; 9; 10; 11] and for generating the baryon-number asymmetry in the universe [12]. However, the spontaneous breaking of the discrete \(Z_{4}\) gauge symmetry creates domain walls, which can be removed by inflation. The simplest possibility is to identify the boson \(\phi\) itself with the inflaton.
In this paper we show that topological inflation naturally occurs in a large parameter region of the present model with the discrete \(Z_{4}\) gauge symmetry. The topological inflation is very attractive, since it is free from the initial state tuning problem [13; 14]. In this model the right-handed neutrinos play an important role in the reheating processes as well as in generating the baryon-number asymmetry in the present universe.
This paper is organized as follows. In Section 2, we briefly describe \(Z_{4}\) symmetry and introduce the right-handed neutrinos. Dynamics and observational implications of the topological inflation model are described in Section 3. Section 4 is devoted to the conclusion and discussion of our results.
## 2 Discrete \(Z_{4}\) gauge symmetry in the standard model
We introduce a discrete \(Z_{4}\) gauge symmetry in the standard model (SM). The charges are shown in Table 1. Here, we use the \(SU(5)\) representations for the SM particles for simplicity of notation, but we consider its subgroup \(SU(3)\times SU(2)\times U(1)\) as the gauge group. It is stressed in [7] that the SM with the \(Z_{4}\) is Dai-Freed anomaly free if we introduce one right-handed neutrino, \(N_{i}\), for each generation \(i=1,2,3\). Thus, we assume the SM with three right-handed neutrinos. The anomaly-free nature might be understood by embedding the \(Z_{4}\) into the well-known \(U(1)_{B-L}\) gauge group, but such an embedding is not necessary. It might be amusing that three right-handed neutrinos are required by the cancellation of the Dai-Freed anomalies in the SM with the gauged \(Z_{4}\) symmetry.
We add a SM gauge singlet boson \(\phi\) to generate Majorana masses for the right-handed neutrinos, \(N_{i}\). We consider a coupling of the \(\phi\) to \(N_{i}N_{i}\) as
\[L=\frac{y_{i}}{2}\phi N_{i}N_{i}\ +\ h.c., \tag{1}\]
with \(y_{i}\) coupling constants. Here, the boson \(\phi\) should have the \(Z_{4}\) charge 2 mod 4, since \(N_{i}\) have the \(Z_{4}\) charge 1 (see Table 1). This new boson \(\phi\) of the \(Z_{4}\) charge 2 does not generate any Dai-Freed anomalies.
## 3 Topological inflation and leptogenesis
In this paper, we consider the scalar \(\phi\) to be the inflaton. The scalar \(\phi\) is neutral since it carries a \(Z_{4}\) charge 2. The potential \(V\) of the inflaton \(\phi\) is then given by
\[V=v^{4}-2gv^{4}\phi^{2}+kv^{4}\phi^{4}+\cdots, \tag{2}\]
where the potential is invariant under the discrete \(Z_{4}\) gauge symmetry, \(g,k\) are coupling constants and \(v\) is the energy scale of the potential. Here and hereafter we use Planck units with the reduced Planck mass \(M_{p}=2.4\times 10^{18}\) GeV \(=1\). The minimal nontrivial potential is given by
\[V=v^{4}(1-g\phi^{2})^{2}. \tag{3}\]
Here, we have chosen the couplings so that the minimum of the potential has vanishing vacuum energy. The field \(\phi\) obtains the vacuum expectation value \(\langle\phi\rangle=\pm\sqrt{1/g}\) in the vacua, and the discrete \(Z_{4}\) symmetry is spontaneously broken down to the \(Z_{2}\) symmetry. In this paper we adopt this minimal potential to show how successfully the present model reproduces an inflationary universe consistent with the observations. We neglect possible higher-order terms like \(\phi^{4},\phi^{6},\ldots\) in the parentheses of the potential (3). The extension to a more general potential will be studied in the future.
\begin{table}
\begin{tabular}{|c|c c c c c|} \hline & \(5^{*}\) & 10 & \(N\) & \(H\) & \(\phi\) \\ \hline \hline \(Z_{4}\) & 1 & 1 & 1 & 2 & 2 \\ \hline \end{tabular}
\end{table}
Table 1: \(Z_{4}\) charges. \(H\) is the Higgs doublet boson in the SM.
Suppose that the inflaton \(\phi\) sits at the origin of the potential (\(\phi=0\)) in the early universe. This is realized, for example, if the inflaton field acquires a thermal mass through the coupling (1). Then, when the energy density of the universe becomes \(v^{4}\), \(Z_{4}\) symmetry is spontaneously broken, and domain wall formation starts. The width of the domain wall is given by \(\langle\phi\rangle/\sqrt{V(0)}\simeq(gv^{2})^{-1}\)[13] which is comparable to or larger than the Hubble horizon \(H^{-1}\simeq v^{-2}\) for \(g\lesssim\mathcal{O}(1)\). Thus, inflation takes place in horizons contained inside the domain walls. This type of inflation is called topological inflation. Topological inflation is attractive because it is free from the initial value problem [13; 14]. Moreover, since the initial horizon at \(H\simeq v^{2}\), where the inflaton field is nearly homogeneous, becomes much larger than the present horizon by inflation, the domain wall problem does not exist.
Now we discuss the dynamics of topological inflation. From the potential (3) we obtain the slow-roll parameters \(\epsilon\) and \(\eta\) as
\[\epsilon =\frac{1}{2}\left(\frac{V^{\prime}}{V}\right)^{2}=\frac{8g^{2}\phi^{2}}{(1-g\phi^{2})^{2}}, \tag{12}\] \[\eta =\frac{V^{\prime\prime}}{V}=\frac{-4g+12g^{2}\phi^{2}}{(1-g\phi^{2})^{2}}. \tag{13}\]
When \(g\ll 1\), the slow-roll parameters \(\epsilon\) and \(\eta\) are smaller than \(1\) for \(\phi\lesssim\langle\phi\rangle\), and hence inflation lasts until the inflaton reaches nearly the minimum of the potential. Defining \(\chi\) as \(\phi-\langle\phi\rangle\), the potential is dominated by the quadratic term \(\sim\chi^{2}\) around \(\phi=\langle\phi\rangle\). Thus, inflation changes from a hilltop type for \(\phi\ll 1\) to a chaotic one near \(\phi\sim\langle\phi\rangle\).
With use of \(\chi\), the potential and the slow-roll parameters are rewritten as
\[V =4gv^{4}\chi^{2}(1+g^{1/2}\chi/2)^{2} \tag{14}\] \[\epsilon =\frac{2(1+g^{1/2}\chi)^{2}}{\chi^{2}(1+g^{1/2}\chi/2)^{2}},\] (15) \[\eta =\frac{2+6g^{1/2}\chi+3g\chi^{2}}{\chi^{2}(1+g^{1/2}\chi/2)^{2}}. \tag{16}\]
The field value at the end of inflation is then given by \(\chi_{f}\simeq-\sqrt{2}\). The e-fold number during inflation (\(N=\ln(a(t_{f})/a(t))\)) is
\[N\simeq\int_{\chi_{N}}^{\chi_{f}}\frac{d\phi}{\sqrt{2\epsilon}}=\frac{\chi_{N }-\sqrt{2}}{4g^{1/2}}+\frac{\chi_{N}^{2}-2}{8}-\frac{1}{4g}\ln\left[\frac{1+g ^{1/2}\chi_{N}}{1+\sqrt{2}g^{1/2}}\right], \tag{17}\]
from which we obtain \(\chi_{N}\) as a function of \(N\). In Table 2 we show \(\chi_{50}\), \(\chi_{55}\) and \(\chi_{60}\) for several values of \(g\).
The amplitude and spectral index of curvature perturbations at the CMB scale (\(k_{*}=0.002\) Mpc\({}^{-1}\)) are written as
\[{\cal P}(k_{*}) = \frac{V}{24\pi^{2}\epsilon}=\frac{gv^{4}}{12\pi^{2}}\frac{\chi_{N_{*}}^{4}(1+g^{1/2}\chi_{N_{*}}/2)^{4}}{(1+g^{1/2}\chi_{N_{*}})^{2}}, \tag{11}\] \[n_{s} = 1-6\epsilon+2\eta=1-\frac{12(1+g^{1/2}\chi_{N_{*}})^{2}}{\chi_{N_{*}}^{2}(1+g^{1/2}\chi_{N_{*}}/2)^{2}}+\frac{4+12g^{1/2}\chi_{N_{*}}+6g\chi_{N_{*}}^{2}}{\chi_{N_{*}}^{2}(1+g^{1/2}\chi_{N_{*}}/2)^{2}}, \tag{12}\]
Figure 1: Spectral index as a function of \(g\) for \(N_{*}=50-60\). The observed value with \(1\sigma\) and \(2\sigma\) errors is shown by the magenta-shaded region.
Figure 2: Tensor-scalar ratio as a function of \(g\) for \(N_{*}=50-60\). The observed constraint \(r<0.036\) is shown by the magenta shaded region.
where \(N_{*}\) is the e-fold number when the scale \(k_{*}\) exits the Hubble horizon. From the expression for \(\mathcal{P}(k_{*})\) above, the inflation scale \(v\) is calculated as
\[v=\left[\frac{12\pi^{2}\mathcal{P}_{\zeta}(k_{*})}{g}\,\frac{(1+g^{1/2}\chi_{N_{*}})^{2}}{\chi_{N_{*}}^{4}(1+g^{1/2}\chi_{N_{*}}/2)^{4}}\right]^{1/4}. \tag{13}\]
The model parameters \(g\) and \(v\) can be determined by comparison with the Planck observations \(\mathcal{P}_{\zeta}(k_{*})=2.1\times 10^{-9}\) and \(n_{s}=0.965\pm 0.004\,(1\sigma)\)[15]. The predicted spectral index for \(N_{*}=50-60\) is shown in Fig. 1 together with the observed value with \(1\sigma\) and \(2\sigma\) errors. From the figure, \(g\lesssim 0.004\) is consistent with the observation. The tensor-scalar ratio \(r\) is given by \(r=16\epsilon\) and shown in Fig. 2 together with the observational constraint \(r<0.036\)[16], from which we obtain the constraint on \(g\) as \(g\gtrsim 0.0035\). Therefore, \(g\) should be \(\simeq 0.0035-0.004\) to satisfy the observational constraints on \(n_{s}\) and \(r\).
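These constraints can be checked numerically. The sketch below (ours, using SciPy; all function and variable names are ours) locates the end of inflation from \(\epsilon=1\), solves the e-fold integral for a given \(g\), and evaluates \(n_{s}=1-6\epsilon+2\eta\) and \(r=16\epsilon\) at \(N_{*}\) e-folds before the end of inflation, along the lines of the scans shown in Figs. 1 and 2.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def slow_roll(chi, g):
    """epsilon and eta for V = 4 g v^4 chi^2 (1 + sqrt(g) chi / 2)^2."""
    s = np.sqrt(g)
    f2 = (chi * (1 + s * chi / 2)) ** 2
    eps = 2 * (1 + s * chi) ** 2 / f2
    eta = (2 + 6 * s * chi + 3 * g * chi ** 2) / f2
    return eps, eta

def observables(g, N_star=55):
    # end of inflation: epsilon(chi_f) = 1, with chi_f close to -sqrt(2)
    chi_f = brentq(lambda c: slow_roll(c, g)[0] - 1.0, -1.8, -0.9)
    # e-folds between chi and chi_f: N = int dchi / sqrt(2 epsilon)
    def efolds(chi):
        return quad(lambda c: 1.0 / np.sqrt(2 * slow_roll(c, g)[0]),
                    chi, chi_f, limit=200)[0]
    chi_star = brentq(lambda c: efolds(c) - N_star,
                      -0.999 / np.sqrt(g), chi_f - 1e-3)
    eps, eta = slow_roll(chi_star, g)
    return 1 - 6 * eps + 2 * eta, 16 * eps          # (n_s, r)

for g in (0.0035, 0.0038, 0.0040):
    ns, r = observables(g)
    print(f"g = {g:.4f}:  n_s = {ns:.3f},  r = {r:.3f}")
```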
Hereafter we take \(g=0.0038\) and \(N_{*}=55\) as our reference values, which gives \(n_{s}=0.959\) and \(r=0.034\). (Although the spectral index is slightly small it is consistent within \(1.5\sigma\).) The inflaton field value at \(N=N_{*}\) is then calculated as
\[\chi_{*}=-12.2M_{\rm p}. \tag{14}\]
The vacuum expectation value of the inflaton is given by \(\langle\phi\rangle\simeq 16.2\). Using the observed amplitude \(\mathcal{P}_{\zeta}(k_{*})\) the inflaton scale \(v\) and the Hubble parameter during inflation are obtained as
\[v \simeq 1.44\times 10^{16}\ {\rm GeV} \tag{15}\] \[H_{\rm inf} \simeq 4.59\times 10^{13}\ {\rm GeV}. \tag{16}\]
Let us estimate the reheating temperature in this model. The inflaton mass \(m_{\phi}\) is
\[m_{\phi}=2gv^{2}\langle\phi\rangle=2\sqrt{g}v^{2}\simeq 1.04\times 10^{13}\ {\rm GeV}. \tag{17}\]
The inflaton \(\phi\) decays into a pair of right-handed neutrinos through the interaction (1) if the right-handed neutrino mass \(m_{N_{i}}\) is less than \(m_{\phi}/2\). The decay rate is written as
\[\Gamma_{\phi}=\frac{1}{8\pi}y_{i}^{2}m_{\phi}=\frac{1}{8\pi}\frac{m_{N_{i}}^{ 2}}{|\langle\phi\rangle|^{2}}m_{\phi}. \tag{18}\]
Here we have used \(m_{N_{i}}=y_{i}|\langle\phi\rangle|\). From the decay rate we obtain the reheating temperature
\[T_{R} \simeq\left(\frac{90}{\pi^{2}g_{*}}\right)^{1/4}\Gamma_{\phi}^{1/2}\] \[=7.0\times 10^{7}\ {\rm GeV}\left(\frac{m_{N_{i}}}{5\times 10^{12}{\rm GeV}}\right)\left(\frac{g_{*}}{100}\right)^{-1/4}, \tag{19}\]
where \(g_{*}\) is the number of relativistic degrees of freedom. The reheating temperature is not high enough for thermal leptogenesis, but non-thermal leptogenesis is possible [17; 18; 19; 20; 21; 22]1. When the inflaton mainly decays into the lightest right-handed neutrino \(N_{1}\), the generated lepton-number-to-entropy ratio \(n_{L}/s\) is
Footnote 1: We have suppressed an inflaton coupling \(\phi^{2}H^{\dagger}H\) since otherwise it generates too large a mass for the Higgs boson. However, we may have this term if the mass term \(m^{2}H^{\dagger}H\) cancels the induced Higgs boson mass. In this case the \(\phi\) decays quickly into a pair of Higgs bosons, the reheating temperature becomes much higher, and thermal leptogenesis works.
\[\frac{n_{L}}{s}=-3\times 10^{-10}\left(\frac{T_{R}}{10^{6}\text{GeV}}\right) \left(\frac{m_{N_{1}}}{m_{\phi}}\right)\left(\frac{m_{3}}{0.05\text{eV}} \right)\delta_{\text{eff}}, \tag{20}\]
where \(m_{3}\) is the heaviest neutrino mass and \(\delta_{\text{eff}}\) parametrizes the degree of CP violation (\(\delta_{\text{eff}}\leq 1\)). The lepton asymmetry is converted into the baryon asymmetry given by \(n_{B}/s=-(8/23)\,n_{L}/s\). With the reheating temperature (19) we have successful non-thermal leptogenesis.
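A short numerical evaluation of the expressions above for \(m_{\phi}\), \(\Gamma_{\phi}\), \(T_{R}\) and \(n_{L}/s\) (ours; we take the reference values \(m_{N_{1}}=5\times 10^{12}\) GeV, \(m_{3}=0.05\) eV and \(\delta_{\rm eff}=1\)) reproduces the quoted reheating temperature and gives the resulting baryon asymmetry.

```python
import numpy as np

Mp    = 2.4e18                      # reduced Planck mass [GeV]
g     = 0.0038
v     = 1.44e16                     # inflation scale [GeV]
vev   = Mp / np.sqrt(g)             # <phi> = M_p / sqrt(g)
gstar = 100.0

m_phi = 2 * np.sqrt(g) * v**2 / Mp                      # inflaton mass [GeV]
m_N1  = 5e12                                            # lightest right-handed neutrino [GeV]
Gamma = m_N1**2 / vev**2 * m_phi / (8 * np.pi)          # decay rate [GeV]
T_R   = (90 / (np.pi**2 * gstar))**0.25 * np.sqrt(Gamma * Mp)

nL_s = -3e-10 * (T_R / 1e6) * (m_N1 / m_phi)            # m_3 = 0.05 eV, delta_eff = 1
nB_s = -(8 / 23) * nL_s

print(f"m_phi = {m_phi:.2e} GeV, T_R = {T_R:.2e} GeV, n_B/s = {nB_s:.2e}")
```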
## 4 Conclusion and Discussion
If we impose a discrete \(Z_{4}\) gauge symmetry in the standard model (SM), we need extra chiral fermions to cancel the Dai-Freed global anomalies. The simplest candidate for the extra fermions is three right-handed neutrinos. This might be the reason why we have the right-handed neutrinos. This discrete \(Z_{4}\) symmetry must be broken down to \(Z_{2}\) by a scalar \(\phi\) condensation to generate Majorana masses for the right-handed neutrinos. However, the spontaneous breaking of the discrete symmetry causes too many domain walls in the early universe. The simplest solution to this problem is to identify the scalar \(\phi\) with the inflaton, generating topological inflation.
In this paper, we have shown that the above topological inflation is consistent with the present observations. Furthermore, we have shown that the tensor-scalar ratio \(r\) is predicted as \(r>0.03\) using the scalar mode observation \(n_{s}=0.965\pm 0.008\,(2\sigma)\)2. This prediction for the tensor mode will be tested in near-future experiments like the Simons Observatory [25] and LiteBIRD [26].
Footnote 2: The supersymmetry extension of the present model is also Dai-Freed anomaly free. In this extension the inflaton potential becomes identical to the potential in [23] and the constraint on the scalar-tensor ratio \(r\) is much weaker as shown in [24].
In the present model, reheating takes place through the inflaton coupling to the lightest right-handed neutrino \(N_{1}\). We have found that the reheating temperature \(T_{R}\simeq 10^{8}\) GeV and we have a successful non-thermal leptogenesis for the baryon asymmetry in the present universe.
In general, topological inflation is eternal because the quantum fluctuation \(\delta_{q}\phi\simeq H_{\text{inf}}/2\pi\) dominates over the classical displacement \(\delta_{c}\phi\simeq\dot{\phi}H_{\text{inf}}^{-1}\) near the origin of the potential. Using the potential (3), the condition for eternal inflation is written as
\[\phi\lesssim\phi_{\text{eternal}}=\frac{3v^{2}}{8\pi g}=\frac{3v^{2}}{8\pi g^{1 /2}}\langle\phi\rangle. \tag{21}\]
For \(g\sim\mathcal{O}(10^{-3})\), we found \(v\sim 10^{-2}M_{p}\) and \(\phi_{N_{*}}\gtrsim 0.1\langle\phi\rangle\). Therefore, \(\phi_{N_{*}}\gg\phi_{\text{eternal}}\) and our analysis based on classical slow-roll approximation is valid.
## Acknowledgements
T. T. Y. thanks Kazuya Yonekura for the discussion on the Dai-Freed anomaly. T. T. Y. is supported by the China Grant for Talent Scientific Start-Up Project and by Natural Science Foundation of China (NSFC) under grant No. 12175134, JSPS Grant-in-Aid for Scientific Research Grants No. 19H05810, and World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. M. K. is supported by JSPS KAKENHI Grant Nos. 20H05851(M.K.) and 21K03567(M.K.).
|
2305.10229 | How does Contrastive Learning Organize Images? | Contrastive learning, a dominant self-supervised technique, emphasizes
similarity in representations between augmentations of the same input and
dissimilarity for different ones. Although low contrastive loss often
correlates with high classification accuracy, recent studies challenge this
direct relationship, spotlighting the crucial role of inductive biases. We
delve into these biases from a clustering viewpoint, noting that contrastive
learning creates locally dense clusters, contrasting the globally dense
clusters from supervised learning. To capture this discrepancy, we introduce
the "RLD (Relative Local Density)" metric. While this cluster property can
hinder linear classification accuracy, leveraging a Graph Convolutional Network
(GCN) based classifier mitigates this, boosting accuracy and reducing parameter
requirements. The code is available
\href{https://github.com/xsgxlz/How-does-Contrastive-Learning-Organize-Images/tree/main}{here}. | Yunzhe Zhang, Yao Lu, Qi Xuan | 2023-05-17T14:10:54Z | http://arxiv.org/abs/2305.10229v2 | # Exploring Inductive Biases in Contrastive Learning:
###### Abstract
This paper investigates the differences in data organization between contrastive and supervised learning methods, focusing on the concept of locally dense clusters. We introduce a novel metric, Relative Local Density (RLD), to quantitatively measure local density within clusters. Visual examples are provided to highlight the distinctions between locally dense clusters and globally dense ones. By comparing the clusters formed by contrastive and supervised learning, we reveal that contrastive learning generates locally dense clusters without global density, while supervised learning creates clusters with both local and global density. We further explore the use of a Graph Convolutional Network (GCN) classifier as an alternative to linear classifiers for handling locally dense clusters. Finally, we utilize t-SNE visualizations to substantiate the differences between the features generated by contrastive and supervised learning methods. We conclude by proposing future research directions, including the development of efficient classifiers tailored to contrastive learning and the creation of innovative augmentation algorithms. Our code is available at this site.
## 1 Introduction
Contrastive learning has significantly influenced machine learning by learning representations from unlabeled data, thereby edging us closer to the goal of building models that can generalize across diverse data distributions and tasks. However, a comprehensive conceptual understanding of this phenomenon is yet to be established. Recent research efforts ([2], [41], [17], [44]) have aimed to bridge this gap by analyzing properties of augmented views and their distributions, making assumptions, and providing mathematical proofs to justify the contrastive loss function.
However, these assumptions, often based on intuition, may not align with real-world data distributions. For instance, [36] highlights that in practice, there is often little overlap between the distributions of
augmented views, regardless of class membership, making assumptions like those in [44] potentially incorrect.
We address this by examining the role of inductive bias in contrastive loss from a clustering perspective. We note that contrastive learning forms cluster distinct from those generated by supervised learning due to its inductive bias. Our key observations are:
* In contrastive learning, visually similar images have similar vector representations, even if they belong to different classes. Conversely, visually dissimilar images from the same class have distinct representations.
* This leads to the formation of locally dense clusters for images from the same class, while visually different images of the same class are not globally clustered. We propose a metric called "RLD (Relative Local Density)" to quantify this property.
* The local density of clusters poses a challenge for linear classifiers. Hence, we propose using a GCN classifier, which performs better for most models due to the locally dense clustering nature of contrastive learning.
## 2 Related Work and Preliminaries
### Understanding Contrastive Learning
Contrastive learning, which learns representations from similar data pairs, has shown impressive results in downstream tasks. These pairs are generated using temporal information ([43], [28]) or different augmentations of inputs ([21], [15], [18], [13], [7], [45], [40], [6], [3]). By minimizing a contrastive loss, the similarity of a representation across various 'views' of the same image is maximized, while minimizing similarity with distracting negative samples.
Multiple views can be obtained naturally from multimodal data ([1], [24], [30], [32], [33], [39]), or through cropping and data augmentation for image-only modalities ([3], [20], [21], [31], [6], [11], [13], [18], [45]). Positive pairs are views of the same data point, and negatives are sampled views of different data points, although the need for negative samples has been debated ([7], [16], [46]).
Many studies have explored contrastive learning, with early works assuming that positive samples are (nearly) conditionally independent and establishing a bound on contrastive loss and classification performance ([37], [26]). Later work [44] pointed out an overlap between intra-class augmented views and provided a bound for downstream performance.
However, [36] argued that neglecting inductive biases leads to an inadequate understanding of contrastive learning's success and can even result in vacuous guarantees. This work also highlighted the minimal overlap between the distributions of augmented images, contradicting assumptions in previous works.
Armed with this understanding, we will examine the role of inductive biases and their effect on clustering and classification performance.
### Cluster and Community Evaluation
In this paper, clusters are defined as distinct subsets within a data partition, each representing a group of similar data points. Cluster evaluation, a long-standing question ([22], [38]), often focuses on cluster structure [35] or consistency with ground truth labels [34]. We primarily address the former, considering true labels as a partition.
Traditionally, well-structured clusters are expected to be "dense and well-separated". The Calinski-Harabasz score or Variance Ratio Criterion [5] quantifies this:
\[\text{CH score}=\frac{\sum_{i=1}^{k}n_{i}\|\mu_{i}-\mu\|^{2}/(k-1)}{\sum_{i=1} ^{k}\sum_{x\in C_{i}}\|x-\mu_{i}\|^{2}/(n-k)} \tag{1}\]
where \(k\) is the cluster count, \(n\) the data point count, \(n_{i}\) the count in cluster \(C_{i}\), \(\mu_{i}\) the centroid of \(C_{i}\), and \(\mu\) the overall centroid. The score is the ratio of between-cluster to within-cluster dispersion, with
dispersion defined as the sum of squared distances. However, as we'll show, this score doesn't suit clusters formed by contrastive learning, prompting an alternative cluster concept.
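For reference, Eq. (1) can be evaluated directly (a small NumPy sketch of ours, essentially the same computation as scikit-learn's `calinski_harabasz_score`):

```python
import numpy as np

def calinski_harabasz(X, labels):
    """CH score (Eq. 1): between-cluster over within-cluster dispersion."""
    X, labels = np.asarray(X, dtype=float), np.asarray(labels)
    classes = np.unique(labels)
    n, k, mu = len(X), len(classes), X.mean(axis=0)
    between = sum((labels == c).sum() * np.sum((X[labels == c].mean(axis=0) - mu) ** 2)
                  for c in classes)
    within = sum(np.sum((X[labels == c] - X[labels == c].mean(axis=0)) ** 2)
                 for c in classes)
    return (between / (k - 1)) / (within / (n - k))
```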
Here, communities serve as cluster analogs in graphs rather than Euclidean space. Modularity [8] is a common community quality metric, quantifying internal connections density versus expected random connections density. For graph \(G\) with adjacency matrix \(A\in\mathbb{R}^{N\times N}\), modularity \(Q\) is:
\[Q=\frac{1}{2m}\sum_{i,j}\left[A_{ij}-\frac{k_{i}k_{j}}{2m}\right]\delta(y_{i},y _{j}) \tag{2}\]
where \(m\) is the sum of edge weights, \(k_{i}\) and \(k_{j}\) are nodes \(i\) and \(j\) degrees, \(y_{i}\) and \(y_{j}\) are nodes \(i\) and \(j\) community assignments, and \(\delta(y_{i},y_{j})\) is the Kronecker delta function. Our experiments suggest that with appropriate \(G\) construction, modularity can better reveal contrastive learning cluster structures.
### Graph Convolutional Network
Graph Convolutional Networks (GCNs) are neural networks specifically designed for graph-structured data, gaining traction due to their proficiency in learning from this data type ([23], [4], [9]). This paper focuses on the GCN architecture proposed by [23], which underpins many other GCN models and encapsulates key characteristics of GCN models.
Graph-structured data typically includes node features \(X\in\mathbb{R}^{N\times C}\) and an adjacency matrix \(A\in\mathbb{R}^{N\times N}\), where \(N\) is the number of nodes and \(C\) is the node feature dimensions. A GCN maps the input node features \(X\) to new node features \(Z\in\mathbb{R}^{N\times F}\), where \(F\) is the output feature dimension. The GCN proposed by [23] performs graph convolutions as follows:
\[Z=\sigma\left(\tilde{A}XW\right), \tag{3}\]
where \(\tilde{A}=\hat{D}^{-\frac{1}{2}}\hat{A}\hat{D}^{-\frac{1}{2}}\), with \(\hat{A}=A+I_{N}\) and \(\hat{D}\) is the diagonal degree matrix of \(\hat{A}\). \(W\in\mathbb{R}^{C\times F}\) is a learnable weight matrix, and \(\sigma(\cdot)\) is an element-wise activation function.
For a deeper GCN with \(L\) layers, the output feature matrix \(Z^{(L)}\) is computed through a series of graph convolutions:
\[Z^{(l)}=\sigma\left(\tilde{A}Z^{(l-1)}W^{(l)}\right), \tag{4}\]
where \(l=1,\ldots,L\), \(Z^{(0)}=X\), and \(W^{(l)}\in\mathbb{R}^{F_{l-1}\times F_{l}}\) are learnable weight matrices for each layer. The dimensions of the input and output features change through layers, with \(F_{0}=C\) and \(F_{L}=F\).
We modify the GCN by introducing a learnable scale parameter \(\alpha\) in each layer, transforming the adjacency matrix as \(\hat{A}=\alpha A+I_{N}\). This allows for dynamic balancing between self-connections and connections to neighboring nodes, potentially enhancing the network's ability to extract valuable information from the graph structure. The updated adjacency matrix \(\tilde{A}\) becomes:
\[\tilde{A}=\hat{D}^{-\frac{1}{2}}(\alpha A+I_{N})\hat{D}^{-\frac{1}{2}}, \tag{5}\]
This modified \(\tilde{A}\) enables the GCN to better adjust to the underlying graph structure during training, as it learns the optimal value of \(\alpha\) alongside the weight matrices \(W\).
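A minimal PyTorch sketch of one such layer (ours; a dense adjacency matrix is assumed, and the degree matrix is recomputed from \(\alpha A+I_{N}\) as in Eq. (5)) is given below.

```python
import torch
import torch.nn as nn

class ScaledGCNLayer(nn.Module):
    """One GCN layer with a learnable self-loop/neighbour balance alpha (Eq. 5)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)   # the weight matrix W
        self.alpha = nn.Parameter(torch.ones(1))             # learnable scale on A

    def forward(self, X, A):
        # A_hat = alpha * A + I,  D_hat = diag(row sums of A_hat)
        A_hat = self.alpha * A + torch.eye(A.size(0), device=A.device)
        d_inv_sqrt = A_hat.sum(dim=1).clamp(min=1e-12).pow(-0.5)
        A_tilde = d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)
        return torch.relu(A_tilde @ self.lin(X))
```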
Due to its compatibility with graph space, we utilize a GCN as the classifier. This makes it a better fit for the characteristics of clusters formed by contrastive learning. Our experiments show that the GCN surpasses a linear classifier in various scenarios, further affirming its suitability in this context.
## 3 Experimental Settings
This section presents our experimental setup for training models on the CIFAR-10 dataset [25]. We base our framework on [1] and enhance it with [14]'s prediction head, using a 512-dimension
representation space. The tested model architectures include ResNet18, ResNet101, and a modified Vision Transformer comparable to ResNet101 in parameter count ([19], [12]). For benchmarking, we train three models with identical architectures using CIFAR-10 supervised learning, also with a 512-dimension representation. All models use the AdamW optimizer [29]. Detailed architecture and hyperparameter information is in the appendix. Experiments run on a single A100 GPU.
In the following sections, we extract image vector representations from raw dataset images using both contrastive and supervised models, per standard evaluation practices. All features are normalized to unit vectors due to the undefined lengths during the training process.
We note that our experimental setup is not fine-tuned for peak performance. Consequently, our focus is not on achieving record-breaking results. Instead, we aim to gain new insights into contrastive learning's inner mechanisms, rather than proposing practical enhancements or techniques.
## 4 Micro View: Image Pairs Under Contrastive Learning
This section explores how contrastive and supervised learning methods organize images within the representation space. Our analysis investigates how these methods group images based on visual similarities and class labels. We first highlight contrastive learning's ability to group visually similar images, regardless of their class labels, then discuss the class label focused organization of images in supervised learning, which may overlook visual similarities within and across classes. We also examine the properties of k-nearest neighbors in the contrastive representation space and introduce the Class Homogeneity Index (CHI) to quantify the average proportion of k-nearest neighbors with the same class label. By comparing CHI across different models under both learning methods, we offer insights into both local and global class homogeneity of neighborhoods in the representation space, and how architectural choices affect contrastive learning behavior.
### Contrastive Representation Space and Visual Similarity
We scrutinize specific image pairs from the CIFAR-10 training dataset to demonstrate the distinct properties of contrastive representation space. Figures 0(a), 0(b), and 0(c) depict image pairs from different
Figure 1: **Exploring Image Pairs and Their Cosine Similarities in Representations.****(a)** displays the image pair with the highest similarity across different labels in contrastive ResNet18, while **(b)** demonstrates the same for ResNet101, and **(c)** for ViT. In contrast, **(d)** illustrates the image pair with the lowest similarity within the same class in contrastive ResNet18, **(e)** for ResNet101, and **(f)** for ViT. The first line beneath each image pair indicates the similarity in contrastive space, and the second line denotes the similarity in supervised space (using a supervised model with the same architecture). In **(a)**, the top image portrays a bird, and the bottom one depicts an airplane. The blue sky backgrounds and dark main bodies contribute to their visual similarity. Contrastive ResNet18 has difficulty distinguishing them, whereas supervised ResNet18 can. This observation also applies to **(b)** and **(c)**. Conversely, **(d)**, **(e)**, and **(f)** present pairs of visually distinct images (including hue, shape, perspective, etc.) within the same class, which are recognized by supervised models but not by contrastive models.
classes with the highest cosine similarity in contrastive space, while Figures 0(d), 0(e), and 0(f) present image pairs from the same class with the lowest similarity.
Obviously, in contrastive space, image pairs showcasing visual resemblance exhibit high similarity, irrespective of their class. Conversely, pairs with similar class labels but disparate visual appearances have a low similarity. This trend is not mirrored in supervised space, where class labels primarily dictate similarity, rather than visual attributes.
We attribute this discrepancy to the unsupervised nature of contrastive learning, which leans on inherent sample information, while supervised learning relies heavily on provided labels to establish image relationships.
### Minimal Overlap Between Visually Similar Images
Previously, we observed that contrastive learning effectively groups visually similar images. This raises a question: is the grouping a result of overlapping distributions of augmented views for these images? [44] posits that visually similar images have overlapping augmented view distributions, and that this overlap is what drives the grouping.
Formally, let \(x_{1}\) and \(x_{2}\) be visually similar images, and \(f(y|x)\) be the probability density function of the augmented distribution for a given image \(x\). It's suggested that there's at least one \(\hat{y}\) for which \(f(\hat{y}|x_{1})f(\hat{y}|x_{2})>0\), indicating distribution overlap, driving the similarity of \(x_{1}\) and \(x_{2}\) in contrastive representations.
Here, we aim to quantify this overlap. We use \(P(y_{1}=y_{2}|y_{1}\sim f(y|x_{1}),y_{2}\sim f(y|x_{2}))\) as our metric, representing the probability that two augmented views, \(y_{1}\) and \(y_{2}\), are equal, drawn from \(x_{1}\) and \(x_{2}\)'s distributions. Higher values indicate more overlap.
The Monte Carlo method is employed to approximate this probability for image pairs from Figures 1(a), 1(b), and 1(c), generating 100,000,000 pairs of augmented views for each. Our analysis reveals no overlap--no identical augment pairs. With the Clopper-Pearson method, we calculate a 95% confidence interval for the proportion, yielding an upper bound of \(3.69\times 10^{-8}\).
This result indicates minimal overlap in the augmented views of visually similar images. Hence, the ability of contrastive learning to group visually similar images may stem from inductive biases like the model's ability to capture similar high-level features, not direct overlap in augmented data.
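The quoted bound can be reproduced in a couple of lines (a sketch of ours using SciPy's beta quantile; for zero successes the Clopper-Pearson upper limit reduces to \(1-(\alpha/2)^{1/n}\)).

```python
from scipy.stats import beta

n_trials, successes, alpha = 100_000_000, 0, 0.05
# Clopper-Pearson upper bound: Beta^{-1}(1 - alpha/2; successes + 1, n - successes)
upper = beta.ppf(1 - alpha / 2, successes + 1, n_trials - successes)
print(f"95% Clopper-Pearson upper bound: {upper:.3e}")   # about 3.69e-08
```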
### K-Nearest Neighbors in Contrastive Representation Space
As we have seen in the previous subsections, contrastive learning tends to group visually similar images together, regardless of their class labels. Empirically, however, it is reasonable to assume that for most images, their visually similar counterparts belong to the same class. We can then infer that most representations have neighbors with the same label in contrastive space.
Delving into the properties of the k-nearest neighbors for each data point within the contrastive representation space, we use cosine similarity as our distance metric and compute the average proportion of k-nearest neighbors that share the same class, a metric we term the Class Homogeneity Index (CHI). We explore the effects of varying \(k\) values (\(k=1,10,100,500,1000\)) on CHI and summarize our findings in Table 1.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \multirow{2}{*}{CHI} & \multicolumn{3}{c}{Contrastive} & \multicolumn{3}{c}{Supervised} \\ \cline{2-7} & ResNet18 & ResNet101 & ViT & ResNet18 & ResNet101 & ViT \\ \hline k=1 & 0.8938 & 0.8818 & 0.8730 & 0.9680 & 0.9611 & 0.9903 \\ k=10 & 0.8762 & 0.8662 & 0.8473 & 0.9674 & 0.9611 & 0.9893 \\ k=100 & 0.8457 & 0.8438 & 0.8047 & 0.9669 & 0.9605 & 0.9886 \\ k=500 & 0.7785 & 0.7945 & 0.6675 & 0.9666 & 0.9605 & 0.9881 \\ k=1000 & 0.7094 & 0.7267 & 0.5273 & 0.9663 & 0.9606 & 0.9879 \\ \hline \end{tabular}
\end{table}
Table 1: **CHI across diverse models under contrastive and supervised learning. The table presents the Class Homogeneity Index (CHI) for diverse models—ResNet18, ResNet101, and ViT—under both contrastive and supervised learning conditions, across different \(k\) values.**
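The CHI values above can be computed directly from the \(\ell_{2}\)-normalised features. A NumPy sketch (ours; it builds the full cosine-similarity matrix, so it is intended for moderately sized feature sets) is:

```python
import numpy as np

def chi_score(features, labels, k):
    """Class Homogeneity Index: mean fraction of the k nearest neighbours
    (cosine similarity) that share the query point's class label."""
    labels = np.asarray(labels)
    Z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = Z @ Z.T
    np.fill_diagonal(sim, -np.inf)                  # a point is not its own neighbour
    nn = np.argpartition(-sim, k, axis=1)[:, :k]    # k most similar points per row
    return (labels[nn] == labels[:, None]).mean()
```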
Several implications emerge from Table 1. First, for smaller \(k\) values, the high CHI observed in contrastive learning underscores that nearest neighbors typically belong to the same class. This significant feature further validates the proposition: in the contrastive representation space, most representations have neighbors with the same label.
Second, the steep decrease in CHI as \(k\) increases in contrastive learning compared to supervised learning reveals an intriguing attribute. The neighborhood's class-wise homogeneity in the contrastive representation space seems to diminish more rapidly as we expand our neighborhood size. This implies that while contrastive features exhibit a local class-wise homogeneity, they do not extend this effect at a more global scale, unlike the supervised features.
Third, the most noticeable decline in CHI is observed for the ViT. This could indicate that its weaker inductive bias is a contributing factor. Therefore, the behavior of contrastive learning might be shaped not only by the data but also by architectural choices and their inherent inductive biases.
## 5 Macro View: Contrastive Learning Forms Locally Dense Clusters
Moving on from the individual image analysis, we now focus on how contrastive learning arranges data at a larger scale, at the cluster level. We introduce a new concept of "Locally Dense" clusters, where data points within the same class are closely packed. As a result, a specific data point's nearest neighbors are predominantly from its own cluster, leading to high class homogeneity within local regions of the cluster. We propose a new graph-based metric, the Relative Local Density (RLD), to quantify this local density.
Next, we illustrate what locally dense clusters look like and how they differ from "dense and well-separated" clusters. The latter refers to clusters that are not just densely packed internally but are also clearly separated from other clusters, implying a global density and separation.
Finally, we show that contrastive learning forms clusters that are locally dense but not globally, while supervised learning clusters exhibit both local and global density and separation.
### Relative Local Density: A Quantitative Measure for Locally Dense Clusters
While the qualitative notion of locally dense clusters provides a conceptual foundation, it leaves room for a concrete, quantifiable measure. Thus, we introduce RLD, a novel metric aimed at capturing the notion of local density. This subsection details the construction of RLD, illuminating its correspondence to local density and its unique advantages.
The computation of RLD involves several stages, each contributing to the encapsulation of local density. Initially, we construct a similarity matrix, \(S\), for a given set of data points \(X\). Each entry, \(S_{ij}\), captures the pairwise distance between data points \(X_{i}\) and \(X_{j}\), normalized by the mean distance and the square root of the dimension of \(X\) to ensure scale-invariance:
\[S_{ij}=\frac{-dist(X_{i},X_{j})}{\sqrt{dim(X)}\cdot mean(dist(X_{p},X_{q}):p,q \in 1,2,...,n,p\neq q)} \tag{6}\]
This process converts distances into similarity scores, with higher scores indicating closer data points. The matrix diagonal elements are set to negative infinity, ensuring a data point does not regard itself as its neighbor.
The similarity matrix is then transformed into an adjacency matrix, \(A\), which encapsulates the relationships between data points in the feature space. A temperature parameter, \(T\), modulates this transformation, balancing the emphasis on local and global structures. A higher \(T\) yields a matrix with a more global structure, while a lower \(T\) retains more local information. The matrix is normalized, each entry divided by \(T\) and exponentially transformed to accentuate differences between data points:
\[A_{ij}=\frac{n^{2}\cdot exp(S_{ij}/T)^{2}}{\sum_{k=1}^{n}exp(S_{ik}/T)^{2} \cdot\sum_{k=1}^{n}exp(S_{kj}/T)^{2}} \tag{7}\]
The final RLD computation step measures the modularity of the adjacency matrix with respect to the class labels, \(y\), using equation 2. Modularity, a graph theory concept proposed by [8], quantifies the strength
of a graph's division into clusters. A high modularity score signifies dense intra-cluster connections and sparse inter-cluster connections, aligning with our local density intuition.
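Putting Eqs. (6), (7) and (2) together, RLD can be computed in a few lines. The sketch below (ours; Euclidean distances, and the full \(n\times n\) matrices are built explicitly, so it is meant for moderately sized data) follows the three stages described above.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def rld(X, labels, T=0.1):
    """Relative Local Density: Eq. (6), Eq. (7), then modularity (Eq. 2)."""
    X = np.asarray(X, dtype=float)
    n, dim = X.shape
    D = squareform(pdist(X))                        # pairwise Euclidean distances
    mean_d = D[~np.eye(n, dtype=bool)].mean()       # mean distance over distinct pairs
    S = -D / (np.sqrt(dim) * mean_d)                # similarity matrix, Eq. (6)
    np.fill_diagonal(S, -np.inf)                    # no self-neighbours
    E = np.exp(S / T) ** 2
    A = n ** 2 * E / (E.sum(axis=1, keepdims=True) * E.sum(axis=0, keepdims=True))  # Eq. (7)
    two_m = A.sum()                                 # 2m = total edge weight
    deg = A.sum(axis=1)
    same = np.asarray(labels)[:, None] == np.asarray(labels)[None, :]
    return ((A - np.outer(deg, deg) / two_m) * same).sum() / two_m   # Eq. (2)
```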
The RLD thus quantifies local density, gauging a cluster's internal cohesion relative to its separation. As a cluster evaluation metric, RLD offers several advantages:
1. **Local structure emphasis:** Unlike global metrics like the CH score, RLD captures local structure details, offering a more fine-grained data organization understanding.
2. **Differentiability:** RLD's full differentiability enables integration into optimization processes, including gradient-based learning.
3. **Scale invariance:** RLD's insensitivity to the scale and number of data points enhances its versatility across diverse datasets.
4. **Graph techniques compatibility:** RLD's graph-based nature permits integration with other graph techniques, such as community detection and graph partitioning. This expands the analysis range and can yield additional cluster structure insights.
5. **Comparability:** The modularity scaling to \((-0.5,1)\) standardizes RLD, facilitating comparisons. Its application enables direct cluster comparison, as scores are relative to the same scale.
### Visualizing Locally Dense Clusters: Examples and Comparisons
To gain a more intuitive understanding of local and global density concepts, it's important to visualize them before delving into the cluster analysis formed by contrastive and supervised learning methods. To facilitate this, we introduce illustrative examples that distinguish between locally dense clusters, as determined by RLD, and globally dense ones, as gauged by the CH score.
Figure 2 presents six distinct cluster examples, each illustrating the divergent nature of local and global densities. Let's examine some key observations:
1. A high RLD doesn't guarantee a high CH score. This fact is exemplified by clusters (b), (c), and (d), which, despite their high RLDs, register significantly lower CH scores compared to clusters (e) and (f).
2. The CH score fails to acknowledge the well-structured nature of clusters (b) and (c). This observation underscores the insensitivity of the CH score to certain types of cluster formations.
3. Clusters (b), (c), and (d) pose challenges for linear classifiers, which struggle to separate them with linear boundaries, whereas these classifiers are likely to perform well on clusters (e) and (f). Conversely, classifiers that operate on neighborhood information, such as KNN, may deliver better separation results for clusters (b), (c), and (d); the sketch after this list illustrates this contrast on a synthetic circular pattern.
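The snippet below is a minimal, self-contained illustration of that last observation on a scikit-learn toy dataset of concentric circles, used as a stand-in for a locally dense but globally entangled configuration such as (b); the dataset and classifier choices are ours, not the paper's experimental setup.

```python
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Two concentric circles: locally dense rings that no single line can separate.
X, y = make_circles(n_samples=500, factor=0.5, noise=0.05, random_state=0)

linear = LogisticRegression()
knn = KNeighborsClassifier(n_neighbors=5)

print("linear:", cross_val_score(linear, X, y, cv=5).mean())  # ~0.5 (chance level)
print("knn:   ", cross_val_score(knn, X, y, cv=5).mean())     # close to 1.0
```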
### Comparing Clusters Formed by Contrastive Learning and Supervised Learning
With a solid understanding of local and global density concepts, we can now turn our attention to examining the clusters produced by contrastive and supervised learning methods.
To provide a comprehensive and dynamic picture of cluster formation, we consider not only clusters created by fully trained models but also those formed during the training process. This enables us to capture the evolution of clusters throughout the training phase and to observe the distinct ways in which different learning methods shape them over time.
In Figure 3 (a) and (b), we present a synthesized analysis. The RLDs (\(T=0.1\)) and CH scores for all contrastive and supervised features are displayed in relation to linear classifier accuracy, along with their respective Spearman correlation coefficients. This collective view provides a comparative perspective on how these learning methods impact data organization.
As demonstrated in Figure 3 (a), there is a visible positive correlation between the CH score of supervised learning and linear classifier accuracy. In contrast, contrastive learning shows a negative correlation and maintains a nearly constant CH score throughout the entire training process. This observation suggests that contrastive learning does not create globally dense clusters.
However, as depicted in Figure 3 (b), the Spearman correlation between linear classifier accuracy and RLD remains positive for both learning methods, and is nearly identical for the two methods when the same architecture is used. This result implies that both contrastive learning and supervised learning form locally dense clusters.
**Applying a GCN Classifier.** Given the challenges faced by linear classifiers in distinguishing locally dense clusters, as observed in Figure 2 (b), (c), and (d), we decided to explore the use of a GCN classifier as an alternative. To facilitate this, we first constructed a graph identical to the one used in the computation of the RLDs. For node features we used a one-hot encoding scheme: each node was assigned a one-hot vector of length equal to the number of classes, with the element corresponding to its class label set to \(1\) and all other elements set to \(0\).
Following the training method described in [23], we applied a random mask rate sampled from \((0.05,0.5)\) and used a three-layer GCN. The resulting accuracy is plotted in Figure 3 (c) against linear accuracy. The results indicate that for ResNet101 and ViT, GCN accuracy is slightly higher than linear accuracy for most of the models, although the opposite is observed for ResNet18. This suggests that a GCN classifier can sometimes outperform linear classifiers on contrastive features.
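A minimal PyTorch Geometric sketch of such a setup is shown below. The layer widths, optimizer handling, and fixed mask rate are placeholder choices and not the setup of [23]; only the overall shape, one-hot label features passed through a three-layer GCN over the RLD graph with masked labels as targets, follows the description above.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class ThreeLayerGCN(torch.nn.Module):
    def __init__(self, num_classes, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(num_classes, hidden)   # input: one-hot label vectors
        self.conv2 = GCNConv(hidden, hidden)
        self.conv3 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        return self.conv3(x, edge_index)

# y: (n,) integer class labels; edge_index: (2, E) graph from the RLD computation.
def masked_step(model, y, edge_index, optimizer, mask_rate=0.3):
    x = F.one_hot(y, num_classes=int(y.max()) + 1).float()
    mask = torch.rand(y.size(0)) < mask_rate        # hide these labels
    x[mask] = 0.0                                   # masked nodes see no label
    out = model(x, edge_index)
    loss = F.cross_entropy(out[mask], y[mask])      # predict the hidden labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```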
**Visual Evidence via t-SNE.** Lastly, we employ t-SNE [42] as a visualization tool to further illustrate the differences between the features generated by contrastive and supervised learning methods. t-SNE is known for its ability to manage both 'local' and 'global' views via a tunable parameter known as perplexity. When we adjust this parameter in Figure 4, it becomes evident that contrastive clusters start to fade as the perplexity increases. This effect, however, is not observed in the clusters formed
Figure 2: **Varied Cluster Configurations and Their Corresponding RLDs and CH scores.** Each subfigure, from (a) to (f), symbolizes a unique cluster configuration. Subfigure (a) presents data points uniformly scattered without discernible clustering, whereas (b) demonstrates a circular pattern with identifiable local clusters. Subfigure (c) showcases clusters arranged along distinct lines, and (d) displays small, tightly grouped local clusters dispersed throughout the plot. Subfigure (e) illustrates clusters formed along random lines, while (f) portrays multiple Gaussian distributions, each constituting a separate cluster. These examples underscore the varied characteristics of local and global densities and highlight the limitations of relying solely on global metrics, like the CH score, for evaluating cluster quality.
by supervised learning, thereby reinforcing our earlier observations about the distinct differences between these two types of learning methods.
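For completeness, a perplexity sweep of the kind shown in Figure 4 can be reproduced with scikit-learn's t-SNE; the snippet below is a schematic version with placeholder feature and label arrays, not the exact plotting code behind the figure.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def perplexity_sweep(features, labels, perplexities=(25, 2500, 24900)):
    """Embed the same features at several perplexities, as in Figure 4."""
    fig, axes = plt.subplots(1, len(perplexities), figsize=(4 * len(perplexities), 4))
    for ax, p in zip(axes, perplexities):
        emb = TSNE(n_components=2, perplexity=p, init="pca",
                   random_state=0).fit_transform(features)
        ax.scatter(emb[:, 0], emb[:, 1], c=labels, s=2, cmap="tab10")
        ax.set_title(f"perplexity = {p}")
    return fig
```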
## 6 Conclusion and Future Directions
This study illuminates distinct differences in how contrastive and supervised learning algorithms structure data in representational space. We discover that contrastive learning primarily fosters 'locally dense' clusters, irrespective of class labels, while supervised learning tends to generate globally dense clusters that align with class labels. The novel RLD metric introduced in this study quantifies local density, offering a contrast to the traditional Calinski-Harabasz score. Implementing a GCN classifier demonstrates potential in tackling locally dense clusters, and the differences between learning methods are further highlighted through t-SNE visualizations.
Figure 4: **t-SNE Visualization of Features Generated by Contrastive and Supervised Learning. This figure presents t-SNE visualizations of 25,000 data points for contrastive (left column) and supervised (right column) learning methods, across three different model architectures: ResNet18, ResNet101, and ViT. Each row corresponds to a different perplexity value (25, 2500, and 24900), showcasing the influence of this parameter on the visualization.**
Figure 3: **Comparative Analysis of Cluster Evaluation Metrics and Classifier Accuracy across Models. The figure presents three scatter plots comparing different metrics with linear classifier accuracy for contrastive and supervised learning methods across three models: ResNet18, ResNet101, and ViT. (a) shows the correlation between CH scores and linear classifier accuracy. (b) illustrates the relationship between RLD and linear classifier accuracy. (c) provides a comparison between GCN classifier accuracy and linear classifier accuracy for contrastive learning models.**
Looking ahead, we propose two potential research directions. First, the development of more efficient classifiers tailored to clusters created by contrastive learning should be a priority. While GCNs show promise, their computational and memory requirements present challenges. Second, creating innovative augmentation algorithms could help prevent models from misclassifying visually similar images from different classes by distinguishing these images in augmented views.
## 7 Limitations
While our study provides insightful findings, it is not without its constraints. Resource limitations prevented us from evaluating a wider array of model architectures, such as the Swin Transformer [27], or utilizing larger datasets like ImageNet [10]. Further exploration of these areas is suggested for future research endeavors.
|
2306.11467 | Shellable simplicial complex and switching rook polynomial of frame
polyominoes | Let $\mathcal{P}$ be a frame polyomino, a new kind of non-simple polyomino.
In this paper we study the $h$-polynomial of $K[\mathcal{P}]$ in terms of the
switching rook polynomial of $\mathcal{P}$ using the shellable simplicial
complex $\Delta(\mathcal{P})$ attached to $\mathcal{P}$. We provide a suitable
shelling order for $\Delta(\mathcal{P})$ and we define a bijection between the
set of the canonical configurations of $j$ rooks in $\mathcal{P}$ and the
facets of $\Delta(\mathcal{P})$ with $j$ steps. Finally we use a well-known
combinatorial result, due to McMullen and Walkup, about the $h$-vector of a
shellable simplicial complex to interpret the $h$-polynomial of
$K[\mathcal{P}]$ as the switching rook polynomial of $\mathcal{P}$. | Rizwan Jahangir, Francesco Navarra | 2023-06-20T11:42:09Z | http://arxiv.org/abs/2306.11467v2 | # Shellable simplicial complex and switching rook polynomial of frame polyominoes
###### Abstract.
Let \(\mathcal{P}\) be a frame polyomino, a new kind of non-simple polyomino. In this paper we study the \(h\)-polynomial of \(K[\mathcal{P}]\) in terms of the switching rook polynomial of \(\mathcal{P}\) using the shellable simplicial complex \(\Delta(\mathcal{P})\) attached to \(\mathcal{P}\). We provide a suitable shelling order for \(\Delta(\mathcal{P})\) in relation to a new combinatorial object, which we call a _step_ of a facet, and we define a bijection between the set of the canonical configuration of \(k\) rooks in \(\mathcal{P}\) and the facets of \(\Delta(\mathcal{P})\) with \(k\) steps. Finally we use a famous combinatorial result, due to McMullen and Walkup, about the \(h\)-vector of a shellable simplicial complex to interpret the \(h\)-polynomial of \(K[\mathcal{P}]\) as the switching rook polynomial of \(\mathcal{P}\).
Key words and phrases: Polyominoes, Hilbert series, rook polynomial. 2010 Mathematics Subject Classification: 05B50, 05E40, 13C05, 13G05.
## 1. Introduction
A central topic in Combinatorial Commutative Algebra is the study of the ideals of \(t\)-minors of an \(m\times n\) matrix of indeterminates, for any integer \(1\leq t\leq\min\{m,n\}\). The most known references are [3], [4], [5], [17] for the ideals generated by all \(t\)-minors of a one or two sided ladder, [19], [25], [30] for the ideals of adjacent \(2\)-minors and [20] for the ideals generated by an arbitrary set of \(2\)-minors in a \(2\times n\) matrix. In [31] A.A. Qureshi defines a new kind of binomial ideal which arises from \(2\)-minors of a polyomino, generalizing the class of the ideals generated by \(2\)-minors of \(m\times n\) matrices.
A polyomino is a finite collection of unitary squares joined edge by edge. The term was coined by Golomb in 1953, and several combinatorial aspects of polyominoes are discussed in his monograph [16]. If \(\mathcal{P}\) is a polyomino, then the _polyomino ideal_ of \(\mathcal{P}\) is the ideal generated by all inner \(2\)-minors of \(\mathcal{P}\) in a suitable polynomial ring \(S_{\mathcal{P}}\), and it is denoted by \(I_{\mathcal{P}}\). The study of the main algebraic properties of the quotient ring \(K[\mathcal{P}]=S_{\mathcal{P}}/I_{\mathcal{P}}\), depending on the shape of \(\mathcal{P}\), is the purpose of this research. The primality of the polyomino ideal is studied in many articles (see [6], [8], [22], [24], [28], [29], [33]), but a complete characterization of the prime polyomino ideals is still unknown, as is one for the Cohen-Macaulayness and the normality of \(K[\mathcal{P}]\) ([9]) or for the König type property of \(I_{\mathcal{P}}\) (see [11], [18]). Consult [1], [7], [13], [15], [27] for other interesting algebraic properties of the polyomino ideal.
Recently a fascinating new line of research has emerged, regarding the study of the Hilbert-Poincare series and the Castelnuovo-Mumford regularity of \(K[\mathcal{P}]\) in terms of the rook polynomial of \(\mathcal{P}\) and its invariants. The rook polynomial of a polyomino \(\mathcal{P}\) is well known in the literature; it is the polynomial \(r_{\mathcal{P}}(t)=\sum_{i=0}^{n}r_{i}t^{i}\in\mathbb{Z}[t]\) whose coefficient \(r_{i}\) represents the number of distinct ways of arranging \(i\) rooks on cells of \(\mathcal{P}\) in non-attacking positions (with the convention \(r_{0}=1\)); in particular, the degree of \(r_{\mathcal{P}}(t)\) is the rook number of \(\mathcal{P}\), which is the maximum number of rooks that can be arranged in \(\mathcal{P}\) in non-attacking positions. In [14] the authors show that if \(\mathcal{P}\) is an \(L\)-convex polyomino then the Castelnuovo-Mumford regularity of \(K[\mathcal{P}]\) is equal to the rook number of \(\mathcal{P}\). In [34] it is proved that for a simple thin polyomino the \(h\)-polynomial \(h(t)\) of \(K[\mathcal{P}]\) is the rook polynomial of \(\mathcal{P}\), and the Gorenstein simple thin polyominoes are also characterized using the \(S\)-property. Roughly speaking, a polyomino is simple if it has no hole and it is thin if it does not contain the square tetromino. Finally, it is conjectured that \(r_{\mathcal{P}}(t)=h(t)\) if and only if \(\mathcal{P}\) is thin.
## 2. Preliminaries

Let \(a=(i,j)\) and \(b=(k,l)\) be two vertices of \(\mathbb{Z}^{2}\) with \(a\leq b\), that is \(i\leq k\) and \(j\leq l\). The set \([a,b]=\{(r,s)\in\mathbb{Z}^{2}:i\leq r\leq k,\ j\leq s\leq l\}\) is called an _interval_ of \(\mathbb{Z}^{2}\), and it is _proper_ if \(i<k\) and \(j<l\). The vertices \(a\) and \(b\) are the _diagonal corners_ of \([a,b]\), while \(c=(i,l)\) and \(d=(k,j)\) are its _anti-diagonal corners_. If \(j=l\) (resp. \(i=k\)), then \(a\) and \(b\) are said to be in _horizontal_ (resp. _vertical_) _position_. A proper interval \(C=[a,a+(1,1)]\) is called a _cell_ of \(\mathbb{Z}^{2}\) with _lower left corner_ \(a\); its vertices \(a\), \(b=a+(1,1)\), \(c=a+(0,1)\) and \(d=a+(1,0)\) are called respectively the _lower left_, _upper right_, _upper_
_left_ and _lower right corners_ of \(C\). The sets \(\{a,c\}\), \(\{c,b\}\), \(\{b,d\}\) and \(\{a,d\}\) are the _edges_ of \(C\). We put \(V(C)=\{a,b,c,d\}\) and \(E(C)=\{\{a,c\},\{c,b\},\{b,d\},\{a,d\}\}\). Let \(\mathcal{S}\) be a non-empty collection of cells in \(\mathbb{Z}^{2}\). The set of the vertices and of the edges of \(\mathcal{S}\) are respectively \(V(\mathcal{S})=\bigcup_{C\in\mathcal{S}}V(C)\) and \(E(\mathcal{S})=\bigcup_{C\in\mathcal{S}}E(C)\), while \(\operatorname{rank}\mathcal{S}\) is the number of cells belonging to \(\mathcal{S}\). If \(C\) and \(D\) are two distinct cells of \(\mathcal{S}\), then a _walk_ from \(C\) to \(D\) in \(\mathcal{S}\) is a sequence \(\mathcal{C}:C=C_{1},\ldots,C_{m}=D\) of cells of \(\mathbb{Z}^{2}\) such that \(C_{i}\cap C_{i+1}\) is an edge of \(C_{i}\) and \(C_{i+1}\) for \(i=1,\ldots,m-1\). In addition, if \(C_{i}\neq C_{j}\) for all \(i\neq j\), then \(\mathcal{C}\) is called a _path_ from \(C\) to \(D\). Two cells \(C\) and \(D\) are connected in \(\mathcal{S}\) if there exists a path of cells in \(\mathcal{S}\) from \(C\) to \(D\). A _polyomino_\(\mathcal{P}\) is a non-empty, finite collection of cells in \(\mathbb{Z}^{2}\) where any two cells of \(\mathcal{P}\) are connected in \(\mathcal{P}\). For instance, see Figure 1.
A polyomino \(\mathcal{P}\) is _simple_ if for any two cells \(C\) and \(D\) not in \(\mathcal{P}\) there exists a path of cells not in \(\mathcal{P}\) from \(C\) to \(D\). A finite collection of cells \(\mathcal{H}\) not in \(\mathcal{P}\) is a _hole_ of \(\mathcal{P}\) if any two cells of \(\mathcal{H}\) are connected in \(\mathcal{H}\) and \(\mathcal{H}\) is maximal with respect to set inclusion. For example, the polyomino in Figure 1 is not simple and has a hole. Obviously, each hole of \(\mathcal{P}\) is a simple polyomino and \(\mathcal{P}\) is simple if and only if it has no hole.
Consider two cells \(A\) and \(B\) of \(\mathbb{Z}^{2}\) with \(a=(i,j)\) and \(b=(k,l)\) as the lower left corners of \(A\) and \(B\) and \(a\leq b\). A _cell interval_\([A,B]\) is the set of the cells of \(\mathbb{Z}^{2}\) with lower left corner \((r,s)\) such that \(i\leqslant r\leqslant k\) and \(j\leqslant s\leqslant l\). If \((i,j)\) and \((k,l)\) are in horizontal (or vertical) position, we say that the cells \(A\) and \(B\) are in _horizontal_ (or _vertical_) _position_.
Let \(\mathcal{P}\) be a polyomino. Consider two cells \(A\) and \(B\) of \(\mathcal{P}\) in a vertical or horizontal position. The cell interval \([A,B]\), containing \(n>1\) cells, is called a _block of \(\mathcal{P}\) of rank n_ if all cells of \([A,B]\) belong to \(\mathcal{P}\). The cells \(A\) and \(B\) are called _extremal_ cells of \([A,B]\). Moreover, a block \(\mathcal{B}\) of \(\mathcal{P}\) is _maximal_ if there does not exist any block of \(\mathcal{P}\) which contains properly \(\mathcal{B}\). It is clear that an interval of \(\mathbb{Z}^{2}\) identifies a cell interval of \(\mathbb{Z}^{2}\) and vice versa, hence we associate to an interval \(I\) of \(\mathbb{Z}^{2}\) the corresponding cell interval denoted by \(\mathcal{P}_{I}\). If \(I=[a,b]\) and \(c\), \(d\) are the anti-diagonal corner of \(I\), then the two cells containing \(a\) or \(b\) (\(c\) or \(d\)) are called _diagonal_ (_anti-diagonal_) cells of \(\mathcal{P}_{I}\). A proper interval \([a,b]\) is called an _inner interval_ of \(\mathcal{P}\) if all cells of \(\mathcal{P}_{[a,b]}\) belong to \(\mathcal{P}\). We denote by \(\mathcal{I}(\mathcal{P})\) the set of all inner intervals of \(\mathcal{P}\). An interval \([a,b]\) with \(a=(i,j)\), \(b=(k,j)\) and \(i<k\) is called a _horizontal edge interval_ of \(\mathcal{P}\) if the sets \(\{(\ell,j),(\ell+1,j)\}\) are edges of cells of \(\mathcal{P}\) for all \(\ell=i,\ldots,k-1\). In addition, if \(\{(i-1,j),(i,j)\}\) and \(\{(k,j),(k+1,j)\}\) do not belong to \(E(\mathcal{P})\), then \([a,b]\) is called a _maximal_ horizontal edge interval of \(\mathcal{P}\). We define similarly a _vertical edge interval_ and a _maximal_ vertical edge interval.
Let \(\mathcal{P}\) be a polyomino and \(S_{\mathcal{P}}=K[x_{v}\mid v\in V(\mathcal{P})]\) be the polynomial ring of \(\mathcal{P}\), where \(K\) is a field. If \([a,b]\) is an inner interval of \(\mathcal{P}\), with \(a\), \(b\) and \(c\), \(d\) respectively its diagonal and anti-diagonal corners, then the binomial \(x_{a}x_{b}-x_{c}x_{d}\) is called an _inner 2-minor_ of \(\mathcal{P}\). The _polyomino ideal_ \(I_{\mathcal{P}}\) of \(\mathcal{P}\) is the ideal in \(S_{\mathcal{P}}\) generated by all the inner 2-minors of \(\mathcal{P}\). We denote by \(G(\mathcal{P})\) the set of all generators of \(I_{\mathcal{P}}\). We also set \(K[\mathcal{P}]=S_{\mathcal{P}}/I_{\mathcal{P}}\), which is the _coordinate ring_ of \(\mathcal{P}\). Throughout the paper, we work with the reverse lexicographical order \(<\) on \(S_{\mathcal{P}}\) induced by the ordering of the variables defined by \(x_{ij}>x_{kl}\) if \(j>l\), or \(j=l\) and \(i>k\). We recall the conditions under which the set of generators of \(I_{\mathcal{P}}\) forms the reduced Gröbner basis with respect to \(<\).
Figure 1. A polyomino.
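For readers who wish to experiment with small examples, the inner 2-minors of a polyomino given as a set of cells can be enumerated by brute force. The sketch below follows the definitions above directly; cells are identified with their lower left corners, and the function name is ours.

```python
from itertools import product

def inner_2_minors(cells):
    """Enumerate the inner 2-minors of the polyomino whose cells are given by
    their lower left corners.  Each minor is returned as ((a, b), (c, d)), with
    a, b the diagonal and c, d the anti-diagonal corners of an inner interval."""
    cells = set(cells)
    minors = []
    for (i, j), (k, l) in product(cells, repeat=2):
        if i <= k and j <= l:
            # The interval [a, b] with a = (i, j), b = (k + 1, l + 1) is inner
            # iff every cell of the corresponding cell interval lies in P.
            if all((r, s) in cells
                   for r in range(i, k + 1) for s in range(j, l + 1)):
                a, b = (i, j), (k + 1, l + 1)
                c, d = (i, l + 1), (k + 1, j)
                minors.append(((a, b), (c, d)))
    return minors

# The horizontal domino has three inner 2-minors: one for each cell and one
# for the full 1 x 2 rectangle (the 2-minors of a generic 2 x 3 matrix).
print(len(inner_2_minors([(0, 0), (1, 0)])))  # -> 3
```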
**Theorem 2.1** ([31], Remark 4.2).: _Let \(\mathcal{P}\) be a collection of cells. Then the set of inner 2-minors of \(\mathcal{P}\) forms the reduced (quadratic) Gröbner basis with respect to \(<\) if and only if for any two inner intervals \([b,a]\) and \([d,c]\) of \(\mathcal{P}\) with anti-diagonal corners \(e,f\) and \(f,g\), as shown in Figure 2, either \(b,g\) or \(e,c\) are anti-diagonal corners of an inner interval of \(\mathcal{P}\)._
We recall some notions on the Hilbert function and the Hilbert-Poincare series of a graded \(K\)-algebra \(R/I\). Let \(R\) be a graded \(K\)-algebra and \(I\) be an homogeneous ideal of \(R\), then \(R/I\) has a natural structure of graded \(K\)-algebra as \(\bigoplus_{k\in\mathbb{N}}(R/I)_{k}\). The function \(\mathrm{H}_{R/I}:\mathbb{N}\to\mathbb{N}\) with \(\mathrm{H}_{R/I}(k)=\dim_{K}(R/I)_{k}\) is called the _Hilbert function_ of \(R/I\) and the formal power series \(\mathrm{HP}_{R/I}(t)=\sum_{k\in\mathbb{N}}\mathrm{H}_{R/I}(k)t^{k}\) is defined as the _Hilbert-Poincare series_ of \(R/I\). It is known by Hilbert-Serre theorem that there exists a unique polynomial \(h(t)\in\mathbb{Z}[t]\), called _h-polynomial_ of \(R/I\), such that \(h(1)\neq 0\) and \(\mathrm{HP}_{R/I}(t)=\frac{h(t)}{(1-t)^{d}}\), where \(d\) is the Krull dimension of \(R/I\). Moreover, if \(R/I\) is Cohen-Macaulay then \(\mathrm{reg}(R/I)=\deg h(t)\).
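As a small illustration (ours, not taken from the paper), consider the horizontal domino \(\mathcal{P}\) with \(V(\mathcal{P})=[(1,1),(3,2)]\): its three inner 2-minors are precisely the \(2\)-minors of a generic \(2\times 3\) matrix of indeterminates, so
\[\mathrm{HP}_{K[\mathcal{P}]}(t)=\frac{1+2t}{(1-t)^{4}},\qquad h(t)=1+2t,\qquad\dim K[\mathcal{P}]=4=|V(\mathcal{P})|-\operatorname{rank}\mathcal{P},\]
in agreement with the fact, recalled in the introduction, that for a simple thin polyomino the \(h\)-polynomial coincides with the rook polynomial, which for the domino is \(1+2t\).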
We recall some basic facts on simplicial complexes. A _finite simplicial complex_\(\Delta\) on \([n]:=\{1,\ldots,n\}\) is a collection of subsets of \([n]\) satisfying the following two conditions:
1. if \(F^{\prime}\in\Delta\) and \(F\subseteq F^{\prime}\) then \(F\in\Delta\);
2. \(\{i\}\in\Delta\) for all \(i=1,\ldots,n\).
The elements of \(\Delta\) are called _faces_, and the dimension of a face is one less than its cardinality. An _edge_ of \(\Delta\) is a face of dimension \(1\), while a _vertex_ of \(\Delta\) is a face of dimension \(0\). The maximal faces of \(\Delta\) with respect to the set inclusion are called _facets_. The dimension of \(\Delta\) is given by \(\max\{\dim(F):F\in\Delta\}\). A simplicial complex \(\Delta\) is _pure_ if all facets have the same dimension. For a pure simplicial complex \(\Delta\), the dimension of \(\Delta\) is given trivially by the dimension of a facet of \(\Delta\). Given a collection \(\mathcal{F}=\{F_{1},\ldots,F_{m}\}\) of subsets of \([n]\), we denoted by \(\langle F_{1},\ldots,F_{m}\rangle\) or briefly \(\langle\mathcal{F}\rangle\) the simplicial complex consisting of all subsets of \([n]\) which are contained in \(F_{i}\) for some \(i\in[m]\). This simplicial complex is said to be generated by \(F_{1},\ldots,F_{m}\); in particular, if \(\mathcal{F}\) is the set of the facets of a simplicial complex \(\Delta\), then \(\Delta\) is generated by \(\mathcal{F}\). A pure simplicial complex \(\Delta\) is called _shellable_ if the facets of \(\Delta\) can be ordered as \(F_{1},\ldots,F_{m}\) in such a way that \(\langle F_{1},\ldots,F_{i-1}\rangle\cap\langle F_{i}\rangle\) is generated by a non-empty set of maximal proper faces of \(F_{i}\), for all \(i\in\{2,\ldots,m\}\). In such a case \(F_{1},\ldots,F_{m}\) is called a _shelling_ of \(\Delta\).
Let \(\Delta\) be a simplicial complex on \([n]\) and \(R=K[x_{1},\ldots,x_{n}]\) be the polynomial ring in \(n\) variables over a field \(K\). To every collection \(F=\{i_{1},\ldots,i_{r}\}\) of \(r\) distinct vertices of \(\Delta\), there is an associated monomial \(x^{F}\) in \(R\) where \(x^{F}=x_{i_{1}}\ldots x_{i_{r}}.\) The monomial ideal generated by all monomials \(x^{F}\) such that \(F\) is not a face of \(\Delta\) is called _Stanley-Reisner ideal_ and it is denoted by \(I_{\Delta}\). The _face ring_ of \(\Delta\), denoted by \(K[\Delta]\), is defined to be the quotient ring \(R/I_{\Delta}\). From [36, Corollary 6.3.5], if \(\Delta\) is a simplicial complex on \([n]\) of dimension \(d\), then \(\dim K[\Delta]=d+1=\max\{s:x_{i_{1}}\ldots x_{i_{s}}\notin\)
Figure 2. Conditions for Gröbner basis with respect to \(<\).
\(I_{\Delta},i_{1}<\cdots<i_{s}\)). We conclude by recalling a nice combinatorial interpretation of the \(h\)-vector of a shellable simplicial complex, due to McMullen and Walkup (see [2, Corollary 5.1.14]).
**Proposition 2.2**.: _Let \(\Delta\) be a shellable simplicial complex of dimension \(d\) with shelling \(F_{1},\ldots,F_{m}\). For \(j\in\{2,\ldots,m\}\) we denote by \(r_{j}\) the number of facets of \(\langle F_{1},\ldots,F_{j-1}\rangle\cap\langle F_{j}\rangle\) and we set \(r_{1}=0\). Let \((h_{0},\ldots,h_{d+1})\) be the \(h\)-vector of \(K[\Delta]\). Then \(h_{i}=|\{j:r_{j}=i\}|\) for all \(i\in[d+1]\). In particular, up to their order, the numbers \(r_{j}\) do not depend on the particular shelling._
## 3. Shellability of the simplicial complex of a frame polyomino
In this section we introduce a new class of non-simple polyominoes, which we call _frame polyominoes_, and we study the shellability of the attached simplicial complex. A frame polyomino is basically a polyomino that can be obtained by removing a parallelogram polyomino from a rectangular polyomino.
Let us start by recalling the definition of a parallelogram polyomino given in [32]. Let \((a,b)\in\mathbb{Z}^{2}\). The sets \(\{(a,b),(a+1,b)\}\) and \(\{(a,b),(a,b+1)\}\) are called respectively _east step_ and _north step_ in \(\mathbb{Z}^{2}\). A sequence of vertices \(S:(a_{0},b_{0}),(a_{1},b_{1}),\ldots,(a_{k},b_{k})\) in \(\mathbb{Z}^{2}\) is called a _north-east path_ if \(\{(a_{i},b_{i}),(a_{i+1},b_{i+1})\}\) is either an east or a north step for all \(i=0,\ldots,k-1\). The vertices \((a_{0},b_{0})\) and \((a_{k},b_{k})\) are called the _endpoints_ of \(S\). Let \(S_{1}:(a_{0},b_{0}),(a_{1},b_{1}),\ldots,(a_{k},b_{k})\) and \(S_{2}:(c_{0},d_{0}),(c_{1},d_{1}),\ldots,(c_{k},d_{k})\) be two north-east paths such that \((a_{0},b_{0})=(c_{0},d_{0})\) and \((a_{k},b_{k})=(c_{k},d_{k})\). If for all \(1\leq i,j\leq k-1\) we have \(b_{i}>d_{j}\) whenever \(a_{i}=c_{j}\), then \(S_{1}\) is said to lie above \(S_{2}\).
If \(S_{1}\) lies above \(S_{2}\), the _parallelogram polyomino_ determined by \(S_{1}\) and \(S_{2}\) is defined as the set of cells in the region of \(\mathbb{Z}^{2}\) bounded above by \(S_{1}\) and below by \(S_{2}\).
**Definition 3.1**.: Let \(I=[(1,1),(m,n)]\) be an interval of \(\mathbb{Z}^{2}\) and \(\mathcal{S}\) be a parallelogram polyomino determined by \(S_{1}:(a_{0},b_{0}),(a_{1},b_{1}),\ldots,(a_{k},b_{k})\) and \(S_{2}:(c_{0},d_{0}),(c_{1},d_{1}),\ldots,(c_{k},d_{k})\), where \(1<a_{0}<a_{k}<m\) and \(1<b_{0}<b_{k}<n\). We call _frame polyomino_ the non-simple polyomino obtained by removing the cells of \(\mathcal{S}\) from \(\mathcal{P}_{I}\).
In Figure 3 we show three examples of frame polyominoes.
For a frame polyomino \(\mathcal{P}\) we provide an elementary decomposition, which we use along the paper. It consists of two suitable parallelogram sub-polyominoes, denoted by \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\). Referring to Figure 4, \(\mathcal{P}_{1}\) is the sub-polyomino of \(\mathcal{P}\) highlighted with a red color, and \(\mathcal{P}_{2}\) is the other one with a hatching filling. Observe that \(\mathcal{P}_{1}\cap\mathcal{P}_{2}=\mathcal{Q}\), where \(\mathcal{Q}=\mathcal{P}_{[(1,1),(a_{0},b_{0})]}\cup\mathcal{P}_{[(a_{k},b_{k}),(m,n)]}\).
In the following proposition we show some basic algebraic properties of the polyomino ideal of a frame polyomino and, in particular, we determine the Krull dimension of the related coordinate ring using simplicial complex theory.
Figure 3. Examples of frame polyominoes.
**Proposition 3.2**.: _Let \(\mathcal{P}\) be a frame polyomino defined by \(I=[(1,1),(m,n)]\) and by a parallelogram polyomino \(\mathcal{S}\) determined by \(S_{1}\) and \(S_{2}\) with endpoints \((a_{0},b_{0})\) and \((a_{k},b_{k})\). Then:_
1. _the set_ \(G(\mathcal{P})\) _forms the reduced Grobner basis of_ \(I_{\mathcal{P}}\) _with respect to_ \(<\)_;_
2. _the initial ideal_ \(\operatorname{in}_{<}(I_{\mathcal{P}})\) _is generated by the monomials_ \(x_{c}x_{d}\) _where_ \(c,d\) _are the anti-diagonal corners of an inner interval_ \([a,b]\) _of_ \(\mathcal{P}\)_;_
3. \(K[\mathcal{P}]\) _is a normal Cohen-Macaulay domain of Krull dimension_ \(|V(\mathcal{P})|-\operatorname{rank}(\mathcal{P})\)_._
Proof.: (1) It follows easily arguing as done in [24, Corollary 1.2].
(2) It is a trivial consequence of the previous point.
(3) It is known that \(I_{\mathcal{P}}\) is a toric ideal from [24, Theorem 2.1]. By [23, Corollary 4.26] and [24, Corollary 1.2] we have that \(K[\mathcal{P}]\) is normal and, as a consequence, by [2, Theorem 6.3.5] we obtain that \(K[\mathcal{P}]\) is Cohen-Macaulay. Now we compute the Krull dimension of \(K[\mathcal{P}]\). The square-free monomial ideal \(\operatorname{in}_{<}(I_{\mathcal{P}})\) can be viewed as the Stanley-Reisner ideal of a simplicial complex \(\Delta(\mathcal{P})\) on the vertex set of \(\mathcal{P}\). It is known that \(\Delta(\mathcal{P})\) is a pure shellable simplicial complex by [36, Theorem 9.6.1]. Observe that \(|V(\mathcal{P})|=|V(\mathcal{P}_{I})|-|V(\mathcal{S})|+|S_{1}|+|S_{2}|-2\) and \(\operatorname{rank}(\mathcal{P})=\operatorname{rank}(\mathcal{P}_{I})- \operatorname{rank}(\mathcal{S})\). Hence:
\[|V(\mathcal{P})|-\operatorname{rank}(\mathcal{P})=|V(\mathcal{P}_{I})|- \operatorname{rank}(\mathcal{P}_{I})-(|V(\mathcal{S})|-\operatorname{rank}( \mathcal{S}))+|S_{1}|+|S_{2}|-2. \tag{1}\]
Let \(\Delta(\mathcal{P}_{I})\) and \(\Delta(\mathcal{S})\) be the simplicial complex having respectively \(\operatorname{in}_{<}(I_{\mathcal{P}_{I}})\) and \(\operatorname{in}_{<}(I_{\mathcal{S}})\) as Stanley-Reisner ideal, in particular \(\dim K[\mathcal{P}_{I}]=\dim K[\Delta(\mathcal{P}_{I})]\) and \(\dim K[\mathcal{S}]=\dim K[\Delta(\mathcal{S})]\). Since \(\mathcal{P}_{I}\) and \(\mathcal{S}\) are simple polyominoes, from [21, Theorem 2.1] and [22, Corollary 3.3] we know that \(K[\mathcal{P}_{I}]\) and \(K[\mathcal{S}]\) are normal Cohen-Macaulay domain with respectively \(\dim K[\mathcal{P}_{I}]=|V(\mathcal{P}_{I})|-\operatorname{rank}(\mathcal{P} _{I})\) and \(\dim K[\mathcal{S}]=|V(\mathcal{S})|-\operatorname{rank}(\mathcal{S})\). As a consequence \(\Delta(\mathcal{P}_{I})\) and \(\Delta(\mathcal{S})\) are pure, so \(\dim(\Delta(\mathcal{P}_{I}))=|F_{I}|\) and \(\dim(\Delta(\mathcal{S}))=|S_{1}|=|S_{2}|\), where \(F_{I}=[(1,1),(1,n)]\cup[(1,n),(m,n)]\). Set \(S^{*}=S_{2}\backslash\{(a_{0},b_{0}),(a_{k},b_{k})\}\). Therefore, from (1) and from the previous arguments, we have that
\[|V(\mathcal{P})|-\operatorname{rank}(\mathcal{P})=|F_{I}|-|S_{1}|+|S_{1}|+|S_ {2}|-2=|F_{I}|+|S^{*}|=|F_{I}\sqcup S^{*}|.\]
We prove that \(F_{I}\sqcup S^{*}\) is a facet of \(\Delta(\mathcal{P})\). Firstly observe that \(F_{I}\sqcup S^{*}\) is a face of \(\Delta(\mathcal{P})\) because there does not exist any inner interval of \(\mathcal{P}\) whose anti-diagonal corners are in \(F_{I}\sqcup S^{*}\). For the maximality, suppose by contradiction that there exists a face \(K\) of \(\Delta(\mathcal{P})\) such that \(F_{I}\sqcup S^{*}\subset K\). Let \(w\in K\setminus(F_{I}\sqcup S^{*})\). If \(w\in V(\mathcal{P}_{1})\setminus F_{I}\) then the interval having \((1,n)\) and \(w\) as anti-diagonal corners is an inner interval of \(\mathcal{P}\), which is a contradiction with (2). If \(w\in V(\mathcal{P}_{2})\setminus(V(\mathcal{P}_{1})\cup S^{*})\), then it is easy to see that there is an inner interval of \(\mathcal{P}\) whose anti-diagonal corners are \(w\) and a
Figure 4. The two parallelogram sub-polyominoes of \(\mathcal{P}\).
vertex in \(\{(1,b_{0}),(a_{k},n)\}\sqcup S^{*}\), that is a contradiction with (2). Hence \(F_{I}\sqcup S^{*}\) is a facet of \(\Delta(\mathcal{P})\), so we get the desired conclusion.
If \(\mathcal{P}\) is a frame polyomino, then the simplicial complex \(\Delta(\mathcal{P})\) attached to \(\operatorname{in}_{<}(I_{\mathcal{P}})\) is shellable. In order to define a suitable shelling order of \(\Delta(\mathcal{P})\), we introduce a new combinatorial concept, called a _step_, which can be given for a face, whose dimension is at least two, of a simplicial complex attached to a generic polyomino.
**Definition 3.3**.: Let \(\mathcal{P}\) be a polyomino and \(\Delta(\mathcal{P})\) be a simplicial complex on \(V(\mathcal{P})\) attached to the initial ideal of \(I_{\mathcal{P}}\) with respect to a monomial order. Let \(F\) be a face of \(\Delta(\mathcal{P})\) with \(|F|\geq 3\) and \(F^{\prime}=\{(a,b),(c,d),(e,f)\}\subseteq F\). We say that \(F^{\prime}\) forms a _step_ in \(F\) or that \(F\) has a _step_\(F^{\prime}\) if:
1. \(b=d\), \(c=e\), \(a<c\) and \(d<f\);
2. for all \(i\in\{a,\ldots,c\}\) there does not exist any \((i,b)\) in \(F\);
3. for all \(j\in\{d,\ldots,f\}\) there does not exist any \((c,j)\) in \(F\);
4. \((c,d)\) is the lower right corner of a cell of \(\mathcal{P}\).
In such a case the vertex \((c,d)\) is said to be the _lower right corner_ of \(F^{\prime}\).
**Example 3.4**.: Let \(\mathcal{P}\) be the polyomino in Figure 5 and \(\Delta(\mathcal{P})\) be a simplicial complex attached to \(\operatorname{in}_{<}(I_{\mathcal{P}})\). The blue vertices represent a facet \(F\) of \(\Delta(\mathcal{P})\). \(F\) has five steps, which are \(\{a,b,g\}\), \(\{b,c,d\}\), \(\{g,h,p\}\), \(\{q,r,w\}\) and \(\{l,m,s\}\). Note that \(\{e,f,l\}\) is not a step of such a facet because \(m\) is not the lower right corner of a cell of \(\mathcal{P}\).
We show an useful property of a step of a facet of the simplicial complex attached to a frame polyomino.
**Lemma 3.5**.: _Let \(\mathcal{P}\) be a frame polyomino. Let \(\Delta(\mathcal{P})\) be the simplicial complex attached to \(\operatorname{in}_{<}(I_{\mathcal{P}})\) and \(F^{\prime}=\{(a,j),(i,j),(i,b)\}\) be a step of a facet of \(\Delta(\mathcal{P})\). Then \([(a,j),(i,b)]\) is an inner interval of \(\mathcal{P}\)._
Proof.: If \(a=i-1\) and \(b=j+1\) then we have the conclusion immediately from (4) of Definition 3.3. Assume that \(a\neq i-1\) or \(b\neq j+1\). If \((i,j)\in V(\mathcal{P}_{1})\) then \([(a,j),(i,b)]\) is an inner interval of \(\mathcal{P}\), by the structure of \(\mathcal{P}\) and by Definition 3.3. Assume that \((i,j)\notin V(\mathcal{P}_{1})\), so \((i,j)\in V(\mathcal{P}_{2})\setminus(\mathcal{P}_{[(1,1),(a_{0},b_{0})]}\cup \mathcal{P}_{[(a_{k},b_{k}),(m,n)]})\). Suppose by contradiction that \([(a,j),(i,b)]\) is not an inner interval of \(\mathcal{P}\). Then there exists a cell \(C\) with lower right corner \((h,l)\) such that \((h,l)\neq(i,j)\) and \(a<h\leq i\), \(j\leq l<b\). Observe that all vertices of \(\mathcal{P}\) in \([(a+1,1),(m,j)]\setminus\{(i,j)\}\) and \([(i,1),(m,b-1)]\setminus\{(i,j)\}\) do not belong to \(F\). Suppose \(h=i\). Then there does not exist any inner interval of \(\mathcal{P}\) having \((h,l)\) and another vertex in \(F\) as anti-diagonal corners, so \((h,l)\in F\) due to the maximality of \(F\), but this is a contradiction with (3) of Definition 3.3. A similar contradiction arises if \(l=j\). Assume now that \(h\neq i\) and \(l\neq j\). It is not restrictive to assume that \(l\) is the
Figure 5. Example of steps in a facet of \(\Delta(\mathcal{P})\).
minimum integer such that the cell \(C\) with lower right corner \((h,l)\) belongs to \([(a,j),(i,b)]\) but not to \(\mathcal{P}\). Let \(J\) be the maximal inner interval of \(\mathcal{P}\) having \((i,j)\) as the lower right corner and containing \((a,j)\); moreover, we denote by \(H\) the maximal edge interval of \(\mathcal{P}\) containing \((a,j)\) and \((i,j)\). Note that no vertex of \(J\setminus H\) belong to \(F\). Therefore, as explained before, \((h,j)\in F\), so we get again a contradiction with (2) of Definition 3.3. In conclusion \([(a,j),(i,b)]\) is an inner interval of \(\mathcal{P}\).
**Remark 3.6**.: The assumption that \(\mathcal{P}\) is a frame polyomino is important for the claim of Lemma 3.5. In fact, if \(\mathcal{P}\) is the polyomino in Figure 6 then \(\mathcal{P}\) satisfies the condition of Theorem 2.1 and the set of orange vertices figures out a facet \(F\) of the simplicial complex \(\Delta(\mathcal{P})\) attached to \(\operatorname{in}_{<}(I_{\mathcal{P}})\), where \(\{v_{1},v_{2},v_{3}\}\) is a step of \(F\) but \([v_{1},v_{3}]\) is not an inner interval of \(\mathcal{P}\).
To give a complete description of the structure of a facet of the simplicial complex associated to a frame polyomino, we start introducing some basics on finite distributive lattice, following [32]. Let \(P\) be a poset with partial order relation \(\prec\). A _chain_ of \(P\) is a totally ordered subset of \(P\). A chain \(C\) is said to be _maximal_ if there does not exist any chain of \(P\) containing \(C\). Given \(a\in P\), the _rank_ of \(a\) in \(P\) is the supremum of length of chains in \(P\) that descends from \(a\). The _rank_ of \(P\) is the supremum of length of chains of \(P\) and it is denoted by \(\operatorname{rank}(P)\). Let \(L\) be a simple planar distributive lattice. We say that, given \(x,y\in L\), \(y\) covers \(x\) if \(x\prec y\) and there is no \(z\in L\) such that \(x\prec z\prec y\); in such a case we use the notation \(x\to y\).
Along the paper, as done in [32], if a polyomino \(\mathcal{P}\) has a structure of a distributive lattice on \(V(\mathcal{P})\), then with abuse of notation we refer to \(\mathcal{P}\) as a distributive lattice. From [32, Proposition 2.3] we know that a finite collection of cells \(\mathcal{P}\) is a parallelogram polyomino if and only if \(\mathcal{P}\) is a simple planar distributive lattice.
Let \(\mathcal{P}\) be a parallelogram polyomino. Let \(\operatorname{rank}(\mathcal{P})=d+1\) as distributive lattice, \(\mathfrak{m}:\min L=x_{0}\to x_{1}\to x_{2}\to\ldots x_{d+1}=\max L\) be a maximal chain of \(\mathcal{P}\) and \(C=[(i,j),(i+1,j+1)]\) be a cell of \(\mathcal{P}\). We say that \(\mathfrak{m}\) has a _descent_ at \(C\) if \(\mathfrak{m}\) passes through the edges \((i,j)\to(i+1,j)\) and \((i+1,j)\to(i+1,j+1)\).
**Remark 3.7**.: Let \(\mathcal{P}\) be a parallelogram polyomino. Observe that \(\mathcal{P}\) satisfies the conditions in Theorem 2.1, so the set of generators of \(I_{\mathcal{P}}\) forms the (quadratic) reduced Grobner basis of \(I_{\mathcal{P}}\) with respect to \(<\). Let \(\Delta(\mathcal{P})\) be the simplicial complex attached to \(\operatorname{in}_{<}(I_{\mathcal{P}})\). For all \(k\geq 0\), it is easy to see that every maximal chain of \(\mathcal{P}\) with \(k\) descents as distributive lattice is a facet of \(\Delta(\mathcal{P})\) with \(k\) steps, and vice versa.
**Lemma 3.8**.: _Let \(\mathcal{P}\) be a frame polyomino and \(\Delta(\mathcal{P})\) be the attached simplicial complex. Let \(F\) be a facet of \(\Delta(\mathcal{P})\). For all maximal edge intervals \(\mathcal{L}\) of \(\mathcal{P}\) there exists \(v\in\mathcal{L}\) belonging to \(F\)._
Proof.: Let \(V\) be a maximal vertical edge interval of \(\mathcal{P}\). We may assume that \(V=[(a,1),(a,n)]\) with \(a\in\{1,\ldots,a_{0}\}\). Consider \(G=\{(i,j)\in F:1\leq i<a,1\leq j\leq n\}\). Observe that \(G\neq\emptyset\) since \((1,1)\in G\). Let \((i_{1},k)\in G\) with \(k=\max\{j:(i,j)\in G\}\). We may assume that \(k\leq b_{0}\). Note that
Figure 6. A facet.
the vertices of \(\mathcal{P}\) in \(V_{1}=\{(i,j):1\leq i<i_{1},k<j\leq n\}\) or in \(V_{2}=\{(i,j):i_{1}<i\leq m,1\leq j<k\}\) do not belong to \(F\), otherwise there exists an inner interval of \(\mathcal{P}\) where two anti-diagonal corners are in \(F\). Moreover, the vertices in \(V_{3}=\{(i,j):i_{1}\leq i\leq a,k<j<n\}\) are not in \(F\) for the maximality of \(k\). Consider \((a,k)\). Let \(\mathcal{I}\) be an inner interval of \(\mathcal{P}\) having \((a,k)\) as anti-diagonal corner and denote by \(v\) the other anti-diagonal corner of \(\mathcal{I}\). Note that \(v\in V_{1}\cup V_{2}\cup V_{3}\), so \(v\notin F\). As a consequence \((a,k)\in F\). All the other cases can be proved in a similar way.
**Discussion 3.9**.: Let \(\mathcal{P}\) be a frame polyomino and \(F\) be a facet of the simplicial complex \(\Delta(\mathcal{P})\) attached to \(\mathrm{in}_{<}(I_{\mathcal{P}})\). From Lemma 3.8 we know that in every maximal edge interval of \(\mathcal{P}\) we can find an element of \(F\). Now, we want to describe how the elements of \(F\) are arranged in \(\mathcal{P}\). Let \(f=(i,j)\) be the maximal vertex of \(F\) in \(V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\) with respect to \(<\) and \(g=(t,l)\) be the minimal vertex of \(F\) in \(V(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]})\) with respect to \(<\). Note that \((1,1),(m,n)\in F\), because they are not anti-diagonal corners of an inner interval of \(\mathcal{P}\). Starting from \((1,1)\), we observe that either \((1,2)\) or \((2,1)\) belongs to \(F\) otherwise we have a contradiction with (2) of Proposition 3.2. Assume that \((1,2)\in F\), so as a consequence either \((1,3)\) or \((2,2)\) is in \(F\). We can iterate the previous argument until \(f\), so it is easy to see that the elements of \(F\) forms a chain in \(\mathcal{P}_{[(1,1),(a_{0},b_{0})]}\) like \(\mathfrak{c}_{1}:(1,1)\to\cdots\to f\). Since a vertex in \(\{(p,q)\in V(\mathcal{P}):p<i,q>j\}\) does not belong to \(F\), starting from \((i,b_{0}+1)\), then either \((i,b_{0}+2)\) or \((i+1,b_{0}+1)\) is in \(F\). As done before, we can continue this procedure until \((a_{k}-1,l)\), so the elements of \(F\) forms a chain in \(\mathcal{P}_{1}\setminus\mathcal{Q}\) like \(\mathfrak{c}_{2}:(i,b_{0}+1)\to\cdots\to(a_{k}-1,l)\). By similar arguments, the elements of \(F\) provide a chain in \(\mathcal{P}_{2}\setminus\mathcal{Q}\) like \(\mathfrak{c}_{3}:(a_{0}+1,j)\to\cdots\to(t,b_{k}-1)\) and another one in \(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]}\) which is \(\mathfrak{c}_{4}:(t,l)\to\cdots\to(m,n)\). In conclusion \(F\) is described by the chains \(\mathfrak{c}_{1}\), \(\mathfrak{c}_{2}\), \(\mathfrak{c}_{3}\) and \(\mathfrak{c}_{4}\). In Figure 7 we show two facets of the simplicial complex attached to a frame polyomino.
Moreover, if we denote by \(n(\mathfrak{c}_{i})\) the number of the descents in \(\mathfrak{c}_{i}\) for all \(i\in[4]\) and by \(n(F)\) the number of the steps in \(F\), we invite the reader to observe that \(\sum_{i=1}^{4}n(\mathfrak{c}_{i})\leq n(F)\leq\sum_{i=1}^{4}n(\mathfrak{c}_{i}) +4\). In fact every descent of \(\mathfrak{c}_{i}\) corresponds to a step of \(F\), so \(\sum_{i=1}^{4}n(\mathfrak{c}_{i})\leq n(F)\), and there could be at most four steps in \(F\) that are not descents of a chain \(\mathfrak{c}_{i}\), as shown in Figure 7 (B) by \(\{a,b,c\}\), \(\{b,d,e\}\), \(\{l,h,m\}\) and \(\{f,g,h\}\), so \(n(F)\leq\sum_{i=1}^{4}n(\mathfrak{c}_{i})+4\).
Let \(\Delta\) be a pure simplicial complex of dimension \(d-1\). We compare two facets of \(\Delta\), say \(F=\{a_{1},\ldots,a_{d}\}\) and \(G=\{b_{1},\ldots,b_{d}\}\), setting \(F<_{\mathrm{lex}}G\) if the left-most non-zero component of the
Figure 7. Examples of facets in a frame polyominoes.
vector \((a_{1}-b_{1},\ldots,a_{d}-b_{d})\) is positive. Finally, denote by \(\mathcal{F}_{\mathcal{P}}\) the set of the facets of \(\Delta(\mathcal{P})\), where \(\mathcal{P}\) is a frame polyomino, and we set \(F_{0}=[(1,1),(1,n)]\cup[(1,n),(m,n)]\cup\big{(}S_{2}\backslash\{(a_{0},b_{0}),( a_{k},b_{k})\}\big{)}\), which is a facet of \(\Delta(\mathcal{P})\) as proved in (3) of Proposition 3.2 and it has no steps.
Now, we are ready to state and prove the main result of this section.
**Theorem 3.10**.: _Let \(\mathcal{P}\) be a frame polyomino. Denote by \(\Delta(\mathcal{P})\) the simplicial complex attached to \(\operatorname{in}_{<}(I_{\mathcal{P}})\) and assume that \(\mathcal{F}_{\mathcal{P}}=\{F_{0},F_{1},\ldots,F_{r}\}\) is lexicographically ordered in descending order. Let \(F\) be a facet of \(\Delta(\mathcal{P})\) different from \(F_{0}\). Set \(\mathcal{S}(F)=\{G\in\mathcal{F}_{\mathcal{P}}:F<_{\operatorname{lex}}G\}\) and \(\mathcal{K}_{F}=\{F\backslash\{v\}:v\text{ is the lower right corner of a step of }F\}\). Then:_
1. \(\langle\mathcal{S}(F)\rangle\cap\langle F\rangle=\langle\mathcal{K}_{F}\rangle\) _and, in particular,_ \(\{F_{0},F_{1},\ldots,F_{r}\}\) _forms a shelling order of_ \(\Delta(\mathcal{P})\)_._
2. _The_ \(i\)_-th coefficient of the_ \(h\)_-polynomial of_ \(K[\mathcal{P}]\) _is the number of the facets of_ \(\Delta(\mathcal{P})\) _having_ \(i\) _steps._
Proof.: (1) Firstly, we show that \(\langle\mathcal{K}_{F}\rangle\subseteq\langle\mathcal{S}(F)\rangle\cap\langle F\rangle\). Let \(F\backslash\{(i,j)\}\), where \((i,j)\) is the lower right corner of a step \(F^{\prime}=\{(i,b),(i,j),(a,j)\}\) of \(F\). Trivially \(F\backslash\{(i,j)\}\subset F\). We may assume that \((i,j),(i,b)\in V(\mathcal{P}_{1})\setminus(V(\mathcal{P}_{[(1,1),(a_{0},b_{0}) ]})\cup V(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]}))\) and \((a,i)\in V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\), as in Figure Figure 8, since all other cases can be proved by similar arguments. From Lemma 3.5, \([(a,j),(i,b)]\) is an inner interval of \(\mathcal{P}\). Since \((i,b),(a,j),(i,j)\in F\), observe that no vertex in
\[\mathcal{N}= \Big{(}[(i,1),(m,b-1)]\setminus\{(i,j)\}\Big{)}\cup\Big{(}[(a+1, 1),(i,j)]\setminus\{(i,j)\}\Big{)}\cup\] \[\cup\Big{(}[(1,j+1),(a-1,n)]\Big{)}\cup\Big{(}[(a,j),(i,b)] \setminus\{(a,j),(i,j),(i,b)\}\Big{)}\]
belongs to \(F\). For instance, \(\mathcal{N}\) is displayed with the blue, red, and grey parts in Figure 8. We consider the set \(H=F\backslash\{(i,j)\}\cup\{(a,b)\}\) and we prove that \(H\) is a facet of \(\Delta(\mathcal{P})\). In order to show that \(H\) is a face of \(\Delta(\mathcal{P})\), it is sufficient to note that there does not exist an inner interval of \(\mathcal{P}\) having \((a,b)\) and another vertex \(w\in F\setminus\{(i,j)\}\) as anti-diagonal corners, otherwise \(w\in\mathcal{N}\) and we get a contradiction. To prove the maximality of \(H\), we observe that if \(H\subset K\) for some face \(K\) of \(\Delta(\mathcal{P})\) then there exists a facet \(K_{max}\) of \(\Delta(\mathcal{P})\) such that \(H\subset K_{max}\), so \(|K_{max}|>|H|=|F|\), which is a contradiction with the pureness of \(\Delta(\mathcal{P})\). Therefore \(H\) is a facet of \(\Delta(\mathcal{P})\). Observe that \(F<_{\operatorname{lex}}H\) since we replace \((i,j)\) in \(F\) with \((a,b)\), so \(H\in\mathcal{S}(F)\). Hence \(F\backslash\{(i,j)\}\in\langle\mathcal{S}(F)\rangle\) and in conclusion \(\langle\mathcal{K}_{F}\rangle\subseteq\langle\mathcal{S}(F)\rangle\cap \langle F\rangle\).
Figure 8. An arrangement of intervals.
Now, we prove \(\langle\mathcal{S}(F)\rangle\cap\langle F\rangle\subseteq\langle\mathcal{K}_{F}\rangle\). Let \(G\) be in \(\langle\mathcal{S}(F)\rangle\cap\langle F\rangle\). Since \(G\in\langle F\rangle\), then \(G=F\backslash\{v_{i}\}_{i\in[t]}\), where \(t\) is an integer such that \(1\leq t\leq|F|\). Without loss of generality, we may assume that \(t=1\) and that \(v_{1}=(i,j)\). Observe that \(G=F\backslash\{v_{1}\}\in\langle\mathcal{S}(F)\rangle\) so there exists a facet \(H\in\mathcal{S}(F)\) such that \(G=F\backslash\{v_{1}\}\subset H\). Since \(\Delta(\mathcal{P})\) is pure and \(F\) and \(H\) are facets of \(\Delta(\mathcal{P})\) with \(F<_{\text{lex}}H\), then we can obtain \(H\) from \(G\) adding a vertex \(w=(k,l)\), where \(l>j\) or \(j=l\) and \(k>i\). Suppose by contradiction that \(v_{1}\) is not the lower right corner of a step in \(F\). With reference to Discussion 3.9, if we add to \(G=F\setminus\{v_{1}\}\) a vertex \(w=(k,l)\) (with \(l>j\) or \(j=l\) and \(k>i\)) then we find an inner interval of \(\mathcal{P}\) having \(w\) and a vertex in \(F\setminus\{v_{1}\}\) as anti-diagonal corners, so a contradiction with (2) of Proposition 3.2. The operation to get \(H\) from \(G\) adding a vertex \(w=(k,l)\) (with \(l>j\) or \(j=l\) and \(k>i\)) can be done just when \(v_{1}\) is the lower right corner of a step \(F^{\prime}\) of \(F\). In fact, in such a case, we can replace \(v_{1}\) in \(F\) with the anti-diagonal corner of the inner interval given by the step \(F^{\prime}\) to get \(H\). Hence \(v_{1}\) is necessarily the lower right corner of a step of \(F\), and in conclusion \(G\in\mathcal{K}_{F}\).
(2) It follows easily from (1) and Proposition 2.2.
**Remark 3.11**.: The previous theorem does not hold for arbitrary polyominoes. Consider the polyomino in Figure 9 and assume that \(V(\mathcal{P})=[(1,1),(6,4)]\). Observe that \(\mathcal{P}\) is a prime polyomino (see [28]) and \(G(\mathcal{P})\) forms the reduced Gröbner basis of \(I_{\mathcal{P}}\) with respect to \(<\), so \(\Delta(\mathcal{P})\) is a shellable simplicial complex. The first three facets of \(\Delta(\mathcal{P})\), lexicographically ordered in descending order, are:
* \(F_{0}=\{(6,4),(5,4),(4,4),(3,4),(2,4),(1,4),(1,3),(5,2),(3,2),(1,2),(1,1)\}\);
* \(F_{1}=\{(6,4),(5,4),(4,4),(3,4),(2,4),(1,4),(1,3),(5,2),(3,2),(3,1),(1,1)\}\);
* \(F_{2}=\{(6,4),(5,4),(4,4),(3,4),(2,4),(1,4),(1,3),(5,2),(5,1),(3,1),(1,1)\}\).
Note that \(F_{2}\) contains the two steps \(\{(1,1),(3,1),(3,4)\}\) and \(\{(3,1),(5,1),(5,2)\}\) but \(\langle F_{0},F_{1}\rangle\cap\langle F_{2}\rangle\) is generated just by \(\{(6,4),(5,4),(4,4),(3,4),(2,4),(1,4),(1,3),(5,2),(3,1),(1,1)\}=F_{2}\setminus \{(5,1)\}\) and not by \(F_{2}\setminus\{(3,1)\}\) and \(F_{2}\setminus\{(5,1)\}\).
## 4. Hilbert series and rook polynomial of frame polyominoes
In this section we study the Hilbert-Poincare series of the coordinate ring attached to a frame polyomino. First of all, we introduce some definitions and notions on a generalization of the rook polynomial, given in [32] for simple polyominoes. Actually, that definition can be extended naturally to a generic polyomino.
Let \(\mathcal{P}\) be a polyomino. Two rooks in \(\mathcal{P}\) are in _non-attacking position_ if they do not belong to the same row or column of cells of \(\mathcal{P}\). A \(k\)_-rook configuration_ in \(\mathcal{P}\) is a configuration of \(k\) rooks which are arranged in \(\mathcal{P}\) in non-attacking positions. Figure 10 shows a \(6\)-rook configuration.
Figure 9. A grid polyomino.
The rook number \(r_{\mathcal{P}}\) is the maximum number of rooks that can be placed in \(\mathcal{P}\) in non-attacking positions. We denote by \(\mathcal{R}_{k}\) the set of all \(k\)-rook configurations in \(\mathcal{P}\), setting conventionally \(\mathcal{R}_{0}=\emptyset\), and note that the set \(\mathcal{R}=\mathcal{R}_{0}\cup\mathcal{R}_{1}\cup\cdots\cup\mathcal{R}_{r_{\mathcal{P}}}\) is a simplicial complex, called the _rook complex_. Two non-attacking rooks in \(\mathcal{P}\) are _switching_ rooks if they are placed in diagonal (resp. anti-diagonal) cells of a rectangle of \(\mathcal{P}\). In such a case we say that the rooks are in a diagonal (resp. anti-diagonal) position. Fix \(k\in\{0,\ldots,r_{\mathcal{P}}\}\). Let \(F\in\mathcal{R}_{k}\) and \(R_{1}\) and \(R_{2}\) be two switching rooks in \(F\) in diagonal (resp. anti-diagonal) position. Let \(R^{\prime}_{1}\) and \(R^{\prime}_{2}\) be the rooks placed in the anti-diagonal (resp. diagonal) cells of that rectangle. Then the set \(F\backslash\{R_{1},R_{2}\}\cup\{R^{\prime}_{1},R^{\prime}_{2}\}\) belongs to \(\mathcal{R}_{k}\). The operation of replacing \(R_{1}\) and \(R_{2}\) by \(R^{\prime}_{1}\) and \(R^{\prime}_{2}\) is called the _switch of \(R_{1}\) and \(R_{2}\)_. This induces the following equivalence relation \(\sim\) on \(\mathcal{R}_{k}\): for \(F_{1},F_{2}\in\mathcal{R}_{k}\), \(F_{1}\sim F_{2}\) if \(F_{2}\) can be obtained from \(F_{1}\) after some switches. We define the quotient set \(\tilde{\mathcal{R}}_{k}=\mathcal{R}_{k}/\sim\). We set \(\tilde{r}_{k}=|\tilde{\mathcal{R}}_{k}|\) for all \(k\in[r_{\mathcal{P}}]\), conventionally \(\tilde{r}_{0}=1\). The _switching rook-polynomial_ of \(\mathcal{P}\) is the polynomial in \(\mathbb{Z}[t]\) defined as \(\tilde{r}_{\mathcal{P}}(t)=\sum_{k=0}^{r_{\mathcal{P}}}\tilde{r}_{k}t^{k}\).
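As an illustration, the numbers \(|\mathcal{R}_{k}|\) of \(k\)-rook configurations can be computed by brute force, reading the non-attacking condition above literally (two rooks attack when their cells share a row or a column index); conventions in which a hole of \(\mathcal{P}\) blocks an attack would only require adjusting the `attack` test. The sketch and its function names are ours.

```python
from itertools import combinations

def attack(c1, c2):
    """Two cells (given by lower left corners) share a row or a column of cells."""
    return c1[0] == c2[0] or c1[1] == c2[1]

def rook_numbers(cells):
    """Return [r_0, r_1, ...], where r_k counts the k-rook configurations."""
    cells = list(cells)
    counts = [1]                      # r_0 = 1 by convention
    k = 1
    while True:
        r_k = sum(1 for combo in combinations(cells, k)
                  if all(not attack(a, b) for a, b in combinations(combo, 2)))
        if r_k == 0:
            return counts             # the rook number is len(counts) - 1
        counts.append(r_k)
        k += 1

# Horizontal domino: a single rook can go in either cell, so r(t) = 1 + 2t.
print(rook_numbers([(0, 0), (1, 0)]))  # -> [1, 2]
```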
In the following proposition, we show a natural representant of an equivalence class of \(\tilde{\mathcal{R}}_{k}\) associated with a frame polyomino.
**Proposition 4.1**.: _Let \(\mathcal{P}\) be a frame polyomino and \(\mathcal{R}=\{R_{1},\ldots,R_{k}\}\) be a \(k\)-rook configuration in \(\mathcal{P}\). Then there exists a \(k\)-rook configuration \(\mathcal{C}=\{R^{\prime}_{1},\ldots,R^{\prime}_{k}\}\) in \(\mathcal{P}\) such that \(\mathcal{C}\sim\mathcal{R}\) and the rooks \(R^{\prime}_{i}\) and \(R^{\prime}_{j}\) are placed in diagonal position, for all switching rooks \(R^{\prime}_{i}\) and \(R^{\prime}_{j}\) in \(\mathcal{C}\) (with \(i\neq j\))._
Proof.: We can distinguish the following cases with reference to Figure 4.
1. Assume that no rook of \(\mathcal{R}\) is in \(\mathcal{Q}\). Then, for all \(i\in[k]\), \(R_{i}\) is placed in \(\mathcal{P}_{1}\setminus\mathcal{Q}\) or in \(\mathcal{P}_{2}\setminus\mathcal{Q}\). Moreover, the rooks in \(\mathcal{P}_{1}\setminus\mathcal{Q}\) are never in attacking and switching position with the rooks in \(\mathcal{P}_{2}\setminus\mathcal{Q}\). Hence, the claim follows immediately from [32, Lemma 3.12] applying to \(\mathcal{P}_{1}\setminus\mathcal{Q}\) and to \(\mathcal{P}_{2}\setminus\mathcal{Q}\).
2. Suppose that the rooks of \(\mathcal{R}\) belong just to \(\mathcal{Q}\). Then the claim follows trivially from [32, Lemma 3.12], since the boxes \(\mathcal{P}_{[(1,1),(a_{0},b_{0})]}\) and \(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]}\) are parallelogram polyominoes.
3. The same conclusion holds if all rooks of \(\mathcal{R}\) are either in \(\mathcal{P}_{1}\) or in \(\mathcal{P}_{2}\).
4. Suppose that there exists a rook of \(\mathcal{R}\) in \(\mathcal{P}_{1}\setminus\mathcal{Q}\), another one in \(\mathcal{P}_{2}\setminus\mathcal{Q}\) and another in \(\mathcal{Q}\). Then there exist \(\mathcal{R}^{\prime},\mathcal{R}^{\prime\prime},\mathcal{R}^{\prime\prime \prime}\subset\mathcal{R}\) non-empty such that all rooks in \(\mathcal{R}^{\prime}\), \(\mathcal{R}^{\prime\prime}\) and \(\mathcal{R}^{\prime\prime\prime}\) are placed respectively in \(\mathcal{P}_{1}\), \(\mathcal{P}_{2}\) and \(\mathcal{Q}\). Obviously \(\mathcal{R}^{\prime\prime\prime}\subset\mathcal{R}^{\prime},\mathcal{R}^{ \prime\prime}\). Since \(\mathcal{P}_{1}\) is a parallelogram polyomino, then from [32, Lemma 3.12] there exists \(\mathcal{C}^{\prime}\sim\mathcal{R}^{\prime}\) such that the switching rooks of \(\mathcal{C}^{\prime}\) are placed in diagonal position two by two. Denote by \(\mathcal{W}\) the subset of \(\mathcal{C}^{\prime}\) of that rooks placed in \(\mathcal{Q}\) and consider \((\mathcal{R}^{\prime\prime}\setminus\mathcal{R}^{\prime\prime\prime})\cup \mathcal{W}\). Since \(\mathcal{P}_{2}\) is a parallelogram polyomino, there exists \(\mathcal{C}^{\prime\prime}\sim(\mathcal{R}^{\prime\prime}\setminus\mathcal{R}^{ \prime\prime\prime})\cup\mathcal{W}\) such that all switching rooks of \(\mathcal{C}^{\prime\prime}\) are placed in diagonal position two by two. Denote by \(\mathcal{Z}\) the subset of \(\mathcal{C}^{\prime\prime}\) of that rooks placed in \(\mathcal{Q}\). The placements of the rooks in \(\mathcal{Z}\) are obtained starting from \(\mathcal{W}\) and switching the rooks in \(\mathcal{Q}\) with the other
Figure 10. An example of a \(6\)-rook configuration in \(\mathcal{P}\).
ones in \(\mathcal{P}_{2}\setminus\mathcal{Q}\), so they remain in a diagonal position with respect the rooks in \(\mathcal{C}^{\prime}\setminus\mathcal{W}\). Hence \((\mathcal{C}^{\prime}\setminus\mathcal{W})\cup\mathcal{C}^{\prime\prime}\) is the claimed configuration.
**Definition 4.2**.: Let \(\mathcal{P}\) be a frame polyomino and \(\mathcal{R}\) be a \(k\)-rook configuration in \(\mathcal{P}\). The \(k\)-rook configuration \(\mathcal{C}\) defined in Proposition 4.1 is called the _canonical configuration_ of \(\mathcal{R}\).
Now we provide some lemmas and a remark which will be useful to prove Theorem 4.6.
**Lemma 4.3**.: _Let \(\mathcal{P}\) be a frame polyomino and \(\Delta(\mathcal{P})\) be the attached simplicial complex. Let \(F\) be a facet of \(\Delta(\mathcal{P})\) with \(k\) steps, with \(k\geq 2\). If \(v\) and \(w\) are the lower right corners of two distinct steps of \(F\) belonging on the same maximal horizontal edge interval of \(\mathcal{P}\) then \(v\in V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\) and \(w\notin V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\)._
Proof.: Let \(G=\{g_{1},v,g_{2}\}\) and \(U=\{u_{1},w,u_{2}\}\) be the two steps of \(F\) having respectively \(v\) and \(w\) as lower right corners. It is not restrictive to assume that \(v<w\). Since \(v\) and \(w\) are on the same maximal horizontal edge interval \(H\) of \(\mathcal{P}\) then \(g_{1},u_{1}\in H\) and \(g_{1}<v\leq u_{1}<w\). By contradiction suppose that \(v\notin V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\) or \(w\in V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\). We examine the three cases. If \(v\notin V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\) and \(w\in V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\), then necessarily \(w<v\), which is a contradiction with the assumption that \(v<w\). Assume that \(v\in V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\) and \(w\in V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\). In such a case, for the structure of \(\mathcal{P}\), \(g_{2}\) and \(w\) are the anti-diagonal corner of an inner interval of \(\mathcal{P}\) but \(g_{2},w\in F\), so we get a contradiction. The same comes when \(v\notin V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\) and \(w\notin V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\). In every case a contradiction arises. Hence the conclusion follows
**Lemma 4.4**.: _Let \(\mathcal{P}\) be a frame polyomino and \(\Delta(\mathcal{P})\) be the attached simplicial complex. Let \(F\) be a facet of \(\Delta(\mathcal{P})\) with \(k\) steps, with \(k\geq 2\). If \(v\) and \(w\) are the lower right corners of two distinct steps of \(F\) belonging on the same maximal vertical edge interval of \(\mathcal{P}\) then \(v\in V(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]})\) and \(w\notin V(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]})\)._
Proof.: The claim follows arguing similarly as done in the proof of Lemma 4.3.
**Remark 4.5**.: Let \(\mathcal{P}\) be a parallelogram polyomino and \(\Delta(\mathcal{P})\) be the simplicial complex attached to \(\operatorname{in}_{<}(I_{\mathcal{P}})\). We recall that, for all \(k\geq 0\), every maximal chain of \(\mathcal{P}\) with \(k\) descents as a distributive lattice is a facet of \(\Delta(\mathcal{P})\) with \(k\) steps, and vice versa. As a consequence of [32, Proposition 3.11, Lemma 3.12], we observe that there is a one-to-one correspondence between the canonical positions in \(\mathcal{P}\) of \(k\) rooks and the facets of \(\Delta(\mathcal{P})\) with \(k\) steps.
Now we are ready to prove one of the crucial results of this work, which provides a bijection between the facets with \(k\) steps of the simplicial complex of a frame polyomino \(\mathcal{P}\) and the canonical positions of \(k\) rooks in \(\mathcal{P}\).
**Theorem 4.6**.: _Let \(\mathcal{P}\) be a frame polyomino and \(\Delta(\mathcal{P})\) be the attached simplicial complex. For all \(k\geq 0\), there exists a bijection between the facets with \(k\) steps of \(\Delta(\mathcal{P})\) and the canonical positions of \(k\) rooks in \(\mathcal{P}\)._
Proof.: The first part of the proof is devoted to defining the desired bijective function, which assigns to each facet of \(\Delta(\mathcal{P})\) a canonical position in \(\mathcal{P}\).
Let \(F\) be a facet of \(\Delta(\mathcal{P})\) with \(k\) steps. If \(k=0\), then we associate with the facet \(F_{I}\cup S^{*}\) defined in the proof of Proposition 3.2 the empty rook configuration. If \(k=1\), then we associate with the facet \(F\) with one step, whose lower right corner is \(v_{F}\), the \(1\)-rook configuration defined by placing the rook in the cell of \(\mathcal{P}\) having \(v_{F}\) as its lower right corner. Assume that \(k\geq 2\). Consider a step \(F^{\prime}\) of \(F\) having \(w\) as its lower right corner, and denote by \(H_{w}\) and \(V_{w}\) respectively the maximal horizontal and vertical edge intervals of \(\mathcal{P}\) containing \(w\). If no other step has a lower right corner belonging to \(H_{w}\) or to \(V_{w}\), then we assign a rook to the cell of \(\mathcal{P}\) having \(w\) as its lower right
corner. Now, suppose there exists a step with a lower right corner \(v=(i,j)\) belonging to \(H_{w}\) or to \(V_{w}\). Assume that \(v\in H_{w}\). It is not restrictive to suppose that \(v<w\). Hence, from Lemma 4.3, we have that \(v\in V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\) and \(w\notin V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\). Therefore we assign a rook to the cell of \(\mathcal{P}\) having \(w\) as its lower right corner and another one to the cell having \((i,b_{0})\) as its lower right corner. Now, assume that \(v\in V_{w}\) and \(w<v\). From Lemma 4.4, we get \(v\in V(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]})\) and \(w\notin V(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]})\). Therefore we assign a rook to the cell of \(\mathcal{P}\) having \(w\) as its lower right corner and another one to the cell having \((a_{k},j)\) as its lower right corner. In this way, we define a configuration \(\mathcal{R}\) of \(k\) rooks in \(\mathcal{P}\), associated with the facet \(F\) of \(\Delta(\mathcal{P})\). We need to show that \(\mathcal{R}\) is a canonical configuration in \(\mathcal{P}\). Firstly, we prove that every pair of rooks of \(\mathcal{R}\) is in a non-attacking position. Let \(R_{1},R_{2}\in\mathcal{R}\) and let \(F_{1}=\{a,v_{1},b\},F_{2}=\{c,v_{2},d\}\) be the two steps related respectively to \(R_{1}\) and \(R_{2}\), having \(v_{1}=(i_{1},j_{1})\) and \(v_{2}=(i_{2},j_{2})\) as lower right corners, with \(v_{1}<v_{2}\). The only case that we need to examine is when \(v_{1}\) and \(v_{2}\) are on the same maximal edge interval of \(\mathcal{P}\). It is not restrictive to assume that \(j_{1}=j_{2}\). From Lemma 4.3, we have that \(v_{1}\in V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\) and \(v_{2}\notin V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\). Obviously, \(R_{1}\) and \(R_{2}\) are not in an attacking position. Moreover, along the row and the column containing \(R_{1}\) we cannot find any placed rook. By contradiction, if such a rook exists, then there exists a step \(G=\{p,q,r\}\) where \(q\) is its lower right corner, with \(q=(i_{1},j)\) for some \(1\leq j\leq n\). But in such a case, we have that either \(p,v_{1}\) or \(a,q\) are the anti-diagonal corners of an inner interval of \(\mathcal{P}\), a contradiction. In a similar way, we can show that no rook can be placed on the same row as \(R_{1}\). Hence all pairs of rooks are in a non-attacking position. Moreover, we observe that two switching rooks cannot be in an anti-diagonal position; otherwise, their lower right corners are the anti-diagonal corners of an inner interval of \(\mathcal{P}\), which is a contradiction. We can conclude that \(\mathcal{R}\) is a canonical position, which we denote by \(\mathcal{C}_{F}\). We denote by \(\mathcal{F}_{\mathcal{P},k}\) the set of the facets of \(\Delta(\mathcal{P})\) with \(k\) steps and by \(\mathfrak{C}_{\mathcal{P}}\) the set of all canonical positions of \(k\) rooks in \(\mathcal{P}\). We introduce the map \(\psi:\mathcal{F}_{\mathcal{P},k}\rightarrow\mathfrak{C}_{\mathcal{P}}\), where \(\psi(F)\) is the canonical position \(\mathcal{C}_{F}\) defined before, for all \(F\in\mathcal{F}_{\mathcal{P},k}\). We prove that \(\psi\) is bijective.
Firstly, we show that \(\psi\) is injective. Let \(F_{1},F_{2}\in\mathcal{F}_{\mathcal{P},k}\) be such that \(\mathcal{C}_{F_{1}}=\mathcal{C}_{F_{2}}\). We show that \(F_{1}=F_{2}\). Suppose by contradiction that \(F_{1}\neq F_{2}\). It is not restrictive to assume that \(F_{1}\setminus F_{2}=\{a\}\) and \(F_{2}\setminus F_{1}=\{b\}\), so we can obtain \(F_{2}\) from \(F_{1}\) by adding \(b\) and removing \(a\). We set \(a=(a_{1},b_{1})\) and, without loss of generality, we may assume that \(a\in V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\). We distinguish two cases.
1. Suppose that \(a\) is the lower right corner of a step \(S=\{p,a,q\}\) of \(F_{1}\). Set \(p=(p_{x},b_{1})\) and \(q=(a_{1},q_{y})\). If \(q_{y}=b_{0}\), then necessarily \(b_{1}=b_{0}-1\) and \(p=(a_{1}-1,b_{1})\), so the unique possibility is to set \(b\) as the upper left corner of the cell having \(p,q\) as diagonal corners. Hence the rook placed in the cell having \(a\) as lower right corner belongs to \(\mathcal{C}_{F_{1}}\) but not to \(\mathcal{C}_{F_{2}}\), so \(\mathcal{C}_{F_{1}}\neq\mathcal{C}_{F_{2}}\), a contradiction. If \(q_{y}>b_{0}\), the same contradiction arises since \(b\) is necessarily the anti-diagonal corner of the inner interval of \(\mathcal{P}\) given by \(p,a,q\). Similarly when \(q_{y}<b_{0}\).
2. Suppose that \(a\) is not the lower right corner of a step of \(F_{1}\). If there are \(p=(a_{1},p_{y})\) and \(q=(a_{k},b_{1})\) in \(F_{1}\) such that there do not exist any \((a_{1},j)\) and \((i,b_{1})\) in \(F_{1}\) for all \(p_{y}<j<b_{1}\) and \(a_{1}<i<q_{x}\), then the only possibility is to set \(b\) as the lower right corner of the inner interval of \(\mathcal{P}\) given by \(p,a,q\). It is easy to see that \(\{p,b,q\}\) is a step of \(F_{2}\), and we get a contradiction similar to the previous case. Suppose there exist \(p=(p_{x},b_{1})\) and \(q=(q_{x},b_{1})\) on the same maximal horizontal edge interval of \(\mathcal{P}\) containing \(a\), with \(p<a<q\); similar arguments hold if the maximal edge interval is vertical. If \(q_{x}>a_{0}\), then necessarily \(a\) is the lower right corner of a step of \(F_{1}\), a contradiction. If \(q_{x}\leq a_{0}\), we set \(W_{1}=\{(i,j)\in V(\mathcal{P}):i>p_{x},j<b_{1}\}\) and \(W_{2}=\{(i,j)\in V(\mathcal{P}):i<q_{x},j>b_{1}\}\), so no vertex of \(W_{1}\cup W_{2}\) belongs to \(F_{1}\); in particular, no vertex of the maximal vertical interval \(V\) containing \(a\), except \(a\), is in \(F_{1}\), so if we remove \(a\) from \(F_{1}\) and add the other vertex \(b\) to obtain \(F_{2}\), we get a contradiction with Discussion 3.9.
The two examined cases both lead to a contradiction. Moreover, all the other situations, when \(a\notin V(\mathcal{P}_{[(1,1),(a_{0},b_{0})]})\), lead to a contradiction by a similar approach. Hence we can conclude that \(F_{1}=F_{2}\), so \(\psi\) is injective.
We now prove that \(\psi\) is surjective. Let \(\mathcal{T}\) be a canonical position of \(k\) rooks; we prove that there exists a facet \(F\) of \(\Delta(\mathcal{P})\) with \(k\) steps such that \(\psi(F)=\mathcal{T}\). Recall that the parallelogram polyomino \(\mathcal{S}\), which is the hole of \(\mathcal{P}\), is determined by \(S_{1}\) and \(S_{2}\) with endpoints \((a_{0},b_{0})\) and \((a_{k},b_{k})\). Referring to Figure 4, we consider several cases.
1. Assume that all rooks of \(\mathcal{T}\) are placed in \(\mathcal{P}_{1}\). From Remark 4.5, there exists a facet \(F_{1}\) of \(\Delta(\mathcal{P}_{1})\) of \(k\) steps corresponding to \(\mathcal{T}\). Set \(F=F_{1}\cup(S_{2}\setminus\{(a_{0},b_{0}),(a_{k},b_{k})\})\). It is easy to see that \(F\) is a facet with \(k\) steps of \(\Delta(\mathcal{P})\) and, in particular, \(\psi(F)=\mathcal{T}\), that is the claim.
2. Suppose that all rooks of \(\mathcal{T}\) are placed in \(\mathcal{P}_{2}\). Applying Remark 4.5, there exists a facet \(F_{2}\) of \(\Delta(\mathcal{P}_{2})\) with \(k\) steps corresponding to \(\mathcal{T}\). Now we examine four sub-cases depending on the placement of the rooks in \(\mathcal{Q}\). 1. Assume that no rook is placed in \(\mathcal{Q}\). Set \(L_{1}=[(a_{0},b_{0}+1),(a_{0},b_{k})]\cup[(a_{0},b_{k}),(a_{k}-1,b_{k})]\) and \(F^{\prime}=F_{2}\cup L_{1}\). As in Case 1, \(F^{\prime}\) is a facet of \(\Delta(\mathcal{P})\) with \(k\) steps and \(\psi(F^{\prime})=\mathcal{T}\), which is the desired claim. 2. Suppose that at least one rook is placed in \(\mathcal{P}_{[(1,1),(a_{0},b_{0})]}\) and none in \(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]}\). We denote by \((a^{\prime},b^{\prime})\) the lower right corner of the cell of \(\mathcal{P}\) where the rightmost rook of \(\mathcal{T}\) in \(\mathcal{P}_{[(1,1),(a_{0},b_{0})]}\) is placed. If there exists a rook in \(\mathcal{P}_{[(a_{0},1),(m,b_{0})]}\), then we denote by \((a^{\prime\prime},b^{\prime\prime})\) the lower right corner of the cell of \(\mathcal{P}\) where the leftmost such rook is placed. Set \(L_{2}=[(a^{\prime},b_{0}+1),(a_{0},b_{k})]\cup[(a_{0},b_{k}),(a_{k}-1,b_{k})]\) and \(F^{\prime}=(F_{2}\setminus[(a^{\prime}+1,b^{\prime\prime}),(a_{0},b^{\prime\prime})])\cup L_{2}\). Then \(F^{\prime}\) is a facet of \(\Delta(\mathcal{P})\) with \(k\) steps and \(\psi(F^{\prime})=\mathcal{T}\). Otherwise, if there does not exist any rook in \(\mathcal{P}_{[(a_{0},1),(m,b_{0})]}\), then we put \(F^{\prime}=(F_{2}\setminus[(a^{\prime}+1,b_{0}),(a_{0},b_{0})])\cup L_{2}\). Then \(F^{\prime}\) is a facet of \(\Delta(\mathcal{P})\) with \(k\) steps and \(\psi(F^{\prime})=\mathcal{T}\). 3. The case when no rook is placed in \(\mathcal{P}_{[(1,1),(a_{0},b_{0})]}\) and at least one in \(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]}\) can be proved similarly, as well as the case when there is at least one rook in \(\mathcal{P}_{[(1,1),(a_{0},b_{0})]}\) and at least one in \(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]}\).
3. Now, we can assume that a rook of \(\mathcal{T}\) is placed in \(\mathcal{P}_{1}\) and another one in \(\mathcal{P}_{2}\). Consider the following two cell intervals: \(E_{h}\) is the set of the cells of \(\mathcal{P}\) with lower right corner \((i,b_{0})\) for all \(i=2,\ldots,a_{0}\), and \(E_{v}\) is the set of the cells of \(\mathcal{P}\) with lower right corner \((a_{k},j)\) for all \(j=b_{k},\ldots,n-1\). Here we need to distinguish several cases depending on the placement of the rooks in \(\mathcal{Q}\), \(E_{h}\) or \(E_{v}\). 1. Suppose first that no rook of \(\mathcal{T}\) is placed in \(\mathcal{Q}\cup E_{h}\cup E_{v}\). Consider the parallelogram sub-polyomino \(\mathcal{K}_{1}=\mathcal{P}_{1}\setminus(\mathcal{Q}\cup E_{h}\cup E_{v})\) of \(\mathcal{P}_{1}\). Note that all rooks of \(\mathcal{T}\) are placed in \(\mathcal{K}_{1}\) and \(\mathcal{P}_{2}\setminus\mathcal{Q}\). Denote by \((t_{x},t_{y})\) and \((r_{x},r_{y})\) respectively the lower right corners of the cells in \(\mathcal{P}_{2}\) where the leftmost and the rightmost rooks of \(\mathcal{T}\) are placed. From Remark 4.5 it follows that there exist two facets \(K_{1}\) and \(K_{2}\) respectively of \(\Delta(\mathcal{K}_{1})\) and \(\Delta(\mathcal{P}_{2})\) corresponding to the canonical positions in \(\mathcal{K}_{1}\) and \(\mathcal{P}_{2}\). We define \(F^{\prime}\) as follows. * If \(t_{y}\leq b_{0}\) and \(r_{x}>a_{k}\), then \(F^{\prime}=K_{1}\cup(K_{2}\setminus([(2,t_{y}),(a_{0},t_{y})]\cup[(r_{x},b_{k}),(r_{x},n-1)]))\); * If \(t_{y}\leq b_{0}\) and \(r_{x}\leq a_{k}\), then \(F^{\prime}=K_{1}\cup(K_{2}\setminus([(2,t_{y}),(a_{0},t_{y})]\cup[(a_{k},b_{k}),(a_{k},n-1)]))\); * If \(t_{y}>b_{0}\) and \(r_{x}\leq a_{k}\), then \(F^{\prime}=K_{1}\cup(K_{2}\setminus([(2,b_{0}),(a_{0},b_{0})]\cup[(a_{k},b_{k}),(a_{k},n-1)]))\); * If \(t_{y}>b_{0}\) and \(r_{x}>a_{k}\), then \(F^{\prime}=K_{1}\cup(K_{2}\setminus([(2,b_{0}),(a_{0},b_{0})]\cup[(r_{x},b_{k}),(r_{x},n-1)]))\). It is easy to see that, in every case, \(F^{\prime}\) is a facet of \(\Delta(\mathcal{P})\) with \(k\) steps and, moreover, that \(\psi(F^{\prime})=\mathcal{T}\), which is the claim. 2. Assume now that no rook of \(\mathcal{T}\) is placed in \(E_{h}\cup E_{v}\) but there exists at least one in \(\mathcal{Q}\). Moreover, we may suppose that a rook is in \(\mathcal{P}_{[(1,1),(a_{0},b_{0})]}\) and another one in \(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]}\), because the other cases can be proved similarly. Observe that if there are no rooks in \(\mathcal{P}_{1}\setminus\mathcal{Q}\) (resp. \(\mathcal{P}_{2}\setminus\mathcal{Q}\)), then we are in Case 2 (resp. Case 1), so the proof is done. Assume that at least one rook is in \(\mathcal{P}_{1}\setminus\mathcal{Q}\) and at least one in \(\mathcal{P}_{2}\setminus\mathcal{Q}\). Let \((t_{x},t_{y})\)
and \((r_{x},r_{y})\) be respectively the lower right corners of the cells in \(\mathcal{P}_{2}\) where the leftmost and the rightmost rooks of \(\mathcal{T}\) are placed. Denote by \((c_{x},c_{y})\) the lower right corner of the cell in \(\mathcal{P}_{[(1,1),(a_{0},b_{0})]}\) where the rightmost rook of \(\mathcal{T}\) is placed, and by \((d_{x},d_{y})\) the lower right corner of the cell in \(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]}\) where the leftmost rook of \(\mathcal{T}\) is placed. We have that \(c_{y}<t_{y}\) and \(r_{x}<d_{x}\). We examine the following cases. * If \(t_{y}<b_{0}\) and \(r_{x}>a_{k}\), then consider \(\mathcal{P}_{1}\) and the parallelogram polyomino \(\mathcal{K}_{2}\) given by the paths \([(a_{0},t_{y}),(r_{x},t_{y})]\cup[(r_{x},t_{y}),(r_{x},b_{k})]\) and \([(a_{0},t_{y}),(a_{0},b_{0})]\cup(S_{2}\setminus\{(a_{0},b_{0}),(a_{k},b_{k})\})\cup[(a_{k},b_{k}),(r_{x},b_{k})]\). From Remark 4.5 there exist two facets \(K_{1}\) and \(K_{2}\) respectively of \(\Delta(\mathcal{P}_{1})\) and \(\Delta(\mathcal{K}_{2})\) corresponding to the canonical positions in \(\mathcal{P}_{1}\) and \(\mathcal{K}_{2}\). Then we set \(F^{\prime}=K_{1}\setminus([(c_{x},t_{y}+1),(c_{x},b_{0})]\cup[(a_{k},d_{y}),(r_{x}-1,d_{y})])\cup K_{2}\setminus\{(a_{0},t_{y}),(r_{x},b_{k})\}\). By construction, it is easy to see that \(F^{\prime}\) is a facet of \(\Delta(\mathcal{P})\) with \(k\) steps and, moreover, that \(\psi(F^{\prime})=\mathcal{T}\). * If \(t_{y}\geq b_{0}\) and \(r_{x}\leq a_{k}\), then we need to consider \(\mathcal{P}_{1}\) and the parallelogram polyomino \(\mathcal{K}_{2}\) given by the paths \([(a_{0}+1,b_{0}-1),(a_{k}+1,b_{0}-1)]\cup[(a_{k}+1,b_{0}-1),(a_{k}+1,b_{k}-1)]\) and \([(a_{0}+1,b_{0}-1),(a_{0}+1,b_{0})]\cup(S_{2}\setminus\{(a_{0},b_{0}),(a_{k},b_{k})\})\cup[(a_{k},b_{k}-1),(a_{k}+1,b_{k}-1)]\). As before, there exist two facets \(K_{1}\) and \(K_{2}\) respectively of \(\Delta(\mathcal{P}_{1})\) and \(\Delta(\mathcal{K}_{2})\) corresponding to the canonical positions in \(\mathcal{P}_{1}\) and \(\mathcal{K}_{2}\), and in such a case the desired facet is \(F^{\prime}=K_{1}\cup K_{2}\setminus\{(a_{0}+1,b_{0}-1),(a_{k}+1,b_{k}-1)\}\). * If \(t_{y}<b_{0}\) and \(r_{x}\leq a_{k}\), then it is sufficient to consider \(\mathcal{P}_{1}\) and the parallelogram polyomino \(\mathcal{K}_{2}\) given by the paths \([(a_{0},t_{y}),(a_{k}+1,t_{y})]\cup[(a_{k}+1,t_{y}),(a_{k}+1,b_{k}-1)]\) and \([(a_{0},t_{y}),(a_{0},b_{0})]\cup(S_{2}\setminus\{(a_{0},b_{0}),(a_{k},b_{k})\})\cup[(a_{k},b_{k}-1),(a_{k}+1,b_{k}-1)]\). As before, \(K_{1}\) and \(K_{2}\) are the two facets respectively of \(\Delta(\mathcal{P}_{1})\) and \(\Delta(\mathcal{K}_{2})\) corresponding to the canonical positions in \(\mathcal{P}_{1}\) and \(\mathcal{K}_{2}\), and \(F^{\prime}=K_{1}\setminus[(c_{x},t_{y}+1),(c_{x},b_{0})]\cup K_{2}\setminus\{(a_{0},t_{y}),(a_{k}+1,b_{k}-1)\}\). * For the remaining case the facet \(F^{\prime}\) can be defined in a similar way. * Suppose now that there exists a rook in \(E_{h}\cup E_{v}\). It is not restrictive to assume that there is a rook both in \(E_{h}\) and in \(E_{v}\), one in \(\mathcal{P}_{[(1,1),(a_{0},b_{0})]}\) and another one in \(\mathcal{P}_{[(a_{k},b_{k}),(m,n)]}\). Moreover, if there is no rook in \(\mathcal{P}_{2}\setminus\mathcal{Q}\), then we are in Case 1, so we can assume that a rook is also in \(\mathcal{P}_{2}\setminus\mathcal{Q}\). 
We denote by \((f_{x},b_{0})\) and \((a_{k},g_{y})\) respectively the lower right corners of the cells of \(E_{h}\) and \(E_{v}\) containing a rook, and we set \((t_{x},t_{y})\), \((r_{x},r_{y})\), \((c_{x},c_{y})\) and \((d_{x},d_{y})\) as before. Here we examine just the case when \(t_{y}<b_{0}\) and \(r_{x}>a_{k}\), because the other ones can be proved using the same approach, together with some further considerations as in (2) of Case 3. Consider now the following four parallelogram polyominoes: \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) are the rectangular polyominoes given respectively by \([(1,1),(f_{x},t_{y})]\) and \([(r_{x},d_{y}),(m,n)]\), \(\mathcal{G}_{1}\) is the parallelogram sub-polyomino of \(\mathcal{P}_{1}\) determined by the paths \([(f_{x}-1,b_{0}),(f_{x}-1,g_{y}+1)]\cup[(f_{x}-1,g_{y}+1),(a_{k},g_{y}+1)]\) and \([(f_{x}-1,b_{0}),(a_{0},b_{0})]\cup S_{1}\cup[(a_{k},b_{k}),(a_{k},g_{y}+1)]\), and \(\mathcal{G}_{2}\) is the parallelogram sub-polyomino of \(\mathcal{P}_{2}\) determined by \([(a_{0},t_{y}),(a_{0},b_{0})]\cup S_{2}\cup[(a_{k},b_{k}),(r_{x},b_{k})]\) and \([(a_{0},t_{y}),(r_{x},t_{y})]\cup[(r_{x},t_{y}),(r_{x},b_{k})]\). From Remark 4.5, there exist four facets \(F_{1}\), \(F_{2}\), \(F_{3}\) and \(F_{4}\) of respectively \(\Delta(\mathcal{B}_{1})\), \(\Delta(\mathcal{B}_{2})\), \(\Delta(\mathcal{G}_{1})\) and \(\Delta(\mathcal{G}_{2})\) corresponding to the canonical positions in these four parallelogram polyominoes. Consider \(F^{\prime}=F_{1}\cup F_{2}\cup(F_{3}\setminus\{(f_{x}-1,b_{0}),(f_{x},b_{0}),(a_{k},g_{y}),(a_{k},g_{y}+1)\})\cup(F_{4}\setminus\{(a_{0},t_{y}),(a_{k},b_{k})\})\). We want to point out that when we remove \((f_{x}-1,b_{0}),(f_{x},b_{0})\) from \(F_{3}\), we are removing the step in \(F_{3}\) corresponding to the rook in \(E_{h}\), but it is substituted by \(\{(f_{x}-1,t_{y}),(f_{x},t_{y}),(f_{x},b_{0}+1)\}\) as a step of \(F^{\prime}\). The same holds for the vertices \((a_{k},g_{y}),(a_{k},g_{y}+1)\). In light of this, it is easy to see that \(F^{\prime}\) is a facet of \(\Delta(\mathcal{P})\) with \(k\) steps and \(\psi(F^{\prime})=\mathcal{T}\).
In conclusion, \(\psi\) is surjective, hence \(\psi\) is bijective.
In Figure 11 we illustrate a canonical position \(\mathcal{C}\) of seven rooks in \(\mathcal{P}\) and the facet of \(\Delta(\mathcal{P})\) attached to \(\mathcal{C}\).
Finally we are ready to prove the main result of this work.
**Theorem 4.7**.: _Let \(\mathcal{P}\) be a frame polyomino. The \(h\)-polynomial of \(K[\mathcal{P}]\) is the switching rook polynomial of \(\mathcal{P}\)._
Proof.: From (2) of Theorem 3.10 we know that the \(i\)-th coefficient of the \(h\)-polynomial of \(K[\mathcal{P}]\) is the number of the facets of \(\Delta(\mathcal{P})\) having \(i\) steps. From Theorem 4.6, the latter is equal to the number of the canonical positions in \(\mathcal{P}\) of \(i\) rooks, which is the \(i\)-th coefficient of the switching rook polynomial of \(\mathcal{P}\).
It is known that, if \(\mathcal{P}\) is a frame polyomino, then \(K[\mathcal{P}]\) is a Cohen-Macaulay domain, so the Castelnuovo-Mumford regularity of \(K[\mathcal{P}]\) equals the degree of the \(h\)-polynomial of \(K[\mathcal{P}]\), which is the rook number of \(\mathcal{P}\) by Theorem 4.7.
**Corollary 4.8**.: _Let \(\mathcal{P}\) be a frame polyomino. Then the Castelnuovo-Mumford regularity of \(K[\mathcal{P}]\) is the rook number of \(\mathcal{P}\)._
We conclude this work by observing that [32, Conjecture 3.2] is stated only for simple polyominoes. Since a frame polyomino is a non-simple polyomino, it is natural to expect that [32, Conjecture 3.2] can be extended to every polyomino.
**Conjecture 4.9**.: _Let \(\mathcal{P}\) be a polyomino. The \(h\)-polynomial of \(K[\mathcal{P}]\) is the switching rook polynomial of \(\mathcal{P}\)._
|
2304.00313 | Cost and Reliability Aware Scheduling of Workflows Across Multiple
Clouds with Security Constraints | Many real-world scientific workflows can be represented by a Directed Acyclic
Graph (DAG), where each node represents a task and a directed edge signifies a
dependency between two tasks. Due to the increasing computational resource
requirements of these workflows, they are deployed on multi-cloud systems for
execution. In this paper, we propose a scheduling algorithm that allocates
resources to the tasks present in the workflow using an efficient
list-scheduling approach based on the parameters cost, processing time, and
reliability. Next, for a given task-resource mapping, we propose a cipher
assignment algorithm that assigns security services to edges responsible for
transferring data in a time-optimal manner subject to a given security
constraint. The proposed algorithms have been analyzed to understand their time
and space requirements. We implement the proposed scheduling and cipher
assignment algorithm and experimented with two real-world scientific workflows
namely Epigenomics and Cybershake. We compare the performance of the proposed
scheduling algorithm with state-of-the-art evolutionary methods. We observe
that our method always outperforms the state-of-the-art methods in terms of cost
and reliability, and is inferior in terms of makespan in some cases. | Atherve Tekawade, Suman Banerjee | 2023-04-01T13:50:11Z | http://arxiv.org/abs/2304.00313v1 | # Cost and Reliability Aware Scheduling of Workflows Across Multiple Clouds with Security Constraints
###### Abstract
Many real-world scientific workflows can be represented by a Directed Acyclic Graph (DAG), where each node represents a task and a directed edge signifies a dependency between two tasks. Due to the increasing computational resource requirements of these workflows, they are deployed on multi-cloud systems for execution. In this paper, we propose a scheduling algorithm that allocates resources to the tasks present in the workflow using an efficient list-scheduling approach based on the parameters cost, processing time, and reliability. Next, for a given task-resource mapping, we propose a cipher assignment algorithm that assigns security services to the edges responsible for transferring data in a time-optimal manner subject to a given security constraint. The proposed algorithms have been analyzed to understand their time and space requirements. We implement the proposed scheduling and cipher assignment algorithms and experiment with two real-world scientific workflows, namely Epigenomics and Cybershake. We compare the performance of the proposed scheduling algorithm with state-of-the-art evolutionary methods. We observe that our method always outperforms the state-of-the-art methods in terms of cost and reliability, and is inferior in terms of makespan in some cases.
Keywords:Multi-Cloud System Workflow Virtual Machine Data-Security Pricing schemes Scheduler
## 1 Introduction
In recent times, _cloud computing_ has emerged as an alternative computing framework and has become popular due to many features, including the 'pay as you use' billing strategy, virtualization, rapid elasticity, on-demand use, and so on [20]. A scientific workflow is defined as a set of tasks with several dependencies among them [32]. Cloud infrastructure has been used extensively for the execution of workflows, and many scientific workflows from different domains, such as _Epigenomics_ in bio-informatics and _Cybershake_ in earthquake engineering, have been successfully deployed on commercial clouds [11; 32]. Recently, due to large-scale computational resource and diversity requirements, multiple cloud providers club together to form a larger infrastructure; such a framework is known as a multi-cloud system [26]. In many cases, it is managed by a third party. Each provider offers its own set of Virtual Machines (VMs) and billing mechanisms.
Motivation. As mentioned in the literature, the resource requirements for the execution of these workflows have recently increased extensively [13]. In such a situation, one solution is to use a multi-cloud system to execute such gigantic workflows. A multi-cloud system is highly heterogeneous with respect to its hardware, software, and network infrastructure [21]. Hence, it is prone to failure. Since a workflow contains dependencies among its tasks, the failure of one task may lead to the failure of the entire workflow. For the successful execution of the workflow, the cloud infrastructure must provide highly reliable computing services. Also, in a multi-cloud system, VMs of two different cloud services may reside in two different locations, and the network link connecting the two servers where these VMs reside may be accessible to adversaries. It may so happen that the two tasks allocated to these VMs have dependencies. So some kind of security measure needs to be imposed before transmitting the data from one VM to the other. Overall, for the successful execution of a workflow in a multi-cloud system, both security and reliability are important issues.
Related Work. In the past few years, there has been extensive study of multi-cloud systems [10; 19; 27]. Several scheduling strategies have been proposed for multi-cloud systems [17; 21]. Recently, a few methodologies for scheduling workflows in multi-cloud systems have been proposed as well [1; 18; 23]. In general, in a scientific workflow, tasks have a very high level of data and control-flow dependency on their predecessor task(s). Recently, several studies have focused on reliability issues in workflow execution [9; 14; 28; 31]. Guo et al. [13] studied the problem of scheduling workflows to minimize the cost subject to deadline constraints using a heuristic approach which is a hybridization of the Genetic Algorithm and Particle Swarm Optimization. Tang et al. [26] considered the problem of scheduling workflows to minimize makespan and cost and to maximize reliability. There are a few studies that consider both the security and the reliability of the workflow. Wen et al. [28] proposed an
algorithm to find deployments for workflows that are reliable, less expensive, and satisfy security requirements in federated clouds. Zhu et al. [33] looked at the problem of task scheduling subject to reliability and security constraints. Apart from this, workflows are also known to be scheduled on multi-processor platforms taking into account makespan, energy consumption, and reliability requirements [29]-[31]. To the best of our knowledge, there is limited literature on the security and reliability of workflow execution. In this paper, we focus on these two aspects of workflow scheduling.
Our Contributions. In this paper, we make the following contributions:
* We propose a system model to study workflow scheduling in a multi-cloud system considering communication costs and billing mechanisms of different cloud providers.
* We integrate the notion of data confidentiality in our model by assuming that data is encrypted using cryptographic ciphers of varying strengths. We develop an optimal assignment of ciphers to data to minimize the total encryption and decryption overhead.
* We also perform the reliability analysis for task execution considering that the failure of the tasks is modeled using Poisson Distribution.
* We propose a list-based heuristic followed by an iterative local-search algorithm that assigns the best resource to a task based on the makespan, cost, and reliability.
* The proposed solution approach has been implemented with real-world scientific workflows and compared with state-of-the-art approaches.
Organization of the Paper. The rest of the paper is organized as follows. Section 2 describes the system model and the problem formulation. The proposed list-based heuristic solution is described in Section 3. Section 4 contains the experimental evaluation of the proposed solution methodology. Finally, Section 5 concludes our study and gives future research directions.
## 2 Systems Model & Problem Formulation
In this section, we describe the system model and formally state our problem. For any positive integer \(n\), \([n]\) denotes the set \(\{1,2,\ldots,n\}\).
### Tasks and Workflow
A task is a job that needs to be executed on the multi-cloud system. A _scientific workflow_ consists of multiple interrelated tasks and is modeled as a _directed acyclic graph_ (DAG) \(G(V,E)\), where \(V(G)=\{v_{i}:i\in[n]\}\) is the set of \(n\) tasks and \(E(G)=\{e_{h}=(v_{i},v_{j}):v_{i},v_{j}\in V(G)\}\) is the set of edges. Here an edge \((v_{i},v_{j})\) signifies a precedence relationship between tasks \(v_{i}\) and \(v_{j}\), i.e., the execution of \(v_{j}\) can only be started after the execution of \(v_{i}\) is finished
and its output is transferred to \(v_{j}\). The weight of the edge \((v_{i},v_{j})\) is denoted by \(w_{v_{i},v_{j}}\) and signifies the amount of data to be transferred. The computational resource requirement of the task \(v_{i}\) is denoted by \(w_{v_{i}}\). Next, we present some definitions related to workflows.
Definition 1 (Predecessor of a Task): For any task \(v_{i}\), \(pred(v_{i})\) denotes the set of immediate predecessor tasks of this task and defined as \(pred(v_{i})=\{v_{j}:(v_{j},v_{i})\in E\}\).
Definition 2 (Successor of a Task): For any task \(v_{i}\), \(succ(v_{i})\) denotes the set of immediate successor tasks of this task and defined as \(succ(v_{i})=\{v_{j}:(v_{i},v_{j})\in E\}\).
Definition 3 (Entry Task): It is a redundant task denoted by \(v_{entry}\) having an outgoing edge with zero weight to every \(v\) such that \(pred(v)=\emptyset\).
Definition 4 (Exit Task): \(v_{exit}\) denotes the exit task. It is a redundant node having an incoming edge with zero weight from every \(v\) such that \(succ(v)=\emptyset\).
For simplicity, we assume that \(v_{entry}=v_{1}\) and \(v_{exit}=v_{n}\).
Definition 5 (Topological Level): For a task \(v\), \(top\_level(v)\) denotes its topological level and defined by Equation No. 1.
\[top\_level(v)=\left\{\begin{array}{cl}0&\text{if }v=v_{1}\\ max\limits_{u\in pred(v)}\{top\_level(u)+1\}&\text{otherwise}\end{array}\right. \tag{1}\]
The topological level can be computed using the Breadth-first search algorithm (BFS) [3].
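For concreteness, the sketch below computes the topological levels with a Kahn-style BFS in Python (the language of our implementation in Section 4); the adjacency-list layout and the function name are illustrative choices, not part of the formal model.

```python
from collections import deque

def topological_levels(succ, tasks):
    """Compute top_level(v) of Equation No. 1 for every task with a BFS
    (Kahn-style) sweep; succ[v] lists the immediate successors of v."""
    indeg = {v: 0 for v in tasks}
    for v in tasks:
        for u in succ.get(v, []):
            indeg[u] += 1
    level = {v: 0 for v in tasks}
    queue = deque(v for v in tasks if indeg[v] == 0)   # only v_entry, by construction
    while queue:
        v = queue.popleft()
        for u in succ.get(v, []):
            level[u] = max(level[u], level[v] + 1)     # Equation No. 1
            indeg[u] -= 1
            if indeg[u] == 0:
                queue.append(u)
    return level
```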
### Multi-Cloud System
Our model consists of \(m\) different cloud providers, each providing their resources. These resources are offered in the form of Virtual Machines (henceforth mentioned as VMs) [24]. Assume that the \(k^{th}\) cloud provider offers a total of \(m_{k}\) different VM types. Let \(VM(k,p)\) denote the \(p^{th}\) type VM offered by the \(k^{th}\) cloud provider. A VM is characterized by its CPU, Disk, and Memory. We assume that VMs have sufficient memory to execute the workflow tasks [22]. Let \(w_{k,p}\) denote the processing capacity of \(VM(k,p)\), which is proportional to the MIPS (amount of computation performed per second). The higher the processing capacity of a VM, the faster it executes a task. The amount of time it takes to execute \(v_{i}\) on \(VM(k,p)\) is denoted by \(T_{exec}[v_{i},VM(k,p)]\) and is given by Equation No. 2. Additionally, we assume that a newly launched VM needs a specific initial boot time denoted by \(T_{boot}[VM(k,p)]\).
\[T_{exec}[v_{i},VM(k,p)]=\frac{w_{v_{i}}}{w_{k,p}} \tag{2}\]
### Network
The bandwidth between two VMs depends on multiple factors like their physical location, the cloud provider, etc. For simplicity, we assume VMs belonging to the same center are connected by a high-speed internal network. In contrast, those belonging to different centers are connected by slower external networks [26]. Let \(B_{k}\) and \(B_{k,k^{\prime}}\) denote the bandwidth of the communication links of the VMs within the \(k\)-th cloud and the bandwidth of the communication links connecting the \(k\)-th and \(k^{{}^{\prime}}\)-th clouds, respectively. Communication time between two tasks \(v_{i}\) and \(v_{j}\) is denoted by \(T_{comm}[v_{i},v_{j}]\). This depends on the amount of data to be transferred and the clouds where the tasks are hosted. This can be computed using Equation No. 3.
\[T_{comm}[v_{i},v_{j}]=\left\{\begin{array}{cl}0&\text{if $v_{i},v_{j}$ are scheduled on the same VM instance}\\ \frac{w_{v_{i},v_{j}}}{B_{k}}&\text{else if $k^{\prime}=k$}\\ \frac{w_{v_{i},v_{j}}}{B_{k,k^{\prime}}}&\text{otherwise}\end{array}\right. \tag{3}\]
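A small Python sketch of Equation No. 3, with hypothetical containers for the bandwidth values:

```python
def comm_time(data, k, k_prime, same_instance, B_intra, B_inter):
    """Equation No. 3: data is w_{v_i,v_j}; B_intra[k] is the internal bandwidth
    of cloud k and B_inter[(k, k_prime)] the bandwidth of the external link."""
    if same_instance:
        return 0.0
    if k == k_prime:
        return data / B_intra[k]
    return data / B_inter[(k, k_prime)]
```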
### Resource Model and Timing Metrics
Due to the flexibility of resource acquisition provided by cloud providers, a client can run any number of instances of a given VM [22]. Hence the number of instances can be at most \(\mathcal{N}\cdot n\), where \(\mathcal{N}=\sum_{k=1}^{m}\sum_{p=1}^{m_{k}}1\), assuming each task is run on every possible VM type. Let \(R=\{vm_{r}:r\in[\mathcal{N}\cdot n]\}\) denote the set of VM instances to be leased (the pool of resources), and let \(LST_{vm_{r}}\) and \(LFT_{vm_{r}}\) denote the start and end times, respectively, for which \(vm_{r}\) is leased. For a task \(v_{i}\), once all the data between \(v_{j}\) and \(v_{i}\), \(\forall v_{j}\in pred(v_{i})\), is received, decryption starts. Then the task \(v_{i}\) is executed on its allocated VM, after which all data between \(v_{i}\) and \(v_{j}\), \(\forall v_{j}\in succ(v_{i})\), is encrypted. Finally, the data needs to be transferred. A VM must be kept on until a task has transferred data to all its successor nodes [22]. Hence the total processing time, denoted by \(PT_{v_{i}}\), includes the decryption, execution, encryption, and communication times, as shown in Equation No. 4.
\[PT_{v_{i}}=\left\{\begin{array}{cl}T_{exec}[v_{i},VM(k,p)]+\sum_{v_{j}\in succ(v_{i})}(T_{enc}[v_{i},v_{j}]+T_{comm}[v_{i},v_{j}])&\text{if }v_{i}=v_{1}\\ \sum_{v_{j}\in pred(v_{i})}T_{dec}[v_{j},v_{i}]+T_{exec}[v_{i},VM(k,p)]&\text{else if }v_{i}=v_{n}\\ \sum_{v_{j}\in pred(v_{i})}T_{dec}[v_{j},v_{i}]+T_{exec}[v_{i},VM(k,p)]+\sum_{v_{j}\in succ(v_{i})}(T_{enc}[v_{i},v_{j}]+T_{comm}[v_{i},v_{j}])&\text{otherwise}\end{array}\right. \tag{4}\]
where \(v_{i}\) is assumed to be executed on a VM of type \(VM(k,p)\). Let the start and finish times of task \(v_{i}\) be denoted by \(ST_{v_{i}}\) and \(FT_{v_{i}}\), respectively. As shown in Equation No. 5, a task can start only after all its predecessors are complete.
\[ST_{v_{i}}=\left\{\begin{array}{cl}0&\text{if $v_{i}=v_{1}$}\\ \max_{v_{j}\in pred(v_{i})}FT_{v_{j}}&\text{otherwise}\end{array}\right. \tag{5}\]
The finish time is given by the sum of start and processing times, as shown in Equation No. 6.
\[FT_{v_{i}}=ST_{v_{i}}+PT_{v_{i}} \tag{6}\]
The makespan is the total time required to execute the workflow which happens when the last task finishes.
\[makespan=FT_{v_{n}} \tag{7}\]
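A short sketch of Equations No. 5-7, assuming the processing times of Equation No. 4 have already been computed and the tasks are given in a topological order (the data layout is illustrative):

```python
def timing(order, pred, PT):
    """Start times, finish times and makespan per Equations No. 5-7.
    order: tasks in a topological order (v_1 first, v_n last);
    pred:  pred[v] lists the immediate predecessors of v;
    PT:    PT[v] is the processing time of Equation No. 4."""
    ST, FT = {}, {}
    for v in order:
        ST[v] = max((FT[u] for u in pred.get(v, [])), default=0.0)  # Equation No. 5
        FT[v] = ST[v] + PT[v]                                       # Equation No. 6
    return ST, FT, FT[order[-1]]                                    # makespan, Equation No. 7
```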
### Pricing Mechanisms
In this study, we consider three popular cloud-providing services: Microsoft Azure (_MA_), Amazon Web Services (_AWS_), and Google Cloud Platform (_GCP_). Each cloud provider charges the customer after a specified billing period \(\tau\). Let \(c_{k,p}\) denote the price for renting \(VM(k,p)\) for a single billing period. Let the cost corresponding to renting VM \(vm_{r}\) be denoted by \(C_{vm_{r}}\). Below we discuss the pricing mechanisms used by different cloud providers [26].
* _MA_ follows a fine-grained scheme where the customer is charged per minute of usage _i.e._\(\tau\) = 1 min. The cost is given in Equation No. 8. \[C_{vm_{r}}=\lceil\frac{LFT_{vm_{r}}-LST_{vm_{r}}}{\tau}\rceil\cdot c_{k,p}\] (8)
* _AWS_ follows a coarse-grained pricing mechanism where the customer is charged per hour of usage, _i.e._, \(\tau\) = 1 hr. The cost can be obtained from Equation No. 8.
* _GCP_ follows a hybrid pricing mechanism where the customer is charged for a minimum of ten minutes, after which per-minute billing is followed. The cost is formulated in Equation No. 9 where \(C_{k,p}\) denotes the price for the first ten minutes and \(\tau\) = 1 min. \[C_{vm_{r}}=C_{k,p}+\max(0,\lceil\frac{LFT_{vm_{r}}-LST_{vm_{r}}-10\cdot\tau}{ \tau}\rceil)\cdot c_{k,p}\] (9)
Table 1 illustrates the prices of various VMs [26]. Apart from this, each cloud provider has a specific pricing scheme associated with sending data out. Generally, ingress data is not charged [8], [6], [7]. Let \(c_{k,k^{{}^{\prime}}}\) denote the price per unit data for sending data between the \(k\)-th and \(k^{{}^{\prime}}\)-th providers. This price depends on many factors: Location where the VMs are hosted, size of data, etc. Table 2 shows the rates for transferring data as taken from the official websites [8], [6], [7]. In the table, across centers refers to locations managed by the same cloud provider but located in different places. The prices vary depending on the location (US, Europe, Asia, etc.). For simplicity, we consider the median value across all locations. Across clouds refers to the transfers across different cloud providers over the external internet. Again this price depends on
the location, so we consider the median value. The cost for transferring data between \(v_{i}\) and \(v_{j}\) is denoted by \(C_{v_{i},v_{j}}\) and is given by Equation No. 10.
\[C_{v_{i},v_{j}}=w_{v_{i},v_{j}}\cdot c_{k,k^{\prime}} \tag{10}\]
where \(v_{i},v_{j}\) are assumed to be scheduled on the \(k\)-th and \(k^{{}^{\prime}}\)-th clouds respectively.
The total execution cost is the sum of the costs associated with task execution on the VMs and the costs of transferring data from one task to another, as shown in Equation No. 11.
\[cost=\sum_{vm_{r}\in R}C_{vm_{r}}+\sum_{i=1}^{n}\sum_{v_{j}\in succ(v_{i})}C_{v _{i},v_{j}} \tag{11}\]
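The three billing rules above can be sketched as follows; the function names are ours, all durations are in minutes, and \(c_{k,p}\) (and \(C_{k,p}\) for GCP) are the table prices:

```python
import math

def cost_ma(lease_minutes, c_kp):
    # Equation No. 8 with tau = 1 minute (fine-grained, per-minute billing)
    return math.ceil(lease_minutes / 1.0) * c_kp

def cost_aws(lease_minutes, c_kp):
    # Equation No. 8 with tau = 1 hour (coarse-grained, per-hour billing)
    return math.ceil(lease_minutes / 60.0) * c_kp

def cost_gcp(lease_minutes, C_kp, c_kp):
    # Equation No. 9: fixed charge for the first ten minutes, then per-minute billing
    return C_kp + max(0, math.ceil(lease_minutes - 10.0)) * c_kp
```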
| _MA_ VM | Per Minute ($) | _AWS_ VM | Per Hour ($) | _GCP_ VM | First Ten Minutes ($) | Per Minute ($) |
|---|---|---|---|---|---|---|
| **B2MS** | 0.0015 | **m1.small** | 0.06 | **n1-highcpu-2** | 0.014 | 0.0012 |
| **B4MS** | 0.003 | **m1.medium** | 0.12 | **n1-highcpu-4** | 0.025 | 0.0023 |
| **B8MS** | 0.006 | **m1.large** | 0.24 | **n1-highcpu-8** | 0.05 | 0.0047 |
| **B16MS** | 0.012 | **m1.xlarge** | 0.45 | **n1-highcpu-16** | 0.1 | 0.0093 |

Table 1: VM costs for different cloud providers
| Provider | Same center | Across centers | Across clouds |
|---|---|---|---|
| _MA_ | Free | $0.08/GB | Up to 100GB: Free; first 10TB: $0.11/GB; next 40TB: $0.075/GB; next 100TB: $0.07/GB; next 350TB: $0.06/GB |
| _AWS_ | Free | $0.02/GB | Up to 100GB: Free; first 10TB: $0.09/GB; next 40TB: $0.085/GB; next 100TB: $0.07/GB; greater than 150TB: $0.05/GB |
| _GCP_ | Free | $0.05/GB | 0-1TB: $0.19/GB; 1-10TB: $0.18/GB; 10TB+: $0.15/GB |

Table 2: Data-Transfer costs for different cloud providers
| Level | Rounds | Plaintexts | Vul | Time (\(\mu s\)/block) |
|---|---|---|---|---|
| 1 | 4 | \(2^{29}\) | 98 | 3.08 |
| 2 | 8 | \(2^{51}\) | 67 | 3.58 |
| 3 | 12 | \(2^{94}\) | 34 | 4.15 |
| 4 | 16 | \(2^{118}\) | 10 | 4.63 |
| 5 | 20 | \(2^{128}\) | 0 | 5.21 |

Table 3: Security comparison of RC6 variants.
### Security Mechanism
Due to data dependencies between tasks, data needs to be transferred between them. To ensure secure communication, different security mechanisms can be employed to achieve the goal of confidentiality. Similar to [16], we focus on achieving data confidentiality using block ciphers. The notion of _Security Level_ is used to measure the strength of a cipher. The security level is proportional to the number of rounds of encryption. The designer has to make a trade-off between security and time: to ensure more security, we must go for a cipher with more encryption rounds. Table 3 captures this trade-off, where each row corresponds to one cipher with level \(L_{i}\). The Plaintexts column represents the number of plaintexts needed for a successful cryptanalysis attack, and the Time column represents the time required to encrypt one block (128 bits) of data. The Vul column represents the vulnerability \(V_{i}\), defined as the logarithm of the ratio of the maximum number of plaintexts required by brute-force search, \(PT_{bf}\), to the number of plaintexts required using a chosen cryptanalysis algorithm, \(PT_{cc}(L_{i})\), as shown in Equation No. 12 [4]. The table illustrates the comparison of parameters for different variants of RC6, a widely used block cipher [15].
\[V_{i}=\log_{2}\lfloor\frac{PT_{bf}}{PT_{cc}(L_{i})}\rfloor \tag{12}\]
The system vulnerability is defined as a weighted sum of the vulnerabilities of each data item, as shown in Equation No. 13[16].
\[V_{system}=\sum_{i=1}^{n}\sum_{v_{j}\in succ(v_{i})}W_{v_{i},v_{j}}\cdot V_{v _{i},v_{j}} \tag{13}\]
where \(W_{v_{i},v_{j}}\) and \(V_{v_{i},v_{j}}\) denote the weight and the vulnerability of the data between \(v_{i}\) and \(v_{j}\), respectively. The maximum vulnerability \(V_{\max}\) is obtained by setting each \(V_{v_{i},v_{j}}\) in Equation 13 to the vulnerability of the cipher with the maximum vulnerability.
The encryption time can be characterized by a linear function of data size and selected security level [30]. If the data between \(v_{i}\) and \(v_{j}\) is encrypted using level \(L_{i}\) security, the time required to encrypt is denoted by \(T_{enc}[v_{i},v_{j}]\) given by Equation No. 14.
\[T_{enc}[v_{i},v_{j}]=\left\{\begin{array}{rl}0&\text{if $v_{i},v_{j}$ are scheduled on the same VM instance}\\ t_{i}\cdot\frac{w_{v_{i},v_{j}}}{B}\cdot\frac{1}{w_{k,p}}&\text{otherwise} \end{array}\right. \tag{14}\]
where \(t_{i}\) denotes the amount of time required to encrypt one block of data using a cipher of level \(L_{i}\), \(B\) denotes the block size (128 bits), and \(v_{i}\) is assumed to be executed on \(VM(k,p)\). The decryption overhead is similar to that of encryption [16] and is denoted by \(T_{dec}[v_{i},v_{j}]\); the only difference is that decryption is executed on the VM executing \(v_{j}\).
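As an illustration, Equations No. 13 and 14 can be sketched as below, where the per-block time comes from the Time column of Table 3; the names and data layout are illustrative:

```python
BLOCK_BITS = 128  # block size B of RC6

def enc_time(data_bits, t_block, w_kp, same_instance=False):
    """Equation No. 14: t_block is the per-block time of the chosen cipher level
    (Table 3) and w_kp the processing capacity of the VM performing the encryption."""
    if same_instance:
        return 0.0
    return t_block * (data_bits / BLOCK_BITS) / w_kp

def system_vulnerability(items):
    """Equation No. 13: items is an iterable of (W, V) pairs, one per data item,
    where W is its weight and V the vulnerability of its assigned cipher."""
    return sum(W * V for W, V in items)
```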
### Reliability Analysis
Reliability is defined as the probability of a failure-free execution of the workflow. The occurrence of a failure is modeled as a Poisson Distribution [11]. Let \(\lambda_{k,p}\) denote the parameter of the distribution for \(VM(k,p)\). Similarly, let \(\lambda_{k}\) and \(\lambda_{k^{{}^{\prime}},k}\) denote the parameters for the communication links for the \(k\)-th cloud and for those between the \(k^{{}^{\prime}}\)-th and the \(k\)-th cloud, respectively. The probability that the data will be transferred successfully between \(v_{i}\) and \(v_{j}\) is denoted by \(\mathcal{R}_{v_{i},v_{j}}\) and is given by Equation No. 15.
\[\mathcal{R}_{v_{i},v_{j}}=\left\{\begin{array}{cl}1&\text{if }v_{i},v_{j}\text{ are scheduled on the same VM instance}\\ e^{-\lambda_{k}\cdot T_{comm}[v_{i},v_{j}]}&\text{else if }k^{\prime}=k\\ e^{-\lambda_{k^{\prime},k}\cdot T_{comm}[v_{i},v_{j}]}&\text{otherwise}\end{array}\right. \tag{15}\]
where \(UV_{req}\) and \(UV_{v_{i},v_{j}}\) denote the upper bounds on the system vulnerability and on the vulnerability of the data item between \(v_{i}\) and \(v_{j}\), respectively. The trade-off is that more security introduces a larger time overhead, leading to a larger makespan, a higher cost, and a lower reliability. Before presenting our algorithm, we first present the pseudocode that converts a task-to-resource mapping into a schedule in Algorithm 1, which is similar to the one in [13] and is described as follows. We initialize the set of resources leased so far, \(R_{curr}\), and the task-to-resource mapping \(\mathcal{M}\) to empty. If a task has predecessors, it can start only after they finish, as illustrated in Lines No. 9, 10. For each task, we find the decryption, encryption and transfer times, the transfer costs, and the reliability associated with transferring data using the \(process\_task()\) routine in Algorithm 2. The routine simply computes the encryption and decryption times using Equation No. 14, the transfer time using Equation No. 3, the transfer cost using Equation No. 10 and the reliability using Equation No. 15. The processing time is computed in Line No. 13 using Equation No. 4. If the resource is already hosted, we know that the task can start only after the current lease finish time \(LFT_{vm_{u_{i}}}\), as shown in Lines No. 15, 16. Otherwise, we first launch the corresponding VM instance, with the lease start time \(LST_{vm_{u_{i}}}\) set as shown in Line No. 19 and the task start time set after booting as shown in Line No. 18. Finally, the finish time of the task is calculated by adding the processing time to the start time, as shown in Line No. 21. In Line No. 22, the lease finish time is updated to the finish time of the task. After processing all the tasks, the makespan is given by the finish time of the last task, as shown in Line No. 24. Lines 25 to 30 compute the cost and reliability associated with executing the tasks.
## 3 Proposed Algorithm
Our solution methodology is divided into two parts: assigning each task to a VM and then the appropriate ciphers. We first describe the process employed to assign ciphers to data because this applies to any task to resource allocation.
### Cipher Assignment
Given a particular task to resource mapping, we wish to assign the ciphers to data so that the total encryption and decryption overhead is minimized subject to the security constraints as stated below.
**Minimize** \(\sum_{i=1}^{n}\sum_{v_{j}\in succ(v_{i})}\left(T_{enc}[v_{i},v_{j}]+T_{dec}[v_{i},v_{j}]\right)\)

**Subject To** \(V_{system}\leq UV_{req}\) and \(V_{v_{i},v_{j}}\leq UV_{v_{i},v_{j}}\) for every edge \((v_{i},v_{j})\in E\).
We use a Dynamic Programming (DP) strategy to solve this problem as presented in Algorithm 3 and described as follows. The DP entry \(dp[h][Vul]\)
describes the least time to encrypt and decrypt the first \(h\) edges and the corresponding cipher for the \(h-\)th edge, as shown in Line No. 21 for the security constraint \(V_{system}\leq Vul\). We begin by finding the VMs that the tasks \(v_{i},v_{j}\) are executed on in Line No 4. Then assignment of all possible ciphers to the \(h-\)th edge is done in Line No. 8. If the constraints in Line No. 9 are satisfied, we calculate the encryption and decryption time in Line No. 10. If this is the first edge (\(h=1\)), we find the least encryption time as in Line No. 13. If not, we find the remaining vulnerability \(V_{remaining}\) in Line No. 16 that the first \(h-1\) edges should satisfy using Equation No. 13. The total time will
be given by the best time to encrypt the first \(h-1\) edges plus the time to encrypt the \(h\)-th edge, which is calculated in Line No. 17. The total minimum time is updated in Line No. 19. The overall recurrence relation is shown in Equation No. 18. Finally, we begin assigning the ciphers to each edge in Lines No. 23-27. We add the ciphers in reverse order, _i.e._, the last edge is assigned its cipher first, then the second last, and so on. As mentioned earlier, the DP entry \(dp[|E|][UV_{req}]\) stores the total minimum time and the cipher corresponding to the last edge for the constraint \(UV_{req}\). Hence, we start with \(Vul=UV_{req}\), as shown in Line No. 22. The corresponding cipher is obtained from the dp table entry \(dp[h][Vul]\) in Line No. 25, and \(Vul\) is updated
to the new constraint for the reduced edge set \(\{e_{1},e_{2},\ldots,e_{h-1}\}\) according to Equation No. 13 in Line No. 27.
\[dp[h][Vul][0]=\left\{\begin{array}{rl}\min_{cipher\in cipher\_tabular}\left(T_{enc}[v_{i},v_{j}]+T_{dec}[v_{i},v_{j}]\right)&\text{if }h=1\\ \min_{cipher\in cipher\_tabular}\left(T_{enc}[v_{i},v_{j}]+T_{dec}[v_{i},v_{j}]+dp[h-1][Vul-W_{v_{i},v_{j}}\cdot V_{v_{i},v_{j}}][0]\right)&\text{otherwise}\end{array}\right. \tag{18}\]
Note that the weights \(W_{v_{i},v_{j}}\) and constraint \(UV_{req}\) may be decimals, so we convert them into integers by multiplying with powers of 10.
```
Input: \(G(V,E)\), Multi-cloud system parameters, Task to resource mapping (\(X\)), Security Cipher Table similar to Table 3 (\(cipher\_tabular\)), \(W_{v_{i},v_{j}},UV_{req},UV_{v_{i},v_{j}}\)
Output: \(\mathcal{C}\)
 1: Initialize DP table \(dp[|E|+1][UV_{req}+1]\);
 2: for \(h\longleftarrow 1\) to \(|E|\) do
 3:     \((v_{i},v_{j})\longleftarrow e_{h}\);
 4:     \(VM(k,p)\longleftarrow type(R[X[v_{i}]])\), \(VM(k^{\prime},p^{\prime})\longleftarrow type(R[X[v_{j}]])\);
 5:     for \(Vul\longleftarrow 0\) to \(UV_{req}\) do
 6:         \(best\_time\longleftarrow INT\_MAX\);
 7:         \(best\_cipher\longleftarrow\) None;
 8:         for \(cipher\in cipher\_tabular\) do
 9:             if \(W_{v_{i},v_{j}}\cdot V_{cipher}\leq Vul\), \(V_{cipher}\leq UV_{v_{i},v_{j}}\) then
10:                 Calculate \(T_{enc}[v_{i},v_{j}],T_{dec}[v_{i},v_{j}]\) corresponding to \(cipher\) and assuming \(v_{i},v_{j}\) are executed on \(VM(k,p),VM(k^{\prime},p^{\prime})\) respectively from Equation No. 14;
11:                 if \(h=1\) then
12:                     if \(T_{enc}[v_{i},v_{j}]+T_{dec}[v_{i},v_{j}]<best\_time\) then
13:                         \(best\_time\longleftarrow T_{enc}[v_{i},v_{j}]+T_{dec}[v_{i},v_{j}]\);
14:                         \(best\_cipher\longleftarrow cipher\);
15:
16:                 else
17:                     \(V_{remaining}=Vul-W_{v_{i},v_{j}}\cdot V_{cipher}\);
18:                     \(total\_time\longleftarrow dp[h-1][V_{remaining}][0]+T_{enc}[v_{i},v_{j}]+T_{dec}[v_{i},v_{j}]\);
19:                     if \(total\_time<best\_time\) then
20:                         \(best\_time\longleftarrow total\_time\);
21:                         \(best\_cipher\longleftarrow cipher\);
22:
23:
24:
25:
26:
27:         \(dp[h][Vul]\longleftarrow[best\_time,best\_cipher]\);
28:
29: \(Vul\longleftarrow UV_{req}\);
30: for \(h\longleftarrow|E|\) to 1 do
31:     \((v_{i},v_{j})\longleftarrow e_{h}\);
32:     \(C_{v_{i},v_{j}}\longleftarrow dp[h][Vul][1]\);
33:     Add tuple \((v_{i},v_{j},C_{v_{i},v_{j}})\) to \(\mathcal{C}\);
34:     \(Vul\longleftarrow Vul-W_{v_{i},v_{j}}\cdot V_{v_{i},v_{j}}\);
35:
36: return \(\mathcal{C}\);
```
**Algorithm 3**Cipher Assignment Algorithm
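For concreteness, the same dynamic program can be sketched in Python as follows, assuming the edge weights and the budget \(UV_{req}\) have already been scaled to integers as discussed above and the per-edge encryption-plus-decryption times of Equation No. 14 have been precomputed for every cipher level; the data layout and helper names are illustrative, not the exact implementation.

```python
def assign_ciphers(edges, cipher_vul, UV_req, UV_edge):
    """Knapsack-style DP of Equation No. 18.
    edges:      list of (times, weight) pairs, where times[c] = T_enc + T_dec of the
                edge when cipher c is used, and weight is the (integer) W of the edge
    cipher_vul: cipher_vul[c] is the (integer) vulnerability V of cipher c
    UV_req:     integer bound on the system vulnerability
    UV_edge:    per-edge vulnerability bound (one common bound, for brevity)
    Returns (minimum total time, chosen cipher per edge) or None if infeasible."""
    INF = float("inf")
    m = len(edges)
    # dp[h][vul] = (best time for the first h edges, cipher chosen for edge h-1)
    dp = [[(INF, None)] * (UV_req + 1) for _ in range(m + 1)]
    dp[0] = [(0.0, None)] * (UV_req + 1)
    for h in range(1, m + 1):
        times, weight = edges[h - 1]
        for vul in range(UV_req + 1):
            best = (INF, None)
            for c, V in enumerate(cipher_vul):
                if V <= UV_edge and weight * V <= vul:
                    t = dp[h - 1][vul - weight * V][0] + times[c]
                    if t < best[0]:
                        best = (t, c)
            dp[h][vul] = best
    if dp[m][UV_req][0] == INF:
        return None
    # Recover the chosen ciphers by walking the table backwards, as in Algorithm 3.
    choice, vul = [None] * m, UV_req
    for h in range(m, 0, -1):
        c = dp[h][vul][1]
        choice[h - 1] = c
        vul -= edges[h - 1][1] * cipher_vul[c]
    return dp[m][UV_req][0], choice
```

For instance, with cipher_vul = [98, 67, 34, 10, 0] (the Vul column of Table 3), the returned list contains one cipher level per edge together with the corresponding minimum total encryption-plus-decryption overhead.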
### VM Allocation
We follow an efficient list-based approach to assign resources to tasks. List-based scheduling generally works in two phases: Ordering the tasks based on some rank followed by assigning the resources to the tasks in order one by one. Firstly, we reduce the size of the resource pool under consideration similar to [22] as follows: Let \(\mathcal{P}\) denote the largest set of tasks that can be executed in parallel. Then \(R\) can be redefined as \(R=\{vm_{r}:r\in[|\mathcal{P}|\cdot n]\}\), by considering a VM instance of each type for each task in \(\mathcal{P}\). One way to find the set \(\mathcal{P}\) is to find the largest set of tasks, all having the same topological level. Since all the tasks have the same topological level, they have no dependency relation. The set \(\mathcal{P}\) computed in this manner may not necessarily be the largest one. Secondly, we define the notion of rank of a task in Definition 6.
Definition 6: \(Rank(v_{i})\) denotes the rank of the task \(v_{i}\), which gives an idea of the worst-case processing time along all paths from \(v_{i}\) to \(v_{n}\) formulated in Equation No. 19.
\[Rank(v_{i})=\left\{\begin{array}{cl}\max_{v_{j}\in succ(v_{i})}\{Rank(v_{j})\}+\overline{T_{exec}}[v_{i},VM(k,p)]+\sum_{v_{j}\in succ(v_{i})}\overline{T_{comm}}[v_{i},v_{j}]&\text{if }v_{i}=v_{1}\\ \overline{T_{exec}}[v_{i},VM(k,p)]&\text{else if }v_{i}=v_{n}\\ \max_{v_{j}\in succ(v_{i})}\{Rank(v_{j})\}+\overline{T_{exec}}[v_{i},VM(k,p)]+\sum_{v_{j}\in succ(v_{i})}\overline{T_{comm}}[v_{i},v_{j}]&\text{otherwise}\end{array}\right. \tag{19}\]
where \(\overline{T_{exec}}[v_{i},VM(k,p)]\) denotes the average execution time of \(v_{i}\) over all VM types and \(\overline{T_{comm}}[v_{i},v_{j}]=\frac{w_{v_{i},v_{j}}}{\overline{B}}\), with \(\overline{B}\) denoting the average bandwidth. The allocation algorithm, known as the List Based Scheduler (LBS), is presented in Algorithm 4 and described as follows. We start allocating tasks bottom-up because the processing time of a task depends mainly on its successor nodes, as seen from Equation No. 4. For this, we sort the tasks in decreasing order of their topological levels, prioritizing tasks with a higher rank, in Lines No. 1, 2. Since tasks with the same topological level can execute simultaneously, we assign different VM instances to each of them, and the set \(vms\_allocated\) keeps track of the VMs allocated so far; it is initialized to the empty set in Line No. 8. We use the \(process\_task\) routine to compute the necessary components of the processing time, denoted by \(PT_{v_{i},vm_{r}}\) when \(v_{i}\) is assumed to be executed on \(vm_{r}\), in Lines No. 12, 13. Since we know where each successor is allocated, we can compute \(transfer\_time\), \(transfer\_cost\) and \(rel\). Since we do not know the cipher assignment yet, we assume \(enc\_time\) and \(dec\_time\) to be zero. The cost and reliability associated with the processing time and the data transfer are calculated in Lines No. 14-17. After this, we calculate a metric \(metric_{v_{i},vm_{r}}\) based on a linear combination of cost, processing time, and reliability, with each parameter normalized using min-max normalization, in Line No. 20. The weights \(\alpha\), \(\beta\), and \(\gamma\) give the relative importance of the parameters. In this study, we give more importance to cost and choose the weights \(\alpha=0.7,\beta=0.2,\gamma=0.1\). A
VM instance with a smaller value of \(metric_{v_{i},vm_{r}}\) is desirable. Hence we sort the instances in increasing order of \(metric_{v_{i},vm_{r}}\) and assign the first VM instance not yet assigned to any previous task with the same topological level, in Lines No. 22-27, to increase parallelizability. Our algorithm differs from the list-based methods used in other algorithms [26], [29] in that it allocates tasks in reverse order, i.e., it allocates a successor task before its predecessor task. This is because, as mentioned earlier, the processing time depends mostly on the successor nodes.
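A compact sketch of this selection step, with the min-max normalization written out explicitly (the candidate table of per-VM cost, processing time and reliability is assumed to be precomputed; the names are illustrative):

```python
def select_vm(candidates, alpha=0.7, beta=0.2, gamma=0.1):
    """Pick the VM instance minimising the min-max-normalised combination of
    cost, processing time and reliability used by LBS (illustrative sketch).
    candidates maps a VM instance to a (cost, processing_time, reliability) tuple."""
    costs = [c for c, _, _ in candidates.values()]
    times = [t for _, t, _ in candidates.values()]
    rels = [r for _, _, r in candidates.values()]

    def norm(x, lo, hi):                      # min-max normalisation with a zero guard
        return 0.0 if hi == lo else (x - lo) / (hi - lo)

    def metric(vm):
        c, t, r = candidates[vm]
        return (alpha * norm(c, min(costs), max(costs))
                + beta * norm(t, min(times), max(times))
                # higher reliability is better, so its complement is normalised
                + gamma * norm(max(rels) - r, 0.0, max(rels) - min(rels)))

    return min(candidates, key=metric)
```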
Lastly, we propose an iterative local-search algorithm (LS) that takes as input a VM allocation and reassigns VMs to tasks to obtain a better makespan, cost, and reliability, as illustrated in Algorithm 5. The tasks are first ordered by their ranks in Line No. 1. For a fixed number of iterations, the algorithm reassigns each VM instance \(vm_{r}\) to each task \(v_{i}\) and computes the corresponding makespan \(s_{v_{i},vm_{r}}\), cost \(cost_{v_{i},vm_{r}}\) and reliability \(\mathcal{R}_{v_{i},vm_{r}}\). Similar to Algorithm 4, we use min-max normalization to come up with a metric, with \(\alpha=0.6,\beta=0.2,\gamma=0.2\), in Line No. 9. The VM instance \(vm_{u_{i}}\) with the least value is assigned to the task \(v_{i}\) in Line No. 10. If there is no change in the allocation, we terminate the procedure, as shown in Lines No. 11, 12.
The overall algorithm proceeds by the below-listed steps:
* Calculate the reduced resource pool \(R\) using BFS [3].
* Use Algorithm 4 to obtain the task to VM allocation.
* (Optional) Use Algorithm 5 to get an improved task to VM allocation.
* Compute the edge to cipher mapping \(\mathcal{C}\) corresponding to the obtained task to VM allocation using Algorithm 3.
* Use Algorithm 1 to compute the final schedule \(\mathcal{S}\).
### Complexity Analysis
The time complexity of the \(process\_task\) routine is \(\mathcal{O}(n)\) because each task can have at most \(n\) successors or predecessors; hence the time needed for one task in Algorithm 1 is \(\mathcal{O}(n)\). For all tasks, the total time needed will be \(\mathcal{O}(n^{2})\). Finally, iterating through \(R_{curr}\) in Line No. 25 of Algorithm 1 takes at most \(\mathcal{O}(\mathcal{N}\cdot n)\) time as \(|R_{curr}|\leq\mathcal{N}\cdot n\). Algorithm 3 is a DP-based approach, and the time needed for computing one entry \(dp[i][j]\) depends on the number of ciphers available for encryption, \(|cipher\_tab|\), as seen from Line No. 7. Hence, the total time needed to compute all entries will be \(\mathcal{O}(|cipher\_tab|\cdot UV_{req}\cdot n^{2})\) as \(|E|\leq n^{2}\). In Algorithm 4, computing the topological levels and ranks takes \(\mathcal{O}(n^{2})\) time. For each task and each VM instance, calculating the processing time, cost and reliability takes \(\mathcal{O}(n)\) time, as seen from the \(process\_task\) routine. Sorting the metrics over all VM instances takes \(\mathcal{O}(|R|\cdot\log|R|)\) time. Hence, for one task, the time required is \(\mathcal{O}(|R|\cdot(n+\log|R|))\). For all tasks, the time complexity will be \(\mathcal{O}(n\cdot|R|\cdot(n+\log|R|))\). As \(|R|\leq\mathcal{N}\cdot n\), the time complexity will be at most \(\mathcal{O}(\mathcal{N}\cdot n^{2}\cdot(n+\log n+\log\mathcal{N}))\). Most approaches
use evolutionary methods for VM allocation [5], [2], [13], which are very time-consuming compared to Algorithm 4. Lastly, we look at the time needed for Algorithm 5. For each reassignment of a VM instance to a task in Line No. 5, we compute the schedule in Line No. 6, which takes \(\mathcal{O}(n^{2})\) time. Hence the total time needed over all iterations, tasks, and VM instances will be \(\mathcal{O}(num\_iter\cdot\mathcal{N}\cdot n^{4})\).
```
Input: \(G(V,E)\), Multi-cloud system parameters, Resource Pool (\(R\)), Task to resource mapping (\(X\))
Output: New task to resource mapping (\(X^{\prime}\))
1  Sort tasks in decreasing order of their Ranks in list \(L\);
2  for \(itr\longleftarrow 1\) to \(num\_iter\) do
3      for \(v_{i}\in L\) do
4          for \(vm_{r}\in R\) do
5              Assign \(vm_{r}\) to \(v_{i}\) to get new schedule \(X^{\prime}\);
6              Calculate \(s_{v_{i},vm_{r}}\), \(cost_{v_{i},vm_{r}}\), \(\mathcal{R}_{v_{i},vm_{r}}\) corresponding to \(X^{\prime}\) using Algorithm 1;
7          Calculate \(s_{v_{i},\min}\), \(s_{v_{i},\max}\), \(cost_{v_{i},\min}\), \(cost_{v_{i},\max}\),
8              \(\mathcal{R}_{v_{i},\min}\), \(\mathcal{R}_{v_{i},\max}\) over all VM instances in \(R\);
9          \(vm_{u_{i}}\longleftarrow\min_{vm_{r}\in R}\{\alpha\cdot\frac{cost_{v_{i},vm_{r}}-cost_{v_{i},\min}}{cost_{v_{i},\max}-cost_{v_{i},\min}}+\beta\cdot\frac{s_{v_{i},vm_{r}}-s_{v_{i},\min}}{s_{v_{i},\max}-s_{v_{i},\min}}+\gamma\cdot\frac{\mathcal{R}_{v_{i},\max}-\mathcal{R}_{v_{i},vm_{r}}}{\mathcal{R}_{v_{i},\max}-\mathcal{R}_{v_{i},\min}}\}\);
10         \(X^{\prime}[v_{i}]\longleftarrow u_{i}\);
11     if \(X^{\prime}=X\) then
12         break;
13 return \(X^{\prime}\);
```
**Algorithm 5** LS Algorithm
## 4 Experimental Evaluation
In this section, we describe the experimental evaluation of the proposed solution approach. We begin by describing the experimental setup and the example task graphs.
### Experimental setup, Task graphs, and Algorithms
We implement the proposed methodology on a workbench system with an i7 12\({}^{\text{th}}\)-generation processor and 16GB memory in Python 3.8.10. For this study, we consider two real-world task graphs, Epigenomics and Cybershake, which are widely used in the literature [26], [25] for comparison. We consider Epigenomics workflows with \(n=24,100\) and Cybershake workflows with \(n=30,100\)[25]. The structure of each workflow can be obtained in XML format from the website [12]. The smaller size of each workflow is called Small (S), and the larger one is called Large (L). Due to space limitations, we only consider two workflows with two sizes each. More information about the workflows can be obtained from [11].
### Competitive Algorithms
LBS is compared with the following methods:
* Gravitational Search Algorithm (GSA) [5]: It aims to minimize the makespan and cost of scheduling the workflow in a single cloud.
* Fault-tolerant Cost-Efficient workflow Scheduling (FCWS) [26]: This is a list-based approach for scheduling workflows on multi-cloud systems that takes into account makespan, cost, and reliability. VMs are assigned based on a linear combination of cost and reliability. Its fault tolerance is based on the hazard rate of the distribution used for reliability analysis; for the Poisson distribution used in our case, the fault-tolerance condition of FCWS does not apply, so we do not consider it. Also, since FCWS does not consider communication cost while assigning VMs, we include it in the cost during VM assignment.
Since LBS, GSA, and FCWS are task-to-VM allocation algorithms, we use Algorithm 3 on top of them to decide the ciphers for encryption. We do not compare with other workflow scheduling algorithms that take security constraints into account, such as [28], because it does not consider makespan.
### Experimental description
We implement the proposed methodology on a workstation having an i5 10\({}^{\text{th}}\)-generation processor and 32GB memory in Python 3.8.10. The parameters of our multi-cloud system are set as in [26]. VMs are set with varying computation capacities from 1 to 32. The average bandwidth internal to a cloud is set to 20Mbps, and the external bandwidth is set to 100Mbps. Pricing mechanisms are set proportional to VM compute capacities as in Table 1, and data transfer costs are set according to Table 2. The boot time of each VM is set to 97 seconds [22]. The number of cloud providers considered is six, two from each of _MA_, _AWS_, and _GCP_. Two cloud providers of the same type are assumed to be located in different centers. The encryption levels and overheads are chosen as in Table 3. We assume that \(w_{k,p}=1\). The security constraints are set as follows [16]: \(UV_{req}=\eta\cdot V_{\max}\), where \(\eta\in[0.1,0.7]\) in steps of 0.1, \(UV_{v_{i},v_{j}}\) is a randomly chosen security cipher from Table 3, and the weights \(W_{v_{i},v_{j}}\in[0.1,1]\). The parameter of the Poisson distribution is set uniformly at random, \(\lambda\in[10^{-8},10^{-7}]\). We use Algorithm 5 on top of each algorithm to measure the improvement obtained. Each experiment is performed 15 times, and average results are reported. The sketch below collects these settings as a configuration object.
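The following configuration dictionary summarizes the explicitly stated settings above; values drawn from Tables 1-3 are omitted because the tables are not reproduced here, and the key names are our own, not part of the original implementation.

```python
experiment_config = {
    "num_cloud_providers": 6,             # two each of MA, AWS, and GCP
    "vm_compute_capacity_range": (1, 32),
    "internal_bandwidth_mbps": 20,        # average, within a cloud
    "external_bandwidth_mbps": 100,       # average, between clouds
    "vm_boot_time_s": 97,
    "w_kp": 1.0,
    "eta_values": [round(0.1 * i, 1) for i in range(1, 8)],  # 0.1 .. 0.7
    "edge_security_weight_range": (0.1, 1.0),
    "poisson_lambda_range": (1e-8, 1e-7),
    "repetitions_per_experiment": 15,
}
```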
Figure 1: Epigenomics graph with \(n=24\)
Figure 2: Epigenomics graph with \(n=100\)

Figure 3: Cybershake graph with \(n=30\)

Figure 4: Cybershake graph with \(n=100\)

1. _Epigenomics:_ For the Small size, GSA gives the least makespan, with LBS having a slightly higher makespan; both perform much better than FCWS, by \(71.93\%\), as seen from Fig. 1 (a). From Fig. 1 (b), LBS gives a much lower cost than GSA (by \(59.37\%\)) and FCWS (by \(77.18\%\)). From Fig. 1 (c), in terms of reliability all algorithms perform similarly, with LBS performing slightly better than GSA (by \(0.15\%\)) and FCWS (by \(0.2\%\)). The application of LS gives a slight improvement in makespan for GSA and LBS and a significant improvement for FCWS, by \(52.74\%\). The improvement in cost is significant for all algorithms: 78.09% for GSA, 90.52% for LBS, and 58.86% for FCWS. The improvement in reliability is marginal. For the Large size, LBS gives the least makespan, outperforming GSA by 13.6% and FCWS by 85.5%, as seen from Fig. 2 (a). From Fig. 2 (b), in terms of cost, LBS performs significantly better than GSA (by 74.66%) and FCWS (by 94.04%). In terms of reliability, GSA and FCWS perform similarly, and LBS outperforms them by 1.06%, as seen from Fig. 2 (c). The application of LS gives a slight improvement in makespan for LBS and significant improvements for GSA (by 13.47%) and FCWS (by 52.74%). The improvement in cost is significant for all algorithms: 96.84% for GSA, 92.02% for LBS, and 85.09% for FCWS.
2. _Cybershake:_ For the Small size, LBS gives the least makespan, outperforming GSA by 33.81% and FCWS by 84.27%, as seen in Fig. 3 (a). From Fig. 3 (b), LBS gives the least cost, outperforming GSA by 29.92% and FCWS by 69.24%. From Fig. 3 (c), in terms of reliability, all algorithms perform similarly. The application of LS degrades the makespan of LBS by 20.71%, while giving a significant improvement for GSA (by 7.34%) and FCWS (by 58.78%). The improvement in cost is significant for all algorithms: 26.39% for GSA, 26.24% for LBS, and 94.44% for FCWS. The improvement in reliability is considerable only for GSA, by 0.53%. For the Large size, LBS gives the least makespan, outperforming GSA by 69.12% and FCWS by 81.16%, as seen from Fig. 4 (a). From Fig. 4 (b), in terms of cost, LBS performs significantly better than GSA (by 53.02%) and FCWS (by 68.47%). In terms of reliability, LBS gives the highest reliability, outperforming FCWS by 0.84% and GSA by 3.92%, as seen from Fig. 4 (c). The application of LS degrades the makespan of LBS by 33.83%, while giving a significant improvement for GSA (by 30.72%) and FCWS (by 54.26%). The improvements in cost and reliability are significant for all algorithms: 42.69% and 2.77% for GSA, 27.73% and 0.84% for LBS, and 89.75% and 1.63% for FCWS.
In summary, as the task graph size increases, performance degrades because there are more task nodes and hence more processing time is needed. As the value of \(\eta\) increases, makespan and cost decrease while reliability increases. This is because, when the security constraint becomes looser, we can use ciphers with less overhead to encrypt data, reducing the processing time. LBS always outperforms the other algorithms in terms of cost and reliability, and in terms of makespan in the majority of cases. The application of LS brings improvements in the makespan, cost, and reliability of GSA and FCWS. For LBS, the application of LS always gives improvements in cost and reliability; the makespan also improves in some cases.
## 5 Conclusion and Future Research Directions
In this paper, we have studied the problem of scheduling the tasks present in a scientific workflow in a multi-cloud system. The goal of the scheduling
is to minimize the makespan and cost and to maximize reliability subject to security constraints. For this problem, we have developed an efficient solution methodology, which has been validated on real-life scientific workflows. In the future, we would like to include fault tolerance in our model and to look for more efficient solution methodologies.
|
2310.13596 | MarineGPT: Unlocking Secrets of Ocean to the Public | Large language models (LLMs), such as ChatGPT/GPT-4, have proven to be
powerful tools in promoting the user experience as an AI assistant. The
continuous works are proposing multi-modal large language models (MLLM),
empowering LLMs with the ability to sense multiple modality inputs through
constructing a joint semantic space (e.g. visual-text space). Though
significant success was achieved in LLMs and MLLMs, exploring LLMs and MLLMs in
domain-specific applications that required domain-specific knowledge and
expertise has been less conducted, especially for \textbf{marine domain}.
Different from general-purpose MLLMs, the marine-specific MLLM is required to
yield much more \textbf{sensitive}, \textbf{informative}, and
\textbf{scientific} responses. In this work, we demonstrate that the existing
MLLMs optimized on huge amounts of readily available general-purpose training
data show a minimal ability to understand domain-specific intents and then
generate informative and satisfactory responses. To address these issues, we
propose \textbf{MarineGPT}, the first vision-language model specially designed
for the marine domain, unlocking the secrets of the ocean to the public. We
present our \textbf{Marine-5M} dataset with more than 5 million marine
image-text pairs to inject domain-specific marine knowledge into our model and
achieve better marine vision and language alignment. Our MarineGPT not only
pushes the boundaries of marine understanding to the general public but also
offers a standard protocol for adapting a general-purpose assistant to
downstream domain-specific experts. We pave the way for a wide range of marine
applications while setting valuable data and pre-trained models for future
research in both academic and industrial communities. | Ziqiang Zheng, Jipeng Zhang, Tuan-Anh Vu, Shizhe Diao, Yue Him Wong Tim, Sai-Kit Yeung | 2023-10-20T15:45:39Z | http://arxiv.org/abs/2310.13596v1 | # MarineGPT: Unlocking Secrets of "Ocean" to the Public
###### Abstract
Large language models (LLMs), such as ChatGPT/GPT-4, have proven to be powerful tools in promoting the user experience as an AI assistant. The continuous works are proposing multi-modal large language models (MLLM), empowering LLMs with the ability to sense multiple modality inputs through constructing a joint semantic space (e.g. visual-text space). Though significant success was achieved in LLMs and MLLMs, exploring LLMs and MLLMs in domain-specific applications that required domain-specific knowledge and expertise has been less conducted, especially for **marine domain**. Different from general-purpose MLLMs, the marine-specific MLLM is required to yield much more **sensitive**, **informative**, and **scientific** responses. In this work, we demonstrate that the existing MLLMs optimized on huge amounts of readily available general-purpose training data show a minimal ability to understand domain-specific intents and then generate informative and satisfactory responses. To address these issues, we propose **MarineGPT**, the first vision-language model specially designed for the marine domain, unlocking the secrets of the ocean to the public. We present our **Marine-5M** dataset with more than 5 million marine image-text pairs to inject domain-specific marine knowledge into our model and achieve better marine vision and language alignment. Our MarineGPT not only pushes the boundaries of marine understanding to the general public but also offers a standard protocol for adapting a general-purpose assistant to downstream domain-specific experts. We pave the way for a wide range of marine applications while setting valuable data and pre-trained models for future research in both academic and industrial communities. Code and data will be available at [https://github.com/hkust-vgd/MarineGPT](https://github.com/hkust-vgd/MarineGPT)
## 1 Introduction
Large language models (LLMs) (Ouyang et al., 2022; OpenAI, 2023) have demonstrated impressive ability on a large range of user-tailored tasks, endowing them with immense potential for applications (Fu et al., 2023; Cox et al., 2023; Tu et al., 2023; Hong et al., 2023). As a general-purpose assistant, ChatGPT/GPT-4 (OpenAI, 2023; Ouyang et al., 2022) can understand human intents and complete various real-world tasks. However, existing LLMs only focus on unimodal text inputs, and a text-only chatbot is clearly suboptimal for a powerful AI assistant. Multi-modal large language models (MLLMs) (Li et al., 2023; Liu et al., 2023; Zhao et al., 2023; Zang et al., 2023; Wu et al., 2023a) empower LLMs to sense multiple modality inputs, follow multi-modal instructions, and align with human intent to complete various real-world tasks in the wild. A vision-language multi-modal model (Zhu et al., 2023) connects a vision encoder and an LLM (Touvron et al., 2023a) for general-purpose visual-and-language understanding. There has been rapid progress in the MLLM field by leveraging billions of image-text pairs (Byeon et al., 2022; Schuhmann et al., 2022) from the public web. Vision-language pre-training (Li et al., 2022; 2023b) is conducted to bridge frozen pre-trained vision encoders and frozen language decoders. However, such general-domain vision-language models still lack sophistication and conception coverage in understanding and delivering domain-specific knowledge. MiniGPT-4 (Zhu et al., 2023) proposed to align a frozen visual encoder with a frozen LLM by optimizing a linear layer to bridge the two modalities. LLaVA (Liu et al., 2023) made the first attempt at visual instruction tuning, enabling LLMs to understand the user intent and generate corresponding responses.
To enable MLLMs to understand the user intent and generate domain-specific answers, further efforts and elaborate designs are specifically required. The dominant way is to collect moderate-scale but high-quality image-text pairs from the specific domain for downstream fine-tuning (Zhang et al., 2023b). LLaVA-Med (Li et al., 2023a) proposed to fine-tune LLaVA (Liu et al., 2023) for the medicine domain to generate domain-specific responses. Instruction-following training samples have also been meticulously curated for the medical domain to enable the model to understand domain-specific user intents. In this work, we propose **MarineGPT**, presented in Fig. 1, the first marine-specific vision-to-language expert model, which could recognize marine objects from the given visual inputs and yield corresponding _sensitive_, _informative_, and _scientific_ responses as a powerful marine AI assistant. Developing an effective marine AI assistant poses three non-trivial challenges: 1) the existing general-purpose LLMs are _inefficient_ at generating reasonable and scientific responses for the marine domain; 2) there is _limited concept coverage_ of the marine domain in existing image-text pairs; and 3) there is limited ability to perform _fine-grained object recognition_, since many marine species share very similar appearances.
To address these challenges, we first scale up marine knowledge collection by crowdsourcing multiple marine sources to gather huge amounts of training data. We present our **Marine-5M** dataset, which contains more than 5 million marine image-text pairs with _redundant conceptions from the marine domain_. We utilize Marine-5M for **marine-specific continuous pre-training**, which could effectively adapt the general-purpose MLLM to a domain-specific expert model, aligning images with domain expertise flexibly defined and managed based on the language descriptions. We then design **50** different **marine-specific instructions** based on the expertise and requirements of marine biologists, which could help MarineGPT understand the user intent. We scalably generate instruction-following training data based on ChatGPT/GPT-4, following our constructed marine-specific instructions. Besides, we also summarize **129** diverse and comprehensive attributes (e.g., distribution, habitat, morphology, reproduction, etc.) of marine objects. We retrieve and collect corresponding descriptions based on the summarized attributes and crawled category annotations from reliable marine knowledge sources (eol, 2018; Yang et al., 2020; E. & L.M, 2016; ree). In these ways, we can effectively inject marine knowledge into the model and enable MarineGPT to generate informative and domain-specific responses. The constructed instruction-following training data (with 1.12 million high-quality marine image-text pairs) are utilized for **instruction-following fine-tuning**, demonstrated in Fig. 2. Finally, we empirically demonstrate that optimizing only the linear layer (Zhu et al., 2023) cannot effectively align the visual signals and the textual descriptions for fine-grained marine object recognition; the Q-Former should also be optimized for more effective and fine-grained marine vision-language alignment.
Figure 1: MarineGPT could perform auto-recognition of various marine objects and yield _diverse_, _domain-specific_, _informative_, and _scientific_ responses associated with the recognized marine object. Best viewed in color.
MarineGPT unlocks the secrets of the ocean to the public. It provides an effective way to perform non-invasive automated species recognition, reducing the manual labor required of domain experts. We observe that MarineGPT could effectively identify marine organisms even in a fine-grained setting, enabling scientific image annotation and decision-making processes. Meanwhile, MarineGPT could also enable complex visual reasoning, knowledge-grounded image description, and multi-turn conversations, allowing citizens, scientists, and the general public to participate actively in marine research and conservation. In summary, our main contributions are listed as follows.
* MarineGPT empowers marine object recognition with question-answering capabilities, which could encourage public participation in marine research and data collection. Both citizens and scientists could contribute to biodiversity monitoring and research efforts, helping to gather valuable information for marine studies.
* We present, to the best of our knowledge, the largest collection of marine image-text pairs for promoting visual-and-language alignment and enhancing the model's perception capabilities. The collected and constructed diverse and comprehensive image-text pairs could significantly change the way marine perception, reasoning, and causal inference are performed.
* We present a marine-specific data generation pipeline to create diverse (image, instruction, output) training data, by sampling marine image-text pairs from broad-coverage, open-source websites and using ChatGPT/GPT-4 to create instruction-following data.
* We provide a unified and valuable perspective for performing domain-specific cross-modal vision-language tasks, which require professional knowledge injection and user intent understanding. Similar approaches could be adopted in other fields like botany, entomology, or ornithology, enabling species recognition and comprehensive data analysis.
## 2 Related Work
### LLMs
LLMs have demonstrated remarkable progress across various Natural Language Processing (NLP) tasks. The impressive performance of ChatGPT (OpenAI, 2022) and GPT-4 (OpenAI, 2023) has motivated the research community to reproduce their success, resulting in a rapidly increasing number of foundation models. These efforts include GPT-3 (Brown et al., 2020), T5 (Raffel et al., 2020), BLOOM (Scao et al., 2022), OPT (Zhang et al., 2022), LLaMA (Touvron et al., 2023), and LLaMA-2 (Touvron et al., 2023), all of which have received significant worldwide interest.
**Pre-training LLMs for domain-specific applications**. In addition to general-purpose LLMs, several LLMs are specifically pre-trained from scratch to cater to domain-specific tasks. These domain-specific pre-trained LLMs, such as Galactica (Taylor et al., 2022) and CodeGen (Nijkamp et al., 2022), are designed to address specific domains like science and program synthesis (Lu et al., 2022).
**Fine-tuning LLMs for domain-specific applications**. These foundation models have resulted in the exponential growth of fine-tuned variants for task-oriented settings, such as chatbots. To enable these LLMs to follow human commands and complete tasks, instruction tuning is introduced to obtain the models' instruction-following versions, such as InstructGPT (Ouyang et al., 2022), Flan-T5 (Chung et al., 2022), and Vicuna (Chiang et al., 2023). Additionally, various domain-specific applications have emerged based on these foundation models, including Dr.LLaMA (Guo et al., 2023) and CancerGPT (Li et al., 2023) for medical applications, as well as FinGPT (Yang et al., 2023) and BloombergGPT (Wu et al., 2023) for finance. Also, several toolboxes for fine-tuning LLMs, like LMFlow (Diao et al., 2023), have been developed. The explosive emergence of these task-oriented LLMs leads to impressive performance gains and opens up new possibilities for various LLM applications.
### MLLMs
As LLMs grow more intelligent and powerful, increasing attention is being paid to extending frozen LLMs to multi-modal tasks, empowering them with the ability to sense multi-modality inputs. Flamingo (Alayrac et al., 2022) pioneered harnessing web-scale image-text data, capitalizing on both vision and language models, and showcased remarkable zero-shot image-text capabilities
in a conversational format. BLIP (Li et al., 2022; 2023b) bootstraps vision-language pre-training from frozen pre-trained image encoders and frozen language decoders. Based on BLIP-2 (Li et al., 2023b), MiniGPT-4 (Zhu et al., 2023) proposed a projection layer to align pre-trained vision encoder to frozen LLMs (e.g. Vicuna (Chiang et al., 2023)), and exhibited respectable zero-shot image comprehension in dialogues. Since only linear layers are required to be optimized, MiniGPT-4 offers high computation efficiency and flexibility for being fine-tuned to different downstream tasks. However, we noticed that only optimizing the linear layers cannot effectively adapt the general-purpose MLLM to a domain-specific expert model.
**MLLMs for domain-specific applications**. A growing body of research has emerged to extend general-purpose MLLMs to domain-specific applications. LLaVA-Med (Li et al., 2023a) proposed to fine-tune LLaVA for the medicine domain. Pathast (Sun et al., 2023) collected over 142K high-quality pathology image-text pairs from various reliable sources. PMC-VQA (Zhang et al., 2023b) proposed to sample biomedical image-text pairs from PMC-15M and adopt GPT-4 to create instructions from the text alone. EgoCOT (Mu et al., 2023) crafted a large-scale embodied planning dataset, consisting of carefully selected videos from the Ego4D dataset for effective embodied planning. In this work, we propose the first marine-specific MLLM, MarineGPT, together with a large collection of high-quality marine image-text pairs that covers a broad spectrum of marine and marine-related conceptions.
## 3 Approaches
### Framework Overview
The framework overview of MarineGPT is demonstrated in Fig. 2. To better utilize the powerful zero-shot ability of LLMs and adapt them to domain-specific experts, we design a two-stage training approach: 1) a marine-specific continuous pre-training on a large collection of aligned marine image-text pairs for acquiring marine vision-language alignment; 2) a further instruction-following fine-tuning based on a moderate scale but high-quality self-constructed image-text pairs with specifically designed instructions to generate more informative, reliable and scientific answers, enhancing the professional ability and usability of MarineGPT. The former continuous pre-training procedure could effectively inject marine knowledge into the LLM. Meanwhile, broad coverage of marine conceptions that have been overlooked in existing general-purpose LLMs and MLLMs, could be introduced and emphasized through continuous pre-training. Moreover, the constructed instruction-following data could promote generating domain-specific responses with the required information delivered and a better understanding of the user intent.
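The sketch below indicates, in simplified PyTorch-style code, which components are kept frozen and which are optimized in the two training stages; the module names, the assumption that stage 2 continues to optimize the same bridging modules, and the helper itself are illustrative assumptions rather than the authors' released code.

```python
def configure_stage(model, stage):
    """Freeze/unfreeze modules for the two MarineGPT training stages.

    `model` is assumed to expose `vit`, `q_former`, `projection` (linear
    layer), and `llm` submodules (torch.nn.Module instances), mirroring the
    BLIP-2-style architecture described in the paper.
    """
    # The ViT image encoder and the LLM decoder stay frozen in both stages.
    for module in (model.vit, model.llm):
        module.requires_grad_(False)

    # Stage 1 (continuous pre-training on Marine-5M) optimizes the Q-Former
    # and the projection layer; stage 2 (instruction-following fine-tuning)
    # is assumed here to continue optimizing the same bridging modules.
    for module in (model.q_former, model.projection):
        module.requires_grad_(True)

    # A larger input resolution is used in stage 2 (384 vs. 224), per the
    # implementation details in Section 4.1.
    image_size = 224 if stage == 1 else 384
    return image_size
```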
### Marine-Specific Continuous Pre-training
The vision-language pre-training based on redundant image-text pairs is an _essential_, _important_, and _foundational_ step for acquiring vision-language knowledge and constructing a joint feature space. To scale up data collection, readily available image-text pairs from open-source websites are used for vision-language pre-training. We adopt the pre-trained ViT encoder of BLIP-2 (Li et al., 2023b), optimized on CC12M (Changpinyo et al., 2021), COCO captions (Lin et al., 2014), Visual Genome (Krishna et al., 2017), and a subset of the LAION dataset (Schuhmann et al., 2021, 2022). In total, there are 14 million image-text pairs in the initial pre-training stage of (Li et al., 2022). The existing general-domain LLMs are less effective for marine scenarios because marine image-text pairs drastically differ from general web content, and current MLLMs cannot generate the required or accurate descriptions based on the given image. There are two main drawbacks when directly utilizing the pre-trained models for instruction-following fine-tuning: 1) the existing large collections of aligned image-text pairs do **not** contain much marine knowledge or many marine visual observations, and their conception coverage is not broad enough to include most marine organisms; 2) the image descriptions are usually short (especially for the LAION dataset (Schuhmann et al., 2021, 2022)) and cannot convey much detailed information. Thus, the pre-trained ViT encoder would still be unable to extract effective feature representations from marine images, and the language decoder would also fail to generate long and meaningful descriptions. To address these two issues, we present the **Marine-5M** dataset with 5 million marine image-text pairs to promote the ability to extract reliable marine features.
Our constructed Marine-5M is the largest marine image-text dataset specially designed for marine research.
Different from the pre-training stage of BLIP-2 (Li et al., 2023), we optimize the Q-Former and the linear layer to bootstrap the marine vision-language pre-training based on our Marine-5M dataset. Considering that the annotations of images crawled from public websites only provide the category, we propose to expand the image captions according to the \(<\)**category annotation\(>\)**. We summarize **129** diverse, comprehensive, and hierarchical attributes of the marine objects, including size, color, shape, feeding diet, distribution, habitat, morphology, reproduction, etc. We generate different attribute descriptions, based on texts crawled from reliable marine websites (e.g., EOL (eol, 2018), FishDB (Yang et al., 2020), Reeflex (ree), and so on), for images with the same category annotation. In this way, we could generate longer image descriptions and perform marine knowledge injection; a minimal sketch of this attribute-based caption expansion is given below. After marine-specific continuous pre-training, the trained model demonstrates a wealth of knowledge in the marine field and offers more accurate and reasonable responses to human inquiries. To further promote the ability of MarineGPT to generate more fine-grained and scientific responses, we construct **1.12 million** high-quality image-text pairs with a large range of instruction-following templates, presenting various tasks of describing the marine organisms in the given image.
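The following sketch illustrates the attribute-based caption expansion in simplified form; the attribute table excerpt and the template wording are illustrative assumptions rather than the exact resources used by the authors.

```python
import random

# Hypothetical excerpt of the attribute knowledge base: for each category
# annotation, descriptions retrieved from marine websites are indexed by
# attribute name (the paper summarizes 129 such attributes).
ATTRIBUTE_KB = {
    "clownfish": {
        "habitat": "lives among sea anemones on coral reefs",
        "distribution": "is found in the warmer waters of the Indian and Pacific Oceans",
        "feeding diet": "feeds on algae, zooplankton, and small crustaceans",
    },
}

def expand_caption(category, num_attributes=2, seed=None):
    """Turn a bare category annotation into a longer, attribute-rich caption."""
    rng = random.Random(seed)
    attrs = ATTRIBUTE_KB.get(category, {})
    # Sample different attributes so that images sharing a category receive
    # diverse captions, as described in the text.
    chosen = rng.sample(list(attrs.items()), k=min(num_attributes, len(attrs)))
    sentences = [f"A photo of a {category}."]
    sentences += [f"The {category} {desc}." for _, desc in chosen]
    return " ".join(sentences)

print(expand_caption("clownfish", seed=0))
```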
Figure 2: The framework overview of the proposed MarineGPT. There are two main procedures in our MarineGPT: 1) marine-specific continuous pre-training on 5 million marine image-text pairs; 2) instruction-following fine-tuning based on constructed high-quality instruction-following image-text pairs to generate _sensitive_, _informative_ and _scientific_ responses.
### Instruction-Following Fine-tuning
Performing continuous pre-training on the marine image-text pairs alone is **insufficient** to build a domain-specific and scientific chatbot for answering diverse marine questions, as the model still lacks the ability to follow diverse marine-specific instructions, even though marine concept coverage is greatly improved after the continuous pre-training. Worse, the existing LLMs may refrain from answering biology and marine questions, or produce **incorrect responses** or **complete hallucinations**. In most scenarios, the existing general-purpose visual assistants behave like a layperson for domain-specific questions and cannot understand the user intent. A simple and effective method of aligning LLMs to human intent is to learn from instruction-following data generated by state-of-the-art LLMs.
#### 3.3.1 Instruction-Following Image-Text Data Construction
MarineGPT adopts a wide range of vision-language instruction data, covering both template-based converted image captions and ChatGPT-generated question-answer data following the user intent, as demonstrated in Fig. 2. Asking insightful marine questions is crucial for acquiring domain-specific knowledge and expanding the understanding of the marine world. We propose to prompt MarineGPT to extend its generation based on different user intents and domain-specific questions about the identified marine organism, such as its habitat, behavior, ecological role, conservation status, etc.
**Instruction-following data construction**. We generate diverse and comprehensive multi-modal vision-language instruction-following data. We construct **50** different instructions and scalably generate instruction-following data based on crawled marine knowledge from both open-source public **marine websites** and **ChatGPT/GPT-4**. We provide some of the designed instructions as follows:
* **Instruction 1.** "_please describe the species **richness** and **distribution** of \(<\)**image category\(>\)**."
* **Instruction 2.** "_please answer what are the **predator-prey relationships** for the \(<\)**image category\(>\)** and how they influence population dynamics."_
* **Instruction 3.** "_please answer how this \(<\)**image category\(>\) interacts with other species in marine ecosystems._"
* **Instruction 4.** "_please answer the **conservation status** of \(<\)**image category\(>\)**, including their **population trends**, **threats** they face, and the **effectiveness of existing conservation measures**."_
The constructed instruction-following data will be utilized to optimize our MarineGPT to generate more user-centric and scientific responses; a simplified sketch of how such (image, instruction, output) records can be assembled is shown below.
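A minimal sketch of assembling instruction-following records from category-annotated images is shown below; the record schema, the template excerpts, and the stand-in answer function are assumptions for illustration only and do not reproduce a specific real API.

```python
import json

INSTRUCTION_TEMPLATES = [
    "please describe the species richness and distribution of {category}.",
    "please answer how this {category} interacts with other species in marine ecosystems.",
    "please answer the conservation status of {category}, including population trends and threats.",
]

def build_records(image_path, category, answer_fn):
    """Create (image, instruction, output) training records for one image.

    `answer_fn(prompt)` stands in for a ChatGPT/GPT-4 call or a lookup in
    the crawled marine knowledge base.
    """
    records = []
    for template in INSTRUCTION_TEMPLATES:
        instruction = template.format(category=category)
        records.append({
            "image": image_path,
            "instruction": instruction,
            "output": answer_fn(instruction),
        })
    return records

# Example usage with a placeholder answer function.
dummy = lambda prompt: "<answer drafted from crawled marine knowledge>"
print(json.dumps(build_records("imgs/clownfish_001.jpg", "clownfish", dummy), indent=2))
```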
**High-quality image-text pair formulation**. Besides leveraging the capabilities of ChatGPT/GPT-4 to formulate the instruction-following training data, we also follow our summarized attribute-based image description generation pipeline (refer to Fig. 3). By combining web-scraped image-text data from open-source marine websites and domain-specific knowledge, we enable MarineGPT to generate more _reliable_, _accurate_, and _scientific_ answers aligned with the user intent. We dedicate substantial efforts to gathering data from diverse sources, including books, biology blogs, scientific articles, and open-source marine websites. The image-text pairs are constructed from a broad spectrum of sources, including public websites, various open-source books, guidebooks, and our privately collected diving and surveying data. After sophisticated and extensive data cleaning and caption alignment, we generate longer and more domain-specific image captions.
We generate **1.12 million** scientifically constructed instruction-following question-answer pairs for marine knowledge foundation model training. This marine-specific data generation approach enables us to generate more image-text pairs with detailed and informative image descriptions. To facilitate marine multi-modal research, we will release our constructed marine-specific instruction-following data and the trained MarineGPT model. The marine and biology researchers could then further formulate their own high-quality vision-language datasets, thereby promoting advancements in the marine foundation model community.
#### 3.3.2 Instruction-Following Fine-tuning
MarineGPT could produce more detailed and scientific image descriptions with much more diversity and align with the user intent. We provide some prompt templates as shown below:
* **Prompt 1.**: "_Can you explain what **ecological** roles the marine organism in the \(<\)**image\(>\)** plays in its **ecosystem**?_"
* **Prompt 2.**: "_Could you describe how **climate change** affects the **distribution**, **reproduction**, and **survival** of the object in the \(<\)**image\(>\)**?_"
* **Prompt 3.**: "_Please determine the **scientific name** of the object in this \(<\)**image\(>\)**, its classification within the **taxonomic hierarchy**, and its **relationships** to **other known species**._"
* **Prompt 4.**: "_Take a look at this image and describe how we can **mitigate** the **human-induced threats** to the object in this \(<\)**image\(>\)**._"
Please note that the image prompts are aligned with the previously constructed 50 instructions. We conduct instruction-following fine-tuning for the marine domain to train, end-to-end, a marine visual-to-language conversational assistant, which could promote the capabilities and usability of AI-powered systems, making them more versatile, context-aware, and user-centric.
## 4 Experiments
### Implementation Details
MarineGPT aims to achieve cross-modality visual-and-language alignment between visual observations and LLMs. To achieve effective visual perception, we adopt the same visual encoder as used in BLIP-2 (Li et al., 2023), a ViT backbone with a pre-trained Q-Former. For the large language model, we adopt LLaMA-13B (Touvron et al., 2023) as the decoder to generate responses. Both the language and vision models are open-sourced. We aim to bridge the gap between the visual encoder and the LLM based on the Q-Former, along with additional linear layers, for computing the similarity between the visual content and captions. Through such pre-training, we could perform the marine vision-language alignment.
Additionally, we convert the parameters of the frozen ViT (Dosovitskiy et al., 2020) and language model to FP16 during the continuous pre-training to increase computational efficiency. In the first, continuous marine-specific pre-training stage, we focus on marine image-text alignment using our constructed Marine-5M dataset with a large range of marine image-text pairs. To relieve the computational burden brought by the large-scale image-text pairs, we adopt a low-resolution input (\(224\times 224\)) for projecting image-text pairs into the same semantic space. In the second, instruction-following fine-tuning stage, we adopt a larger image resolution (\(384\times 384\)) to provide more detailed information, and we only adopt the constructed 1.12 million high-quality marine image-text pairs (with both the template-based image captions and our instruction-following ChatGPT/GPT-4 generated captions) to make MarineGPT more powerful. A simplified sketch of how the frozen encoder and decoder are bridged is given below.
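The following PyTorch-style pseudocode sketches the bridging forward pass implied by the description above (frozen ViT features, Q-Former queries, linear projection, frozen LLaMA decoder). The class, the `embed_fn`/`inputs_embeds` interface, and the default dimensions are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class VisionLanguageBridge(nn.Module):
    """Minimal sketch of the BLIP-2-style bridge described above."""

    def __init__(self, vit, q_former, llm, embed_fn,
                 vis_dim=1408, llm_dim=5120, num_query=32):
        super().__init__()
        self.vit = vit                  # frozen ViT image encoder
        self.q_former = q_former        # trainable Q-Former
        self.llm = llm                  # frozen LLaMA-13B decoder
        self.embed_fn = embed_fn        # maps token ids to LLM input embeddings
        self.query_tokens = nn.Parameter(torch.zeros(1, num_query, vis_dim))
        self.projection = nn.Linear(vis_dim, llm_dim)  # trainable linear bridge

    def forward(self, images, text_tokens):
        with torch.no_grad():                            # image encoder stays frozen
            patch_feats = self.vit(images)               # (B, num_patches, vis_dim)
        queries = self.query_tokens.expand(images.size(0), -1, -1)
        fused = self.q_former(queries, patch_feats)      # (B, num_query, vis_dim)
        visual_prefix = self.projection(fused)           # project into the LLM space
        text_embeds = self.embed_fn(text_tokens)         # (B, T, llm_dim)
        # The projected visual tokens act as a soft prompt prepended to the text,
        # assuming a HuggingFace-style `inputs_embeds` keyword on the decoder.
        llm_inputs = torch.cat([visual_prefix, text_embeds], dim=1)
        return self.llm(inputs_embeds=llm_inputs)
```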
### Marine-5M Dataset
**Dataset construction**. To promote the image diversity of our **Marine-5M** dataset, we include marine image-text pairs from multiple different sources, including YouTube, Flickr, the Google image engine, open-source marine websites (e.g., FishDB (Yang et al., 2020), Fishes-of-Australia (Bray, 2018), Corals-of-World (E. & L.M, 2016), Reeflex (ree), etc.), and publicly available marine datasets. Our Marine-5M dataset contains about **100K** marine and marine-related object conceptions. For video sequences from YouTube, we especially download marine documentary videos, which contain high-quality marine image-text pairs. We utilize the subtitles to clip the keyframes and generate the aligned marine image-text pairs. Human checking and refinement are conducted to obtain more accurate and reliable text descriptions. Diving, surveying, and transect videos are also included to promote the diversity of our Marine-5M dataset. For videos without subtitles, we extract one keyframe every 10 seconds. The images in our Marine-5M dataset exhibit large diversity: the _appearance_, _boundary_, _shape_, and _texture_ of marine objects vary significantly. Some images are captured in the wild with _low visibility_, _background clutter_, _motion blur_, _occlusion_, _dynamic illumination_, _color distortion_, and _optical artifacts_. The **Marine-5M** dataset also contains web-scraped image-text data from open-source marine websites. These web-scraped image-text pairs usually contain the category annotation and a brief summary of the marine object present in the image. Such pairs are usually high-quality and provide clean object conceptions centered in the images, which could promote marine vision-language alignment.

Figure 3: The procedure of our attribute-based image description generation pipeline.
As for the text descriptions, we expand them based on the provided category annotation, since the category annotation or the image caption alone is overly simple. We summarize **129 different attributes** from the category annotation, as demonstrated in Fig. 3, and generate corresponding captions based on these diverse attributes and category annotations. It is worth noting that not all marine object conceptions contain all of these attribute annotations. Images with the same category annotation are assigned different attribute descriptions to promote the diversity of text captions and inject marine knowledge. For those images without any annotation, we use BLIP-2 (Li et al., 2023b) to generate longer image captions for the marine images. We generate **5** diversified image caption candidates through Gaussian sampling in the latent space and then rank all the generated captions by caption length. Finally, we compute the sentence-level similarity between these caption candidates; only candidates whose similarity to the longest caption is less than **0.85** are appended for concatenation. In this manner, we could generate more diversified and longer captions for the marine images, describing more fine-grained color, texture, shape, and geometry information. A minimal sketch of this caption-filtering step is shown below.
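The sketch below illustrates the candidate-filtering logic described above; the similarity function is a placeholder (e.g., cosine similarity of sentence embeddings), as the exact scoring model is not specified here.

```python
def merge_caption_candidates(candidates, similarity_fn, threshold=0.85):
    """Combine diverse BLIP-2 caption candidates into one longer caption.

    `candidates` is a list of generated captions; `similarity_fn(a, b)`
    returns a sentence-level similarity in [0, 1].
    """
    # Rank candidates by length and start from the longest one.
    ranked = sorted(candidates, key=len, reverse=True)
    merged = [ranked[0]]
    for caption in ranked[1:]:
        # Append only captions sufficiently different from the longest one,
        # to avoid concatenating near-duplicate content.
        if similarity_fn(caption, ranked[0]) < threshold:
            merged.append(caption)
    return " ".join(merged)
```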
### Results
**Comparison with MiniGPT-4 and GPT-4V**. We compare our MarineGPT with MiniGPT-4 and GPT-4V in Fig. 4. Compared with MiniGPT-4 and GPT-4V, our MarineGPT could generate long and detailed responses with the corresponding biological information delivered (e.g., the **scientific name** and **common name** of recognized objects). We observed that MiniGPT-4 generates some descriptions ("_a few plants visible in the foreground_" and "_in a coral reef_") that are **not** correct for the given marine images. We attribute this hallucination to ineffective marine visual-language alignment: directly optimizing the linear layer based on general-purpose image-text pairs results in wrong descriptions for marine images. GPT-4V cannot yield detailed marine-specific responses either, and it tends to follow fixed templates to describe the physical appearance of given images. In contrast, our MarineGPT could generate long and meaningful responses that deliver domain-specific knowledge. Both GPT-4V and MarineGPT achieve accurate scientific name generation ("**Arothron nigropunctatus**") and common name generation ("**Black-spotted Pufferfish**") for the given marine object. However, MarineGPT is confused by the dog-faced puffer fish (another common name of Arothron nigropunctatus) and generates a wrong scientific name for it. In contrast, because GPT-4V can access the Internet, it is less likely to make such a mistake. We attribute this failure of MarineGPT to the limitations of the frozen LLM (LLaMA-13B (Touvron et al., 2023a) used in this work): it may still make mistakes on domain-specific knowledge even though we performed marine knowledge injection during the continuous vision-language pre-training. Further fine-tuning of the LLM to the specific domain may effectively alleviate such issues. Besides, both GPT-4V and MarineGPT demonstrate **an aesthetic metric** summarized from common sense. Finally, compared with MiniGPT-4 and GPT-4V, MarineGPT could generate diverse and relevant information associated with the recognized objects.
**Recognizing a wide range of marine objects**. We then provide more recognition results produced by our MarineGPT in Fig. 5. MarineGPT could recognize a wide range of marine creatures and provide the corresponding common and scientific names of recognized objects. It is worth noting that
our method could generate diverse and comprehensive image descriptions for the different recognized marine objects. MarineGPT could also generate corresponding references to provide additional information to the user, as demonstrated in Fig. 5 a). It can generate informative responses describing how the recognized object benefits from its physical appearance, as shown in Fig. 5 b), how we can protect the recognized threatened marine species, as demonstrated in Fig. 5 c), the spatial distribution of "sarpa salpa" in Fig. 5 d), the physical characteristics of the "black-tip reef shark" in Fig. 5 e), and the feeding diet and social behavior of the "bottlenose dolphin" in Fig. 5 f). In this way, MarineGPT could deliver marine knowledge to the users.
**Fine-grained marine object recognition**. MarineGPT could discriminate very similar marine organisms and generate different responses for them. We report the fine-grained object recognition results of MarineGPT in Fig. 6. As demonstrated, MarineGPT could generate the corresponding scientific names for these three different fish species and provide further detailed descriptions from various aspects. This novel fine-grained object recognition ability of MarineGPT, introduced by our Marine-5M dataset, could enable diversity monitoring and reduce the manual labor required of fish experts. The existing general-purpose MLLMs would regard all three of these fish species simply as "fish" and fail to perform fine-grained recognition.
**Multi-round conversation**. Users could upload images of different marine objects and ask different questions, as demonstrated in Fig. 7. MarineGPT could recognize the marine objects present in marine images and generate corresponding responses aligned with the user intent. The generated responses cover the detailed information, addressing the user query. Through a user-friendly interface, MarineGPT could enable individuals to contribute data and increase public awareness about marine biodiversity and its significance.
Figure 4: The comparison between MiniGPT-4, GPT-4V and our MarineGPT. MarineGPT could recognize both the common and scientific names of marine objects and provide diverse information associated with the recognized objects. Best viewed in color.

**Comprehensive analysis**. One specific feature of our MarineGPT is that users can comprehensively understand the recognized marine object. We provide an illustration by choosing one fish image with "Abramites hypselonotus", as demonstrated in Fig. 8. In the multi-round conversations, the users could ask any question they are interested in about the recognized marine species. MarineGPT could generate sensitive, domain-specific, and scientific responses for the users. Through training based on the instruction-following multi-modal data, our model could understand the user intent and generate the required correspondences.
**Failure cases**. There are still some failure cases in our MarineGPT. It will refrain from generating long and informative responses for images that contain instances that are not strictly "marine creatures", as reported in Fig. 9 a), b), and c). We have also explored whether MarineGPT suffers from knowledge forgetting in Fig. 9 d) and e). In d), MarineGPT could effectively recognize the horse and describe its behavior, but it then generates bizarre responses. As illustrated in Fig. 9 e), MarineGPT could recognize the instance accurately, but it misattributes the recognized organism to another species during the explanation, leading to in-context hallucination. With feedback from the user, MarineGPT could then provide the corresponding informative responses. MarineGPT also makes similar mistakes for marine images, as demonstrated in Fig. 9 f). We attribute these failures to the limited ability of the LLM in the marine field and the lack of instruction-following data for such conceptions. Fine-tuning the LLM to the marine domain with corresponding instruction-following data may effectively alleviate such issues.
Figure 5: MarineGPT could recognize a wide range of marine objects and yield comprehensive marine and biological knowledge delivered to the users so that the users could obtain a full understanding of the recognized marine objects.
### Specialized Features of MarineGPT
Our MarineGPT has the following potential applications and broader impact on the marine field.
* **Scale up marine organism recognition**. By integrating machine learning algorithms, MarineGPT could continually learn from user interactions, enhancing its species recognition accuracy and knowledge base over time. Scientists in marine biology and related fields can use the system for species identification, data retrieval, and ecological research. This accelerates the acquisition of biodiversity data, supporting research and decision-making processes.

Figure 6: Our MarineGPT could accurately recognize similar marine objects under the fine-grained setting.

Figure 7: Our MarineGPT could understand diverse user intents and yield corresponding informative responses.
* **Monitoring**. Our MarineGPT could be effectively integrated for assessing marine species diversity, abundance data, and community structure. Furthermore, we could also perform long-term marine species monitoring, which provides a unique opportunity to quantitatively examine changes in benthic habitat over time. It not only aids in marine object identification but also offers valuable insights into marine organisms' conservation status and ecological importance. This information could be used to prioritize conservation actions and monitor endangered or threatened species.
* **Centralized platform**. Our MarineGPT provides a centralized platform for marine researchers, biologists, and institutions to share data, enable collaboration on marine research, and contribute to a global repository of marine species information. As the system recognizes and answers questions in a nearly real-time manner, it would facilitate on-the-spot data collection and recognition during field research or citizen science efforts. In this way, we can enable rapid and large-scale data collection, surpassing the limitations of manual expert identification efforts. Such a large-scale marine database is crucial for understanding species interactions, feeding diets, and ecosystem dynamics, leading to better conservation and management strategies for marine ecosystems.
* **Interdisciplinary research**. The system serves as an educational tool, fostering greater understanding and appreciation for marine life among users. It encourages citizen science participation and promotes environmental awareness and stewardship. Our method and MarineGPT empower users to access information on endangered or threatened species, aiding conservationists and policymakers in implementing targeted conservation measures to protect marine biodiversity. MarineGPT could also be integrated into educational tools, enhancing marine biology education and fostering a deeper appreciation for marine life among students and the public. Students, educators, and the general public could engage with the system, fostering a deeper understanding and appreciation for marine life and conservation. Collaborative efforts would improve the quality and accessibility of marine object information globally.

Figure 8: MarineGPT could yield comprehensive marine and biological knowledge delivered to the users so that the users could obtain a full understanding of the recognized marine organism.
* **General public access**. Researchers and enthusiasts could quickly access species information without requiring manual identification or extensive literature searches. Conservationists can access information on endangered species and use the system's data to make informed decisions for marine conservation efforts. Teachers and students in marine biology or environmental studies can utilize the system as an educational tool to enhance learning about marine species and ecosystems. Enthusiastic individuals interested in marine life can actively contribute to biodiversity monitoring and research by using the system during their marine observations or expeditions. Scuba divers, snorkelers, and ocean enthusiasts can utilize the system to identify marine species encountered during their underwater adventures.
## 5 Discussions
**Hallucination**. Our MarineGPT inherits the hallucination problem from the frozen LLMs. Future advancements in more powerful LLMs are expected to alleviate this issue. Moreover, it is highly worthwhile to explore appropriate approaches to promote the ability of LLMs for domain-specific understanding. Through this, we could generate more domain-specific descriptions and knowledge for the images. We leave this as our future work.
**Video-centric MarineGPT**. Marine videos have not yet been integrated into the chat interface. The reason probably lies in the difficulty of accurately understanding non-static visual scenes. Besides, mitigating the modality gap between video and text (Li et al., 2023; Zhang et al., 2023a) typically requires handling temporal information while simultaneously considering both visual and audio signals, which is more challenging than bridging the gap between image and text. Our next step is to analyze videos with their temporal context, summarizing the relationships between different marine organisms. Massive video-caption pairs, as well as higher-quality and domain-specific visual-instruction-tuning datasets, are urgently required.
Figure 9: The failure cases of proposed MarineGPT.
## 6 Conclusion
In this paper, we propose the first marine-specific multi-modal large language model, MarineGPT, which could generate more sensitive, informative, and scientific responses than existing general-purpose MLLMs. We present the largest marine image-text dataset with 5 million pairs and instruction-following cross-modality vision-language data for adapting a general-purpose AI assistant to a marine expert model. We propose a standard and valuable pipeline to develop a domain-specific expert model that involves specific knowledge. We will release our trained models and designed instructions to the research community to foster future research in both academic and industrial communities.
|
2308.09784 | Dynamics and Geometry of Entanglement in Many-Body Quantum Systems | A new framework is formulated to study entanglement dynamics in many-body
quantum systems along with an associated geometric description. In this
formulation, called the Quantum Correlation Transfer Function (QCTF), the
system's wave function or density matrix is transformed into a new space of
complex functions with isolated singularities. Accordingly, entanglement
dynamics is encoded in specific residues of the QCTF, and importantly, the
explicit evaluation of the system's time dependence is avoided. Notably, the
QCTF formulation allows for various algebraic simplifications and
approximations to address the normally encountered complications due to the
exponential growth of the many-body Hilbert space with the number of bodies.
These simplifications are facilitated through considering the patterns, in lieu
of the elements, lying within the system's state. Consequently, a main finding
of this paper is the exterior (Grassmannian) algebraic expression of many-body
entanglement as the collective areas of regions in the Hilbert space spanned by
pairs of projections of the wave function onto an arbitrary basis. This latter
geometric measure is shown to be equivalent to the second-order Renyi entropy.
Additionally, the geometric description of the QCTF shows that characterizing
features of the reduced density matrix can be related to experimentally
observable quantities. The QCTF-based geometric description offers the prospect
of theoretically revealing aspects of many-body entanglement, by drawing on the
vast scope of methods from geometry. | Peyman Azodi, Herschel A Rabitz | 2023-08-18T19:16:44Z | http://arxiv.org/abs/2308.09784v1 | # Dynamics and Geometry of Entanglement in Many-Body Quantum Systems
###### Abstract
A new framework is formulated to study entanglement dynamics in many-body quantum systems along with an associated geometric description. In this formulation, called the Quantum Correlation Transfer Function (QCTF), the system's wave function or density matrix is transformed into a new space of complex functions with isolated singularities. Accordingly, entanglement dynamics is encoded in specific residues of the QCTF, and importantly, the explicit evaluation of the system's time dependence is avoided. Notably, the QCTF formulation allows for various algebraic simplifications and approximations to address the normally encountered complications due to the exponential growth of the many-body Hilbert space with the number of bodies. These simplifications are facilitated through considering the _patterns_, in lieu of the elements, lying within the system's state. Consequently, a main finding of this paper is the exterior (Grassmannian) algebraic expression of many-body entanglement as the collective areas of regions in the Hilbert space spanned by pairs of projections of the wave function onto an arbitrary basis. This latter geometric measure is shown to be equivalent to the second-order Renyi entropy. Additionally, the geometric description of the QCTF shows that characterizing features of the reduced density matrix can be related to experimentally observable quantities. The QCTF-based geometric description offers the prospect of theoretically revealing aspects of many-body entanglement, by drawing on the vast scope of methods from geometry.
## I Introduction
Entanglement, the unique and intriguing feature of the quantum realm, has been subject to extensive theoretical and experimental studies in a variety of fields. Entanglement is the cornerstone of quantum information science [1; 2; 3; 4] while also forming a basis to explain quantum thermalization [5; 6; 7; 8; 9; 10] as well as being closely related to the geometry of spacetime [11; 12; 13]. The evolution of entanglement in pure many-body quantum systems is concealed in the cooperative dynamics amongst the exponentially large number of modes of the system. Thus, except in rare cases, e.g., one-dimensional integrable models with conformal symmetry [14; 15; 16], obtaining a microscopic view of entanglement dynamics is generally not feasible in interacting many-body quantum systems. The most common approximation tools to explore many-body entanglement are variations of tensor-network-based numerical simulations, including Matrix Product States (MPS), which are mainly useful in the low-entanglement-density regimes [17; 18]. This paper will introduce a new means to study multipartite entanglement dynamics in many-body quantum systems, which we refer to as the Quantum Correlation Transfer Function (QCTF) formulation. Additionally, we will use the QCTF to provide a geometric interpretation of entanglement.
The QCTF provides a means to study correlation dynamics in many-body quantum systems. We exploit the fact that correlations between the constituents of a quantum system can also be obtained from the _patterns_ in its wave function or density matrix, rather than their elements. The investigation of such patterns is enabled through the employment of the Z-transformation, wherein the unitary time-evolution of a pure quantum system is mapped into a new complex-valued function, that constitutes the QCTF. As a result, the dynamics of entanglement is encoded in the analytic properties (e.g., poles, zeros, and residues) of the QCTF. In this general framework, various mathematical tools can be used to simplify the analysis. Most importantly, the QCTF permits bypassing the direct evaluation of the time evolution while nevertheless permitting the study of entanglement dynamics.
Employing the QCTF can either lead to a formulation of the quantum system which is fully dependent on the initially chosen basis, or alternatively, to a completely basis-independent (i.e., tensorial) description, where tensor indices represent the degrees of freedom in the Hilbert space. The basis-dependent approach may be implemented to study entanglement, given the Hamiltonian and the initial state of the system. In this case, by properly choosing the basis used in the Z-transformation, a closed-form complex function, or its Laurent expansion, can be obtained upon making the QCTF transformation, from which entanglement dynamics is revealed by finding the QCTF residues. For example, this approach has been used to obtain the exact single-magnon entanglement dynamics in integrable Heisenberg chains [19].
The basis-independent approach will be employed to present an extrinsic geometric portrayal of the QCTF-expressed entanglement (the second objective of the paper). In particular, based on the tensorial description of entanglement in the QCTF formulation, a _natural_ geometrical description follows. We show that the collective total squared areas spanned by the _marginal_ wave functions is a geometric measure of entanglement between
parts of a pure quantum system. A few clarifications on the terminology above are called for. The word "natural" reflects that this description mirrors the nature of entanglement between subsystems, which is, how different degrees of freedom in the subsystem evolve distinctly due to interaction with the remainder of the system. By "marginal", we refer to the projection of the wave function onto an arbitrary basis of the vector space spanning the subsystem of interest. This formulation can enable employing various tools from geometry to better understand entanglement. Since the focus of this paper is on constant Hamiltonians, with the formulation being \(U(1)\) gauge-invariant, the geometric analysis is simply carried out on \(\mathbb{C}^{n}\) with Euclidean metric (which highly simplifies the theory), and not the complex projective space, \(\mathbb{CP}^{n}\), with Fubini-Study metric [20; 21; 22]. We note that viewing quantum systems through the lens of geometry has often been fruitful in different areas of physics [23; 24; 25; 26; 27; 28; 29; 30; 31], and it has led to links between the entropy of entanglement and the geometry of the space wherein the system's state lies [32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45].
The paper is organized as follows. Section II introduces the transformation producing the QCTF. In Section III, a particular form of the QCTF transformation is presented for two-level subsystems along with its resultant entanglement measure formulation. Section IV starts with a basic introduction to the geometry of the Hilbert space involved. Subsection IV.1 gives the main results on the geometric structure of entanglement, and subsection IV.2 demonstrates the relation between the QCTF entanglement measure and the second-order Renyi entropy. Section V presents concluding remarks and suggestions for future research building on the foundation set out in the present work. The Appendices are referred to in the text for mathematical details.
## II Basics of the QCTF formulation
The motivation behind the QCTF framework is to analyze a quantum system through the analytic behavior of a corresponding complex-valued function, referred to as QCTF. This function is obtained by employing the Z-transformation (initially proposed by Ragazzini and Zadeh to study sampled data [46]) to either the entire or a portion of the wave function (\(\ket{\tilde{\psi}(s)}=\mathcal{L}\{\ket{\psi(t)}\}\)) in the Laplace domain; the density matrix (\(\tilde{\rho}(s)=\mathcal{L}\{\rho(t)\}\)) may be analogously treated, as will be explained. The general case of transforming the wave function is given first to illustrate the use of the QCTF. We consider the unitary evolution of the initial state \(\ket{\psi_{0}}\) governed by the time-independent Hamiltonian \(\mathbf{H}\). Given any arbitrary basis set \(\{\ket{j}\,|\,j=0:d-1\}\) that spans the Hilbert space associated with \(\ket{\psi}\), one form of the QCTF transformation can be defined as:
\[\bar{\mathcal{K}}(z,s)=\sum_{j=0}^{d-1}z^{j}\,\langle j|\tilde{\psi}(s)\rangle=\sum_{j=0}^{d-1}z^{j}\,\langle j|\mathbf{G}(s)|\psi_{0}\rangle, \tag{1}\]
where \(\mathbf{G}(s)=(s+\frac{i}{\hbar}\mathbf{H})^{-1}\) is the resolvent of \(\mathbf{H}\) (the propagator of the Schrodinger's equation in the Laplace domain). The transformation (1) implicitly depends on the basis \(\{\ket{j}\}\) and how it is labeled (or numbered) due to the exponents \(z^{j}\); nevertheless, as will be shown later, one can rewrite the QCTF fully independent of the choice of basis. We note that the transformation (1) is an equivalent description of the quantum system's wave function, therefore the properties of the system are retained within the QCTF. The main purpose of this paper is to show that the entanglement evolution of subsystems can be effectively studied along with new insights via the QCTF formulation.
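As a simple illustration of (1), the transformation can be evaluated symbolically for a small system. The sketch below is only a toy example: the two-level Hamiltonian, the initial state, and the choice \(\hbar=1\) are arbitrary values introduced for illustration and are not taken from the text.

```python
import sympy as sp

s, z = sp.symbols('s z')

# Toy two-level Hamiltonian and initial state (illustrative values, hbar = 1)
H = sp.Matrix([[1, sp.Rational(1, 2)],
               [sp.Rational(1, 2), -1]])
psi0 = sp.Matrix([1, 0])

# Resolvent G(s) = (s + i H)^(-1), the Laplace-domain propagator
G = (s * sp.eye(2) + sp.I * H).inv()

# One form of the QCTF of the wave function, Eq. (1): sum_j z^j <j|G(s)|psi_0>
Kbar = sum(z**j * (G * psi0)[j] for j in range(2))
print(sp.simplify(Kbar))
```

In this toy case the poles of \(\bar{\mathcal{K}}\) in \(s\) sit at \(-\frac{i}{\hbar}E_{k}\) for the eigenvalues \(E_{k}\) of the chosen Hamiltonian, which is the pole structure exploited throughout the paper.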
The density matrix of a pure quantum system can be equivalently described in the QCTF formulation. The desired QCTF is a function of three complex variables, \(z_{d},z_{a}\) and \(s\); using (1), it can be defined as
\[\mathcal{K}(z_{d},z_{a},s)\doteq\bar{\mathcal{K}}(z_{d}z_{a},s)\star\bar{ \mathcal{K}}^{*}\big{(}(z_{a}/z_{d})^{*},s^{*}\big{)}, \tag{2}\]
where \(\star\) denotes the ordinary product operation in the \(z_{d}\) and \(z_{a}\) domains along with the following convolution operation in the \(s\) domain. If \(\mathbf{F}_{1}(s)\) and \(\mathbf{F}_{2}(s)\) are functions in the Laplace domain, then
\[\mathbf{F}_{1}(s)\star\mathbf{F}_{2}(s)\doteq\frac{1}{2\pi i}\int_{-\infty}^{\infty}\mathbf{F}_{1}(\sigma+i\omega)\mathbf{F}_{2}(s-\sigma-i\omega)d\omega, \tag{3}\]
for some real \(\sigma\) in the region of convergence of \(\mathbf{F}_{1}(s)\). This operation is the frequency-domain equivalent of the product of these functions in the time domain. In the important special case of simple poles, we have \((s+i\omega_{1})^{-1}\star(s+i\omega_{2})^{-1}=(s+i(\omega_{1}+\omega_{2}))^{-1}\). The QCTF (2) can be interpreted as a two-dimensional Z-transformation of the density matrix in the Laplace domain (see Appendix A for details).
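For completeness, the simple-pole identity quoted above follows directly from this time-domain correspondence: since \((s+i\omega)^{-1}=\mathcal{L}\{e^{-i\omega t}\}\),

\[(s+i\omega_{1})^{-1}\star(s+i\omega_{2})^{-1}=\mathcal{L}\{e^{-i\omega_{1}t}\,e^{-i\omega_{2}t}\}=\mathcal{L}\{e^{-i(\omega_{1}+\omega_{2})t}\}=\big(s+i(\omega_{1}+\omega_{2})\big)^{-1}.\]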
More generally, the dynamics of a pure quantum system evolving under the Hamiltonian \(\mathbf{H}\) can be represented by an operator function \(\mathcal{H}(\mathbf{H},z_{d},z_{a},s)\), which includes its dynamical correlation properties, independent of its initial state. To see this feature, using (2), the QCTF transformation can be rewritten as the expectation value of \(\mathcal{H}\) with respect to the arbitrary initial state (\(\ket{\psi_{0}}\)),
\[\mathcal{K}(z_{d},z_{a},s)=\bra{\psi_{0}}\mathcal{H}(\mathbf{H},z_{d},z_{a},s) \ket{\psi_{0}}, \tag{4a}\] \[\mathcal{H}=\mathbf{G}^{\dagger}(s^{*})\left(\sum_{i,j}z_{a}{}^{i+j}z_{d}{}^{i -j}\ket{j}\bra{i}\right)\star\mathbf{G}(s). \tag{4b}\]
Operator \(\mathcal{H}\) captures the dynamical properties of the quantum system, including the evolution of correlation between its constituent subsystems. This property, which will be discussed in detail in the next section, can be employed to understand how the correlation behavior varies for different initial states of the quantum system.
Equation (4) gives the dual Laurent series expansions of \(\mathcal{K}\) in both the \(z_{d}\) and \(z_{a}\) variables, centered at the origins of the respective spaces. Based on the labeling \(j=0,\cdots,d-1\) for the Hilbert space basis, the origin \(z_{d}=0\) is a pole of order (at most) \(d-1\), while the origin \(z_{a}=0\) is a removable singularity in the \(z_{a}\) space. Also, for finite \(d\), the function \(\mathcal{K}\) is holomorphic in any punctured neighborhood of the origin, in both spaces.
The QCTF (2) provides an equivalent description for the evolution of a quantum system in terms of its density matrix, \(\rho(t)\). Therefore, desired matrix operations can be equivalently carried out on either \(\mathcal{K}\) or functions of \(\mathcal{K}\) by a variety of means from complex analysis (e.g., finding residues, etc.). As an elementary example, given another arbitrary basis set \(\{|j^{\prime}\rangle\}\), the element \(\langle i^{\prime}|\hat{\rho}|j^{\prime}\rangle\) can be obtained by taking the inverse Laplace transform after utilizing the following residues (see Appendix B for the proof)
\[\underset{\begin{subarray}{c}z_{d}=0\\ z_{a}=0\end{subarray}}{\mathbf{Res}}\Big{(}\sum_{i,j}\langle j|j^{\prime}\rangle\langle i^{\prime}|i\rangle\,z_{d}^{-(i-j)-1}z_{a}^{-(i+j)-1}\mathcal{K}(z_{d},z_{a},s)\Big{)}. \tag{5}\]
To summarize, in the QCTF formulation, the goal is to study aspects of a quantum system from the _patterns_ in the elements of the system's state. These patterns reveal themselves through the encoding in the analytical properties of the QCTF (or functions of the QCTF) and can be found either by closed-contour integration or through the properties of the dual Laurent series expansion involved. The patterns arise through the Z-transformation's inherent summation over _all_ elements of the system's state. The main goal is to simplify the analysis of entanglement and express the results in a new perspective by either strategically choosing the QCTF basis labels or constructing a basis-independent formulation.
## III Entanglement dynamics for two-level subsystems
The application of the generic QCTF many-body framework in Section II is now focused on the entanglement dynamics of two-level subsystems when the overall system is initialized in a pure state. Section IV will build upon this scenario to consider entanglement dynamics for subsystems with more than two energy levels. We first introduce the entanglement measure used in the analysis and then its time evolution will be obtained from the system's QCTF.
Here we consider a closed, discrete, bipartite quantum system, consisting of a two-level subsystem (referred to as subsystem \(\mathcal{M}\)) interacting with an accompanying \(d\)-dimensional quantum subsystem \(\mathcal{R}\), that evolves according to the Hamiltonian \(\mathbf{H}\) from the initial state \(|\psi_{0}\rangle=|\psi(t=0)\rangle\). If we denote the reduced density matrix of the subsystem \(\mathcal{M}\) by \(\rho_{\mathcal{M}}(t)=\mathrm{Tr}_{\mathcal{R}}\{|\psi(t)\rangle\!\langle \psi(t)|\}\), then \(\mathcal{Q}_{\mathcal{M}}(t)=\det(\rho_{\mathcal{M}}(t))\) is a time-dependent entanglement measure of subsystem \(\mathcal{M}\), which is also monotonically related (i.e., they concurrently increase, decrease, or don't change with an infinitesimal change in the density matrix) to the second-order Renyi entanglement entropy through \(\mathcal{S}_{2}(\mathcal{M})=-\log_{2}(1-2\mathcal{Q}_{\mathcal{M}})\). Equivalently, we will use its Laplace transformation \(\tilde{\mathcal{Q}}_{\mathcal{M}}(s)=\mathcal{L}\{\mathcal{Q}_{\mathcal{M}}(t)\}\) as the dynamical entanglement measure in the analysis.
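The monotone relation quoted above can be verified directly: writing the eigenvalues of \(\rho_{\mathcal{M}}(t)\) as \(\lambda\) and \(1-\lambda\), one has \(\det(\rho_{\mathcal{M}})=\lambda(1-\lambda)\) and \(\mathrm{Tr}(\rho_{\mathcal{M}}^{2})=\lambda^{2}+(1-\lambda)^{2}=1-2\lambda(1-\lambda)\), so that

\[\mathcal{S}_{2}(\mathcal{M})=-\log_{2}\big(\mathrm{Tr}(\rho_{\mathcal{M}}^{2})\big)=-\log_{2}\big(1-2\det(\rho_{\mathcal{M}})\big)=-\log_{2}\big(1-2\,\mathcal{Q}_{\mathcal{M}}\big).\]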
The entanglement measure \(\tilde{\mathcal{Q}}_{\mathcal{M}}(s)\) can be obtained from the QCTF. Given any basis for the quantum system which is constructed from the arbitrary basis vectors \(\{|+\rangle,|-\rangle\}\) for \(\mathcal{M}\) and \(\{|j\rangle,j=0,...,d-1\}\) (\(d\) can be countably infinite) for \(\mathcal{R}\), we define the QCTF on the off-diagonal block \(\langle+|\tilde{\rho}(s)|-\rangle\) (which fully describes the bipartite entanglement between the subsystems) as,
\[\mathcal{H}=\mathbf{G}^{\dagger}(s^{*})\Bigg{(}\sum_{i,j}z_{a}^{i+j}z_{d}^{i-j }\,|-\otimes j\rangle\,\langle+\otimes i|\Bigg{)}\star\mathbf{G}(s), \tag{6a}\] \[\mathcal{K}(z_{d},z_{a},s)=\langle\psi_{0}|\mathcal{H}|\psi_{0}\rangle\,. \tag{6b}\]
Having introduced the form of QCTF for the purpose of this section, Appendix C proves that the dynamical entanglement measure (\(\tilde{\mathcal{Q}}_{\mathcal{M}}(s)\)) can be obtained from the QCTF (6b) as,
\[\begin{split}\tilde{\mathcal{Q}}_{M}(s)=&\underset{ \begin{subarray}{c}z_{d}=0\\ z_{a}=0\end{subarray}}{\mathbf{Res}}\big{(}(z_{d}z_{a})^{-1}\mathcal{K}(z_{d },z_{a},s)\star\mathcal{K}^{*}(1/z_{d}^{*},1/z_{a}^{*},s^{*})\big{)}\\ &-\mathcal{K}_{d}(s)\star\mathcal{K}_{d}^{*}(s^{*}),\end{split} \tag{7}\]
with \(\mathcal{K}_{d}(s)=\underset{z_{d}=0}{\mathbf{Res}}\big{(}z_{d}^{-1}\mathcal{K}(z_{d},z_{a},s)\big{)}\Big{|}_{z_{a}=1}\), where \(\underset{z=a}{\mathbf{Res}}(f(z))\) denotes the residue of \(f\) at \(z=a\). The first term in (7) corresponds to the Frobenius norm of the off-diagonal sub-matrix \(\langle+|\tilde{\rho}(s)|-\rangle\), while the second term corresponds to the summation of the cross-correlation of its diagonal. These two quantities are identical when the subsystems \(\mathcal{M}\) and \(\mathcal{R}\) are not entangled. In this formulation, direct evaluation of the system's time evolution is avoided, and entanglement is obtained directly from the Hamiltonian's features and the initial wave function, which are reflected in (6). Similar to the discussion in the previous section, the present QCTF formulation can depend on the choice of basis sets \(\{|j\rangle\}\) and \(\{|+\rangle,|-\rangle\}\). Nevertheless, as will be shown shortly, upon finding the residues in (7), one is left with a basis-independent expression that is geometrically meaningful. Concomitantly, strategically choosing the basis sets and their labels can simplify the
QCTF formulation, similar to the theoretical analysis in [47].
Denoting \(c_{k}\doteq\left\langle k|\psi_{0}\right\rangle\) and the eigenmodes of the system's Hamiltonian by \(\left\{\left|k\right\rangle,E_{k}\right\}\), the resolvent is
\[\mathbf{G}(s)=\sum_{k}\left(s+\frac{i}{\hbar}E_{k}\right)^{-1}\left|k\right\rangle \left\langle k\right|. \tag{8}\]
Using the QCTF transformation (6a), and then (7), the entanglement measure for the two-level subsystem of interest (\(\mathcal{M}\)), is simplified to:
\[\tilde{\mathcal{Q}}_{M}(s)=\sum_{k,l,k^{\prime},l^{\prime}}c_{k}^{*}c_{l}c_{k^{\prime}}c_{l^{\prime}}^{*}\frac{\left\langle+k|+k^{\prime}\right\rangle\left\langle-l^{\prime}|-l\right\rangle-\left\langle+k|-l\right\rangle\left\langle-l^{\prime}|+k^{\prime}\right\rangle}{s+\frac{i}{\hbar}(E_{l}-E_{l^{\prime}}-E_{k}+E_{k^{\prime}})}, \tag{9}\]
where the inner products in the numerator, referred to as _local overlaps_, are defined through the following decomposition:
\[\left|k\right\rangle =\left|+k\right\rangle\otimes\left|+\right\rangle+\left|-k \right\rangle\otimes\left|-\right\rangle, \tag{10}\] \[\left(\text{equivalently}\right)\left|\pm k\right\rangle =\Big{(}\sum_{j}\big{(}\left|j\right\rangle\left(\left\langle\pm \right|\otimes\left\langle j\right|\right)\right)\big{)}\left|k\right\rangle.\]
The vectors \(\left|\pm k\right\rangle\) and \(\left|\pm l\right\rangle\), which are not necessarily normalized, are in \(\mathbb{C}^{d}\) and lie in the vector space underlying subsystem \(\mathcal{R}\). Equation (9) involves three different components contributing to the entanglement dynamics. The first component is the product of the wave-function components multiplying the fraction, which describes the contribution from each eigenmode. The second component consists of the products of local overlaps in the numerator of the fraction, which is related to the geometric structure of the eigenstates and determines the strength of each eigenmode. The third component consists of the denominator of the fraction which determines the Laplace poles in the entanglement measure.
Equation (9) is a basis-invariant description of entanglement since the dependence on labels \(i,j\) is no longer present. Also, the summation is independent of the choice of \(\left|\pm\right\rangle\); to see this latter point, note that the first and third components (i.e., the coefficients and the denominator of the fraction) in equation (9) are invariant under the permutations \(k\leftrightarrow l^{\prime}\), \(k^{\prime}\leftrightarrow l\) and \((k\leftrightarrow l^{\prime},k^{\prime}\leftrightarrow l)\). Using this symmetry, one can show the basis-invariance of (9) over the choice of \(\left|\pm\right\rangle\) (by considering a unitary change of basis \(\left|\pm^{\prime}\right\rangle=U\left|\pm\right\rangle\)). This observation enables using tensor algebra in the next section to give geometric insight into the entanglement measure (9).
## IV Geometry of the QCTF entanglement
In this section, a geometric interpretation of the QCTF entanglement measure (9) is presented. Toward this goal, we will start by introducing the following notation. Since the quantum system's Hilbert space is isomorphic to \(\mathbb{C}^{2d}\), we will treat \(\langle z|\) as the (covariant) dual of \(|z\rangle=\sum_{i}z^{i}|i\rangle\) with the Euclidean metric \(g_{ij}=\delta_{ij}\), i.e. \(\langle z|z\rangle=\delta_{ij}\,z^{i}z^{*j}=z^{i}z^{*}_{i}\), where in the last term we have used the Einstein summation convention. In this treatment, \(\langle z|\) is a _1-form_.
### Exterior algebraic description of entanglement
Using the above terminology, the numerator of the fraction in the QCTF entanglement measure (9) can be written as a _2-form_ acting on the tensor product of two contravariant (ket) vectors, all of which belong to \(\mathbb{C}^{d}\), as follows:
\[\text{Numerator in \eqref{eq:QCTF}}=(\left\langle+k|\wedge\left\langle-l^{ \prime}\right\rangle\right)(\left|+k^{\prime}\right\rangle\otimes\left|-l \right\rangle), \tag{11}\]
where the wedge operator (\(\wedge\)) is the alternating product, i.e., \(\langle z_{1}|\wedge\langle z_{2}|=\langle z_{1}|\otimes\langle z_{2}|-\langle z_{2}|\otimes\langle z_{1}|\). Note that in the remainder of the paper, any product between functions in the Laplace domain is understood to be carried out using the \(\star\) operator defined in (3). Therefore, the _evolution_ of entanglement between \(\mathcal{M}\) and \(\mathcal{R}\) is given by the exterior product between the components of the Hamiltonian's eigenstates in directions corresponding to an orthonormal basis for subsystem \(\mathcal{M}\). Building on (11), we can write the entire entanglement measure \(\tilde{\mathcal{Q}}_{\mathcal{M}}(s)\) in (9) as:
\[\tilde{\mathcal{Q}}_{\mathcal{M}}(s)=(\left\langle+\psi|\wedge\left\langle- \psi\right\rangle\right)(\left|+\psi\right\rangle\otimes\left|-\psi\right\rangle), \tag{12}\]
where \(|\psi\rangle=|+\psi\rangle\otimes|+\rangle+|-\psi\rangle\otimes|-\rangle\), with \(|\pm\psi\rangle=\Big{(}\sum_{j}\big{(}|j\rangle(\langle\pm|\otimes\langle j|)\big{)}\Big{)}\mathbf{G}(s)|\psi_{0}\rangle\). This description of entanglement is fully basis-independent (with respect to \(\{|\pm\rangle\}\) and \(\{|j\rangle\}\)) and \(U(1)\) gauge invariant, i.e., \(|\psi\rangle\rightarrow|\tilde{\psi}\rangle=e^{i\theta}|\psi\rangle\) for any real \(\theta\). Geometrically, in the case of a two-level subsystem \(\mathcal{M}\), the QCTF entanglement measures the squared area of the parallelogram spanned by \(|+\psi\rangle\) and \(|-\psi\rangle\) in \(\mathbb{C}^{d}\).
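This geometric statement can be checked numerically at a fixed instant. The sketch below draws a random pure state of a qubit coupled to a \(d\)-dimensional subsystem and compares \(\det(\rho_{\mathcal{M}})\) with the squared parallelogram area \(\langle+\psi|+\psi\rangle\langle-\psi|-\psi\rangle-\langle+\psi|-\psi\rangle\langle-\psi|+\psi\rangle\); the random state and the dimension are placeholders, and the Laplace-domain structure of the text is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

# Random pure state of a (2 x d)-dimensional bipartite system, components c_{a l}
psi = rng.normal(size=(2, d)) + 1j * rng.normal(size=(2, d))
psi /= np.linalg.norm(psi)

# Marginal (projected) wave functions |+psi>, |-psi> living in C^d
plus, minus = psi[0], psi[1]

# Determinant of the reduced density matrix of the two-level subsystem M
rho_M = psi @ psi.conj().T
Q_det = np.linalg.det(rho_M).real

# Squared area of the parallelogram spanned by |+psi> and |-psi> (Gram determinant)
Q_area = (np.vdot(plus, plus) * np.vdot(minus, minus)
          - np.vdot(plus, minus) * np.vdot(minus, plus)).real

print(Q_det, Q_area)  # the two quantities agree up to floating-point error
```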
This geometric observation can be extended to the QCTF entanglement measure for subsystem \(\mathcal{M}\) with more than two energy levels.

Figure 1: The area spanned by pairs of marginal (or projected) wave functions is depicted. These vectors, given by (13), belong to \(\mathbb{C}^{d}\), which is the vector space underlying subsystem \(\mathcal{R}\). Equivalently, up to normalization, the marginal wave function \(\ket{\hat{j}\psi}\) is equal to the post-measurement wave function of subsystem \(\mathcal{R}\), i.e., the state of \(\mathcal{R}\) after a projective measurement of subsystem \(\mathcal{M}\) in the \(\{\ket{\hat{j}}\}\) basis. The total of the squared areas (the three shaded regions in the figure) gives the entanglement between the subsystems.

In this case, we denote an arbitrary basis for the sub-vector spaces underlying subsystem \(\mathcal{M}\) (\(n\)-dimensional) and \(\mathcal{R}\), respectively, by \(\{\ket{\hat{j}};j=1,\cdots,n\}\) and \(\{\ket{i}:i=1,\cdots,d\}\). Denoting the projection of the wave function onto the basis vector \(\ket{\hat{j}}\) as
\[\begin{split}\ket{\hat{j}\psi}&=(\bra{\hat{j}}\otimes I_{\mathcal{R}})\ket{\psi}\\ &=\Big{(}\sum_{i}\big{(}\ket{i}\big{(}\bra{\hat{j}}\otimes\bra{i}\big{)}\big{)}\Big{)}\mathbf{G}(s)\ket{\psi_{0}}=\sum_{i}\psi_{\hat{j}i}\ket{i},\end{split} \tag{13}\]
the following expression in (14) is a measure of entanglement between subsystems \(\mathcal{M}\) and \(\mathcal{R}\): the total collective squared areas spanned by pairs of projected wave functions (see Figure 1):
\[\tilde{\mathcal{Q}}_{\mathcal{M}}(s)=\sum_{1\leq j_{1}<j_{2}\leq n}(\bra{\hat{ j}_{1}\psi}\wedge\bra{\hat{j}_{2}\psi})(\ket{\hat{j}_{1}\psi}\otimes\ket{\hat{j}_{2} \psi}). \tag{14}\]
In the next subsection, it is shown that this entanglement measure is monotonically related to the second-order Renyi entropy.
Given this extension of \(\tilde{\mathcal{Q}}_{\mathcal{M}}(s)\) to subsystems with arbitrary energy levels, the QCTF transformation (6) can be modified by considering an extra complex variable, \(z_{c}\). Since the summation in (14) goes over pairs of basis vectors for subsystem \(\mathcal{M}\), one can use an arbitrary labeling \(h\) (i.e., by assigning a unique non-negative integer \(h\) to each of the \(\frac{n(n-1)}{2}\) pairs) for the set \(\{(\hat{j}_{1},\hat{j}_{2})|1\leq\hat{j}_{1}<\hat{j}_{2}\leq n\}\) (we denote the pair labeled \(h\) by \((h_{-},h_{+})\)). Therefore, the QCTF can be written as follows:
\[\mathcal{H}=\mathbf{G}^{\dagger}(s^{*})\Bigg{(}\sum_{i,j,h}{z_{a}}^{i+j}{z_{d }}^{i-j}{z_{c}}^{h}\ket{h_{-}\otimes j}\bra{h_{+}\otimes i}\Bigg{)}\star \mathbf{G}(s), \tag{15a}\] \[\mathcal{K}(z_{d},z_{a},z_{c},s)=\bra{\psi_{0}|\mathcal{H}|\psi_{0}}. \tag{15b}\]
Given this transformation, the entanglement measure \(\tilde{\mathcal{Q}}_{M}(s)\) is:
\[\begin{split}\tilde{\mathcal{Q}}_{M}(s)=&\underset{\begin{subarray}{c}z_{d}=0\\ z_{a}=0\\ z_{c}=0\end{subarray}}{\text{Res}}\big{(}\frac{\mathcal{K}(z_{d},z_{a},z_{c},s)\star\mathcal{K}^{*}(1/z_{d}^{*},1/z_{a}^{*},1/z_{c}^{*},s^{*})}{z_{d}z_{a}z_{c}}\big{)}\\ &-\underset{z_{c}=0}{\text{Res}}\big{(}z_{c}^{-1}\mathcal{K}_{d}(z_{c},s)\star\mathcal{K}^{*}_{d}(1/z_{c}^{*},s^{*})\big{)},\end{split} \tag{16}\]
with \(\mathcal{K}_{d}(z_{c},s)=\underset{z_{d}=0}{\text{Res}}\big{(}z_{d}^{-1} \mathcal{K}(z_{d},z_{a},z_{c},s)\big{)}\Big{|}_{z_{a}=1}\). By employing the variable \(z_{c}\), and finding the residue (at the origin) in the last step, this process sums up the contribution from each pair of the projected wave functions.
### Relation with the second-order Renyi entropy
The entanglement measure (14) is monotonically related to the second-order Renyi entropy \((\mathbf{S}^{(2)}=-\log_{2}\big{(}\text{Tr}\big{(}\tilde{\rho}^{2}\big{)}\big{)}\big{)}\) of subsystem \(\mathcal{M}\). To show this property, note that for each pair \((\hat{j}_{1},\hat{j}_{2})\) in (14), the argument of the summation, \((\bra{\hat{j}_{1}\psi}\wedge\bra{\hat{j}_{2}\psi})(\ket{\hat{j}_{1}\psi}\otimes\ket{\hat{j}_{2}\psi})\), is the \(2\times 2\) principal minor
\[\begin{vmatrix}\langle\hat{j}_{1}\psi|\hat{j}_{1}\psi\rangle&\langle\hat{j}_{1}\psi|\hat{j}_{2}\psi\rangle\\ \langle\hat{j}_{2}\psi|\hat{j}_{1}\psi\rangle&\langle\hat{j}_{2}\psi|\hat{j}_{2}\psi\rangle\end{vmatrix}\]
of the reduced density matrix \(\rho_{\mathcal{M}}\). Thus,
\[\tilde{\mathcal{Q}}_{\mathcal{M}}(s)=\sum\big{(}\text{all }2\times 2\text{ principal minors of }\rho_{\mathcal{M}}\big{)}. \tag{17}\]
Additionally, from matrix algebra, we have the following theorem [48]: The total sum of \(2\times 2\) principal minors of any square matrix (here, the reduced density matrix \(\rho_{\mathcal{M}}\)) is equal to the second elementary symmetric polynomial of its eigenvalues \(\{\lambda_{i}\}\):
\[s_{2}=\sum_{1\leq j_{1}<j_{2}\leq n}\lambda_{j_{1}}\lambda_{j_{2}}. \tag{18}\]
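Since the reduced density matrix has unit trace, \(\sum_{i}\lambda_{i}=1\), this polynomial can be rewritten as

\[s_{2}=\frac{1}{2}\Big[\Big(\sum_{i}\lambda_{i}\Big)^{2}-\sum_{i}\lambda_{i}^{2}\Big]=\frac{1}{2}\big(1-\mathrm{Tr}\big(\tilde{\rho}^{2}\big)\big).\]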
Thus,
\[\tilde{\mathcal{Q}}_{\mathcal{M}}(s)=s_{2}=\frac{1}{2}\big{(}1-\text{Tr}\big{(} \tilde{\rho}^{2}\big{)}\big{)}=\frac{1}{2}\big{(}1-2^{-\mathbf{S}^{(2)}}). \tag{19}\]
This equation relates the second-order Renyi entropy and the entanglement measure used in this paper. We should note that the quantity \(1-\text{Tr}\big{(}\tilde{\rho}^{2}\big{)}\) is sometimes referred to as the _linear entropy_ of \(\mathcal{M}\)[49].
## V Summary and conclusion
The new description of quantum many-body dynamics through the QCTF provides an equivalent representation of the wave-function or the density matrix that allows for obtaining entanglement dynamics in a quantum system _directly_ from its Hamiltonian. Therefore, the QCTF circumvents the bottleneck of evaluating the many-body system's evolution. The structure of the QCTF transformation enables using various forms of simplifications and algebraic manipulations. For example, the vector spaces' basis used in the QCTF transformation (denoted by \(\{\ket{j}\}\) and \(\{\ket{i^{\prime}}\}\) in the paper) can be picked and labeled efficiently to give a closed-form function or a collection of them as sub-series of the transformation, e.g., each series might correspond to a perturbation order (similar to the application of QCTF in [47]). Another form of such simplifications offered by the QCTF is given in [19], wherein basis kets are labeled according to the single-magnon states that they are representing.
Given the Hamiltonian and the initial state of the system, entanglement dynamics of subsystems can be obtained by finding the residues of the QCTF. Alternatively, the final QCTF entanglement can be described independently of the basis sets for each subsystem. In this case, entanglement was shown to have an exterior algebraic structure. More precisely, it is equal to the collective addition of norms of 2-forms constructed by the
projected (i.e., marginal) wave functions. These norms quantify the squared area spanned by the corresponding marginal wave functions.
In this paper, a geometric portrayal of entanglement was presented, measuring this quantity through its direct and equivalent consequences for the geometry of the system's state. This feature opens up the prospect of employing additional tools from geometry to understand and analyze entanglement in different fields and settings in future research. In particular, considering non-constant Hamiltonians (i.e., either with time-dependence or with Hamiltonians described by independent parameters) should lead to a _differential geometric_ description of the QCTF entanglement dynamics, which can be advantageous as it allows for using the generalized Stokes theorem (given the 2-differential form structure of the QCTF formulation) to study entanglement in these more complex scenarios. Additionally, the link between the QCTF entanglement formulation and the second-order Renyi entropy, discussed in subsection IV.2, which is established through a relation between the principal minors of the density matrix and the second elementary symmetric polynomials of its eigenvalues, may lead to novel approaches to measure entanglement in experimental settings. This opportunity arises because the elementary symmetric polynomials (18), fully describing the Renyi entropy, can equivalently be obtained in terms of local overlaps of the wavefunction, which are accessible experimentally [50; 51].
###### Acknowledgements.
P.A. acknowledges support from the Princeton Program in Plasma Science and Technology (PPST). H.R. acknowledges support from the U.S. Department of Energy (DOE) grant (DE-FG02-02ER15344).
## Appendix A Derivation of equation (2)
Here, we show that the QCTF in (2) can be interpreted as a transformation of the system's density matrix by considering one chronological frequency, \(s\), and two structural complex variables, \(z_{a}\) and \(z_{d}\). Using (1) and the definition of the operation \(\star\), we can rewrite (2) as
\[\begin{split}&\mathcal{K}(z_{d},z_{a},s)\doteq\bar{\mathcal{K}}(z_{d}z_{a},s)\star\bar{\mathcal{K}}^{\ast}\big{(}(z_{a}/z_{d})^{\ast},s^{\ast}\big{)}\\ &=\sum_{l,k=0}^{d-1}\ \langle l|\mathbf{G}(s)|\psi_{0}\rangle\star\langle\psi_{0}|\mathbf{G}^{\dagger}(s^{\ast})|k\rangle\,z_{d}^{l-k}z_{a}^{l+k}.\end{split} \tag{19}\]
We define \(\tilde{c}_{l}(s)\doteq\langle l|\mathbf{G}(s)|\psi_{0}\rangle=\mathcal{L}\{ \langle l|\psi(t)\rangle\}=\mathcal{L}\{c_{l}(t)\}\). Then using definition (3) we have:
\[\begin{split}\tilde{c}_{l}(s)\star\tilde{c}_{k}^{\ast}(s^{\ast})& =\frac{1}{2\pi i}\int_{-\infty}^{\infty}d\omega\tilde{c}_{l}( \sigma+i\omega)\tilde{c}_{k}^{\ast}\big{(}(s-\sigma-i\omega)^{\ast}\big{)}\\ &=\int_{-\infty}^{\infty}dt_{1}\int_{-\infty}^{\infty}dt_{2}\frac {1}{2\pi i}\int_{-\infty}^{\infty}d\omega c_{l}(t_{1})c_{k}^{\ast}(t_{2})e^{- st_{2}-\sigma(t_{1}-t_{2})-i\omega(t_{1}-t_{2})}\\ &=\int_{-\infty}^{\infty}dt_{1}\int_{-\infty}^{\infty}dt_{2}c_{l }(t_{1})c_{k}^{\ast}(t_{2})e^{-st_{2}-\sigma(t_{1}-t_{2})}\delta(t_{1}-t_{2}) \\ &=\int_{-\infty}^{\infty}dtc_{l}(t)c_{k}^{\ast}(t)e^{-st}=\langle l |\tilde{\rho}(s)|k\rangle\,.\end{split} \tag{20}\]
Therefore, substituting \(\langle l|\mathbf{G}(s)|\psi_{0}\rangle\star\langle\psi_{0}|\mathbf{G}^{\dagger}(s^{\ast})|k\rangle\) with \(\langle l|\tilde{\rho}(s)|k\rangle\) in (19) gives:
\[\mathcal{K}(z_{d},z_{a},s)=\sum_{l,k=0}^{d-1}\ \langle l|\tilde{\rho}(s)|k \rangle\,z_{d}^{l-k}z_{a}^{l+k}. \tag{21}\]
## Appendix B Derivation of equation (5)
Using equation (4), we can rewrite the R.H.S of equation (5) as follows
\[\frac{1}{(2\pi i)^{3}}\oint_{\partial C_{s}}ds\oint_{\partial C_{d}}dz_{d}\oint_{\partial C_{a}}dz_{a}\,e^{st^{\prime}}\sum_{l,k}\sum_{l^{\prime\prime},k^{\prime\prime}}\langle k|k^{\prime}\rangle\,\langle l^{\prime}|l\rangle\,z_{d}^{(l^{\prime\prime}-k^{\prime\prime})-(l-k)-1}z_{a}^{(l^{\prime\prime}+k^{\prime\prime})-(l+k)-1}\,\langle l^{\prime\prime}|\tilde{\rho}(s)|k^{\prime\prime}\rangle\,. \tag{22}\]
Note that the three integrals are independent and can be evaluated interchangeably. We consider the \(z_{d}\) and \(z_{a}\) integrals first. Using the fact that the integrand is holomorphic inside the closed contours \(\partial C_{d}\) and \(\partial C_{a}\), except at the origin, by employing Cauchy's residue theorem the value of these two integrations is equal to the residue of the integrand at the origin \((z_{d}=0,z_{a}=0)\) times \((2\pi i)^{2}\). Also note that since \(\langle l^{\prime\prime}|\tilde{\rho}(s)|k^{\prime\prime}\rangle\) has no
\(z_{d},z_{a}\) dependence, the residue of the integrand in the \(\mathbb{C}^{2}\) space is equal to the coefficient of \((z_{a}z_{d})^{-1}\) in the summation, which corresponds to the terms with \(k^{\prime\prime}=k,l^{\prime\prime}=l\). Therefore, after evaluating the structural integrations, equation (5) is
\[(5)=\frac{1}{2\pi i}\oint_{\partial C_{s}}ds\,e^{st^{\prime}}\sum_{l,k}\langle l^{\prime}|l\rangle\,\langle l|\tilde{\rho}(s)|k\rangle\,\langle k|k^{\prime}\rangle=\langle l^{\prime}|\frac{1}{2\pi i}\oint_{\partial C_{s}}ds\,e^{st^{\prime}}\tilde{\rho}(s)|k^{\prime}\rangle=\langle l^{\prime}|\rho(t^{\prime})|k^{\prime}\rangle\,, \tag{10}\]
where we have used the definition of the inverse Laplace transform, and the linearity of the transform.
## Appendix C Proof of equation (7)
We start from the system's state in the time domain, \(\ket{\psi(t)}\), and expand it in the product basis vectors of each subsystem, \(\{\ket{+},\ket{-}\}\), and \(\{\ket{l},l=0,...,d-1\}\), as follows
\[\ket{\psi(t)}=\sum_{a\in\{+,-\}}\sum_{l=0,...,d-1}c_{al}(t)|a\otimes l\rangle; \tag{11}\] \[c_{al}(t)=\langle a\otimes l|\psi(t)\rangle\,.\]
Based on this expansion, we may construct the matrix \(M_{2\times d}\) such that its first and second rows consist of \(\{c_{+l}(t)\}\) and \(\{c_{-l}(t)\}\) respectively (the time dependence of the c's is not shown below for simplicity):
\[M(t)\doteq\begin{pmatrix}c_{+0}&c_{+1}&\cdots&c_{+l}&\cdots&c_{+d-1}\\ c_{-0}&c_{-1}&\cdots&c_{-l}&\cdots&c_{-d-1}\end{pmatrix}. \tag{12}\]
If subsystem \(\mathcal{M}\) is not entangled to subsystem \(\mathcal{R}\) at \(t=t^{\prime}\), then \(\ket{\psi(t^{\prime})}\) is a product state and the rows of \(M(t^{\prime})\) are linearly dependent, i.e. \(\text{rank}\left(M(t^{\prime})\right)=1\). This condition on the rank of \(M\) is necessary and sufficient for the subsystem \(\mathcal{M}\) to not be entangled to subsystem \(\mathcal{R}\). Thus, if \(rank(M)=2\) these subsystems are entangled. To construct a smooth indicator of entanglement, consider the following square sub-matrices \(M_{2\times 2}^{ij}\), formed from the \(i\)th and \(j\)th columns of \(M\),
\[M^{ij}(t)\doteq\begin{pmatrix}c_{+i}&c_{+j}\\ c_{-i}&c_{-j}\end{pmatrix}, \tag{13}\]
and use them to define the entanglement measure \(\mathcal{Q}_{\mathcal{M}}(t)\) as follows:
\[\mathcal{Q}_{\mathcal{M}}(t)\doteq\sum_{0\leq i<j\leq d-1}|\det\big{(}M^{ij}(t)\big{)}|^{2}. \tag{14}\]
More generally, \(\mathcal{Q}_{\mathcal{M}}(t)=0\) is a necessary and sufficient condition for the subsystem \(\mathcal{M}\) not to be entangled at \(t\). Expanding the summations in \(\mathcal{Q}_{\mathcal{M}}(t)\) will result in
\[\mathcal{Q}_{\mathcal{M}}(t) =\sum_{0\leq i<j\leq d-1}\det\!\big{(}M^{ij}\big{)}\big{(}\det\! \big{(}M^{ij}\big{)}\big{)}^{\ast} \tag{15a}\] \[=\sum_{0\leq i<j\leq d-1}(c_{+i}c_{-j}-c_{+j}c_{-i})(c_{+i}^{\ast }c_{-j}^{\ast}-c_{+j}^{\ast}c_{-i}^{\ast})\] \[=\sum_{0\leq i\neq j\leq d-1}|c_{+i}|^{2}|c_{-j}|^{2}-\sum_{0\leq i \neq j\leq d-1}c_{+i}c_{-j}c_{+j}^{\ast}c_{-i}^{\ast}\] \[=\sum_{0\leq i,j\leq d-1}|c_{+i}|^{2}|c_{-j}|^{2}-\sum_{0\leq i,j \leq d-1}c_{+i}c_{-j}c_{+j}^{\ast}c_{-i}^{\ast}\] \[=\Big{(}\sum_{0\leq i\leq d-1}|c_{+i}|^{2}\Big{)}\big{(}\sum_{0 \leq i\leq d-1}|c_{-i}|^{2}\big{)}\] (15b) \[-\big{(}\sum_{0\leq i\leq d-1}c_{+i}c_{-i}^{\ast}\big{)}\big{(} \sum_{0\leq i\leq d-1}c_{+i}^{\ast}c_{-i}\big{)}\] (15c) \[=\begin{vmatrix}\sum_{i}|c_{+i}|^{2}&\sum_{i}\big{(}c_{+i}c_{-i}^ {\ast}\big{)}\\ \sum_{i}\big{(}c_{+i}^{\ast}c_{-i}\big{)}&\sum_{i}|c_{-i}|^{2}\end{vmatrix}= \det\big{(}\rho_{\mathcal{M}}(t)\big{)},\]
which is the determinant of the time dependent reduced density matrix of subsystem \(\mathcal{M}\).
As the second step, here we prove that the R.H.S. of equation (7) is the Laplace transform of \(\mathcal{Q}_{\mathcal{M}}(t)\). We start from the first term in (7). Before proceeding, we introduce the notation \(\tilde{c}_{al}(s)\doteq\mathcal{L}\{c_{al}(t)\}=\langle a\otimes l|\mathbf{G}(s)|\psi_{0}\rangle\). Therefore, by using the definition of \(\mathcal{K}(z_{d},z_{a},s)\) in (6b), the first term in (7), and the fact that the \(\star\) operation is associative and commutative, we have
\[(2\pi i)^{-2}\sum_{l,k}\sum_{l^{\prime},k^{\prime}}\oint_{\partial C_{d}}\mathbf{d}z_{d}\oint_{\partial C_{a}}\mathbf{d}z_{a}\,\tilde{c}_{+l}(s)\star\tilde{c}_{-k}^{\ast}(s^{\ast})\star\tilde{c}_{+l^{\prime}}^{\ast}(s^{\ast})\star\tilde{c}_{-k^{\prime}}(s)\,z_{a}^{l+k-l^{\prime}-k^{\prime}-1}z_{d}^{l-k+k^{\prime}-l^{\prime}-1}. \tag{16}\]
Upon employing Cauchy's residue theorem, based on the definition of the closed contours \(\partial C_{d}\) and \(\partial C_{a}\), the
value of the double integral is equal to \((2\pi i)^{2}\) times the residue of the integrand at the origin of the structural space \((z_{d},z_{a})\), which is the coefficient of \(z_{d}^{-1}z_{a}^{-1}\) in the double summation. Therefore, the only remaining terms after the double integration must satisfy the following set of conditions:
\[\begin{cases}l+k-l^{\prime}-k^{\prime}-1=-1\\ l-k+k^{\prime}-l^{\prime}-1=-1\end{cases}\iff\begin{cases}l=l^{\prime}\\ k=k^{\prime}\end{cases}. \tag{10}\]
Consequently, the first term in (7) is:
\[\sum_{l,k}\tilde{c}_{+l}(s)\star\tilde{c}_{+l}^{*}(s^{*})\star \tilde{c}_{-k}(s)\star\tilde{c}_{-k}^{*}(s^{*})=\mathcal{L}\{\sum_{l,k}|c_{+l} |^{2}|c_{-k}|^{2}\}, \tag{11}\]
which is the Laplace-dual of (15b). In the last step, we operated with \(\star\) similarly to the calculation in Appendix A. Based on this equality, by showing that the second term in (7) corresponds to (15c), the proof is accomplished. To this end, we first consider \(\mathcal{K}_{d}(z_{a},s)\). Based on the definitions (6b) and (7) we have:
\[\mathcal{K}_{d}(z_{a},s)=(2\pi i)^{-1}\sum_{l,k}\oint_{\partial C_{d}}\text{ d}z_{d}\tilde{c}_{+l}(s)\star\tilde{c}_{-k}^{*}(s^{*})z_{a}^{l+k}z_{d}^{l-k-1}. \tag{12}\]
Consequently, the second term in (7) is:
\[\sum_{l,k}\tilde{c}_{+l}(s)\star\tilde{c}_{-l}^{*}(s^{*})\star \tilde{c}_{+k}^{*}(s^{*})\star\tilde{c}_{-k}(s)z_{a}^{2(l-k)}\bigg{|}_{z_{a}=1 }=\mathcal{L}\{\sum_{l,k}c_{+l}c_{-k}c_{+k}^{*}c_{-l}^{*}\}, \tag{13}\]
which is the Laplace-dual of the second term in (15). Therefore, using the linearity of the Laplace transformation, the R.H.S of (7) is equal to the Laplace transformation of \(\mathcal{Q}_{\mathcal{M}}(t)\), which proves the assertion (7).
|
2305.18675 | History Repeats: Overcoming Catastrophic Forgetting For Event-Centric
Temporal Knowledge Graph Completion | Temporal knowledge graph (TKG) completion models typically rely on having
access to the entire graph during training. However, in real-world scenarios,
TKG data is often received incrementally as events unfold, leading to a dynamic
non-stationary data distribution over time. While one could incorporate
fine-tuning to existing methods to allow them to adapt to evolving TKG data,
this can lead to forgetting previously learned patterns. Alternatively,
retraining the model with the entire updated TKG can mitigate forgetting but is
computationally burdensome. To address these challenges, we propose a general
continual training framework that is applicable to any TKG completion method,
and leverages two key ideas: (i) a temporal regularization that encourages
repurposing of less important model parameters for learning new knowledge, and
(ii) a clustering-based experience replay that reinforces the past knowledge by
selectively preserving only a small portion of the past data. Our experimental
results on widely used event-centric TKG datasets demonstrate the effectiveness
of our proposed continual training framework in adapting to new events while
reducing catastrophic forgetting. Further, we perform ablation studies to show
the effectiveness of each component of our proposed framework. Finally, we
investigate the relation between the memory dedicated to experience replay and
the benefit gained from our clustering-based sampling strategy. | Mehrnoosh Mirtaheri, Mohammad Rostami, Aram Galstyan | 2023-05-30T01:21:36Z | http://arxiv.org/abs/2305.18675v1 | # History Repeats: Overcoming Catastrophic Forgetting For Event-Centric Temporal Knowledge Graph Completion
###### Abstract
Temporal knowledge graph (TKG) completion models typically rely on having access to the entire graph during training. However, in real-world scenarios, TKG data is often received incrementally as events unfold, leading to a dynamic non-stationary data distribution over time. While one could incorporate fine-tuning to existing methods to allow them to adapt to evolving TKG data, this can lead to forgetting previously learned patterns. Alternatively, retraining the model with the entire updated TKG can mitigate forgetting but is computationally burdensome. To address these challenges, we propose a general continual training framework that is applicable to any TKG completion method, and leverages two key ideas: (i) a temporal regularization that encourages repurposing of less important model parameters for learning new knowledge, and (ii) a clustering-based experience replay that reinforces the past knowledge by selectively preserving only a small portion of the past data. Our experimental results on widely used event-centric TKG datasets demonstrate the effectiveness of our proposed continual training framework in adapting to new events while reducing catastrophic forgetting. Further, we perform ablation studies to show the effectiveness of each component of our proposed framework. Finally, we investigate the relation between the memory dedicated to experience replay and the benefit gained from our clustering-based sampling strategy.
## 1 Introduction
Knowledge graphs (KGs) provide a powerful tool for studying the underlying structure of multi-relational data in the real world (Liang et al., 2022). They present factual information in the form of triples, each consisting of a subject entity, a relation, and an object entity. Despite the development of advanced extraction techniques, knowledge graphs often suffer from incompleteness, which can lead to errors in downstream applications. As a result, the task of predicting missing facts in knowledge graphs, also known as knowledge graph completion, has become crucial (Wang et al., 2022; Huang et al., 2022; Shen et al., 2022).
KGs are commonly extracted from real-world data streams, such as newspaper texts that change and update over time, making them inherently dynamic. The stream of data that emerges every day may contain new entities, relations, or facts. As a result, facts in a knowledge graph are usually accompanied by time information. A fact in a semantic knowledge graph, such as Yago (Kasneci et al., 2009), may be associated with a time interval, indicating when it appeared and remained in the KG. For example, consider _(Obama, President, United States, 2009-2017)_ in a semantic KG. A link between _Obama_ and _United States_ appears in the graph after 2009, and it exists until 2017. On the other hand, a fact in a temporal event-centric knowledge graph (TKG), such as ICEWS (Boschee et al., 2015), is associated with a single timestamp, indicating the exact time of the interaction between the subject and object entities. For example, in an event-centric TKG, _(Obama, meet, Merkel)_ creates a link between _Obama_ and _Merkel_ several times within 2009 to 2017 since the temporal links only show the time when an event has occurred. Therefore, event-centric TKGs exhibit a high degree of dynamism and non-stationarity in contrast to semantic KGs.
To effectively capture the temporal dependencies within entities and relations in TKGs, as well as new patterns that may emerge with new data streams, it is necessary to develop models specifically designed for TKG completion. A significant amount of research has been dedicated to developing evolving models (Messner et al., 2022; Mirtaheri et al., 2021; Jin et al., 2020; Garg et al., 2020) for TKG completion. These models typically assume evolving vector representations for
entities or relations. These representations change depending on the timestep, and they can capture temporal dependencies between entities. However, these models often assume that the entire dataset is available during training. They do not provide a systematic method for updating model parameters when new data is added. One potential solution is to retrain the model with new data. However, this approach can be resource-intensive and impractical for large-scale knowledge graphs. An alternative approach is to fine-tune the model with new data, which is more time and memory efficient. However, this approach has been shown to be susceptible to overfitting to the new data, resulting in the model forgetting previously learned knowledge, a phenomenon known as catastrophic forgetting (Fig. 1). A limited number of studies (Song and Park, 2018; Daruna et al., 2021; Wu et al., 2021) have addressed this problem for semantic knowledge graphs using continual learning approaches, with TIE (Wu et al., 2021) being the most closely related work to the current research. Nevertheless, the development of efficient and effective methods for updating models with new data remains a significant challenge in event-centric Temporal Knowledge Graphs.
We propose a framework for incrementally training a TKG completion model that consolidates the previously learned knowledge while capturing new patterns in the data. Our incremental learning framework employs regularization and experience replay to alleviate catastrophic forgetting. We propose a temporal regularization method based on elastic weight consolidation Kirkpatrick et al. (2017). By estimating an importance weight for every model parameter at each timestep, the regularization term in the objective function 'freezes' the more important parameters from past timesteps, encouraging the use of less important parameters for learning the current task. Additionally, an exponentially decaying hyperparameter in the objective function further emphasizes the importance of the most recent tasks over older ones. Our selective experience replay method uses clustering over the representation of the data points to first capture the underlying structure of the data. The points closest to the clusters' centroid are selected for experience replay. We show that the temporal regularization combined with clustering-based experience replay outperforms all the baselines in alleviating catastrophic forgetting. Our main contributions include:
1. A novel framework for incremental training and evaluation of event-centric TKGs, which addresses the challenges of efficiently updating models with new data.
2. A clustering-based experience replay method, which we show to be more effective than uniform sample selection. We also demonstrate that careful data selection for experience replay is crucial when memory is limited.
3. An augmentation of the training loss with a consolidation loss, specifically designed for TKG completion, which helps mitigate forgetting effects. We show that assigning a decayed importance to the older tasks reduces forgetting effects.
4. A thorough evaluation of the proposed methods through extensive quantitative experiments to demonstrate the effectiveness of our full training strategies compared to baselines.
## 2 Related Work
Our work is related to TKG completion, continual learning methods, and recent developments of continual learning for knowledge graphs.
### Temporal Knowledge Graph Reasoning
TKG completion methods can be broadly categorized into two main categories based on their approach for encoding time information: translation-based methods and evolving methods.
Translation-based methods, such as those proposed by Leblay and Chekol (2018); Garcia-Duran et al. (2018); Dasgupta et al. (2018); Wang and Li (2019); Jain et al. (2020), and Sadeghian et al. (2019, 2021), utilize a lower-dimensional space, such as a vector (Leblay and Chekol, 2018; Jain et al., 2020), or a hyperplane (Dasgupta et al., 2018; Wang and Li, 2019), for event timestamps and define a function to map an initial embedding to a time-aware embedding.

Figure 1: Catastrophic forgetting effect of fine-tuning. A TKG completion model is fine-tuned with the graph data at time \(t_{i}\) and achieves the highest MRR score for \(G_{i}\). The MRR scores decrease for \(G_{1},...,G_{i-1}\).
On the other hand, evolving models assume a dynamic representation for entities or relations that is updated over time. These dynamics can be captured by shallow encoders (Xu et al., 2019; Mirtaheri et al., 2019; Han et al., 2020) or sequential neural networks (Trivedi et al., 2017; Jin et al., 2020; Wu et al., 2020; Zhu et al., 2020; Han et al., 2020, 2020). For example,(Xu et al., 2019) model entities and relations as time series, decomposing them into three components using adaptive time series decomposition. DyERNIE (Han et al., 2020) propose a non-Euclidean embedding approach in the hyperbolic space. (Trivedi et al., 2017) represent events as point processes, while (Jin et al., 2020) utilizes a recurrent architecture to aggregate the entity neighborhood from past timestamps.
### Continual Learning
Continual learning (CL) or lifelong learning is a learning setting where a set of tasks are learned in a sequence. The major challenge in CL is overcoming catastrophic forgetting, where the model's performance on past learned tasks is degraded as it is updated to learn new tasks in the sequence. Experience replay (Li and Hoiem, 2018) is a major approach to mitigate forgetting, where representative samples of past tasks are replayed when updating a model to retain past learned knowledge. To maintain a memory buffer storage with a fixed size, representative samples must be selected and discarded. (Schaul et al., 2016) propose selecting samples that led to the maximum effect on the loss function when learning past tasks.
To relax the need for a memory buffer, generative models can be used to learn to generate pseudo-samples. (Shin et al., 2017) use adversarial learning for this purpose. An alternative approach is to use data generation using autoencoders (Rostami et al., 2020; Rostami and Galstyan, 2023). Weight consolidation is another important approach to mitigate catastrophic forgetting (Zenke et al., 2017; Kirkpatrick et al., 2017). The idea is to identify important weights that play an important role in encoding the learned knowledge about past tasks and consolidate them when the model is updated to learn new tasks. As a result, new tasks are learned using primarily the free learnable weights. In our framework, we combine both approaches to achieve optimal performance.
### Continual Learning for Graphs
CL in the context of graph structures remains an under-explored area, with a limited number of recent studies addressing the challenge of dynamic heterogeneous networks (Tang and Matteson, 2021; Wang et al., 2020; Zhou and Cao, 2021) and semantic knowledge graphs (Song and Park, 2018; Daruna et al., 2021; Wu et al., 2021). In particular, (Song and Park, 2018; Daruna et al., 2021) propose methods that integrate class incremental learning models with static translation-based approaches, such as TransE (Bordes et al., 2013), for addressing the problem of continual KG embeddings. Additionally, TIE (Wu et al., 2021) develops a framework that predominantly focuses on semantic KGs, and generates yearly graph snapshots by converting a fact with a time interval into multiple timestamped facts. This process can cause a loss of more detailed temporal information, such as the month and date, and results in a substantial overlap of over 95% between consecutive snapshots. TIE's frequency-based experience replay mechanism operates by sampling a fixed set of data points from a fixed-length window of past graph snapshots; for instance, at a given time \(t\), it has access to the snapshots from \(t-1\) to \(t-5\). This contrasts with the standard continual learning practice, which involves sampling data points from the current dataset and storing them in a continuously updated, fixed-size memory buffer. When compared to Elastic Weight Consolidation (EWC), the L2 regularizer used by TIE proves to be more rigid when learning new tasks over time. Furthermore, their method's evaluation is confined to shallow KG completion models like Diachronic Embeddings (Goel et al., 2020) and HyTE (Dasgupta et al., 2018).
## 3 Problem Definition
This section presents the formal definition of continual temporal knowledge graph completion.
### Temporal Knowledge Graph Reasoning
A TKG is a collection of events represented as a set of quadruples \(G=\{(s,r,o,\tau)|s,o\in\mathcal{E},r\in\mathcal{R}\}\), where \(\mathcal{E}\) and \(\mathcal{R}\) are the set of entities and relations, and \(\tau\) is the timestamp of the event occurrence.
These events represent one-time interactions between entities at a specific time. The task of temporal knowledge graph completion is to predict whether there will be an interaction between two entities at a given time. This can be done by either predicting the object entity, given the subject and relation at a certain time, or by predicting the relation between entities, given the subject and object at a certain time. In this work, we focus on the former, which can be formally defined as a ranking problem: the model should assign a higher likelihood to valid entities, ranking them above the rest of the candidate entities.
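As an illustration of this ranking view, the sketch below computes the mean reciprocal rank (MRR, the metric referred to in Figure 1) for object prediction; the scoring function, entity indices, and query tuples are placeholders rather than part of the proposed method.

```python
import numpy as np

def mean_reciprocal_rank(score_fn, queries):
    """MRR for object prediction: each query is (s, r, true_object, tau) and
    score_fn(s, r, tau) returns one score per candidate entity."""
    reciprocal_ranks = []
    for s, r, o, tau in queries:
        scores = score_fn(s, r, tau)
        rank = 1 + int(np.sum(scores > scores[o]))   # rank of the valid object
        reciprocal_ranks.append(1.0 / rank)
    return float(np.mean(reciprocal_ranks))

# Toy usage with a random scorer standing in for a trained TKG completion model
rng = np.random.default_rng(0)
num_entities = 100
queries = [(3, 1, 42, 0), (7, 2, 5, 1)]
print(mean_reciprocal_rank(lambda s, r, tau: rng.random(num_entities), queries))
```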
### Continual Learning Framework For Temporal Knowledge Graphs
A Temporal knowledge graph \(G\) can be represented as a stream of graph snapshots \(G_{1},G_{2},\ldots,G_{T}\) arriving over time, where \(G_{t}=\{(s,r,o,\tau)|s,o\in\mathcal{E},r\in\mathcal{R},\tau\in[\tau_{t},\tau_ {t+1})\}\) is a set of events occurred within time interval \([\tau_{t},\tau_{t+1})\).
The continual training of a TKG completion method involves updating the parameters of the model \(\mathcal{M}\) as new graph snapshots, consisting of a set of events, become available over time. This process aims to consolidate previously acquired information while incorporating new patterns. Formally, we define a set of tasks \(\langle\mathcal{T}_{1},\ldots,\mathcal{T}_{T}\rangle\), where each task \(\mathcal{T}_{t}=\left(D_{t}^{train},D_{t}^{test},D_{t}^{val}\right)\) is comprised of disjoint subsets of the \(G_{t}\) events, created through random splitting. A continually trained model \(\mathcal{M}\) can then be shown as a stream of models \(\mathcal{M}=\langle\mathcal{M}_{1},\ldots,\mathcal{M}_{T}\rangle\), with corresponding parameter sets \(\theta=\langle\theta_{1},\theta_{2},...,\theta_{T}\rangle\), trained incrementally as a stream of tasks arrive \(\mathcal{T}=\langle\mathcal{T}_{1},\mathcal{T}_{2},...,\mathcal{T}_{T}\rangle\).
### Base Model
In this paper, we utilize RE-Net (Jin et al., 2020), a state-of-the-art TKG completion method, as the base model. RE-Net is a recurrent architecture for predicting future interactions, which models the probability of an event occurrence based on temporal sequences of past knowledge graphs. The model incorporates a recurrent event encoder to process past events and a neighborhood aggregator to model connections at the same time stamp. Although RE-Net was initially developed for predicting future events (extrapolation), it can also be used to predict missing links in the current state of the graph (interpolation), which is the focus of this study. The model parameterizes the probability of an event \(p(\mathrm{o}_{\tau}|\mathrm{s},\mathrm{r})\) as follows:
\[p(\mathrm{o}_{\tau}|\mathrm{s},\mathrm{r})\propto\exp\left([\mathbf{e}_{s}:\mathbf{e}_{r}:\mathbf{h}_{\tau-1}(\mathrm{s},\mathrm{r})]^{\top}\cdot\mathbf{w}_{\mathrm{o}_{\tau}}\right), \tag{1}\]
where \(\mathbf{e}_{\mathrm{s}},\mathbf{e}_{\mathrm{r}}\in\mathbb{R}^{d}\) are learnable embedding vectors for the subject entity \(\mathrm{s}\) and relation \(\mathrm{r}\). \(\mathbf{h}_{\tau-1}(\mathrm{s},\mathrm{r})\in\mathbb{R}^{d}\) represents the local dynamics within a time window \((\tau-\ell,\tau-1)\) for \((\mathrm{s},\mathrm{r})\). By combining both the static and dynamic representations, RE-Net effectively captures the semantics of \((\mathrm{s},\mathrm{r})\) up to time stamp \((\tau-1)\). The model then calculates the probability of different object entities \(\mathrm{o}_{\tau}\) by passing the encoding through a multi-layer perceptron (MLP) decoder, which is defined as a linear softmax classifier parameterized by \(\mathbf{w}_{\mathrm{o}_{\tau}}\).
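A minimal sketch of this decoder is given below: the concatenation \([\mathbf{e}_{s}:\mathbf{e}_{r}:\mathbf{h}_{\tau-1}(\mathrm{s},\mathrm{r})]\) is scored against every candidate object by a linear softmax classifier, as in Equation (1). The recurrent encoding \(\mathbf{h}_{\tau-1}\) is a placeholder vector here, since the event encoder and neighborhood aggregator are not reproduced, and all dimensions are arbitrary.

```python
import numpy as np

def object_distribution(e_s, e_r, h_sr, W_o):
    """p(o | s, r) as in Eq. (1): softmax of W_o @ [e_s : e_r : h_{tau-1}(s, r)]."""
    features = np.concatenate([e_s, e_r, h_sr])       # concatenated representation
    logits = W_o @ features                           # one score per candidate object
    logits -= logits.max()                            # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Toy dimensions: d-dimensional embeddings and |E| candidate objects
d, num_entities = 8, 50
rng = np.random.default_rng(0)
e_s, e_r = rng.normal(size=d), rng.normal(size=d)
h_sr = rng.normal(size=d)                             # placeholder recurrent encoding
W_o = rng.normal(size=(num_entities, 3 * d))          # one row w_o per candidate object
print(object_distribution(e_s, e_r, h_sr, W_o).sum()) # probabilities sum to 1
```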
## 4 Methodology
Our proposed framework is a training approach that can be applied to any TKG completion model. It enables the incremental updating of model parameters with new data while addressing the issues of catastrophic forgetting associated with fine-tuning. To achieve this, we utilize experience replay and regularization techniques - methodologies commonly employed in image processing and reinforcement learning to mitigate forgetting. Additionally, we introduce a novel experience replay approach that employs clustering to identify and select data points that best capture the underlying structure of the data. Furthermore, we adopt the regularization method of EWC, as proposed in [13], which incorporates a decay parameter that assigns higher priority to more recent tasks. Our results demonstrate that the incorporation of a decay parameter into the EWC loss and prioritizing more recent tasks leads to improved performance.
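A sketch of such a consolidation term is shown below. It assumes the standard quadratic EWC penalty with per-parameter importance weights (e.g., diagonal Fisher estimates) stored after each task, scaled by an exponential decay that favours recent tasks; the exact importance estimator and hyperparameters of the proposed framework are not specified in this excerpt, so the form is illustrative only.

```python
import numpy as np

def consolidation_loss(theta, past_params, past_importance, decay=0.9, lam=1.0):
    """Decayed EWC-style penalty added to the task loss at time T.

    theta:           current parameter vector
    past_params:     snapshots [theta_1*, ..., theta_{T-1}*] taken after each task
    past_importance: per-parameter importance weights for each past task
    decay:           exponential factor emphasising more recent tasks
    lam:             overall regularization strength
    """
    T = len(past_params) + 1
    loss = 0.0
    for t, (theta_t, F_t) in enumerate(zip(past_params, past_importance), start=1):
        loss += decay ** (T - t) * np.sum(F_t * (theta - theta_t) ** 2)
    return 0.5 * lam * loss

# Toy usage with two previously learned tasks (all values illustrative)
rng = np.random.default_rng(0)
theta = rng.normal(size=10)
past_params = [rng.normal(size=10), rng.normal(size=10)]
past_importance = [rng.random(10), rng.random(10)]
print(consolidation_loss(theta, past_params, past_importance))
```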
### Experience Replay
In the field of neuroscience, the hippocampal replay, or the re-activation of specific trajectories, is a crucial mechanism for various neurological functions, including memory consolidation. Motivated by this concept, the use of experience replay in Continual Learning (CL) for deep neural networks aims to consolidate previously learned knowledge when a new task is encountered by replaying previous experiences, or training the model on a limited subset of previous data points. However, a challenge with experience replay, also known as memory-based methods, is the requirement for a large memory size to fully consolidate previous tasks [14]. Thus, careful selection of data points that effectively represent the distribution of previous data becomes necessary.
In this work, we propose the use of experience replay for continual TKG completion. Specifically, we maintain a memory buffer \(\mathcal{B}\) which, at time \(t\), contains a subset of events sampled from \(D_{1}^{train},D_{2}^{train},\ldots,D_{t-1}^{train}\). When task \(\mathcal{T}_{t}\) is presented to the model, it is trained on the data points in \(D_{t}^{train}\cup\mathcal{B}\). After training, a random subset of \(\frac{|\mathcal{B}|}{t}\) events in the memory buffer is discarded and replaced with a new subset of events sampled from \(D_{t}^{train}\). In this way, at time \(t\), where \(t\) tasks have been observed, equal portions of memory with size \(\frac{|\mathcal{B}|}{t}\) are dedicated to each task. A naive approach for selecting a subset of events from a task's training set at time \(t\) would be to uniformly sample \(\frac{|\mathcal{B}|}{t}\) events from \(D_{t}^{train}\). However, we propose a clustering-based sampling method that offers a more careful selection algorithm, which is detailed in the following section.
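A small sketch of this buffer bookkeeping follows; the function and argument names are illustrative, and the selection helper stands in for either uniform or clustering-based sampling.

```
import random

def update_buffer(buffer, new_train_events, t, buffer_size, select_fn=None, seed=0):
    """After training on task T_t, keep roughly |B|/t slots per seen task:
    discard a random |B|/t share of the old buffer and refill it from D_t^train."""
    rng = random.Random(seed)
    per_task = buffer_size // t
    keep = rng.sample(buffer, max(len(buffer) - per_task, 0))
    if select_fn is None:   # uniform sampling
        new_points = rng.sample(new_train_events, min(per_task, len(new_train_events)))
    else:                   # e.g. clustering-based selection (Section 4.1.1)
        new_points = select_fn(new_train_events, per_task)
    return keep + new_points
```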
#### 4.1.1 Clustering-based Sampling
When dealing with complex data, it is likely that various subspaces exist within the data that must be represented in the memory buffer. To address this issue, clustering methods are employed to diversify the memory buffer by grouping data points into distinct clusters. The centroids of these clusters can be utilized as instances themselves or as representatives of parts of the memory buffer (Shi et al., 2018; Hayes et al., 2019; Korycki and Krawczyk, 2021). In this study, clustering is applied to the representation of events in the training set in order to uncover the underlying structure of the data and select data points that effectively cover the data distribution. The Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) algorithm (McInnes et al., 2017) is utilized for this purpose. HDBSCAN is a hierarchical, non-parametric, density-based clustering method that groups points that are closely packed together while identifying points in low-density regions as outliers.
The use of HDBSCAN over other clustering methods is advantageous due to its minimal requirements for hyperparameters. Many clustering algorithms necessitate and are sensitive to the number of clusters as a hyperparameter. However, HDBSCAN can determine the appropriate number of clusters by identifying and merging dense space regions. Additionally, many clustering algorithms are limited to finding only spherical clusters. HDBSCAN, on the other hand, is capable of uncovering more complex underlying structures in the data. As a result of its ability to identify clusters with off-shaped structures, HDBSCAN generates a set of exemplar points for each cluster rather than a single point as the cluster centroid.
We represent each event \((s,r,o,\tau)\in D_{t}^{train}\) as a vector \([\mathbf{e}_{\mathrm{s}}:\mathbf{e}_{\mathrm{o}}]\in\mathbb{R}^{2d}\), where \(\mathbf{e}_{\mathrm{s}}\) and \(\mathbf{e}_{\mathrm{o}}\) represent the \(d\)-dimensional embeddings of \(s\) and \(o\) at time \(t\), respectively. The notation \([\cdot]\) denotes concatenation, creating a \(|D_{t}^{train}|\times 2d\) matrix that represents the training data at time \(t\). In our initial experiments, we found that data representations such as \([\mathbf{e}_{\mathrm{s}}:\mathbf{e}_{\mathrm{r}}]\), where \(\mathbf{e}_{\mathrm{r}}\) is the relation embeddings, did not significantly affect the results. Moreover, representing the data as \([\mathbf{e}_{\mathrm{s}}:\mathbf{e}_{\mathrm{r}}:\mathbf{e}_{\mathrm{o}}]\) led to a bias towards relation representation, causing data points with identical relation types to cluster together.
```
input : \(\mathcal{C}_{t}=\{\mathcal{C}_{t}^{1},\mathcal{C}_{t}^{2},\ldots,\mathcal{C}_{t}^{m}\}\) (clusters generated with HDBSCAN from \(D_{t}^{train}\), sorted in decreasing order of size); \(D_{t}^{train}\) (training set at time \(t\)); \(s\) (sample size); \(\texttt{FindExemplars}(\mathcal{C}^{i},k)\) (takes a cluster and returns the \(k\) points closest to the cluster exemplars)

def SelectPoints(\(\mathcal{C}_{t}\), \(D_{t}^{train}\), \(s\)):
    \(Q\leftarrow\emptyset\)
    for \(i\leftarrow 1\) to \(m\) do
        \(r\leftarrow\lceil\frac{|\mathcal{C}^{i}|}{\sum_{j}|\mathcal{C}^{j}|}\times s\rceil\)
        \(\mathcal{X}\leftarrow\texttt{FindExemplars}(\mathcal{C}^{i},r)\)
        \(Q\leftarrow Q\cup\{(\mathcal{X},r)\}\)
    \(S\leftarrow\emptyset\)
    while \(Q\neq\emptyset\) and \(|S|<s\) do
        \((\mathcal{X},r)\leftarrow Q.\texttt{pop}()\)
        \(S\leftarrow S\cup\{\mathcal{X}[0]\}\)
        if \(r-1>0\) then \(Q\leftarrow Q\cup\{(\mathcal{X}[1{:}],r-1)\}\)
    return \(S\)
```
**Algorithm 1** Cluster Experience Replay
We obtain clusters \(\mathcal{C}^{1},\mathcal{C}^{2},\ldots,\mathcal{C}^{m}\) by running HDBSCAN. Our algorithm then selects \(\frac{|\mathcal{B}|}{t}\) events from these clusters by prioritizing the data points closest to the exemplars and giving precedence to larger clusters. If \(\frac{|\mathcal{B}|}{t}<m\), data points are chosen only from the first \(\frac{|\mathcal{B}|}{t}\) clusters. Conversely, if \(\frac{|\mathcal{B}|}{t}>m\), the number of points selected from each cluster will depend on the cluster size, with a minimum of one data point chosen from each cluster. The specifics of this procedure are detailed further in Algorithm 1.
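The following Python sketch mirrors Algorithm 1, assuming the clusters are given as lists of events sorted so that points closest to the cluster exemplars come first (e.g., obtained from an HDBSCAN clusterer) and that the clusters are ordered by decreasing size; the helper names are ours.

```
import math
from collections import deque

def select_points(clusters, sample_size):
    """clusters: list of event lists, each sorted so that points closest to the
    cluster exemplars come first; the list itself is sorted by decreasing size.
    Returns at most `sample_size` points, with per-cluster quotas proportional
    to cluster size and selection alternating over clusters (as in Algorithm 1)."""
    total = sum(len(c) for c in clusters)
    queue = deque()
    for c in clusters:
        quota = math.ceil(len(c) / total * sample_size)
        queue.append((list(c[:quota]), quota))   # candidate pool per cluster
    selected = []
    while queue and len(selected) < sample_size:
        points, quota = queue.popleft()
        if not points or quota <= 0:
            continue
        selected.append(points[0])               # point closest to the exemplars
        queue.append((points[1:], quota - 1))    # round-robin over the clusters
    return selected
```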
### Regularization
Regularization-based approaches for CL incorporate a regularization term in the objective function to discourage changes in the weights that are crucial for previous tasks, while encouraging the utilization of other weights. One such approach, Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), estimates the importance of weights using the Fisher Information Matrix. Given a model with parameter set \(\theta\) previously trained on task \(A\), and a new task \(B\), EWC optimizes the following loss function:
\[\mathcal{L}(\theta)=\mathcal{L}_{B}(\theta)+\sum_{i}\frac{\lambda}{2}F_{i}( \theta_{i}-\theta_{A,i}^{*})^{2} \tag{2}\]
where \(\mathcal{L}_{B}\) is the loss over task \(B\) only and \(\lambda\) determines the importance of the previous task compared to task \(B\). We extend this loss function for continual TKG completion. Given a stream of tasks \(\langle\mathcal{T}_{1},\mathcal{T}_{2},\dots,\mathcal{T}_{t}\rangle\) and incrementally obtained parameter sets \(\langle\theta_{1},\theta_{2},\dots,\theta_{t}\rangle\), we define the temporal EWC loss function as follows:
\[\mathcal{L}(\theta_{t})=\mathcal{L}_{\mathcal{T}_{t}}(\theta_{t})+\sum_{\tau= 1}^{t-1}\sum_{i}\frac{\lambda}{2}F_{\tau,i}(\theta_{t,i}-\theta_{\tau,i}^{*})^{2} \tag{3}\]
where \(\mathcal{L}_{\mathcal{T}_{t}}\) is the model loss calculated only using \(\mathcal{M}_{t}\) and \(D_{t}^{train}\), \(F_{\tau}\) is the Fisher Information Matrix estimated for \(\mathcal{M}_{\tau}\) and \(\mathcal{T}_{\tau}\) (with \(F_{\tau,i}\) its \(i\)-th diagonal entry), and \(\theta_{\tau,i}^{*}\) is the \(i\)-th parameter of \(\mathcal{M}_{\tau}\). The \(\lambda\) parameter in Equation 3 assigns equal importance to all tasks from previous time steps; in practice, however, and depending on the application, different tasks might affect the current task differently, which motivates an adaptive \(\lambda_{\tau}\):
\[\mathcal{L}(\theta_{t})=\mathcal{L}_{\mathcal{T}_{t}}(\theta_{t})+\sum_{\tau= 1}^{t-1}\sum_{i}\frac{\lambda_{\tau}}{2}F_{\tau,i}(\theta_{t,i}-\theta_{\tau,i}^{*})^{2}, \tag{4}\]
where \(\lambda_{\tau}=\lambda\alpha^{t-\tau}\), \(\lambda\) is the overall EWC loss importance, and \(\alpha<1\) is the decay parameter.
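A sketch of the decayed EWC penalty of Equation 4, assuming that a parameter snapshot and a diagonal Fisher estimate are stored per past task as dictionaries keyed by parameter name (names and storage format are illustrative):

```
import torch

def decayed_ewc_penalty(model, snapshots, lam=1.0, alpha=0.9):
    """snapshots: list of (old_params, fisher) dicts for tasks 1..t-1, oldest first;
    both map parameter names to tensors. Returns
    sum_tau (lambda_tau / 2) * sum_i F_tau,i (theta_t,i - theta*_tau,i)^2
    with lambda_tau = lam * alpha ** (t - tau)."""
    t = len(snapshots) + 1
    penalty = torch.zeros(())
    for tau, (old_params, fisher) in enumerate(snapshots, start=1):
        lam_tau = lam * alpha ** (t - tau)
        for name, param in model.named_parameters():
            diff = param - old_params[name]
            penalty = penalty + (lam_tau / 2.0) * (fisher[name] * diff ** 2).sum()
    return penalty
```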
### Training and Loss Function
The final loss function of our framework, when trained with experience replay and EWC, can be summarized as follows:
\[\mathcal{L}(\theta_{t})=\mathcal{L}_{expr}(\theta_{t})+\lambda \mathcal{L}_{ewc}(\theta_{t}), \tag{5}\] \[\mathcal{L}_{expr}(\theta_{t})=\mathcal{L}_{\mathcal{T}_{t}\cup \mathcal{B}}(\theta_{t}),\] \[\mathcal{L}_{ewc}=\sum_{\tau=1}^{t-1}\sum_{i}\frac{\alpha^{t-\tau }}{2}F_{\tau,i}(\theta_{t,i}-\theta_{\tau,i}^{*})^{2}\]
The replay loss \(\mathcal{L}_{expr}\) is the model loss trained over both the current task's training set \(D_{t}^{train}\) and the data points in the memory buffer \(\mathcal{B}\). For training in batches, the number of data points selected from \(D_{t}^{train}\) and \(\mathcal{B}\) is in proportion to their size.
## 5 Experiments
In this section, we explain the evaluation protocol used to quantitatively measure catastrophic forgetting. From two known TKG datasets, we create benchmarks for TKG continual learning. We evaluate our proposed training method on these benchmarks, compare it with various baselines, and show the effectiveness of our approach in alleviating catastrophic forgetting. Finally, we conduct ablation studies on different components of our training method to validate our model.
### Datasets
We use two datasets: the Integrated Crisis Early Warning System (ICEWS) and the Global Database of Events, Language, and Tone (GDELT). Both datasets contain interactions between geopolitical actors, with daily event dates in the ICEWS dataset and 15-minute intervals in the GDELT dataset. To create benchmarks, we use a one-year period of the ICEWS dataset starting from 01-01-2015 and consider each month as a separate graph snapshot (ICEWS-M). We also use a two-year period from 01-01-2015 to 02-01-2017, dividing it into 13 graph snapshots with 2-month windows (ICEWS-2M). We split the events in each snapshot into train, validation, and test sets with a 50/25/25 percent ratio. For the GDELT, we use a 20-day period, dividing it into 3-day windows and split the data into train/test/validation sets with a 60/20/20 percent ratio. Table 1 includes statistics for each benchmark. We assume that all relations and entities are known at all times during training, and no new entities or relations are presented to the model.
### Evaluation Setup
We start by training \(\mathcal{M}\) over \(D_{1}^{train}\) and use \(D_{1}^{val}\) for hyper-parameter tuning.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & & & & avg \#quads \\ Dataset & \#tasks & task period & split ratio & train/test \\ \hline ICEWS-M & 13 & 1 month & 50/25/25 & 27k/13k \\ ICEWS-2M & 13 & 2 month & 50/25/25 & 50k/25k \\ GDELT & 21 & 3 days & 60/20/20 & 38k/13k \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset statistics
The model \(\mathcal{M}_{t}\) with parameter set \(\theta_{t}\) at time step \(t\) is first initialized with parameters from the previous time step, \(\theta_{t-1}\). Then the \(\mathcal{M}_{t}\) parameters are updated by training the model over \(D_{t}^{train}\). The training step can be a simple fine-tuning, or it can be augmented with data points for experience replay or with the temporal EWC loss.
In order to assess the forgetting effect, at time \(t\), we report the average \(\mathcal{M}_{t}\) performance over the current and all the previous test sets \(D_{1}^{test},D_{2}^{test},\ldots,D_{t}^{test}\). Precisely, we report the performance at time \(t\) as \(P_{t}=\frac{1}{t}\sum_{j=1}^{t}p_{t,j}\), where \(p_{t,j}\) is the performance of \(\mathcal{M}_{t}\) measured by either MRR or Hit@10 over \(D_{j}^{test}\).
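A small sketch of this protocol, assuming an `evaluate` function that returns MRR or Hit@10 for a model on a test set:

```
def average_past_performance(model_t, test_sets, evaluate):
    """P_t = (1/t) * sum_{j<=t} p_{t,j}: mean score of the current model over the
    current and all previous test sets."""
    scores = [evaluate(model_t, d_test) for d_test in test_sets]
    return sum(scores) / len(scores)
```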
### Comparative Study
To evaluate the performance of our incremental training framework, we conduct a comparative analysis with several baseline strategies. These include:
* **FT**: This strategy fine-tunes the model using the original loss function and the newly added data points.
* **ER**: This method applies experience replay (Rolnick et al., 2019) with randomly chosen points. It then fine-tunes the model with both newly added events and events stored in the memory buffer.
* **EWC** (Kirkpatrick et al., 2017): In this strategy, the model is trained with a loss function augmented by an EWC (Elastic Weight Consolidation) loss, as defined in Equation 3.
* **TIE**(Wu et al., 2021): Drawing from TIE's methodology, we incorporated L2 regularization into our objective function and utilized their implementation of frequency-based experience replay.
* **Full**: Our comprehensive model is trained using a clustering-based experience replay mechanism, supplemented with a decayed EWC loss.
Additionally, we train an upper-bound model, denoted as **UPP**. During the \(t\)-th step of training, this model has access to all training data from all preceding time steps, \(1,\ldots,t\). Detailed information about hyperparameter selection and implementation is provided in Appendix A. The results of this experiment, summarized in Fig. 2, demonstrate that our full training framework outperforms all other incremental training strategies in alleviating catastrophic forgetting.
Figure 2: The overall performance comparison. Average Hit@10 and average MRR reported for RE-Net incrementally trained using three benchmarks: ICEWS-M, ICEWS-2M, and GDELT.
The L2 regularization used with TIE proves to be overly restrictive, leading to an even greater performance drop than that observed with the fine-tuning strategy. Table 2 summarizes the performance of the model at the final training time step on the last test dataset (referred to as 'current'), as well as its average performance across all previous test datasets (referred to as 'average'). Despite a slight dip in performance on the current task, our method consistently delivers a higher average performance. This discrepancy underscores the trade-off inherent in our approach, which is deliberately calibrated to strike a balance between maintaining high performance across all tasks and mitigating the forgetting of prior tasks.
### Ablation Study
In this section, we present an ablation study to evaluate the effectiveness of our proposed approach. Fig. 3 illustrates the results of several variants of our model, trained on ICEWS-M and evaluated using average MRR as the performance metric. The variations include: (1) Random Experience Replay (RER), where points are sampled uniformly at random; (2) Clustering-based Experience Replay (CER), where points are sampled using the method described in Section 4.1.1; (3) regular EWC as outlined in Equation 3 (EWC); (4) Decayed Elastic Weight Consolidation (DEWC), using the decayed \(\lambda\) values outlined in Equation 4; and (5) DEWC + CER, which represents our full model.
Our results demonstrate that the individual components of our model play a role in enhancing the overall performance, with clustering-based experience replay showing superior performance compared to random experience replay. Additionally, the decayed EWC technique proves to be more effective than the traditional EWC when tasks are assigned equal importance coefficients. For a more in-depth understanding, the detailed results for all datasets used in the ablation study are provided in the Appendix B.
### EWC Variations
In order to demonstrate the effectiveness of the EWC loss with weight decay (as outlined in Equation 4), we compare it against three other variations of the EWC loss. We train the RE-Net method incrementally, using each variation of the EWC loss separately. The results of this comparison can be seen in Fig. 4, which shows the average MRR score for a model trained incrementally with each loss variation, using the ICEWS-M dataset. The other variations of the EWC loss that we compare against include: (i) only using the parameters of the previous task for regularization, and only computing the Fisher Information Matrix for the previous task; (ii) using all previous task parameters for regularization, but giving all tasks the same importance coefficient value \(\lambda\), and computing the Fisher Information Matrix for each task separately (as outlined in Equation 3); and (iii) a variation similar to the second one, but with the decayed \(\lambda_{i}\) values of Equation 4 being assigned to each task randomly.
\begin{table}
\begin{tabular}{l|c c c c|c c c|c c c|c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{4}{c}{ICEWS-M} & \multicolumn{4}{c}{ICEWS-2M} & \multicolumn{4}{c}{GDELT} \\ \cline{2-13} \multicolumn{1}{c}{} & \multicolumn{2}{c}{Current} & \multicolumn{2}{c}{Average} & \multicolumn{2}{c}{Current} & \multicolumn{2}{c}{Average} & \multicolumn{2}{c}{Current} & \multicolumn{2}{c}{Average} \\ \cline{2-13} \multicolumn{1}{c}{Model} & H@10 & MRR & H@10 & MRR & H@10 & MRR & H@10 & MRR & H@10 & MRR & H@10 & MRR \\ \hline FT & \(.503\) & \(.325\) & \(.390\pm 0.04\) & \(.239\pm 0.03\) & \(.517\) & \(.330\) & \(.411\pm 0.04\) & \(.256\pm 0.03\) & \(.421\) & \(.260\) & \(.351\pm 0.02\) & \(.214\pm 0.02\) \\ ER & \(.491\) & \(.314\) & \(.410\pm 0.03\) & \(.252\pm 0.02\) & \(.521\) & \(.331\) & \(.424\pm 0.04\) & \(.263\pm 0.03\) & \(.429\) & \(.263\) & \(.356\pm 0.02\) & \(.215\pm 0.02\) \\ EWC & \(.483\) & \(.299\) & \(.448\pm 0.03\) & \(.273\pm 0.02\) & \(.475\) & \(.294\) & \(.466\pm 0.02\) & \(.288\pm 0.02\) & \(.429\) & \(.262\) & \(.359\pm 0.02\) & \(.217\pm 0.02\) \\ TIE & \(.548\) & \(.354\) & \(.398\pm 0.05\) & \(.235\pm 0.04\) & \(.567\) & \(.362\) & \(.428\pm 0.05\) & \(.260\pm 0.04\) & \(.492\) & \(.309\) & \(.322\pm 0.05\) & \(.192\pm 0.03\) \\ \hline OURS & \(.565\) & \(.358\) & \(.462\pm 0.04\) & \(.280\pm 0.03\) & \(.555\) & \(.349\) & \(.473\pm.03\) & \(.291\pm.02\) & \(.416\) & \(.256\) & \(.365\pm.02\) & \(.222\pm.01\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison (Hit@10 and MRR) of the RE-NET model incrementally trained using three benchmarks: ICEWS-M, ICEWS-2M, and GDELT. Performance is evaluated at the final training time step over the last test dataset(Current) and across all prior test datasets (Average).
Figure 3: Ablation study on different components of the model using ICEWS-M. RER and CER stand for random and clustering-based experience replay. DEWC is the EWC with decayed \(\lambda\) values.
The results in Fig. 4 indicate that using only the parameters of the previous task for regularization performs the worst. Using the same \(\lambda\) value for all tasks has a smoothing effect on the Fisher Information Matrix, and this is why the decayed, permuted \(\lambda\) values perform better. Our proposed loss ultimately outperforms all variations, highlighting the importance of more recent tasks compared to older tasks. As a potential next step, we could investigate learning \(\lambda\) values based on task similarities.
### Memory size and Experience Replay
This experiment compares the effectiveness of clustering-based sampling and uniform sampling for experience replay when memory is limited. We use ICEWS-M and run RE-Net with two types of experience replay: (i) random (uniform) sampling (RER) and (ii) clustering-based sampling (CER), using buffer sizes from 2000 to 11000 data points. We evaluate the model performance for \(\mathcal{M}_{4}\), \(\mathcal{M}_{8}\), and \(\mathcal{M}_{12}\), which were trained incrementally with experience replay up to time 4, 8, and 12, respectively. We measure the performance of the model by taking the average MRR score over the first \(4,8,12\) test sets for \(\mathcal{M}_{4},\mathcal{M}_{8},\mathcal{M}_{12}\), respectively. Finally, we compare the two methods by subtracting the RER model performance from the CER model performance. The results, shown in Fig. 5, indicate that when memory is very small or very large, there is no significant difference between RER and CER: when memory is too small, there is not enough replayed information to have a significant impact on performance, and when memory is too large, important data points are likely to be selected even at random. However, when memory is sufficient, clustering-based sampling becomes more important.
## 6 Conclusion
We propose a framework for incrementally training a TKG completion model that consolidates the previously learned knowledge while capturing new patterns in the data. Our incremental learning framework employs regularization and experience replay techniques to alleviate the forgetting problem. Our regularization method is based on temporal elastic weight consolidation, which assigns higher importance to the parameters of the more recent tasks. Our selective experience replay method uses clustering over the representation of the data points and selects the data points that best represent the underlying data structure. Our experimental results demonstrate the effectiveness of our proposed approach in alleviating catastrophic forgetting for event-centric temporal knowledge graphs. This work is a first step towards incremental learning for event-centric knowledge graphs. Potential future work might involve exploring the effect of time on task similarities, which may differ across applications.
## 7 Limitations
In this section, we examine the limitations of our approach. Even though our training methodology runs faster and uses less memory than retraining, there remains potential for further scalability optimization. One potential avenue for improvement could involve optimizing the estimation of the Fisher Information Matrix.
Figure 4: Comparison of EWC loss variations on model performance. Blue line represents using only the previous task in EWC loss, showing a significant reduction compared to considering all tasks.
Figure 5: Comparison of average MRR for CER and RER. Results show no significant difference when memory is very small or large, but CER is more effective with sufficient memory.
Furthermore, optimizing the parameters related to the incremental training, such as buffer size and regularization coefficient, depends on the entire sequence of time steps rather than only the current one. Devising a time-efficient way for hyperparameter optimization could be extremely beneficial for this task. Additionally, while our full model has demonstrated some mitigation of the problem of catastrophic forgetting, a significant gap remains between the upper performance bound and the performance of our approach. Further research is necessary to bridge this gap and improve overall performance. Finally, our current focus on continual learning is limited to the emergence of new events and does not currently consider the possibility of new relations or entities. This limitation is in part due to the base model (RE-Net) not being inductive and is a problem that is inherent to the model itself. Future research in the field of continual learning may aim to address this limitation by considering new relations and entities, even in the context of base models that do not support these features.
|
2302.13787 | On image ideals of nice and quasi-nice derivations over a UFD | In this paper, for a field $k$ of characteristic zero and a finitely
generated $k$-algebra $R$, we give a set of generators for the image ideals of
irreducible nice and quasi-nice $R$-derivations on the polynomial ring
$R[X,Y]$, where $R$ is a UFD. | Nikhilesh Dasgupta, Animesh Lahiri | 2023-02-27T14:08:07Z | http://arxiv.org/abs/2302.13787v4 | # On image ideals of nice and quasi-nice derivations over a UFD
###### Abstract
In this paper, for a field \(k\) of characteristic zero and a finitely generated \(k\)-algebra \(R\), we give a set of generators for the image ideals of irreducible nice and quasi-nice \(R\)-derivations on the polynomial ring \(R[X,Y]\), where \(R\) is a UFD.
**Keywords**. Locally nilpotent derivation, polynomial ring, filtration.
**2022 MSC**. Primary: 14R20; Secondary: 13A50.
## 1 Introduction
We will assume all rings to be commutative with unity. By \(k\), we will always denote a field of characteristic zero. The set of all non-negative integers will be denoted by \(\mathbb{N}\). The notation \(R^{[n]}\) will be used to denote an \(R\)-algebra isomorphic to a polynomial algebra in \(n\) variables over \(R\). Unless otherwise stated, capital letters like \(X_{1},\ldots,X_{n},Z_{1},\ldots,Z_{m},X,Y,Z,U,V,W\) will be used as variables in the polynomial ring.
Let \(R\) be a \(k\)-algebra and \(B\) an \(R\)-algebra. An \(R\)-linear map \(D:B\longrightarrow B\) is said to be an \(R\)_-derivation_ if it satisfies the Leibniz rule: \(D(ab)=aDb+bDa,\ \forall a,b\in B\). An \(R\)-derivation \(D\) is said to be a _locally nilpotent derivation_ (abbrev. \(R\)-lnd) if, for each \(b\in B\), there exists \(n(b)\in\mathbb{N}\) such that \(D^{n(b)}b=0\). The set of all locally nilpotent \(R\)-derivations on \(B\) will be denoted by \(\mathrm{LND}_{R}(B)\). The _kernel_ of \(D\), denoted by \(\mathrm{Ker}(D)\), is defined to be the set \(\{b\in B\ |\ Db=0\}\). The _image ideals_ of \(D\) are the ideals of \(\mathrm{Ker}(D):=A\), defined by
\[I_{n}=A\cap D^{n}B\ \mathrm{for}\ n\geqslant 1.\]
\(I_{1}\) is called the _plinth ideal_ of \(D\).
An important problem in the area of locally nilpotent derivations is to study the minimal number of generators of the image ideals \(I_{n}\). When \(B=k^{[3]}\), it is well-known that the plinth ideal \(I_{1}\) is principal ([9, Theorem 5.12], [6, Theorem 1]). Freudenburg conjectured that all the image ideals are also principal for \(D\in\mathrm{LND}_{k}(k^{[3]})\) ([9, 11.2]). This conjecture is called the _Freeness Conjecture_.
Recently, Khaddah, Kahoui and Ouali showed that if \(R\) is a PID containing \(k\), \(B=R^{[2]}\) and \(D\in\mathrm{LND}_{R}(B)\), then all the image ideals of \(D\) are principal ([2, Theorem 4.1]) and, as a consequence, the freeness conjecture of Freudenburg is true for locally nilpotent derivations on \(k^{[3]}\) having rank at most two. However, there has not been much study of what a set of generators for each of the image ideals would look like. Therefore, it becomes interesting to find an explicit set of generators for each of the image ideals when \(B\) is a polynomial algebra over a ring (at least for the case when \(D\) is of some special type and \(R\) is a UFD or PID).
In this paper, we consider affine \(k\)-algebras \(R\) which are UFDs and investigate the image ideals of nice and quasi-nice \(R\)-derivations (see Definition 2.2) on \(R^{[2]}\). For any \(k\)-domain \(B\) and an lnd \(D\) having a slice (i.e., \(\exists\ s\in B\) such that \(Ds=1\)), it is easy to observe (see Lemma 2.8) that all the image ideals are equal to the whole ring and hence principal. In fact, if \(R\) is an affine \(k\)-domain, then any fixed point free \(R\)-lnd \(D\) (i.e., \((DB)B=B\)) on \(B=R^{[2]}\) has a slice in \(B\) and hence all the image ideals are principal (Corollary 2.10). So, it is interesting to investigate the situation when \(D\) is not fixed point free.
In Theorem 3.5 of this paper, for a UFD \(R\), we study the structure of the image ideals of an \(R\)-lnd \(D\) on \(R[X_{1},X_{2}]\) satisfying \(D^{2}X_{1}=D^{2}X_{2}=0\) and show that if \(D\) is not fixed point free, then \(I_{n}\) is generated by \(n+1\) elements. As a corollary, when \(R\) is a PID, we also obtain similar results for \(D\in\mathrm{LND}_{R}(R[X,Y,Z])\) satisfying \(D^{2}X=D^{2}Y=D^{2}Z=0\) (see Theorem 3.7). It follows from our results that _over a_ UFD \(R\)_, the image ideals of irreducible nice \(R\)-derivations on \(R^{[2]}\) are principal if and only if the derivation is fixed point free_.
In Theorem 3.12, we show that over a UFD \(R\), an \(R\)-lnd \(D\) on \(R^{[2]}\) which is not fixed point free and satisfies \(D^{2}X_{1}=0\) with \(DX_{1}\) irreducible has all its image ideals principal if and only if \(D\) is not a nice derivation. In fact, each image ideal is generated by a power of \(DX_{1}\). We have also shown that when \(R\) is a PID, the condition "\(DX_{1}\) is irreducible" can be removed under some additional hypotheses (see Theorem 3.16).
Corollary 3.3 and Proposition 3.4 are the main tools used to calculate the higher image ideals in case of both nice and quasi-nice derivations. Corollary 3.3 gives an explicit set of generators for each \(I_{n}\) under the primality of an ideal \(\widetilde{J}\), and Proposition 3.4 is used to find the generators of \(\widetilde{J}\). In fact, given an ideal \(I\) in a polynomial ring over an affine \(k\)-domain, equipped with a weighted degree map, using Proposition 3.4 one can compute the ideal generated by the top degree terms of elements of \(I\), provided at most one of the generators of \(I\) is non-homogeneous. Proof of Corollary 3.3 uses some techniques of LND-filtration introduced by B. Alhajjar in [1].
## 2 Preliminaries
First, we will recall some useful definitions.
**Definition 2.1**.: A derivation \(D\) on \(B\) is said to be _irreducible_ if there does not exist any non-unit \(b\) in \(B\) such that \(DB\subseteq bB\).
**Definition 2.2**.: Let \(R\) be a \(k\)-domain, \(B=R^{[n]}\) and \(m\) (\(\leqslant n\)) be a positive integer. An \(R\)-lnd \(D\) on \(B\) is said to be _m-quasi-nice_ if there exists a coordinate system \((X_{1},X_{2},\ldots,X_{n})\) in \(B\) such that \(D^{2}(X_{i})=0\) for all \(i\in\{1,\ldots,m\}\). If \(m=n\), then \(D\) is called _nice_.
**Definition 2.3**.: An \(m\)-quasi-nice derivation is said to be _strictly \(m\)-quasi-nice_ if it is not \(r\)-quasi-nice for any positive integer \(r>m\).
**Definition 2.4**.: Let \(B\) be a ring. A family of additive subgroups \(\{\mathcal{B}_{i}\}_{i\in\mathbb{N}}\) of \(B\) is said to be an \(\mathbb{N}\)_-filtration_ of \(B\) if the following conditions hold:
1. \(\mathcal{B}_{i}\subseteq\mathcal{B}_{i+1}\) for each \(i\in\mathbb{N}\)
2. \(\bigcup\limits_{i\in\mathbb{N}}\mathcal{B}_{i}=B\),
3. \(\mathcal{B}_{i}\mathcal{B}_{j}\subseteq\mathcal{B}_{i+j}\) for all \(i,j\in\mathbb{N}\).
An \(\mathbb{N}\)-filtration \(\{\mathcal{B}_{i}\}_{i\in\mathbb{N}}\) of \(B\) is said to be _proper_ if the following additional conditions hold:
1. \(\bigcap_{i\in\mathbb{N}}\mathcal{B}_{i}=\{0\}\),
2. \(a\in\mathcal{B}_{i}\setminus\mathcal{B}_{i-1}\), \(b\in\mathcal{B}_{j}\setminus\mathcal{B}_{j-1}\) will imply \(ab\in\mathcal{B}_{i+j}\setminus\mathcal{B}_{i+j-1}\).
**Definition 2.5**.: Let \(B\) be a ring. A map \(\theta:B\longrightarrow\mathbb{N}\cup\{-\infty\}\) is said to be an \(\mathbb{N}\)_-semi-degree map_ on \(B\) if the following conditions hold:
1. \(\theta(a)=-\infty\) if and only if \(a=0\),
2. \(\theta(ab)\leqslant\theta(a)+\theta(b)\) for all \(a,b\in B\),
3. \(\theta(a+b)\leqslant\mathrm{Max}\ \{\theta(a),\theta(b)\}\) for all \(a,b\in B\).
An \(\mathbb{N}\)-semi-degree map \(\theta\) is called a _degree map_ if in condition (iii) equality occurs for all \(a,b\in B\).
There is a one-one correspondence between proper \(\mathbb{N}\)-filtrations of \(B\) and \(\mathbb{N}\)-degree maps on \(B\). In fact, if \(\mathcal{B}:=\{\mathcal{B}_{i}\}_{i\in\mathbb{N}}\) is a proper \(\mathbb{N}\)-filtration of \(B\), then it induces a degree map \(\theta_{\mathcal{B}}\) on \(B\) defined by \(\theta_{\mathcal{B}}(a)=\begin{cases}-\infty,&\text{if }a=0,\\ n,&\text{if }a\in\mathcal{B}_{n}\setminus\mathcal{B}_{n-1}.\end{cases}\)
Conversely, if \(\theta\) is a degree map on \(B\), then \(B=\bigcup\limits_{i\in\mathbb{N}}\mathcal{B}_{i}\), where \(\mathcal{B}_{i}=\{a\in B:\theta(a)\leqslant i\}\). It is easy to check that \(\{\mathcal{B}_{i}\}_{i\in\mathbb{N}}\) is a proper \(\mathbb{N}\)-filtration of \(B\).
**Definition 2.6**.: Let \(R\) be a ring and \(B=R[X_{1},\ldots,X_{n}]\). A real valued degree map \(\theta\) on \(B\) is said to be a _weighted degree map_ on \(B\), if, for any \(p\in B\),
\[\theta(p)=\mathrm{Max}\{\theta(m):m\in M(p)\},\]
where \(M(p)\) is the set of all monomials occurring in the expression of \(p\).
If \(M^{\prime}(p):=\{m\in M(p):\theta(m)=\theta(p)\}\), then we will denote \(\sum\limits_{m\in M^{\prime}(p)}m\) by \(\widetilde{p}\). For an ideal \(I\) of \(B\), \(\widetilde{I}\) will denote the ideal of \(B\) generated by \(\{\widetilde{p}:p\in I\}\).
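As a simple illustration of this notation, if \(B=R[X,Y]\) and \(\theta\) is the weighted degree map determined by \(\theta(X)=1\), \(\theta(Y)=2\) and \(\theta(r)=0\) for all \(r\in R\setminus\{0\}\), then for \(p=X^{3}+XY+Y\) we have \(\theta(X^{3})=\theta(XY)=3>\theta(Y)=2\), so that \(\theta(p)=3\) and \(\widetilde{p}=X^{3}+XY\).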
Now, we will record some elementary facts and some important results which will be used throughout the rest of the paper. First, we state and prove an elementary result which follows from the higher product rule for a derivation ([9, Proposition 1.6]).
**Lemma 2.7**.: _Let \(R\) be a \(k\)-algebra, \(B\) an \(R\)-algebra, \(D\in\operatorname{LND}_{R}(B)\) and \(f_{1},\ldots,f_{n}\in B\) with \(\deg_{D}(f_{i})=m_{i}\). If \(m=\sum_{i=1}^{n}m_{i}\), then we have the following:_
1. \(D^{m}(f_{1}\ldots f_{n})=\frac{m!}{\prod_{i=1}^{n}m_{i}!}\prod_{i=1}^{n}D^{m_{ i}}f_{i}\)_,_
2. \(D^{m+t}(f_{1}\ldots f_{n})=0\)_, where_ \(t\geqslant 1\)_. In particular,_ \(\deg_{D}(f_{1}\ldots f_{n})=m\)_._
Proof.: Proof of (i) follows from the higher product rule, \(D^{m}(fg)=\sum\limits_{i+j=m}\binom{m}{i}D^{i}fD^{j}g\).
The proof of (ii) follows from the fact that for each \(i\in\{1,\ldots,n\}\), \(D^{l}f_{i}=0\) if and only if \(l>m_{i}\).
Next, we state an observation describing the image ideals of a locally nilpotent derivation with a slice.
**Lemma 2.8**.: _Let \(R\) be a \(k\)-algebra, \(B\) an \(R\)-algebra, \(D\in\operatorname{LND}_{R}(B)\) and \(A=\operatorname{Ker}(D)\). Then the following are equivalent._
1. \(D\) _has a slice._
2. \(I_{1}=A\)_._
3. \(I_{n}=A\) _for all_ \(n\in\mathbb{N}\)_._
4. \(I_{n}=A\) _for some_ \(n\in\mathbb{N}\)_._
Proof.: (i) \(\Longrightarrow\) (iii): Let \(s\in B\) be such that \(Ds=1\). Then, for each \(n\in\mathbb{N}\), we have
\[1=D^{n}(\frac{s^{n}}{n!})\in A\cap D^{n}B=I_{n}.\]
(ii) \(\Longrightarrow\) (i): If \(I_{1}=A\), then \(1\in I_{1}\subseteq DB\).
The implications (iii) \(\Longrightarrow\) (ii), (iii) \(\Longrightarrow\) (iv) and (iv) \(\Longrightarrow\) (ii) are obvious.
For a Noetherian \(k\)-domain \(R\) and an irreducible \(D\in\operatorname{LND}_{R}(R[X,Y])\), the structure of \(\operatorname{Ker}(D)\) is given in [4, Theorem 4.7].
**Theorem 2.9**.: _Let \(R\) be a Noetherian domain containing \(k\) and \(B=R[X,Y]\). Let \(D\) be an irreducible \(R\)-lnd on \(B\) and \(A=\operatorname{Ker}(D)\). Then \(A=R^{[1]}\) if and only if one of the following conditions hold:_
1. \(DX\) _and_ \(DY\) _form a regular sequence in_ \(B\)_._
2. \((DX,DY)B=B\)_._
_Moreover, if_ (ii) _holds, then \(B=A^{[1]}\)._
As an immediate corollary to Theorem 2.9 we have the following result.
**Corollary 2.10**.: _Let \(R\) be a Noetherian domain containing \(k\), \(B=R^{[2]}\) and \(D\in\operatorname{LND}_{R}(B)\) be fixed point free. Then \(I_{n}=A\) for all \(n\in\mathbb{N}\)._
Proof.: The proof follows from Lemma 2.8 and Theorem 2.9.
The following result of Z. Wang ([11, Lemma 4.2]) describes \(\operatorname{Ker}(D)\) for an irreducible nice (or quasi-nice) \(R\)-lnd \(D\) on \(R[X,Y]\), when \(R\) is a UFD containing \(k\).
**Lemma 2.11**.: _Let \(R\) be a_ UFD _containing \(k\), \(B=R[X,Y]\) and \(D\) an irreducible \(R\)-lnd. Then, the following hold:_
* _If_ \(D^{2}X=0\)_, then_ \(\operatorname{Ker}(D)=R[bY+f(X)]\)_, where_ \(b\in R\) _and_ \(f(X)\in R[X]\)_. Moreover,_ \(DX\in R\) _and_ \(DY\in R[X]\)_._
* _If_ \(D^{2}X=D^{2}Y=0\)_, then_ \(D=b\frac{\partial}{\partial X}-a\frac{\partial}{\partial Y}\) _for some_ \(a,b\in R\)_. Moreover,_ \(\operatorname{Ker}(D)=R[aX+bY]\)_._
* _If_ \(R\) _is a_ PID _and_ \(D^{2}X=D^{2}Y=0\)_, then_ \(D\) _has a slice._
The following result ([7, Theorem 3.6]) describes the structure of the kernels of nice derivations on \(R^{[3]}\), where \(R\) is a PID containing \(k\).
**Theorem 2.12**.: _Let \(R\) be a_ PID _containing \(k\) and \(B=R[X,Y,Z]\). Let \(D\) be an irreducible \(R\)-lnd on \(B\) with \(D^{2}X=D^{2}Y=D^{2}Z=0\). Let \(A=\operatorname{Ker}(D)\). Then there exists a coordinate system \((U,V,W)\) in \(B\) related to \((X,Y,Z)\) by a linear change such that the following hold:_
* \(A\) _contains a non-zero linear form of_ \(\{X,Y,Z\}\)_._
* \(\operatorname{rank}(D)\leq 2\)_. In particular,_ \(A=R^{[2]}\)_._
* \(A=R[U,gV-fW]\)_, where_ \(DV=f\)_,_ \(DW=g\) _and_ \(f,g\in R[U]\) _be such that_ \(\gcd_{R[U]}(f,g)=1\)_._
* _Either_ \(f\) _and_ \(g\) _are comaximal in_ \(B\) _or they form a regular sequence in_ \(B\)_. Moreover, if they are comaximal, then_ \(B=A^{[1]}\) _and_ \(\operatorname{rank}(D)=1\)_; and if they form a regular sequence, then_ \(B\) _is not_ \(A\)_-flat and_ \(\operatorname{rank}(D)=2\)_._
Next, we state a generalization of a result proved by S. Kaliman and L. Makar-Limanov ([10]). The proof given by the authors in [10] for \(R=\mathbb{C}\) works for any affine \(k\)-domain \(R\) with a slight modification of the hypotheses.
**Lemma 2.13**.: _Let \(R\) be an affine \(k\)-domain, \(C=R[X_{1},\ldots,X_{n}]\) and \(\lambda\) a weighted degree map on \(C\). Suppose \(J\) is an ideal of \(C\) contained in the ideal \((X_{1},\ldots,X_{n})C\) and \(\pi:C\to C/J\) is the canonical map. Define \(\eta:C/J\longrightarrow\mathbb{R}\cup\{-\infty\}\) by_
\[\eta(\alpha)=\begin{cases}\inf\{\lambda(h):h\in\pi^{-1}(\alpha)\},&\text{if } \alpha\neq 0,\\ -\infty,&\text{if }\alpha=0,\end{cases}\]
_Then, for \(\alpha\neq 0\), the following hold:_
* _there exists_ \(h\in\pi^{-1}(\alpha)\) _such that_ \(\widetilde{h}\notin\widetilde{J}\)_,_
* \(\eta(\alpha)=\lambda(h)\) _for some_ \(h\in\pi^{-1}(\alpha)\) _if and only if_ \(\widetilde{h}\notin\widetilde{J}\)_,_
* \(\eta\) _is a semi-degree map on_ \(C/J\)_. If_ \(\widetilde{J}\) _is a prime ideal of_ \(C\)_, then_ \(\eta\) _is a degree map._
Main Results
In this section we will prove our main results. _Throughout the rest of this section, \(R\) will denote a finitely generated \(k\)-domain_. Let \(B=R[X_{1},\ldots,X_{n}]\). Let \(D\in\operatorname{LND}_{R}(B)\setminus\{0\}\), \(A=\operatorname{Ker}(D)\) and \(I_{1}=(Ds_{1},\ldots,Ds_{m})A\) for some \(s_{1},\ldots,s_{m}\in B\). Without loss of generality, we can assume that \(s_{i}(0,\ldots,0)=0\) for each \(i\).
Consider the filtration \(\{\mathcal{F}_{j}\}_{j\in\mathbb{N}}\) defined by \(\mathcal{F}_{j}=\operatorname{Ker}(D^{j+1})\). Then it is easy to see the following:
1. \(\mathcal{F}_{0}=A\),
2. \(\mathcal{F}_{1}=\sum_{l=1}^{m}s_{l}\mathcal{F}_{0}+\mathcal{F}_{0}\),
3. \(I_{j}=\big{(}D^{j}(\mathcal{F}_{j})\big{)}A\) for each \(j\in\mathbb{N}\).
Let \(u_{l}:=\deg_{D}(X_{l})\) for \(l=1,\ldots,n.\) For each \(j\in\mathbb{N}\), define \(\mathcal{G}_{j}\) in the following way:
\[\mathcal{G}_{j}=\sum_{\begin{subarray}{c}j_{1}+\cdots+j_{m}+u_{1}j_{m+1}+ \cdots+u_{n}j_{m+n}=j\\ j_{1},\cdots,j_{m+n}\geqslant 0\end{subarray}}\big{(}s_{1}^{j_{1}}\ldots s_{m}^{j_ {m}}X_{1}^{j_{m+1}}\ldots X_{n}^{j_{m+n}}\big{)}\mathcal{F}_{0}.\]
We note the following easy observations:
1. \(\mathcal{G}_{0}=\mathcal{F}_{0}\),
2. \(\mathcal{G}_{i}\mathcal{G}_{j}\subseteq\mathcal{G}_{i+j}\), for all \(i,j\in\mathbb{N}\).
For each \(j\in\mathbb{N}\), let \(\mathcal{H}_{j}=\sum_{i=0}^{j}\mathcal{G}_{i}\). Clearly, for each \(j\in\mathbb{N}\), \(\mathcal{H}_{j}\subseteq\mathcal{H}_{j+1}\). For \(i,j,q,r\in\mathbb{N}\) with \(0\leqslant q\leqslant i\) and \(0\leqslant r\leqslant j\), since \(\mathcal{G}_{q}\mathcal{G}_{r}\subseteq\mathcal{G}_{q+r}\subseteq\mathcal{H} _{i+j}\), we have \(\mathcal{H}_{i}\mathcal{H}_{j}\subseteq\mathcal{H}_{i+j}\). Let \(\alpha=\sum a_{i_{1},\ldots,i_{n}}{X_{1}}^{i_{1}}\ldots{X_{n}}^{i_{n}}\in B\) and \(S=\{(i_{1},\ldots,i_{n})\in\mathbb{N}^{n}:a_{i_{1},\ldots,i_{n}}\neq 0\}\).
If \(n_{0}=\operatorname{Max}\,\{u_{1}i_{1}+\cdots+u_{n}i_{n}:(i_{1},\ldots,i_{n}) \in S\}\), then \(\alpha\in\mathcal{H}_{n_{0}}\). So, \(B=\bigcup_{j\in\mathbb{N}}\mathcal{H}_{j}\). Hence \(\{\mathcal{H}_{j}\}_{j\in\mathbb{N}}\) is an \(\mathbb{N}\)-filtration of \(B\). When \(\{\mathcal{H}_{j}\}_{j\in\mathbb{N}}\) is a proper \(\mathbb{N}\)-filtration, the following lemma gives a generating set for each \(I_{j}\).
**Lemma 3.1**.: _If \(\{\mathcal{H}_{j}\}_{j\in\mathbb{N}}\) is a proper \(\mathbb{N}\)-filtration, then \(I_{j}=\big{(}D^{j}(\mathcal{G}_{j})\big{)}A\) for all \(j\in\mathbb{N}\)._
Proof.: By Lemma 2.7, \(\mathcal{H}_{j}\subseteq\mathcal{F}_{j}\) for each \(j\in\mathbb{N}\). It is clear that \(\mathcal{H}_{0}=\mathcal{G}_{0}=\mathcal{F}_{0}\) and \(\mathcal{F}_{1}\) (\(=\sum_{l=1}^{m}s_{l}\mathcal{F}_{0}+\mathcal{F}_{0})\subseteq\mathcal{G}_{1}+ \mathcal{G}_{0}=\mathcal{H}_{1}\). Hence \(\mathcal{F}_{j}=\mathcal{H}_{j}\) for \(j=0,1\). Let \(\alpha\in\mathcal{F}_{j}\) for some \(j\in\mathbb{N}\) (\(j\geqslant 2\)) and \(\theta\) be the degree map on \(B\) corresponding to the filtration \(\{\mathcal{H}_{j}\}_{j\in\mathbb{N}}\) of \(B\). Let \(s\in\mathcal{F}_{1}\setminus\mathcal{F}_{0}\) and \(f=Ds\). Since \(B[1/f]=A[1/f][s]\), \(\exists\ \epsilon\in\mathbb{N}\) and \(a_{0},\ldots a_{j}\in A\) such that \(f^{\epsilon}\alpha=a_{0}+a_{1}s+\cdots+a_{j}s^{j}\). Since \(\mathcal{F}_{j}=\mathcal{H}_{j}\) for \(j=0,1\), we have \(\theta(s)=1\) and \(\theta(f)=0=\theta(a_{i})\) for all \(i=0,\ldots,j\). Hence \(\theta(\alpha)\leqslant j\). So, \(\alpha\in\mathcal{H}_{j}\). Thus \(\mathcal{H}_{j}=\mathcal{F}_{j}\) for each \(j\in\mathbb{N}\). Since \(\mathcal{H}_{j}=\mathcal{G}_{j}+\mathcal{H}_{j-1}\), we have \(\mathcal{F}_{j}=\mathcal{G}_{j}+\mathcal{F}_{j-1}\) for all \(j\in\mathbb{N}\). Hence \(I_{j}=\big{(}D^{j}(\mathcal{G}_{j})\big{)}A\) for all \(j\in\mathbb{N}\).
Now, we will consider the situation when \(\operatorname{Ker}(D)=R[z_{1},\ldots,z_{r}]\) for some \(z_{1},\ldots,z_{r}\) in \(R[X_{1},\ldots,X_{n}]\) with \(z_{i}(0,\ldots,0)=0\) for all \(i\), \(1\leqslant i\leqslant r\). We define a weighted \(\mathbb{N}\)-degree map \(\lambda\) on \(C:=R[X_{1},\ldots,X_{n},Z_{1},\ldots,Z_{r},S_{1},\ldots,S_{m}]\) (\(=R^{[n+r+m]}\)), by setting \(\lambda(r)=0\) for all \(r\in R\), \(\lambda(X_{l})=u_{l}\) for \(1\leqslant l\leqslant n\), \(\lambda(Z_{l})=0\) for \(1\leqslant l\leqslant r\) and \(\lambda(S_{l})=1\) for \(1\leqslant l\leqslant m\). Let \(J\) be the ideal of \(C\) generated by the set \(\{Z_{1}-z_{1},\ldots,Z_{r}-z_{r},S_{1}-s_{1},\ldots,S_{m}-s_{m}\}\). Then \(C/J\cong B\). By Lemma 2.13, \(\lambda\) induces a semi-degree map \(\eta\) on \(B\) defined for each \(\alpha\in B\setminus\{0\}\),
\[\eta(\alpha)=\inf\{\lambda(h):h\in\pi^{-1}(\alpha)\},\text{where $\pi:C\to B$ is the canonical map.}\]
The next result shows that the filtration induced by \(\eta\) is \(\{\mathcal{H}_{j}\}_{j\in\mathbb{N}}\).
**Lemma 3.2**.: \(\{\mathcal{H}_{i}\}_{i\in\mathbb{N}}\) _is the filtration induced by the semi-degree map \(\eta\) on \(B\)._
Proof.: Let \(\{\mathbb{H}_{i}\}_{i\in\mathbb{N}}\) is the filtration of \(C\) induced by the weighted degree map \(\lambda\) on \(C\). Then, \(\mathbb{H}_{i}=\bigoplus\limits_{j=0}^{i}\mathbb{G}_{j}\), where, for each \(j\) with \(0\leqslant j\leqslant i\),
\[\mathbb{G}_{j}=\bigoplus\limits_{\begin{subarray}{c}j_{1}+\cdots+j_{m}+u_{1} j_{m+1}+\cdots+u_{n}j_{m+n}=j\\ j_{1},\cdots,j_{m+n}\geqslant 0\end{subarray}}\big{(}S_{1}^{j_{1}}\ldots S_{m}^{ j_{m}}X_{1}^{j_{m+1}}\ldots X_{n}^{j_{m+n}}\big{)}R[Z_{1},\ldots,Z_{r}].\]
Clearly, for each \(j\) with \(0\leqslant j\leqslant i\), we have \(\pi(\mathbb{G}_{j})=\mathcal{G}_{j}\) and hence for each \(i\in\mathbb{N}\), \(\pi(\mathbb{H}_{i})=\mathcal{H}_{i}\).
Let \(\alpha\in B\setminus\{0\}\) be such that \(\eta(\alpha)\leqslant i\). By part (i) and (ii) of Lemma 2.13, there exists \(h\in\pi^{-1}(\alpha)\) such that \(\lambda(h)=\eta(\alpha)\) and hence \(h\in\mathbb{H}_{i}\). So, \(\alpha\bigm{(}=\pi(h)\big{)}\in\pi(\mathbb{H}_{i})=\mathcal{H}_{i}\).
Since \(\eta(X_{l})\leqslant\lambda(X_{l})=u_{l}\) for all \(l\in\{1,\ldots,n\}\), \(\eta(s_{l})\leqslant\lambda(S_{l})=1\) for all \(l\in\{1,\ldots,m\}\), \(\eta(z_{l})\leqslant\lambda(Z_{l})=0\) for all \(l\in\{1,\ldots,r\}\) and \(\eta\) is a semi-degree map on \(B\), we have \(\eta(\alpha)\leqslant i\) for all \(\alpha\in\mathcal{H}_{i}\).
From Lemma 2.13(iii), Lemma 3.2 and Lemma 3.1, we immediately deduce the following corollary.
**Corollary 3.3**.: _If \(\widetilde{J}\) is a prime ideal of \(C\), then for each \(j\geqslant 0\) we have \(I_{j}=\big{(}D^{j}(\mathcal{G}_{j})\big{)}A\)._
Suppose \(\lambda\) is a weighted degree map on \(B\)\((=R^{[n]})\) and \(J=(f_{1},\ldots,f_{m})\) is an ideal of \(B\). It is interesting to investigate conditions under which \(\widetilde{J}=(\widetilde{f_{1}},\ldots,\widetilde{f_{m}})\). If \(m=1\), then it is easy to see that \(\widetilde{J}=(\widetilde{f_{1}})\). But for \(m\geqslant 2\) this is not the case. The following proposition gives sufficient conditions under which \(\widetilde{J}=(\widetilde{f_{1}},\ldots,\widetilde{f_{m}})\).
**Proposition 3.4**.: _Let \(\lambda\) be a weighted degree map on \(B\)\((=R^{[n]})\) and \(J=(f_{1},\ldots,f_{m})\) an ideal of \(B\). Suppose the following conditions hold:_
1. \(\widetilde{f_{i}}=f_{i}\) _for_ \(1\leqslant i\leqslant m-1\)_,_
2. \(\widetilde{f_{m}}\) _is a non-zerodivisor in_ \(B/(f_{1},\ldots,f_{m-1})\)_._
_Then \(\widetilde{J}=(f_{1},\ldots,f_{m-1},\widetilde{f_{m}})\)._
Proof.: Since for each \(i\), \(1\leqslant i\leqslant m\), \(\widetilde{f_{i}}\in\widetilde{J}\), we clearly have \((f_{1},\ldots,f_{m-1},\widetilde{f_{m}})\subseteq\widetilde{J}\).
Let \(I=\{1,\ldots,m\}\) and \(0\neq g=\sum\limits_{i\in I}r_{i}f_{i}\), where \(r_{i}\in B\) for all \(i\in I\). Let \(J_{m}:=(f_{1},\ldots,f_{m-1})\) and for each \(i\in I\), \(\lambda(r_{i})=u_{i}\) and \(\lambda(f_{i})=v_{i}\). For \(f\in B\), by \(f^{(l)}\) we will denote the \(l\)-degree homogeneous component of \(f\) and for \(0\leqslant s_{i}\leqslant u_{i}\), \(0\leqslant t_{i}\leqslant v_{i}\), we define \(x(i;s_{i};t_{i}):=r_{i}^{(u_{i}-s_{i})}f_{i}^{(v_{i}-t_{i})}\). Then \(g=\sum\limits_{\begin{subarray}{c}i\in I\\ 0\leqslant s_{i}\leqslant u_{i}\\ 0\leqslant t_{i}\leqslant v_{i}\end{subarray}}x(i;s_{i};t_{i})\).
Let \(M=\{x(i;s_{i};t_{i}):i\in I,0\leqslant s_{i}\leqslant u_{i},0\leqslant t_{i} \leqslant v_{i}\}\), \(\mu_{0}:=\mathrm{Max}\{\lambda(x):x\in M\}\) and \(M_{0}:=\{x\in M:\lambda(x)=\mu_{0}\}\). Clearly, if \(x(i;s_{i};t_{i})\in M_{0}\), then \(s_{i}=0=t_{i}\).
If \(\sum\limits_{x\in M_{0}}x\neq 0\), then \(\widetilde{g}=\sum\limits_{x\in M_{0}}x=\sum\limits_{M_{0}}\widetilde{r_{i}} \widetilde{f_{i}}\in(J_{m},\widetilde{f_{m}}).\) If \(\sum\limits_{x\in M_{0}}x=0\), then we will define \(\mu_{1}\) and \(M_{1}\) in the following way:
\[\mu_{1}:=\text{Max}\{\lambda(x):x\in M\setminus M_{0}\}\text{ and }M_{1}:=\{x\in M \setminus M_{0}:\lambda(x)=\mu_{1}\}.\]
If \(\sum\limits_{x\in M_{1}}x\neq 0\), then \(\widetilde{g}=\sum\limits_{x\in M_{1}}x\). In this situation we note that for some \(t_{i}>0\), \(x(i;s_{i};t_{i})\in M_{1}\) will imply \(i=m\). Clearly, \(x(m;s_{m};t_{m})\in M_{1}\) for some \(t_{m}>0\) only if \(x(m;s_{m},0)\in M_{0}\) and hence \(s_{m}=0\). Since \(\sum\limits_{x\in M_{0}}x=0\), by hypothesis (ii) of the proposition, \(x(m;0;0)\in M_{0}\) only if \(r_{m}^{(u_{m})}\in J_{m}\). So, for some \(t_{m}>0\), \(x(m;s_{m};t_{m})\in M_{1}\) only if \(x(m;s_{m};t_{m})\ \big{(}=x(m;0;t_{m})=r_{m}^{(u_{m})}f_{m}^{(v_{m}-t_{m})} \big{)}\in J_{m}\). Hence, \(\sum\limits_{x\in M_{1}}x\neq 0\) will imply \(\widetilde{g}\in(J_{m},\widetilde{f_{m}})\).
If \(\sum\limits_{M_{1}}x=0\), then we will define \(\mu_{2}\) and \(M_{2}\) in the following way and proceed similarly.
\[\mu_{2}:=\text{Max}\{\lambda(x):x\in(M\setminus M_{0})\setminus M_{1}\}\]
\[\text{and }M_{2}:=\{x\in(M\setminus M_{0})\setminus M_{1}:\lambda(x)=\mu_{2}\}.\]
Now, we consider the following statement:
\[P(n):\text{ If }\sum\limits_{x\in M_{j}}x=0\text{ for all }j<n,\text{ then, for }t_{m}>0,x(m;s_{m};t_{m})\in M_{n}\text{ only if }x(m;s_{m};t_{m})\in J_{m}.\]
We have already observed that \(P(1)\) is true. Assume that \(P(1),P(2),\ldots,P(n)\) are true for some \(n\geqslant 2\). We will show that \(P(n+1)\) is also true. Let \(\sum\limits_{x\in M_{j}}x=0\) for all \(j\leqslant n\) and \(x(m;s_{m}^{{}^{\prime}};t_{m}^{{}^{\prime}})\in M_{n+1}\) for some fixed \(s_{m}^{{}^{\prime}},t_{m}^{{}^{\prime}}\) with \(0\leqslant s_{m}^{{}^{\prime}}\leqslant u_{m}\) and \(1\leqslant t_{m}^{{}^{\prime}}\leqslant v_{m}\). Then \(x(m;s_{m}^{{}^{\prime}};0)\in M_{j_{0}}\) for some \(0\leqslant j_{0}\leqslant n\). We observe the following:
1. \(0=\sum\limits_{M_{j_{0}}}x(i;s_{i};t_{i})=\sum\limits_{\begin{subarray}{c}M_{ j_{0}}\\ i\neq m\end{subarray}}x(i;s_{i};0)+\sum\limits_{M_{j_{0}}}x(m;s_{m};0)+\sum \limits_{\begin{subarray}{c}M_{j_{0}}\\ t_{m}>0\end{subarray}}x(m;s_{m};t_{m}),\)
2. \(\sum\limits_{\begin{subarray}{c}M_{j_{0}}\\ i\neq m\end{subarray}}x(i;s_{i};0)\in J_{m},\)
3. Since \(P(j_{0})\) is true, \(\sum\limits_{\begin{subarray}{c}M_{j_{0}}\\ t_{m}>0\end{subarray}}x(m;s_{m};t_{m})\in J_{m}\),
4. Since \(x(m;s_{m}^{{}^{\prime}};0)\in M_{j_{0}}\), \(x(m;s_{m};0)\in M_{j_{0}}\) if and only if \(s_{m}=s_{m}^{{}^{\prime}}\).
Hence \(x(m,s_{m}^{{}^{\prime}},0)\in J_{m}\) and by hypothesis (ii), \(r_{m}^{(u_{m}-s_{m}^{{}^{\prime}})}\in J_{m}\). So, \(x(m;s_{m}^{{}^{\prime}};t_{m}^{{}^{\prime}})\in J_{m}\). Thus, \(P(n+1)\) is true and hence \(P(n)\) is true for all \(n\in\mathbb{N}\).
Now, for \(n\in\mathbb{N}\), if \(\sum\limits_{x\in M_{j}}x=0\) for all \(j<n\) and \(\sum\limits_{x\in M_{n}}x\neq 0\), then \(\widetilde{g}=\sum\limits_{x\in M_{n}}x\in(J_{m},\widetilde{f_{m}})\).
If \(\sum\limits_{x\in M_{j}}x=0\) for all \(j\leqslant n\), then we will consider \(\mu_{n+1},M_{n+1}\) and repeat the arguments for \(M_{n+1}\). Since \(M\) is a finite set, this process will stop after finitely many steps.
We now study the structure of the image ideals of an irreducible nice derivation on \(R^{[2]}\), where \(R\) is a UFD.
**Theorem 3.5**.: _Let \(R\) be a_ UFD _and \(B=R[X_{1},X_{2}]\). Let \(D\) be an irreducible \(R\)-lnd on \(B\) such that \(D^{2}X_{1}=D^{2}X_{2}=0\). Let \(A=\mathrm{Ker}(D)\). Then, for each \(j\in\mathbb{N},I_{j}=(DX_{1},DX_{2})^{j}A\)._
Proof.: By (ii) of Lemma 2.11, there exist \(f_{1},f_{2}\) in \(R\) such that the following hold:
* \(DX_{1}=f_{1}\) and \(DX_{2}=f_{2}\),
* \(A=R[f_{2}X_{1}-f_{1}X_{2}]\) (\(=R^{[1]}\)).
First, we will show that \(I_{1}=(f_{1},f_{2})A\). It is clear that \((f_{1},f_{2})A\subseteq I_{1}.\) For the converse, let \(h\in B\) be such that \(Dh\in A\). Let \(u:=f_{2}X_{1}-f_{1}X_{2}\). Since \(D\) is irreducible, \(f_{1},f_{2}\) are mutually coprime and hence \(u\) is irreducible in \(B\). Then \(Dh=P(u)\in R[u]\). Since \(D(f_{1}h-X_{1}P(u))=0=D(f_{2}h-X_{2}P(u))\), we have
\[f_{1}h=X_{1}P(u)+Q_{1}(u)\text{ and }f_{2}h=X_{2}P(u)+Q_{2}(u)\text{ for some }Q_{1}(u),Q_{2}(u)\in R[u].\]
Therefore, \(uh=(f_{2}h)X_{1}-(f_{1}h)X_{2}=X_{1}Q_{2}(u)-X_{2}Q_{1}(u)\).
Let \(c_{1},c_{2}\in R\) and \(P_{1}(u),P_{2}(u)\in R[u]\) be such that \(Q_{i}(u)=c_{i}+uP_{i}(u)\) for \(i=1,2\). Then \(uh=u(X_{1}P_{2}(u)-X_{2}P_{1}(u))+(c_{2}X_{1}-c_{1}X_{2})\) and hence \(u|(c_{2}X_{1}-c_{1}X_{2})\). Let \(c_{2}X_{1}-c_{1}X_{2}=cu\) for some \(c\in B\). Then \(h=X_{1}P_{2}(u)-X_{2}P_{1}(u)+c\). It is clear that \(c\in R\) and hence \(Dh=P_{2}(u)f_{1}-P_{1}(u)f_{2}\in(f_{1},f_{2})A\).
Let \(C=R[X_{1},X_{2},Z,S_{1},S_{2}]\) (\(=R^{[5]}\)) and \(\lambda\) a weighted degree map on \(C\) defined by \(\lambda(X_{1})=\lambda(X_{2})=\lambda(S_{1})=\lambda(S_{2})=1\), \(\lambda(Z)=0\). Suppose \(J\coloneqq(Z-u,S_{1}-X_{1},S_{2}-X_{2})C\). By Proposition 3.4, \(\widetilde{J}=(u,S_{1}-X_{1},S_{2}-X_{2})\). Since \(\widetilde{J}\) is a prime ideal of \(C\), by Corollary 3.3, \(I_{j}=\sum\limits_{i_{1}+i_{2}=j}D^{j}(X_{1}^{i_{1}}X_{2}^{i_{2}})A\). Hence, by Lemma 2.7, \(I_{j}=\sum\limits_{i_{1}+i_{2}=j}{f_{1}}^{i_{1}}{f_{2}}^{i_{2}}A\). Thus, \(I_{j}=(f_{1},f_{2})^{j}A\).
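As a simple illustration of Theorem 3.5, one may take \(R=k[a,b]\) (\(=k^{[2]}\)) and let \(D\) be the \(R\)-lnd on \(B=R[X_{1},X_{2}]\) given by \(DX_{1}=b\) and \(DX_{2}=-a\). Then \(D\) is an irreducible nice derivation and, by Lemma 2.11(ii), \(A=R[aX_{1}+bX_{2}]\). Theorem 3.5 yields \(I_{j}=(a,b)^{j}A\) for every \(j\geqslant 1\); since \(a,b\) form a regular sequence in \(A\), none of these ideals is principal.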
The following results are immediate corollaries of Theorem 2.9 and Theorem 3.5.
**Corollary 3.6**.: _Let \(R\) be a_ UFD _and \(B=R[X,Y]\). Let \(D\) be an irreducible \(R\)-lnd on \(B\) such that \(D^{2}X=D^{2}Y=0\). Let \(A=\mathrm{Ker}(D)\). Then, for each \(n\geq 1\) the following hold._
* _If_ \(D\) _is fixed point free, then_ \(I_{n}=A\)_;_
* _If_ \(D\) _is not fixed point free, then_ \(I_{n}\) _is generated by_ \(n+1\) _elements of_ \(R\) _with_ \(\mathrm{grade}_{A}(I_{n})=2\)\({}^{1}\)_, i.e.,_ \(I_{n}\) _is not principal._ Footnote 1: For a Noetherian ring \(R\) and an ideal \(I\) of \(R\), \(\mathrm{grade}_{R}(I)\) denotes the length of a maximal \(R\)-regular sequence contained in \(I\).
**Theorem 3.7**.: _Let \(R\) be a_ PID _and \(B=R[X,Y,Z]\). Let \(D\) be an irreducible \(R\)-lnd on \(B\) such that \(D^{2}X=D^{2}Y=D^{2}Z=0\). Let \(A=\mathrm{Ker}(D)\). Then, for each \(n\geqslant 1\) the following hold:_
* _If_ \(D\) _is fixed point free, then_ \(D\) _has a slice and hence_ \(I_{n}=A\)_;_
* _If_ \(D\) _is not fixed point free, then_ \(I_{n}\) _is generated by_ \(n+1\) _elements with_ \(\mathrm{grade}_{A}(I_{n})=2\)_, i.e.,_ \(I_{n}\) _is not principal._
Proof.: The proof follows from Theorem 2.12 and Theorem 3.5.
**Remark 3.8**.:
1. Part (i) of Theorem 3.7 holds even if \(R\) is a Dedekind domain. In fact, in this case too \(D\) has a slice (see [7, Proposition 3.8]).
2. Part (ii) of Theorem 3.7 need not be true when dimension of \(R\) is more than one, even when \(R\) is a UFD. Example 3.9, gives a nice \(R\)-lnd on \(R^{[3]}\), for \(R=k^{[2]}\), which is not fixed point free and hence has no slice in \(B\). It follows from Lemma 2.8 that \(I_{1}\neq A\). In fact, we will show that \(I_{1}\) is generated by three elements and has grade 2.
3. Part (ii) of Theorem 3.7 need not be true when \(R\) is a Dedekind domain. In Example 3.10, we have given an example of a nice \(R\)-lnd on \(R^{[3]}\), for \(R=\mathbb{R}[U,V]/(U^{2}+V^{2}-1)\), which is not fixed point free, and \(I_{1}\) is generated by three elements of \(A\) with \(\mathrm{grade}_{A}(I_{1})=1\).
**Example 3.9**.: Let \(R=k[a,b]\) (\(=k^{[2]}\)) and define an \(R\)-lnd \(D\) on \(B=R[X,Y,Z]\) by setting \(DX=a\), \(DY=b\) and \(DZ=bX-aY\). Clearly, \(D\) is a nice \(R\)-lnd, which is not fixed point free. Let \(A=\mathrm{Ker}(D)\), \(u=bX-aY\), \(v=bZ-uY\) and \(w=aZ-uX\). From [7, Example 3.10], we have \(A=R[u,v,w]\).
It is easy to see that there exists an \(R\)-algebra isomorphism from \(R[u,v,w]\) onto \(R[U,V,W]/(aV-bW-U^{2})\) sending \(u,v,w\) to the respective images of \(U,V,W\). Let \(J=(a,b,u)A\). Then \(A/J\cong k[V,W]\) (\(=k^{[2]}\)). The canonical projection map \(\eta:B\longrightarrow B/(a,b)B\) induces a map \(\sigma:A/J\longrightarrow B/(a,b)B\) with \(\sigma(A/J)=\eta(A)\cong k\).
We will show \(I_{1}=J\). Clearly, \(J\subseteq I_{1}\). Let \(f\in I_{1}\setminus\{0\}\). For each \(g\in A\), let \(\bar{g}\) denote the image of \(g\) in \(A/J\). It is enough to show that \(\bar{f}=0\) in \(A/J\). Otherwise, there exists \(P(V,W)\) (\(\neq 0\)) such that \(\bar{f}=P(V,W)\). Let \(h\in B\) be such that \(Dh=f\). Let \(G:=ZDh-hDZ\). Then \(G\in A\) and hence \(\eta(G)\) (\(=\eta(Zf))\in k\). So, \(\eta(Z)\sigma(f)\in k\), i.e., \(\eta(Z)\sigma(P(V,W))\in k\). We can choose \(\lambda,\mu\in k\) such that \(P(\lambda,\mu)\in k\setminus\{0\}\). Then \(\eta(Z)\in k\), which is a contradiction. Therefore, \(I_{1}=(a,b,u)A\).
**Example 3.10**.: Let \(R=\dfrac{\mathbb{R}[U,V]}{(U^{2}+V^{2}-1)}\), \(B=R[X,Y,Z]\) and \(u,v\) denote the images of \(U,V\) respectively in \(R\). Define an \(R\)-lnd \(D\) on \(B\) by \(DX=u\), \(DY=v-1\) and \(DZ=(1-v)X+uY\). Clearly, \(D\) is nice and not fixed point free. Let \(A:=\mathrm{Ker}(D)\) and \(f=(1-v)X+uY\), \(g=(1+v)Y+uX\) and \(h=2Z+fY-gX\). In [7, Example 3.3], it has been proved that \(A=R[f,g,h]\).
It is easy to see that there exists an \(R\)-algebra isomorphism from \(R[f,g,h]\) onto \(\dfrac{R[F,G]}{((1+v)F-uG)}[H]\) sending \(f,g,h\) to the respective images of \(F,G,H\). Let \(J=(u,v-1,f)A\). Then \(A/J\cong\mathbb{R}[G,H]\) (\(=\mathbb{R}^{[2]}\)). The canonical projection map \(\eta:B\longrightarrow B/(u,v-1)B\) induces a map \(\sigma:A/J\longrightarrow B/(u,v-1)B\) with \(\sigma(A/J)=\eta(A)\cong\mathbb{R}[Y,Z-XY]\). Since \(\eta(Z)\notin\mathbb{R}[Y,Z-XY]\), we can conclude as in Example 3.9 that \(I_{1}=J=(u,v-1,f)\).
We now turn to the case of 1-quasi-nice derivations on \(R^{[2]}\), where \(R\) is a UFD. If the derivation is fixed point free, it follows from Corollary 2.10 that each image ideal is
the whole ring. The following result describes the plinth ideal of a strictly \(1\)-quasi-nice derivation \(D\) on \(R[X_{1},X_{2}]\) which is not fixed point free and satisfies \(D^{2}X_{1}=0\) with \(DX_{1}\) irreducible.
**Proposition 3.11**.: _Let \(R\) be a_ UFD _and \(B=R[X_{1},X_{2}]\). Let \(D\) be an irreducible \(R\)-lnd on \(B\) which is not fixed point free. If \(D^{2}X_{1}=0\) and \(DX_{1}\) is irreducible then the following are equivalent._
* (I) \(I_{1}\) _is principal._
* (II) \(I_{1}=(DX_{1})A\)_._
* (III) \(D\) _is strictly_ \(1\)_-quasi-nice._
Proof.: Since (II) \(\Longrightarrow\) (I) is obvious, it is enough to show that (I) \(\Longrightarrow\) (III) and (III) \(\Longrightarrow\) (II).
(I) \(\Rightarrow\) (III): We prove the contrapositive: suppose that \(D\) is a nice derivation. Then, there exists a coordinate system \(\{U,V\}\) in \(B\) such that \(D^{2}U=D^{2}V=0\). By Lemma 2.11(ii), there exist \(p,q\in R\) such that \(DU=p\) and \(DV=q\). Since \(D\) is not fixed point free, by Theorem 2.9, \(p,q\) form a regular sequence in \(B\) and hence in \(A\). So \(I_{1}\) is not principal.
(III) \(\Rightarrow\) (II): Assume (III) holds. By Lemma 2.11(i), there exist \(b\in R\) and \(f(X_{1})\in R[X_{1}]\) such that the following hold:
* \(DX_{1}=b\) and \(DX_{2}=-f^{\prime}(X_{1})\) (\(:=\frac{df}{dX_{1}}\)).
* \(A=R[bX_{2}+f(X_{1})]\) (\(=R^{[1]}\)).
Without loss of generality, we can assume that \(f\) has no constant term. Since \(D\) is not fixed point free, it follows from Theorem 2.9 that \(b,f^{\prime}\) form a regular sequence in \(B\) and hence \(b,f\) form a regular sequence in \(B\). So, there exist \(u(X_{1}),\ v(X_{1})\) in \(R[X_{1}]\) such that \(f(X_{1})=u(X_{1})+bv(X_{1})\) and none of the coefficients of \(u(X_{1})\) is divisible by \(b\). Since \(f(0)=0\), \(\deg_{X_{1}}u(X_{1})\geqslant 1\). Let \(\sigma:B\longrightarrow B/bB\) be the natural projection map and set \(\bar{A}:=\sigma(A),\bar{R}:=\sigma(R)\). Then \(\bar{A}=\bar{R}[\sigma(u)]\).
We will show that \(I_{1}\subseteq bA\). Otherwise, there exists \(h\in B\) such that \(Dh\in A\setminus bA\). Since \(bB\cap A=bA\), \(\sigma(Dh)\neq 0\). Let \(\sigma(Dh)=P(\sigma(u))\in\bar{R}[\sigma(u)]\). Since \(X_{1}Dh-hDX_{1}\in A\), there exists \(0\neq Q(\sigma(u))\) such that \(\sigma(X_{1})P(\sigma(u))=Q(\sigma(u))\). Let \(K\) be the field of fractions of the integral domain \(R/bR\). Then \(K(\sigma(X_{1}))=K(\sigma(u))\), so \(\deg_{\sigma(X_{1})}(\sigma(u))=1\) and hence \(\deg_{X_{1}}(u)=1\). Let \(u(X_{1})=aX_{1}+c\) for some \(a,c\in R\). Setting \(X_{1}^{\prime}=X_{1}\) and \(X_{2}^{\prime}=X_{2}+v(X_{1})\) we get \(D^{2}X_{1}^{\prime}=D^{2}X_{2}^{\prime}=0\), contradicting the fact that \(D\) is strictly \(1\)-quasi-nice.
Now, we describe the higher image ideals of \(D\) under the hypotheses of Proposition 3.11.
**Theorem 3.12**.: _Let \(R\) be a_ UFD _and \(B=R[X_{1},X_{2}]\). Let \(D\) be an irreducible \(R\)-lnd on \(B\) such that the following conditions hold:_
* \(DX_{1}\)__\((=b)\in R\) _is irreducible,_
* \(DX_{2}=-f^{\prime}(X_{1})\) _and_ \(\deg_{X_{1}}\bigl{(}f(X_{1})\bigr{)}=d\)_,_
* \(D\) _is not fixed point free_
* \(D\) _is strictly_ \(1\)_-quasi-nice._
_Then, for each \(j\geqslant 1\), \(I_{j}=b^{m}A\), where \(m=\mathrm{Min}\{i_{1}+(d-1)i_{2}:i_{1},i_{2}\in\mathbb{N},i_{1}+di_{2}=j\}\)._
Proof.: The case \(j=1\) follows from Proposition 3.11. Let \(C=R[X_{1},X_{2},Z,S]\) and let \(\lambda\) be the weighted degree map on \(C\) defined by \(\lambda(X_{1})=1,\lambda(X_{2})=d,\lambda(S)=1\) and \(\lambda(Z)=0\). Set \(J:=(Z-bX_{2}-f(X_{1}),S-X_{1})C\). By Proposition 3.4, \(\bar{J}=(bX_{2}+f(X_{1}),S-X_{1})\). Since \(\bar{J}\) is a prime ideal of \(C\), by Corollary 3.3, we have \(I_{j}=\sum\limits_{i_{1}+di_{2}=j}D^{j}(X_{1}^{i_{1}}X_{2}^{i_{2}})A\). Hence \(I_{j}=\sum\limits_{i_{1}+di_{2}=j}(DX_{1})^{i_{1}}(D^{d}X_{2})^{i_{2}}A\) by Lemma 2.7. Since \(D^{d}X_{2}=D^{d-1}(f^{\prime}(X_{1}))\) and \(\deg_{X_{1}}(f^{\prime}(X_{1}))=d-1\), \(D^{d}X_{2}\in b^{d-1}R\). Thus, \(I_{j}=\sum\limits_{i_{1}+di_{2}=j}b^{i_{1}+(d-1)i_{2}}A\).
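For concreteness (our remark), the minimum admits a closed form when \(d\geqslant 1\): substituting \(i_{1}=j-di_{2}\) gives \(i_{1}+(d-1)i_{2}=j-i_{2}\), which is smallest when \(i_{2}=\lfloor j/d\rfloor\) is as large as possible, so that

\[m=j-\Big\lfloor\frac{j}{d}\Big\rfloor.\]

For instance, for \(d=3\) one finds \(I_{1}=bA\), \(I_{2}=b^{2}A\), \(I_{3}=b^{2}A\), \(I_{4}=b^{3}A\), and so on.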
The next proposition shows that we can remove the condition "\(DX_{1}\) is irreducible" from Proposition 3.11 when \(R\) is a DVR.
**Proposition 3.13**.: _Let \((R,p)\) be a DVR with parameter \(p\) and \(B=R[X_{1},X_{2}]\). Let \(D\) be an irreducible \(R\)-lnd on \(B\) which is not fixed point free. If \(D^{2}X_{1}=0\), then \(I_{1}=(DX_{1})A\)._
Proof.: By Lemma 2.11(i), there exist \(b\in R\) and \(f(X_{1})\in R[X_{1}]\) such that the following hold:
* \(DX_{1}=b\) and \(DX_{2}=-f^{\prime}(X_{1})\) (\(:=\frac{df}{dX_{1}}\)).
* \(A=R[bX_{2}+f(X_{1})]\) (\(=R^{[1]}\)).
Without loss of generality, we can assume that \(f\) has no constant term. Since \(D\) is not fixed point free, we can assume that \(b=p^{n}\) (\(n\geqslant 1\)); and it follows from Theorem 2.9 that \(b,f^{\prime}\) form a regular sequence in \(B\) and hence \(b,f\) form a regular sequence in \(B\). Then \(p,f\) form a regular sequence in \(B\). So, there exist \(u(X_{1}),\ v(X_{1})\) in \(R[X_{1}]\) such that \(f(X_{1})=u(X_{1})+pv(X_{1})\) and none of the coefficients of \(u(X_{1})\) is divisible by \(p\). Since \(f(0)=0\), \(\deg_{X_{1}}u(X_{1})\geqslant 1\). Let \(\sigma:B\longrightarrow B/pB\) be the natural projection map and set \(\bar{A}:=\sigma(A),\bar{R}:=\sigma(R)\). Then \(\bar{A}=\bar{R}[\sigma(u)]\).
Let \(h\in B\) be such that \(Dh\in I_{1}\). Then \(Dh=p^{m}g\) where \(g\in A\setminus pA\) and \(m\geqslant 0\). If \(m\geqslant n\), then \(Dh\in bA\). Suppose, if possible, that \(m<n\). Since \(X_{1}Dh-bh\in A\), we have \(X_{1}g-p^{n-m}h\in A\). Let \(\sigma(g)=P(\sigma(u))\in\bar{R}[\sigma(u)]\). Since \(\sigma(g)\neq 0\), there exists \(Q(\sigma(u))\) (\(\neq 0\)) such that \(\sigma(X_{1})P(\sigma(u))=Q(\sigma(u))\). Then, \(\frac{R}{pR}(\sigma(X_{1}))=\frac{R}{pR}(\sigma(u))\) and hence \(\frac{R}{pR}[\sigma(X_{1})]=\frac{R}{pR}[\sigma(u)]\). So, \(B\otimes_{R}\frac{R}{pR}=\frac{R}{pR}[\sigma(F)]^{[1]}\), where \(F=bX_{2}+f(X_{1})\). Again, \(B\otimes_{R}R[\frac{1}{p}]=R[\frac{1}{p}][F,X_{1}]\). Hence, \(F\) is a residual coordinate in \(R[X_{1},X_{2}]\) and hence a coordinate in \(R[X_{1},X_{2}]\) (see [3, Theorem 3.2]). So, \(D\) has a slice in \(B\), contradicting the fact that \(D\) is not fixed point free.
**Remark 3.14**.: Let \(R\) be a PID and \(D\) an irreducible \(R\)-lnd on \(B=R[X_{1},X_{2}]\), which is not fixed point free. Then \(D\) is quasi-nice if and only if it is strictly \(1\)-quasi-nice.
The next proposition shows that Proposition 3.13 can be extended to a PID \(R\) under some additional hypotheses.
**Proposition 3.15**.: _Let \(R\) be a PID, \(B=R[X_{1},X_{2}]\) and \(D\) an irreducible \(R\)-lnd on \(B\). Suppose that \(DX_{1}=\prod_{i=1}^{n}{p_{i}}^{r_{i}}\)\((\in R)\), where each \(p_{i}\) is a prime element in \(R\). If, for each \(i\), \(1\leqslant i\leqslant n\), the induced \(R_{(p_{i})}\)-lnd \(D_{p_{i}}\) on \(B\otimes_{R}R_{(p_{i})}\) is not fixed point free, then \(I_{1}=(DX_{1})A\)._
Proof.: Let \(I_{1}(i)\) denote the plinth ideal of \(D_{p_{i}}\) for \(1\leqslant i\leqslant n\). Then \(I_{1}(i)=I_{1}A_{(p_{i})}\), where \(A_{(p_{i})}\) is the localisation of \(A\) under the multiplicatively closed set \(R\setminus p_{i}R\). By Proposition 3.13, \(I_{1}A_{(p_{i})}={p_{i}}^{r_{i}}A_{(p_{i})}\) for each \(i\). Let \(\mathpzc{m}\) be a maximal ideal of \(A\) and \(\mathpzc{p}=\mathpzc{m}\cap R\). If \(\mathpzc{p}=(0)\), then \(I_{1}A_{\mathpzc{m}}=A_{\mathpzc{m}}\). If \(p_{i}\notin\mathpzc{p}\) for every \(i\), then also \(I_{1}A_{\mathpzc{m}}=A_{\mathpzc{m}}\). If there exists \(i\) such that \(p_{i}\in\mathpzc{p}\), then \(\mathpzc{p}=p_{i}R\) and hence \(I_{1}A_{\mathpzc{m}}\) (\(=(I_{1}A_{\mathpzc{p}})A_{\mathpzc{m}}\)) \(={p_{i}}^{r_{i}}A_{\mathpzc{m}}\). Since \(I_{1}=\bigcap\limits_{\mathpzc{m}\in\max\mathrm{Spec}(A)}I_{1}A_{\mathpzc{m}}\), we have \(I_{1}=\bigcap\nolimits_{i=1}^{n}{p_{i}}^{r_{i}}A_{\mathpzc{m}}\). Since \(I_{1}\subseteq A\) and \({p_{i}}^{r_{i}}A_{\mathpzc{m}}\cap A={p_{i}}^{r_{i}}A\), we have \(I_{1}=\bigcap\nolimits_{i=1}^{n}{p_{i}}^{r_{i}}A\). Since \(A\) is a UFD, \(I_{1}=(\prod\nolimits_{i=1}^{n}{p_{i}}^{r_{i}})A\).
Now, we describe the higher image ideals of \(D\) under the hypotheses of Proposition 3.15.
**Theorem 3.16**.: _Let \(R\) be a_ PID _and \(B=R[X_{1},X_{2}]\). Let \(D\) be an irreducible \(R\)-lnd on \(B\) such that the following conditions hold:_
1. \(DX_{1}=b=\prod_{i=1}^{n}{p_{i}}^{r_{i}}\)_, where_ \(p_{i}\) _is a prime element of_ \(R\) _for each_ \(i\)_,_
2. \(DX_{2}=-f^{\prime}(X_{1})\) _and_ \(\deg_{X_{1}}\bigl{(}f(X_{1})\bigr{)}=d\)_._
3. _For each_ \(i\)_,_ \(1\leqslant i\leqslant n\)_, the induced_ \(R_{(p_{i})}\)_-Ind_ \(D_{p_{i}}\) _on_ \(B\otimes_{R}R_{(p_{i})}\) _is not fixed point free._
_Then, for each \(j\geqslant 1\), \(I_{j}=b^{m}A\), where \(m=\mathrm{Min}\{i_{1}+(d-1)i_{2}:i_{1},i_{2}\in\mathbb{N},i_{1}+di_{2}=j\}\)._
Proof.: The case \(j=1\) follows from Proposition 3.15, and the rest of the proof is similar to that of Theorem 3.12.
The following example shows that the condition "for each \(i\), \(1\leqslant i\leqslant n\), the induced \(R_{(p_{i})}\)-lnd \(D_{p_{i}}\) is not fixed point free" cannot be removed from Proposition 3.15.
**Example 3.17**.: Let \(R=k[t]\) (\(=k^{[1]}\)) and \(B=R[X_{1},X_{2}]\). \(D\in\mathrm{LND}_{R}(B)\) is defined by \(DX_{1}=t(1-t)\) and \(DX_{2}=-tX_{1}+1-t\). Clearly, \(D\) is an irreducible quasi-nice \(R\)-derivation. Let \(h=(1/2)X_{1}^{2}+(1-t)X_{2}\). Then \(Dh=(1-t)^{2}\). So, \(I_{1}\neq t(1-t)A\). Here, we note that the induced \(R_{(t)}\)-Ind \(D_{t}\) is fixed point free.
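The displayed value of \(Dh\) is a one-line computation, which we record for convenience:

\[Dh=X_{1}\,DX_{1}+(1-t)\,DX_{2}=t(1-t)X_{1}+(1-t)\big(-tX_{1}+1-t\big)=(1-t)^{2}.\]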
**Acknowledgement:**
The authors thank Dr. Prosenjit Das for his valuable comments while going through the earlier drafts and suggesting improvements. This research is supported by the Indo-Russia Project DST/INT/RUS/RSF/P-48/2021 with TPN 64842.
|
2303.03865 | Bicategories of Automata, Automata in Bicategories | We study bicategories of (deterministic) automata, drawing from prior work of
Katis-Sabadini-Walters, and Di Lavore-Gianola-Rom\'an-Sabadini-Soboci\'nski,
and linking their bicategories of `processes' to a bicategory of Mealy machines
constructed in 1974 by R. Guitart. We make clear the sense in which Guitart's
bicategory retains information about automata, proving that Mealy machines \'a
la Guitart identify to certain Mealy machines \'a la K-S-W that we call fugal
automata; there is a biadjunction between fugal automata and the bicategory of
K-S-W. Then, we take seriously the motto that a monoidal category is just a
one-object bicategory. We define categories of Mealy and Moore machines inside
a bicategory B; we specialise this to various choices of B, like categories,
relations, and profunctors. Interestingly enough, this approach gives a way to
interpret the universal property of reachability as a Kan extension and leads
to a new notion of 1- and 2-cell between Mealy and Moore automata, that we call
intertwiners, related to the universal property of K-S-W bicategory. | Guido Boccali, Andrea Laretto, Fosco Loregian, Stefano Luneia | 2023-03-07T13:08:11Z | http://arxiv.org/abs/2303.03865v2 | # Bicategories of automata, automata in bicategories1
###### Abstract
We study _bicategories of_ (deterministic) _automata_, i.e. how automata \(E\gets E\otimes I\to O\) organise as the 1-cells of a bicategory \(\mathsf{Mly}_{\mathcal{K}}\), drawing from prior work of Katis-Sabadini-Walters, and Di Lavore-Gianola-Roman-Sabadini-Sobocinski, and linking their bicategories of 'processes' to a bicategory of Mealy machines constructed in 1974 by R. Guitart. We make clear the sense in which Guitart's bicategory retains information about automata, proving that Mealy machines _a la_ Guitart identify to _certain_ Mealy machines _a la_ K-S-W that we call _fugal automata_; there is a biadjunction between fugal automata and the bicategory of K-S-W. Then, we take seriously the motto that a monoidal category is just a one-object bicategory. We define categories of Mealy and Moore machines _inside_ a bicategory \(\mathsf{B}\); we specialise this to various choices of \(\mathsf{B}\), like categories, relations, and profunctors. Interestingly enough, this approach gives a way to interpret the universal property of'reachability of a state' as a Kan extension and leads to a new notion of 1- and 2-cell between Mealy and Moore automata, that we call _intertwiners_, related to the universal property of K-S-W bicategory.
## 1 Introduction
A single historical fact can motivate, alone, the profound connection between category theory and automata theory: one of the founders of the first wrote extensively about the second [22; 23]. A more intrinsic reason is that a natural way to interpret category theory is as a theory of _systems_ and _processes_. Morphisms in a category can be considered a powerful abstraction of'sequential operations' performed on a domain/input to obtain a codomain/output. Hence why the introduction of categorical models for computational machines has been
rich in results, starting from the elegant attempts by Arbib and Manes [2, 5, 6, 7, 8, 58] -cf. also [3, 20, 21] for exhaustive monographs- and Goguen [27, 28, 29], up to the ultra-formal -and sadly, forgotten- experimentations of [9, 10, 31, 32, 34] using hyperdoctrines, 2-dimensional monads, bicategories, lax co/limits... up to the modern coalgebraic perspective of [37, 61, 62, 66]; all this, without mentioning categorical approaches to Petri nets [53], based essentially on the same analogy, where the computation of a machine is _concurrent_ -as opposed to single-threaded.
Furthermore, many constructions of computational significance often, if not always, have a mathematical counterpart in terms of categorical notions: the transition from a deterministic machine to a non-deterministic one is reflected in the passage from automata in a monoidal category (cf. [21, 54]), to automata in the Kleisli category of an opmonoidal monad (cf. [33, 39]; this approach is particularly useful to capture categorically _stochastic_ automata, [7, 15, 19] as they appear as automata in the Kleisli category of a probability distribution monad); _minimisation_ can be understood in terms of factorisation systems (cf. [18, 29]); behaviour as an adjunction (cf. [55, 56]).
The present work starts from the intuition, first presented in [44, 59], that the analogy between morphisms and sequential machines holds up to the point that the series and parallel composition of automata should itself be reflected in the 'series' and 'parallel' composition of morphisms in a category. As a byproduct of their 'Circ' construction, one can see how the 1-cells of a certain monoidal bicategory specialise exactly to _Mealy machines_ \(E\stackrel{{ d}}{{\leftarrow}}E\otimes I\stackrel{{ s}}{{\rightarrow}}O\) with inputs and outputs \(I\) and \(O\).
**Outline of the paper.** The first result we find in this work, and that occupies section 2 is that this category relates to _another_ bicategory constructed by R. Guitart in [31], where the author observes that one can use certain categories \(\mathsf{Mac}(\mathcal{M},\mathcal{N})\) of spans, where \(\mathcal{M},\mathcal{N}\) are categories, as hom categories of a bicategory \(\mathsf{Mac}\), and shows that \(\mathsf{Mac}\) admits a concise description as the Kleisli bicategory of the _monad of diagrams_[31, SS1] (cf. also [34], by the same author, and [57] for a more modern survey); Mealy machines shall be recognisable as the 1-cells of \(\mathsf{Mac}\) between monoids, regarded as categories with a single object. The fundamental assumption in [31] is that a Mealy machine \(E\stackrel{{ d}}{{\leftarrow}}E\otimes M\stackrel{{ s}}{{\rightarrow}}N\) satisfies a certain property of compatibility with the action of \(d\) on \(E\), cf. (14), that we call being a _fugal_ automaton:
\[s(e,m\cdot m^{\prime})=s(e,m)\cdot s(d(e,m),m^{\prime}).\]
This notion can be motivated in the following way: if \(s\) satisfies (14), then \(s\) 'lifts' to a functor \(\mathcal{E}[d]\to N\) defined on the category of elements, and in fact, defines a'relational action' in its own right, compatible with the action \(d\) (formally speaking, \(\mathcal{E}[d]\) is a _displayed category_[4] over \(N\)). We show that there is a sub-bicategory \(\mathsf{Mly}_{\mathsf{Set}}^{\flat}\) of \(\mathsf{Mly}_{\mathsf{Set}}\) made of fugal automata and that \(\mathsf{Mly}_{\mathsf{Set}}^{\flat}\) is biequivalent (actually, strictly) to the 1-full and 2-full sub-bicategory of \(\mathsf{Mac}\) spanned by monoids.
The second result we propose in this paper is motivated by the slogan for which a monoidal category is just a bicategory with a single object: what are automata _inside a bicategory_ \(\mathtt{B}\), where instead of input/output _objects_ \(I,O\) we have input/output 1-cells, arranged as \(e\xleftarrow{\delta}e\circ i\xRightarrow{\sigma}o\)? Far from being merely formal speculation (a similar idea was studied in a short, cryptic note [10] to describe behaviour through Kan extensions: we take it seriously and present it as a quite straightforward observation in Remark 3.6), we show how this allows for a concise generalisation of 'monoidal' machines:
* first, the simple fact that a diagram of 2-cells as above exists forces \(i\) to be an endo-1-cell \(A\to A\), in order to be composable with the state 1-cell \(e:A\to B\) and map to the output 1-cell \(o:A\to B\); this hints at some hidden structure that the one-object case, where every 1-cell is an endomorphism for trivial reasons, did not allow to see (cf. Remark 3.2). The take is that input and output are not interchangeable concepts: an input 1-cell must be an endomorphism.
* Second, another piece of structure that in the monoidal case remained hidden is a natural notion of 1- and 2-cell between bicategorical machines, which specialises, in the monoidal case, to a novel notion of 1- _and even 2-cell_ between automata; we call such arrows _intertwiners_ and _intertwiner 2-cells_, cf. subsection 3.1; they have apparently never been considered before, in the monoidal case.
**Related work.** A word on related work and how we fit into it: the ideas in section 2 borrow heavily from [44, 59] where bicategories of automata (or 'processes') are studied in fine detail; in section 2 we carry on a comparison with a different approach to bicategories of automata, present in [31] but also in [32, 34]; in particular, our proof that there is an adjunction between the two bicategories is novel -to the best of our knowledge- and it hints at the fact that the two approaches are far from being independent. At the level of an informal remark, the idea of approaching automata via (spans where one leg is a) fibrations bears some resemblance to Walters' work on context-free languages through displayed categories in [68], and the requirement to have a fibration as one leg of the span should be thought as mirroring _determinism_ of the involved automata: if \(\langle s,d\rangle:E\times M\to N\times E\) is fugal and \(s\) defines a _fibration_ over \(N\), then \(E\) is a \(M\)-\(N\)-bimodule, not only an \(M\)-set; there is extensive work of Betti-Kasangian [11, 12, 41] and Kasangian-Rosebrugh [42] on 'profunctorial' models for automata, their behaviour, and the universal property enjoyed by their minimisation: spans of two-sided fibrations [63, 64] and profunctors are well-known to be equivalent ways to present the same bicategory of two-sided fibrations. Carrying on our study will surely determine a connection between the two approaches.
For what concerns section 3, the idea of valuing a Mealy or a Moore machine in a bicategory seems to be novel, although in light of [59] and in particular of their concrete description of \(\mathcal{C}=\Omega\Sigma(\mathcal{K},\otimes)\) it seems that both \(\mathsf{Mly}_{\mathtt{B}}\) and \(\mathsf{Mre}_{\mathtt{B}}\)
allow defining tautological functors into \(\mathcal{C}\). How these two bicategories relate is a problem we leave for future investigation: [59] proves that when \(\mathcal{K}\) is Cartesian monoidal, \(\mathsf{Mly}_{\mathcal{K}}\) is \(\Omega\Sigma(\mathcal{K},\times)\). The conjecture is that our \(\mathsf{Mly}_{\mathfrak{B}}\) is \(\Omega\mathsf{B}\) under some assumptions on the bicategory \(\mathsf{B}\): our notion of intertwiner seems to hint in that direction. Characterising 'behaviour as a Kan extension' is nothing but taking seriously the claim that animates applications of coalgebra theory [38, 39] to automata; the -sadly forgotten- work of Bainbridge [10] bears some resemblance to our idea, but his note is merely sketched, no plausibility for his intuition is given. Nevertheless, we recognise the potential of his idea and took it to its natural continuation with modern tools of 2-dimensional algebra.
### Mealy and Moore automata
The scope of the following subsection is to introduce the main characters studied in the paper:1 categories of automata valued in a monoidal category \((\mathcal{K},\otimes)\) (in two flavours: 'Mealy' machines, where one considers spans \(E\gets E\otimes I\to O\), and 'Moore', where instead one consider pairs \(E\gets E\otimes I,E\to O\)). 'Mealy' automata are known as 'deterministic automata' in today's parlance. However, since we often need to distinguish between the two kinds of diagrams or state definitions for both at a time, we stick to an older terminology.
Footnote 1: An almost identical introductory short section appears in [13], of which the present note is a parallel submission –although related, the two manuscripts are essentially independent, and the purpose of this repetition is the desire for self-containment.
The only purpose of this short section is to fix the notation for section 2 and 3; comprehensive classical references for this material are [3, 21].
For the entire subsection, we fix a monoidal category \((\mathcal{K},\otimes,1)\).
**Definition 1.1** (Mealy machine).: A _Mealy machine_ in \(\mathcal{K}\) of input object \(I\) and output object \(O\) consists of a triple \((E,d,s)\) where \(E\) is an object of \(\mathcal{K}\) and \(d,s\) are morphisms in a span
\[\mathfrak{e}:=\Big(\,E\xleftarrow{\;d\;}E\otimes I\xrightarrow{\;s\;}O\,\Big) \tag{1}\]
**Remark 1.2** (The category of Mealy machines).: Mealy machines of fixed input and output \(I,O\) form a category, if we define a _morphism of Mealy machines_\(f:\mathfrak{e}=(E,d,s)\to(F,d^{\prime},s^{\prime})=\mathfrak{f}\) as a morphism \(f:E\to F\) in \(\mathcal{K}\) such that
\[d^{\prime}\circ(f\otimes I)=f\circ d,\qquad s^{\prime}\circ(f\otimes I)=s. \tag{2}\]
Composition and identities are performed in \(\mathcal{K}\).
The category of Mealy machines of input and output \(I,O\) is denoted as \(\mathsf{Mly}_{\mathcal{K}}(I,O)\).
**Definition 1.3** (Moore machine).: A _Moore machine_ in \(\mathcal{K}\) of input object \(I\) and output object \(O\) is a diagram
\[\mathfrak{m}:=\Big(\,E\xleftarrow{\;d\;}E\otimes I,\qquad E\xrightarrow{\;s\;}O\,\Big) \tag{3}\]
**Remark 1.4** (The category of Moore machines).: Moore machines of fixed input and output \(I,O\) form a category, if we define a _morphism of Moore machines_\(f:\mathfrak{e}=(E,d,s)\to(F,d^{\prime},s^{\prime})=\mathfrak{f}\) as a morphism \(f:E\to F\) in \(\mathcal{K}\) such that
\[d^{\prime}\circ(f\otimes I)=f\circ d,\qquad s^{\prime}\circ f=s. \tag{4}\]
**Remark 1.5** (Canonical extension of a machine).: If \((\mathcal{K},\otimes)\) has countable coproducts preserved by each \(A\otimes\_\) then the span (1), considering for example Mealy machines, can be 'extended' to a span
\[E\xleftarrow{\;d^{*}\;}E\otimes I^{*}\xrightarrow{\;s^{*}\;}O \tag{5}\]
where \(d^{*},s^{*}\) can be defined inductively from components \(d_{n},s_{n}:E\otimes I^{\otimes n}\to E,O\); if \(\mathcal{K}\) is closed, the map \(d^{*}\) corresponds, under the monoidal closed adjunction, to the monoid homomorphism \(I^{*}\to[E,E]\) induced by the universal property of \(I^{*}=\sum_{n\geq 0}I^{\otimes n}\).
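To make the inductive definition concrete, here is a minimal Haskell sketch (our addition, not part of the original text; the names `Mealy` and `dStar` are ours) of the extension \(d^{*}\) in the Cartesian case \(\mathcal{K}=\mathsf{Set}\): it is a left fold of the step map over a word of inputs.

```haskell
-- A Mealy machine in Set, given by its two structure maps.
data Mealy e i o = Mealy
  { step :: e -> i -> e   -- d : E x I -> E
  , out  :: e -> i -> o   -- s : E x I -> O
  }

-- dStar : E x I* -> E, the canonical extension of d to words; it
-- satisfies dStar m e (w ++ w') == dStar m (dStar m e w) w', i.e. it
-- is the action of the free monoid I* on the state space E.
dStar :: Mealy e i o -> e -> [i] -> e
dStar m = foldl (step m)
```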
## 2 Bicategories of automata
Let \((\mathcal{K},\times)\) be a Cartesian category. There is a bicategory \(\mathsf{Mly}_{\mathcal{K}}\) defined as follows (cf. [59] where this is called '\(\mathsf{Circ}\)' and studied more generally, in case the base category has a non-Cartesian monoidal structure):
**Definition 2.1** (The bicategory \(\mathsf{Mly}_{\mathcal{K}}\), [59]).: The bicategory \(\mathsf{Mly}_{\mathcal{K}}\) has
* its _0-cells_\(I,O,U,\dots\) are the same objects of \(\mathcal{K}\);
* its _1-cells_ \(I\to O\) are the Mealy machines \((E,d,s)\), i.e. the objects of the category \(\mathsf{Mly}_{\mathcal{K}}(I,O)\) in Remark 1.2, thought of as morphisms \(\langle s,d\rangle:E\times I\to O\times E\) in \(\mathcal{K}\);
* its _2-cells_ are Mealy machine morphisms as in Remark 1.2;
* the composition of 1-cells \(\_\diamondsuit\_\) is defined as follows: given 1-cells \(\langle s,d\rangle:E\times I\to J\times E\) and \(\langle s^{\prime},d^{\prime}\rangle:F\times J\to K\times F\) their composition is the 1-cell
\(\langle s^{\prime}\diamondsuit s,d^{\prime}\diamondsuit d\rangle:(F\times E)\times I\to K\times(F\times E)\), obtained as \[F\times E\times I\xrightarrow{F\times\langle s,d\rangle}F\times J\times E\xrightarrow{\langle s^{\prime},d^{\prime}\rangle\times E}K\times F\times E;\] (6)
* the _vertical_ composition of \(2\)-cells is the composition of Mealy machine morphisms \(f:E\to F\) as in Remark 1.2;
* the _horizontal_ composition of \(2\)-cells is the operation defined thanks to bifunctoriality of \(\_\diamondsuit\_:\mathsf{Mly}_{\mathcal{K}}(B,C)\times\mathsf{Mly}_{\mathcal{K}}(A,B)\to\mathsf{Mly}_{\mathcal{K}}(A,C)\);
* the associator and the unitors are inherited from the monoidal structure of \(\mathcal{K}\).
**Remark 2.2**.: Spelled out explicitly, the composition of \(1\)-cells in Equation 6 corresponds to the following morphisms (where we freely employ \(\lambda\)-notation available in any Cartesian category):
\[d_{2}\diamondsuit d_{1}:\lambda efa.\langle d_{2}(f,s_{1}(e,a)),d_{1}(e,a)\rangle \tag{7}\] \[s_{2}\diamondsuit s_{1}:\lambda efa.s_{2}(f,s_{1}(e,a)) \tag{8}\]
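For readers who prefer executable notation, the following Haskell sketch (ours; the names `Mealy` and `compose` are not from the paper) implements exactly the \(\lambda\)-terms (7) and (8):

```haskell
-- Mealy machines as in Definition 2.1, given by step and output maps.
data Mealy e i o = Mealy
  { step :: e -> i -> e
  , out  :: e -> i -> o
  }

-- Composition of 1-cells: the composite has state space (f, e) and
-- feeds the output of the first machine into the second, as in (7)-(8).
compose :: Mealy f j k -> Mealy e i j -> Mealy (f, e) i k
compose m2 m1 = Mealy
  { step = \(f, e) a -> (step m2 f (out m1 e a), step m1 e a)
  , out  = \(f, e) a -> out m2 f (out m1 e a)
  }
```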
**Remark 2.3** (Kleisli extension of automata as base changes).: If \(P:\mathcal{K}\to\mathcal{K}\) is a commutative monad [47, 48], we can lift the monoidal structure \((\mathcal{K},\otimes)\) to a monoidal structure \((\mathsf{Kl}(P),\bar{\otimes})\) on the Kleisli category of \(P\); this leads to the notion of \(P\)_-non-deterministic automata_ or \(P_{\lambda}\)_-machines_ studied in [33, SS2, Definition 6]. Nondeterminism through the passage to a Kleisli category is a potent idea that developed into the line of research on automata theory through coalgebra theory [39], cf. in particular Chapter 2.3 for a comprehensive reference, or the self-contained [37].
We do not investigate the theory of \(P_{\lambda}\)-machines apart from the following two results the proof of which is completely straightforward: we content ourselves with observing that the results expounded in [43, 59], and in general the language of bicategories of processes, naturally lends itself to the generation of _base-change functors_, of which the following two are particular examples.
**Proposition 2.4**.: The correspondence defined at the level of objects by sending \((E,d,s)\in\mathsf{Mly}_{\mathcal{K}}(I,O)\) to
\[PE\xleftarrow{\eta_{E}}E\xleftarrow{d}E\otimes I\xrightarrow{s}O\xrightarrow{\eta_{O}}PO \tag{9}\]
extends to a functor \(L:\mathsf{Mly}_{\mathcal{K}}(I,O)\to\mathsf{Mly}_{\mathsf{Kl}(P)}(I,O)\).
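A small Haskell sketch of the action of \(L\) on objects, for \(\mathcal{K}=\mathsf{Set}\) (our illustration; `MealyK` and `liftMealy` are hypothetical names): a deterministic machine becomes a Kleisli one by post-composing both structure maps with the unit of the monad, as in (9).

```haskell
-- Deterministic machines, and machines valued in the Kleisli category of m.
data Mealy    e i o = Mealy  { step  :: e -> i -> e,   out  :: e -> i -> o }
data MealyK m e i o = MealyK { stepK :: e -> i -> m e, outK :: e -> i -> m o }

-- The action of L on objects (Proposition 2.4): post-compose with the unit.
liftMealy :: Monad m => Mealy e i o -> MealyK m e i o
liftMealy ma = MealyK
  { stepK = \e i -> return (step ma e i)
  , outK  = \e i -> return (out ma e i)
  }
```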
**Proposition 2.5**.: The correspondence sending \((E,d,s)\in\mathsf{Mly}_{\mathsf{Kl}(P)}(I,O)\) into
\[PE\xleftarrow{\mu_{E}}PPE\xleftarrow{Pd\circ\nabla}PE\otimes PI\xrightarrow{Ps\circ\nabla}PPO\xrightarrow{\mu_{O}}PO \tag{10}\]
(where \(\nabla:PE\otimes PI\to P(E\otimes I)\) denotes the lax monoidal structure of the commutative monad \(P\))
extends to a functor \((-)^{e}:\mathsf{Mly}_{\mathsf{Kl}(P)}(I,O)\to\mathsf{Mly}_{\mathcal{K}}(PI,PO)\).
More precisely, the proof of the following result is straightforward -only slightly convoluted in terms of notational burden- so much so that we feel content to enclose it in a remark.
**Remark 2.6**.: Let \(\mathcal{H},\mathcal{K}\) be cartesian monoidal categories, then we can define \(2\)-categories \(\mathsf{Mly}_{\mathcal{H}},\mathsf{Mly}_{\mathcal{K}}\) as in Definition 2.1; let \(F:\mathcal{H}\to\mathcal{K}\) be a lax monoidal functor. Then, there exists a 'base change' pseudofunctor \(F_{*}:\mathsf{Mly}_{\mathcal{H}}\to\mathsf{Mly}_{\mathcal{K}}\), which is the action on \(1\)-cells of a \(2\)-functor \(\mathsf{Cat}_{\times}\to\mathsf{Bicat}\) from (Cartesian monoidal categories, product-preserving functors, Cartesian natural transformations), to (bicategories, pseudofunctors, oplax natural transformations).
As a corollary, we re-obtain the functors of Proposition 2.5 and Proposition 2.4 from the free and forgetful functors \(F_{P}:\mathcal{K}\to\mathsf{Kl}(P)\) and \(U_{P}:\mathsf{Kl}(P)\to\mathcal{K}\).
### Fugal automata, Guitart machines
A conceptual construction for \(\mathsf{Mly}_{\mathcal{K}}\) in Definition 2.1 is given as follows in [43]: it is the category \(\Omega\Sigma(\mathcal{K},\otimes)\) of lax functors \(\mathbb{N}\to\Sigma(\mathcal{K},\times)\), where \(\Sigma\) is the'suspension' of \((\mathcal{K},\otimes)\), i.e. \(\mathcal{K}\) regarded as a one-object bicategory; a universal property for \(\mathsf{Mly}_{\mathcal{K}}\) is provided in [44] (actually, for any \(\Omega\Sigma(\mathcal{K},\otimes)\)): it is the free _category with feedbacks_ (op. cit., Proposition 2.6, see also [50]) on \(\mathcal{K}\). The bicategory \(\mathsf{Mly}_{\mathcal{K}}\) addresses the fundamental question of whether one can fruitfully consider morphisms in a category as an abstraction of'sequential operations' performed on a domain/input to obtain a codomain/output, and up to what point the analogy between morphisms and sequential machines holds up (composing \(1\)-cells in \(\mathsf{Mly}_{\mathcal{K}}\) accounts for the sequential composition of state machines, where the'state' \(E\) is an intrinsic part of the specification of a machine/\(1\)-cell \(\langle s,d\rangle\)).
Twenty eight years before [44], however, Rene Guitart [31] exhibited another bicategory \(\mathsf{Mac}\) of 'Mealy machines', defined as a suitable category of spans, of which one leg is a fibration, and its universal property: \(\mathsf{Mac}\) is the Kleisli bicategory of the diagram monad (_monade des diagrammes_ in [31, SS1], cf. [46, 57]) \(\mathsf{Cat}/\!\!/\_\).2
Footnote 2: Guitart’s note [31] is rather obscure with respect of the fine details of his definition, as he chooses for \(2\)-cells the \(H\) for which the upper triangle in (12) is only _laxly_ commutative, and when it comes to composition of \(1\)-cells he invokes a _produit fibre canonique_; apparently, this can’t be interpreted as a strict pullback, or there would be no way to define horizontal composition of \(2\)-cells; using a comma object instead of a strict pullback, the lax structure is given by the universal property –observe that the functor that must be an opfibration is indeed an opfibration, thanks to [36, Exercise 1.4.6], but this opfibration does not remember much of the opfibration \(q\) one pulled back. Our theorem involves a strict version of Guitart’s \(\mathsf{Mac}\), because the functor \(\Pi\) of Theorem 2.17 factors through \(\mathsf{Mac}^{\mathsf{s}}\subseteq\mathsf{Mac}\).
**Definition 2.7** (The bicategory \(\mathsf{Mac}^{\mathsf{s}}\), adapting [31]).: Define a bicategory \(\mathsf{Mac}^{\mathsf{s}}\) as follows:
* \(0\)-cells are categories \(\mathcal{A},\mathcal{B},\mathcal{C}\ldots\);
* 1-cells \((\mathcal{E};p,S):\mathcal{A}\to\mathcal{B}\) consist of spans \(\mathcal{A}\overset{p}{\leftarrow}\mathcal{E}\xrightarrow{S}\mathcal{B}\) (11), where \(p:\mathcal{E}\to\mathcal{A}\) is a discrete opfibration;
* 2-cells \(H:(\mathcal{E};p,S)\Rightarrow(\mathcal{F};q,T)\) are morphisms of opfibrations \(H:\mathcal{E}\to\mathcal{F}\) (cf. [36, dual of 1.7.3.(i)]): depicted graphically, a 2-cell is a diagram (12) in which both triangles commute (\(q\circ H=p\) and \(T\circ H=S\)) and \(H\) is an opCartesian functor (it preserves opCartesian morphisms);
* composition of 1-cells \(\mathcal{A}\overset{\mathcal{P}}{\leftarrow}\mathcal{E}\xrightarrow{S} \mathcal{B}\) and \(\mathcal{B}\overset{q}{\leftarrow}\mathcal{F}\xrightarrow{T}\mathcal{C}\) is via pullbacks, as it happens in spans, and all the rest of the structure is defined as in spans.
Given this, a natural question that might arise is how do the two bicategories of Definition 2.1 and Definition 2.7 interact, if at all?
In the present section, we aim to prove the existence of an adjunction (cf. Theorem 2.18) between a suitable sub-bicategory of \(\mathsf{Mac}^{\text{s}}\) and a sub-bicategory of \(\mathsf{Mly}_{\mathsf{Set}}\) spanned over what we call _fugal_ Mealy machines between monoids (cf. Definition 2.11).3
Footnote 3: A _fugue_ is ‘a musical composition in which one or two themes are repeated or imitated by successively entering voices and contrapuntally developed in a continuous interweaving of the voice parts’, cf. [67]. In our case, the interweaving is between \(s,d\) in a Mealy machine.
Since the construction of \(\mathsf{Mac}^{\text{s}}\) outlined in [31] requires some intermediate steps (and it is written in French), we deem it necessary to delve into the details of how its structure is presented. To fix ideas, we keep working in the category of sets and functions.
**Notation 2.8**.: In order to avoid notational clutter, we will blur the distinction between a monoid \(M\) and the one-object category it represents; also, given the \(d\) part of a Mealy machine, we will denote as \(d^{*}\) both the extension \(E\times I^{*}\to E\) of Remark 1.5, which is a monoid action of \(I^{*}\) on \(E\), and the functor \(I^{*}\to\mathsf{Set}\) to which the action corresponds.
**Remark 2.9**.: In the notation above, a Mealy machine \(\mathfrak{e}=(E,d,s)\) yields a discrete opfibration (cf. [1, 36]) \(\mathcal{E}[d^{*}]\to I^{*}\) over the monoid \(I^{*}\), and \(\mathcal{E}[a]\) is the _translation category_ of an \(M\)-set \(a:M\times X\to X\) (cf. [14] for the case when \(M\) is a group: clearly, \(\mathcal{E}[a]\) is the category of elements of the action \(a:M\to\mathsf{Set}\) regarded as a functor), i.e. the category having
* objects the elements of \(E\);
* a morphism \(m:e\to e^{\prime}\) whenever \(e^{\prime}=d^{*}(e,m)\).
Composition and identities are induced by the fact that \(d^{*}\) is an action.
**Remark 2.10**.: The hom-categories \(\mathsf{Mac}^{\mathrm{s}}(\mathcal{A},\mathcal{B})\) of Definition 2.7 fit into strict pullbacks
\[\begin{array}{ccc}\mathsf{Mac}^{\mathrm{s}}(\mathcal{A},\mathcal{B})&\longrightarrow&\mathsf{Cat}/\mathcal{B}\\ \downarrow&&\downarrow\\ \mathsf{opFib}/\mathcal{A}&\longrightarrow&\mathsf{Cat}\end{array} \tag{13}\]
where \(\mathsf{Cat}/\mathcal{B}\) is the usual slice category of \(\mathsf{Cat}\) over \(\mathcal{B}\).
**Definition 2.11** (Fugal automaton).: Let \(M,N\) be monoids; a Mealy machine \(\langle s,d\rangle:E\times M\to N\times E\) is _fugal_ if its \(s\) part satisfies the equation
\[s(e,m\cdot m^{\prime})=s(e,m)\cdot s(d(e,m),m^{\prime}). \tag{14}\]
**Remark 2.12**.: This definition appears in [31, SS2] and it looks an ad-hoc restriction for what an output map in a Mealy machine shall be; but (14) can be motivated in two ways:
* A fugal Mealy machine \(\langle s,d\rangle:E\times M\to N\times E\) induces in a natural way a functor \(\Sigma:\mathcal{E}[d^{*}]\to N\) because (14) is exactly equivalent to the fact that \(\Sigma\) defined on objects in the only possible way, and on morphisms as \(\Sigma(e\to d^{*}(e,m))=s(e,m)\) preserves (identities and) composition;
* given a generic Mealy machine \(\langle s,d\rangle:E\times A\to B\times E\) one can produce a 'universal' fugal Mealy machine \(\langle s,d\rangle^{\flat}=\langle s^{\flat},\_\rangle:E\times A^{*}\to B^{*} \times E\), and this construction is well-behaved for \(1\)-cell composition in \(\mathsf{Mly_{Set}}\), in the sense that \((s_{2}\diamondsuit s_{1})^{\flat}=s_{2}^{\flat}\diamondsuit s_{1}^{\flat}\).
The remainder of this section is devoted to making these claims precise (and prove them). In particular, the 'universality' of \(\langle s,d\rangle^{\flat}\) among fugal Mealy machines obtained from \(\langle s,d\rangle\) is clarified by the following Lemma 2.13 and by Theorem 2.18, where we prove that there is a \(2\)-adjunction between \(\mathsf{Mly_{Set}}\) and \(\mathsf{Mly_{Set}^{\flat}}\).
**Lemma 2.13**.: Given sets \(A,B\), denote with \(A^{*},B^{*}\) their free monoids; then, there exists a 'fugal extension' functor \((\_)^{\flat}_{A,B}:\mathsf{Mly_{Set}}(A,B)\to\mathsf{Mly_{Set}^{\flat}}(A^{*},B^{*})\).
Proof.: The proof is deferred to the appendix, p. 23. In particular, the map \(s^{\flat}\) is constructed inductively as
\[\begin{cases}s^{\flat}(e,[\,])&=[\,]\\ s^{\flat}(e,a::as)&=s(e,a)::s^{\flat}(d(e,a),as)\end{cases} \tag{15}\]
and it fits in the Mealy machine \(\langle s^{\flat},d^{*}\rangle:E\times A^{*}\to B^{*}\times E\) where \(d^{*}\) is as in (5). The proof that \(\langle s^{\flat},d^{*}\rangle\) is fugal in the sense of (14) can be done by induction and poses no particular difficulty.
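The recursion (15) is easy to run; here is a Haskell sketch (our addition, with hypothetical names `sFlat` and `dStar`) which also records the fugal law (14) that the extension satisfies:

```haskell
-- A Mealy machine in Set with input alphabet a and output alphabet b.
data Mealy e a b = Mealy { step :: e -> a -> e, out :: e -> a -> b }

-- The fugal extension s-flat of (15): read a word, emit the word of
-- outputs, threading the state along with the step map.
sFlat :: Mealy e a b -> e -> [a] -> [b]
sFlat _ _ []       = []
sFlat m e (x : xs) = out m e x : sFlat m (step m e x) xs

-- The extended action d* of (5).
dStar :: Mealy e a b -> e -> [a] -> e
dStar m = foldl (step m)

-- Fugal law (14) for <sFlat, dStar>, provable by induction on w:
--   sFlat m e (w ++ w') == sFlat m e w ++ sFlat m (dStar m e w) w'
```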
**Lemma 2.14**.: Given sets \(A,B\) there exists a commutative square
\[\begin{array}{ccc}\mathsf{Mly}^{\flat}_{\mathsf{Set}}(A^{*},B^{*})&\longrightarrow&\mathsf{Cat}/B^{*}\\ \downarrow&&\downarrow\\ \mathsf{opFib}/A^{*}&\longrightarrow&\mathsf{Cat}\end{array} \tag{16}\]
Proof of Lemma 2.14.: Given a fugal Mealy machine \(\langle s,d\rangle:E\times A^{*}\to B^{*}\times E\) between free monoids, from the action \(d\) we obtain a discrete opfibration \(\mathcal{E}[d]\to A^{*}\), and from the map \(s:E\times A^{*}\to B^{*}\) we obtain a functor \(\Sigma:\mathcal{E}[d^{*}]\to B^{*}\) as in Remark 2.12. So, one can obtain a span
\[A^{*}\xleftarrow{\;D\;}\mathcal{E}[d^{*}]\xrightarrow{\;\Sigma\;}B^{*} \tag{17}\]
where the leg \(D:\mathcal{E}[d^{*}]\to A^{*}\) is as in Remark 2.9 and \(\Sigma\) is as in Remark 2.12. The functors \(\mathsf{opFib}/A^{*}\leftarrow\mathsf{Mly}^{\flat}_{\mathsf{Set}}(A^{*},B^{*})\rightarrow\mathsf{Cat}/B^{*}\) project to each of the two legs.
**Corollary 2.15**.: The universal property of the hom-categories \(\mathsf{Mac}^{\mathrm{s}}(\mathcal{A},\mathcal{B})\) exposed in Remark 2.10 yields the right-most functor in the composition
\[\Gamma_{A,B}:\mathsf{Mly}_{\mathsf{Set}}(A,B)\xrightarrow{\;(\_)^{\flat}_{A,B}\;}\mathsf{Mly}^{\flat}_{\mathsf{Set}}(A^{*},B^{*})\xrightarrow{\;\Pi_{A^{*},B^{*}}\;}\mathsf{Mac}^{\mathrm{s}}(A^{*},B^{*}) \tag{18}\]
**Lemma 2.16** (Fugal extension preserves composition).: Let \(A,B,C\) be sets, \(s_{1}:E\times A\to B\) and \(s_{2}:F\times B\to C\) parts of Mealy machines \(\langle s_{1},\_\rangle\) and \(\langle s_{2},\_\rangle\); then
\[(s_{2}\diamondsuit s_{1})^{\flat}=s_{2}^{\flat}\diamondsuit s_{1}^{\flat}. \tag{19}\]
Proof.: The proof is deferred to the appendix, p. 23.4
Footnote 4: The argument is straightforward but tedious (the difficult part is that the condition to verify on \((s_{2}\diamondsuit s_{1})^{\flat}\) involves \(d_{2}\diamondsuit d_{1}\), whose expression, we recall from (7), is the \(\lambda\)-term \(\lambda efa.\langle d_{2}(f,s_{1}(e,a)),d_{1}(e,a)\rangle\)).
This, together with the fact that the identity \(1\)-cell \(1\times A\to A\times 1\) is fugal (the proof is straightforward), yields that there exists a \(2\)-subcategory \(\mathsf{Mly}^{\flat}_{\mathsf{Set}}\) of \(\mathsf{Mly}_{\mathsf{Set}}\) where \(0\)-cells are monoids, \(1\)-cells are the \(\langle s,d\rangle\) where \(s\) is fugal in the sense of Definition 2.11, and we take all \(2\)-cells.
**Theorem 2.17**.: The maps \(\Gamma_{A,B}\) of Corollary 2.15 constitute the action on \(1\)-cells of a \(2\)-functor \(\Gamma:\mathsf{Mly}_{\mathsf{Set}}\rightarrow\mathsf{Mac}^{s}\). More precisely, there are \(2\)-functors \((\_)^{\flat}:\mathsf{Mly}_{\mathsf{Set}}\rightarrow\mathsf{Mly}^{\flat}_{ \mathsf{Set}}\) and \(\Pi:\mathsf{Mly}^{\flat}_{\mathsf{Set}}\rightarrow\mathsf{Mac}^{s}\) whose composition is \(\Gamma\).
Proof.: The proof is deferred to the appendix, p. 24.
**Theorem 2.18**.: The 2-functor \((\_)^{\flat}:\mathsf{Mly}_{\mathsf{Set}}\to\mathsf{Mly}_{\mathsf{Set}}^{\flat}\) admits a right 2-adjoint; the 2-functor \(\Pi:\mathsf{Mly}_{\mathsf{Set}}^{\flat}\to\mathsf{Mac}^{\mathrm{s}}\) identifies \(\mathsf{Mly}_{\mathsf{Set}}^{\flat}\) as the 1-full and 2-full subcategory of \(\mathsf{Mac}^{\mathrm{s}}\) spanned by monoids.
Proof.: The proof is deferred to the appendix, p. 24. The last statement essentially follows from (17): the span \((D,\Sigma)\) is essentially equivalent to the fugal Mealy machine \(\langle s,d\rangle\), since its left leg \(D\) determines a unique action of \(A^{*}\) on the set of objects \(\mathcal{E}[d^{*}]_{0}\), and \(\Sigma\) and \(s\) are mutually defined.
## 3 Bicategory-valued machines
A monoidal category is just a bicategory with a single 0-cell; then, do Definition 1.1 and Definition 1.3 admit a generalisation when instead of \(\mathcal{K}\) we consider a bicategory \(\mathbb{B}\) with more than one object? The present section answers in the positive. We also outline how, passing to automata valued in a bicategory, a seemingly undiscovered way to define morphisms between automata emerges, different from Remark 1.2 and from the categories of 'variable' automata described in [21, SS11.1]: we study this notion in Definition 3.12.
In our setting, 'automata' become diagrams of _2-cells_ in \(\mathbb{B}\), and input, output and state are _1-cells_, in contrast with previous studies where automata appeared as objects, and with [59] (and our section 2), where they appear as 1-cells. This perspective suggests that 2-dimensional diagrams of a certain shape can be thought of as state machines -so, they carry a computational meaning; but also that state machines can be fruitfully interpreted as diagrams: in Example 3.11 we explore definitions of an automaton where input and output are _relations_, or functors (in Example 3.9), or profunctors (in Example 3.10); universal objects that can be attached to the 2-dimensional diagram then admit a computational interpretation (cf. (28) where a certain Kan extension resembles a'reachability' relation).
This idea is not entirely new: it resembles an approach contained in [9, 10] where the author models the state space of abstract machines as a functor, of which one can take the left/right Kan extension along an 'input scheme'. However, Bainbridge's works are rather obscure (and quite ahead of their time), so we believe we provide some advancement to the state of the art by taking his idea seriously and carrying it to its natural development -while at the same time, providing concrete examples of bicategories in which inputs/outputs automata can be thought of as 1-cells, and investigating the structure of the class of all such automata as a global object.
**Definition 3.1**.: Adapting from Definition 1.1_verbatim_, if \(\mathbb{B}\) is a bicategory with 0-cells \(A,B,X,Y,\dots\), 1-cells \(i:A\to B,o:X\to Y,\dots\) and 2-cells \(\alpha,\beta,\dots\) the kind of object we want in \(\mathsf{Mly}_{\mathbb{B}}(i,o)\) is a span of the following form:
\[\Big(\,e\xLeftarrow{\;\delta\;}e\circ i\xRightarrow{\;\sigma\;}o\,\Big) \tag{20}\]
for \(1\)-cells \(i:X\to Y\), \(e:A\to B\), \(o:C\to D\). Note that with \(\_\circ\_\), we denote the composition of \(1\)-cells in \(\mathbb{B}\), which becomes a monoidal product when \(\mathbb{B}\) has a single \(0\)-cell.
**Remark 3.2**.: The important observation here is that the mere existence of the span \((\delta,\sigma)\) 'coerces the types' of \(i,o,e\) in such a way that \(i\)_must_ be an endomorphism of an object \(A\in\mathbb{B}\), and \(e,o:A\to B\) are \(1\)-cells. Interestingly, these minimal assumptions required even to consider an object like (20) make iterated compositions \(i\circ\cdots\circ i\) as meaningful as iterated tensors \(I\otimes\cdots\otimes I\), and in fact, the two concepts coincide when \(\mathbb{B}\) has a single object \(*\) and hom-category \(\mathbb{B}(*,*)=\mathcal{K}\).
In the monoidal case, the fact that an input \(1\)-cell stands on a different level from an output was completely obscured by the fact that _every_\(1\)-cell is an endomorphism.
Let us turn this discussion into a precise definition.
**Definition 3.3** (Bicategory-valued Mealy machines).: Let \(\mathbb{B}\) be a bicategory, and fix two \(1\)-cells \(i:A\to A\) and \(o:A\to B\); define a category \(\mathsf{Mly}_{\mathbb{B}}(i,o)\) as follows:
1. the objects are diagrams of \(2\)-cells as in (20);
2. the morphisms \((e,\delta,\sigma)\to(e^{\prime},\delta^{\prime},\sigma^{\prime})\) are \(2\)-cells \(\varphi:e\Rightarrow e^{\prime}\) subject to conditions similar to Remark 1.2: * \(\sigma^{\prime}\circ(\varphi*i)=\sigma\); * \(\delta^{\prime}\circ(\varphi*i)=\varphi\circ\delta\).
**Definition 3.4** (Bicategory-valued Moore machines).: Define a category \(\mathsf{Mre}_{\mathbb{B}}(i,o)\) as follows:
1. the objects are pairs of \(2\)-cells in \(\mathbb{B}\), \(\delta:e\circ i\Rightarrow e\) and \(\sigma:e\Rightarrow o\);
2. the morphisms \((e,\delta,\sigma)\to(e^{\prime},\delta^{\prime},\sigma^{\prime})\) are \(2\)-cells \(\varphi:e\Rightarrow e^{\prime}\) such that diagrams of \(2\)-cells similar to those in Definition 3.3 are commutative.
**Notation 3.5**.: In the following, an object of \(\mathsf{Mly}_{\mathbb{B}}(i,o)\) (resp., \(\mathsf{Mre}_{\mathbb{B}}(i,o)\)) will be termed a _bicategorical Mealy machine_ (resp., a _bicategorical Moore machine_) of input cell \(i\) and output cell \(o\), and the objects \(A,B\) are the _base_ of the bicategorical Mealy machine \((e,\delta,\sigma)\). To denote that a bicategorical Mealy machine is based on \(A,B\) we write \((e,\delta,\sigma)_{A,B}\).
In [10] the author models the state space of abstract machines as follows: fix categories \(A,X,E\) and a functor \(\Phi:X\to A\), of which one can take the left/right Kan extension along an 'input scheme' \(u:E\to X\); a _machine with input scheme_\(u\) is a diagram of \(2\)-cells in \(\mathsf{Cat}(E,A)\) of the form \(\mathcal{M}=(I\Rightarrow\Phi\circ u\Rightarrow J)\), and the _behaviour_\(B(\mathcal{M})\) of \(\mathcal{M}\) is the diagram of \(2\)-cells \(\mathrm{Lan}_{u}I\Rightarrow\Phi\Rightarrow\mathrm{Ran}_{u}J\).
All this bears some resemblance to the following remark, but at the same time looks very mysterious, and not much intuition is given in _op. cit._ for what
the approach in study means; we believe our development starts from a similar point (the intuition that a category of machines is, in the end, some category of diagrams -a claim we substantiate in Proposition 3.8) but rapidly takes a different turn (cf. Definition 3.12), and ultimately gives a cleaner account of Bainbridge's perspective (see also [9] of the same author).
**Remark 3.6** (Behaviour as a Kan extension).: A more convenient depiction of the span in (20) will shed light on our Definition 3.3 and 3.4, giving in passing a conceptual motivation for the convoluted shape of finite products in \(\mathsf{Mre}_{\mathcal{K}}(I,O)\) and \(\mathsf{Mly}_{\mathcal{K}}(I,O)\) (cf. [21, Ch. 11]): a bicategorical Moore machine in \(\mathbb{B}\) of fixed input and output \(i,o\) consists of a way of filling the dotted arrows in the diagram
\[e\circ i\xRightarrow{\;\delta\;}e\xRightarrow{\;\sigma\;}o \tag{21}\]
with \(e:A\to B\) and two \(2\)-cells \(\delta,\sigma\). But then the 'terminal way' of filling such a span can be characterised by the right extension of the output object along a certain \(1\)-cell obtained from the input \(i\). Let us investigate how.
First of all, we have to assume something on the ambient hom-categories \(\mathbb{B}(A,A)\), namely that each of these admits a left adjoint to the forgetful functor
\[\{\text{monads on }A\text{ in }\mathbb{B}\}\longrightarrow\mathbb{B}(A,A) \tag{22}\]
so that every endo-\(1\)-cell \(i:A\to A\) has an associated extension to an endo-\(1\)-cell \(i^{\natural}:A\to A\) with a unit map \(i\Rightarrow i^{\natural}\) that is initial among all \(2\)-cells out of \(i\) into a monad in \(\mathbb{B}\); \(i^{\natural}\) is usually called the _free monad_ on \(i\).
**Construction 3.7**.: Now, fix \(i,o\) as in Definition 3.4; we claim that the terminal object of \(\mathsf{M}\mathsf{r}\mathsf{e}_{\mathbb{B}}(i,o)\) is obtained as the right extension in \(\mathbb{B}\) of the output \(o\) along \(i^{\natural}\). We can obtain
* from the unit \(\boldsymbol{\eta}:\mathrm{id}_{A}\Rightarrow i^{\natural}\) of the free monad on \(i\), a canonical modification \(\mathrm{Ran}_{i^{\natural}}\Rightarrow\mathrm{Ran}_{\mathrm{id}}=\mathrm{id}_{A}\), with components at \(o\) given by \(2\)-cells \(\sigma:\mathrm{Ran}_{i^{\natural}}o\Rightarrow o\); this is a choice of the right leg for a diagram like (20);
* from the multiplication \(\boldsymbol{\mu}:i^{\natural}\circ i^{\natural}\Rightarrow i^{\natural}\) of the free monad on \(i\), a canonical modification \(\mathrm{Ran}_{i^{\natural}}\Rightarrow\mathrm{Ran}_{i^{\natural}}\circ\mathrm{Ran}_{i^{\natural}}\), whose components at \(o\) mate to a \(2\)-cell \(\delta_{0}:\mathrm{Ran}_{i^{\natural}}o\circ i^{\natural}\Rightarrow\mathrm{Ran}_{i^{\natural}}o\); the composite \[\delta:\mathrm{Ran}_{i^{\natural}}o\circ i\xRightarrow{\;\mathrm{Ran}_{i^{\natural}}o\,*\,\boldsymbol{\eta}\;}\mathrm{Ran}_{i^{\natural}}o\circ i^{\natural}\xRightarrow{\;\delta_{0}\;}\mathrm{Ran}_{i^{\natural}}o \tag{23}\] is the left leg now chosen for a diagram like (20).
Together, \((\mathrm{Ran}_{i^{\natural}}o,\delta,\sigma)\) is a bicategorical Moore machine, and the universal property of the right Kan extension says it is the terminal such. A similar line of reasoning yields the same result for \(\mathsf{Mly}_{\mathbb{B}}(i,o)\), only now \(\sigma\) is the \(2\)-cell
obtained as mate of \(\epsilon\circ(\operatorname{Ran}_{i^{\natural}}o*\eta):\operatorname{Ran}_{i^{ \natural}}o\circ i\Rightarrow\operatorname{Ran}_{i^{\natural}}o\circ i^{ \natural}\Rightarrow o\) from the counit of \(\_\circ i^{\natural}\dashv\operatorname{Ran}_{i^{\natural}}\).
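As a sanity check of Construction 3.7 (our addition), consider the simplest one-object case \(\mathbb{B}=\Sigma(\mathsf{Set},\times)\): then \(i^{\natural}=I^{*}\), the right extension along \(I^{*}\) is the exponential, and the construction returns the familiar terminal Moore machine whose states are behaviours,

\[\mathrm{Ran}_{I^{*}}O\;\cong\;O^{I^{*}},\qquad\sigma(f)=f(\varepsilon),\qquad\delta(f,a)=\lambda w.\,f(a\cdot w),\]

to which a Moore machine \((E,d,s)\) maps by sending a state \(e\) to the function \(w\mapsto s(d^{*}(e,w))\).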
**Proposition 3.8** (\(\mathsf{Mre}_{\mathbb{B}}(i,o)\) is a category of diagrams).: The category \(\mathsf{Mre}_{\mathbb{B}}(i,o)\) of bicategorical Moore machines with fixed input \(i\) and output \(o\) is a category of diagrams in \(\mathbb{B}\).

**Example 3.9** (Bicategorical machines in \(\mathsf{Cat}\)).: Let \(\mathbb{B}\) be the 2-category of categories, functors and natural transformations, so that input and output 1-cells are functors: unwinding Definition 3.4, the 2-cell \(\sigma\) assigns to each \(u\) an element of the output space \(\Upsilon_{C}(u)\in OC\), and this association is natural in \(C\).
**Example 3.10** (Bicategorical machines in profunctors).: We can reason similarly in the bicategory of categories and profunctors of [16; 17; 40], [51; Ch. 5]; now an endo-1-cell \(I:\mathcal{C}\to\mathcal{C}\) on a category \(\mathcal{C}\) consists of an 'extension' of the underlying graph of \(UC\) to a bigger graph \((UC)^{+}\),6 and the free promonad \(I^{\natural}\) (cf. [49, SS5]) corresponds to the quotient of the free category on \((UC)^{+}\) where 'old' arrows compose as in \(\mathcal{C}\), and 'new' arrows compose freely; moreover, all right extensions \(\langle P/Q\rangle:\mathcal{X}\rightsquigarrow\mathcal{Y}\) of \(Q:\mathcal{A}\rightsquigarrow\mathcal{Y}\) along \(P:\mathcal{A}\rightsquigarrow\mathcal{X}\) exist in the bicategory \(\mathtt{Prof}\), as they are computed as the end in [51, 5.2.5],
Footnote 6: More precisely, to the underlying graph of \(\mathcal{C}\), made of ‘old’ arrows, we adjoin a directed edge \(e_{x}:C\to C^{\prime}\) for each \(x\in I(C,C^{\prime})\).
\[\langle P/Q\rangle:(X,Y)\mapsto\int_{A\in\mathcal{A}}\mathrm{Set}\big(P(A,X),Q(A,Y)\big). \tag{28}\]
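For instance (a special case added here purely as an illustration, and not spelled out in the source), when \(\mathcal{A}\) is a discrete category the end reduces to a product of hom-sets,

\[\langle P/Q\rangle(X,Y)\;\cong\;\prod_{A\in\mathcal{A}}\mathrm{Set}\big(P(A,X),Q(A,Y)\big),\]

which makes the defining universal property \(\mathrm{Hom}(R\circ P,Q)\cong\mathrm{Hom}(R,\langle P/Q\rangle)\) easy to verify by hand.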
**Definition 3.12** (Intertwiner between bicategorical machines).: Consider two bicategorical Mealy machines \((e,\delta,\sigma)_{A,B},(e^{\prime},\delta^{\prime},\sigma^{\prime})_{A^{\prime},B^{\prime}}\) on different bases (so in particular \((e,\delta,\sigma)_{A,B}\in\mathsf{Mly}_{\mathsf{B}}(i,o)\) and \((e^{\prime},\delta^{\prime},\sigma^{\prime})_{A^{\prime},B^{\prime}}\in \mathsf{Mly}_{\mathsf{B}}(i^{\prime},o^{\prime})\)); an _intertwiner_ \((u,v):(e,\delta,\sigma)\looparrow(e^{\prime},\delta^{\prime},\sigma^{\prime})\) consists of a pair of 1-cells \(u:A\to A^{\prime},v:B\to B^{\prime}\) and a triple of 2-cells \(\iota,\epsilon,\omega\) disposed as follows:
(29)
which we require to satisfy the following identities (we provide a 'bird's-eye' view of the commutativities that we require, as (29) is unambiguous about how the 2-cells \(\iota,\delta,\sigma,\epsilon,\omega\) can be composed):
(30)
**Remark 3.13**.: Interestingly enough, when spelled out in the case where \(\mathsf{B}\) has a single 0-cell, this notion does not reduce to that of Remark 1.2, as an intertwiner between a Mealy machine \((E,d,s)_{I,O}\) and another \((E^{\prime},d^{\prime},s^{\prime})_{I^{\prime},O^{\prime}}\) consists of a pair of objects \(U,V\in\mathcal{K}\) such that
1. there exist morphisms \(\iota:I^{\prime}\otimes U\to V\otimes I,\epsilon:E^{\prime}\otimes U\to V \otimes E,\omega:O^{\prime}\otimes U\to V\otimes O\);
2. the following two identities hold: \[\epsilon\circ(d^{\prime}\otimes U) =(V\otimes d)\circ(\epsilon\otimes I)\circ(E^{\prime}\otimes\iota)\] \[\omega\circ(s^{\prime}\otimes U) =(V\otimes s)\circ(\epsilon\otimes I)\circ(E^{\prime}\otimes\iota)\]
In the single-object case, this notion does not trivialise in any obvious way and, in stark contrast with the notion of morphism of automata given in (1.2), intertwiners between machines support a notion of higher morphisms _even in the monoidal case_.
**Definition 3.14** (2-cell between machines).: In the notation of Definition 3.12, let \((u,v),(u^{\prime},v^{\prime}):(e,\delta,\sigma)\looparrow(e^{\prime},\delta^{\prime},\sigma^{\prime})\) be two parallel intertwiners between bicategorical Mealy machines; a 2-cell \((\varphi,\psi):(u,v)\Rightarrow(u^{\prime},v^{\prime})\) consists of a pair of 2-cells \(\varphi:u\Rightarrow u^{\prime}\), \(\psi:v\Rightarrow v^{\prime}\) such that the following identities hold:
(31)
**Remark 3.15**.: When it is specialised to the monoidal case, Definition 3.14 yields the following notion: a \(2\)-cell \((f,g):(U,V)\Rightarrow(U^{\prime},V^{\prime})\) as in Remark 3.13 consists of a pair of morphisms \(f:U\to U^{\prime}\) and \(g:V\to V^{\prime}\) subject to the conditions that the following two squares commute:
(32)
Intuitively speaking, in this particular case, the machine \(2\)-cells correspond to pairs \((f,g)\) of \(\mathcal{K}\)-morphisms such that both pairs \((E^{\prime}\otimes I^{\prime}\otimes f,E^{\prime}\otimes f)\) and \((g\otimes E\otimes I,g\otimes E)\) form morphisms in the arrow category of \(\mathcal{K}\).
**Remark 3.16**.: Let \(\mathbb{B}\) be a bicategory; in [43] the authors exploit the universal property of a bicategory \(\Omega\mathbb{B}=\mathsf{Psd}(\mathbf{N},\mathbb{B})\) as the category of pseudofunctors, lax natural transformations and modifications with domain the monoid of natural numbers, regarded as a single object category. The typical object of \(\Omega\mathbb{B}\) is an endomorphism \(i:A\to A\) of an object \(A\in\mathbb{B}\), and the typical \(1\)-cell consists of a lax commutative square
(33)
This presentation raises the natural question of whether there is a tautological functor \(\mathsf{Mly}_{\mathbb{B}}\to\Omega\mathbb{B}\) given by 'projection', sending \((i,o;(e,\delta,\sigma))\) to \(i\); the answer is clearly affirmative, and in fact such a functor mates to a unique \(2\)-functor \(\mathbf{N}\boxtimes\mathsf{Mly}_{\mathbb{B}}\to\mathbb{B}\) under the isomorphism given by the Gray tensor product [30]; this somehow preserves the intuition (cf. [65, SS1]) of \(\Omega\mathbb{B}\) as a category of 'lax dynamical systems'.
## 4 Conclusions
We sketch some directions for future research.
**Conjecture 4.1**.: Let \(T\) be a monad on \(\mathsf{Set}\), and \(\mathcal{V}\) a quantale [24, Ch. 2]; we can define the locally thin bicategory \((T,\mathcal{V})\)-\(\mathsf{Prof}\) as in [35, Ch. III] where \(1\)-cells \(a:A\stackrel{{ T}}{{\rightsquigarrow}}B\) are \((T,\mathcal{V})\)_-relations_, i.e. \(\mathcal{V}\)-functors \(a:TA\times B\to\mathcal{V}\) for a fixed _lax extension_ \(\hat{T}:\mathcal{V}\)-\(\mathsf{Prof}\to\mathcal{V}\)-\(\mathsf{Prof}\) of \(T\) to \(\mathcal{V}\)-\(\mathsf{Prof}\). In each of these bicategories, we can interpret our Construction 3.7 and adapt Example 3.10, considering the expression for \(\mathrm{Ran}_{q}r\). For suitable choices of the pair \((T,\mathcal{V})\), inside \((T,\mathcal{V})\)-\(\mathsf{Prof}\), one can recognise the categories of topological spaces, approach spaces [52], metric and ultrametric spaces, closure spaces, and so on, as the \((T,\mathcal{V})\)_-categories_ of [35, SSIII.1.6]. We conjecture that when instantiated in \((T,\mathcal{V})\)-\(\mathsf{Prof}\), Equation 28
yields a 2-categorical way to look at topological, metric and loosely speaking 'fuzzy' approaches to automata theory.
**Conjecture 4.2**.: From Examples 3.9 and 3.10 we suspect that the 'non-determinism via Kleisli category' approach of [33] can be carried over to the presheaf construction on \(\mathsf{Cat}\) and its Kleisli bicategory \(\mathsf{Prof}\): just as automata (classically intended) in the Kleisli category of the powerset monad are nondeterministic automata in \(\mathsf{Set}\), _bicategorical_ automata in the Kleisli _bicategory_ of the _presheaf construction_ (cf. [25]) are nondeterministic bicategorical automata; passing from Example 3.9 to Example 3.10 accounts for a form of non-determinism.
The exciting conjecture here is that one might be able to address _nondeterministic_ bicategorical automata in \(\mathsf{B}\) as _deterministic_ bicategorical automata in a proarrow equipment [60, 69, 70] for \(\mathsf{B}\). The well-established apparatus of formal category theory could then elucidate classical constructions like minimisation, behaviour, and bisimulation by putting them in the bigger conceptual framework arising from Definition 3.3, Definition 3.4.
**Conjecture 4.3**.: We have been left with two questions about the adjunction we outline in Theorem 2.18:
* is the adjunction \(\mathsf{Mly}_{\mathsf{Set}}\leftrightarrows\mathsf{Mly}_{\mathsf{Set}}^{ \flat}\) 2-monadic? In other words, can one recognise fugal Mealy machines as the algebras for the 2-monad \(\mathsf{Mly}_{\mathsf{Set}}\xrightarrow{(\_)^{\flat}}\mathsf{Mly}_{\mathsf{ Set}}^{\flat}\to\mathsf{Mly}_{\mathsf{Set}}\)?
* if \(\mathsf{Mly}_{\mathsf{Set}}^{\flat}\) or \(\mathsf{Mac}^{\flat}\) are categories with feedback in the sense of [44, 2.6], is it the case that the functors \(\mathsf{Mly}_{\mathsf{Set}}\to\mathsf{Mly}_{\mathsf{Set}}^{\flat}\) and \(\Gamma\) can be characterised more swiftly through the universal property of \(\mathsf{Mly}_{\mathsf{Set}}=\mathsf{Circ}(\mathsf{Set},\times)\) as the free category with feedback over \((\mathsf{Set},\times)\)? In light of the results expounded in [50, SS2.4 and SS3], which characterise many free categories with feedback as categories of spans, it seems reasonable to conjecture that \(\mathsf{Mac}\) and \(\mathsf{Mac}^{\flat}\) are also categories with feedback.
|
2306.17016 | On the suitability of rigorous coupled-wave analysis for fast optical
force simulations | Optical force responses underpin nanophotonic actuator design, which requires
a large number of force simulations to optimize structures. Commonly used
computation methods, such as the finite-difference time-domain (FDTD) method,
are resource intensive and require large amounts of calculation time when
multiple structures need to be compared during optimization. This research
demonstrates that performing optical force calculations on periodic structures
using the rigorous coupled-wave analysis method is typically on the order of 10
times faster than FDTD with sufficient accuracy to suit optical design
purposes. Moreover, this speed increase is available on consumer grade laptops
with a CUDA-compatible GPU avoiding the need for a high performance computing
resource. | Bo Gao, Henkjan Gersen, Simon Hanna | 2023-06-29T15:09:06Z | http://arxiv.org/abs/2306.17016v1 | # On the suitability of rigorous coupled-wave analysis for fast optical force simulations
###### Abstract
Optical force responses underpin nanophotonic actuator design, which requires a large number of force simulations to optimize structures. Commonly used computation methods, such as the finite-difference time-domain (FDTD) method, are resource intensive and require large amounts of calculation time when multiple structures need to be compared during optimization. This research demonstrates that performing optical force calculations on periodic structures using the rigorous coupled-wave analysis method is typically on the order of 10 times faster than FDTD with sufficient accuracy to suit optical design purposes. Moreover, this speed increase is available on consumer grade laptops with a CUDA-compatible GPU avoiding the need for a high performance computing resource.
Footnote †: _J. Opt._
_Keywords_: optical force, rigorous coupled-wave analysis (RCWA), finite-difference time-domain (FDTD), metamaterials
## 1 Introduction
With the rapid development of nanofabrication techniques and an increased understanding of optical responses, fabricating fast optically driven actuators has become feasible. One of the challenges in practice is to design the structure of a potential actuator in a given material system with the desired optical response. This process is also known as inverse design[1], and it is commonly carried out by comparing simulated responses for various structures.
A diverse range of conventional optimization algorithms have been used to accelerate the search for optimal structures in inverse design, including genetic algorithms[2], simulated annealing[3], gradient-search algorithms[4] and topology optimization[5]. These optimization methods are search strategy driven, because the structures to be simulated depend on decisions made during the algorithm and cannot be foretold and prepared in advance. Recently, data-driven algorithms involving machine learning[6] and neural networks[7] have started to be used[8], where the simulations needed for the training sets are independent of the optimization routines and can be prepared in advance and reused.
Although the approaches mentioned above provide optimization strategies that avoid exploring the entire structural parameter space, they still require a large number of simulations of various structures. These simulations are normally the most computationally expensive parts in inverse design routines due to the high complexity of the structural parameters. As a result, the suitability of any inverse design method strongly depends on the speed and required computing resources to obtain an accurate simulation of an individual structure.
The interest in optical forces for actuation control has exploded in recent decades due to the development of optical trapping techniques[9]. As a tool for manipulating the dynamic behaviour of nanophotonic structures[10, 11, 12], optical forces have the potential for creating fast directly driven actuators[13]. Although optical forces are usually small in magnitude, of the order of pN, and operate on the microscopic scale, research shows that they can be manipulated and amplified to be observed at a macroscopic scale[14], typically through the use of repeats of a designed motif[15]. Although many kinds of optical forces, such as radiation pressure and gradient forces[16], are considered in designing actuators, we are particularly interested in optimising so-called lateral optical forces, which typically act in a direction that is not parallel to the propagation direction of the incident light.
There are several methods available to calculate optical forces directly from scattering theory, such as the discrete dipole approximation[22] and the T-matrix method[23]. However, these methods are designed to calculate scattering from single or simple geometric structures and are generally not well suited to dealing with repeated complex structures or periodic boundary conditions.
The finite-difference time-domain (FDTD), finite-difference frequency-domain (FDFD) and finite element (FEM) methods are commonly used for simulating electromagnetic fields in periodic structures by discretising space and imposing appropriate boundary conditions. As a time-evolution based method, FDTD can take a long time for a system to reach the steady state, rendering it inefficient for inverse design studies used for force optimization. The FDFD and FEM methods are capable of calculating the steady state directly, but their solution requires the inversion of a huge sparse matrix whose size is determined by the number of lattice sites required. This can be problematic when the structure is 3D and requires a fine mesh to model[24].
The rigorous coupled-wave analysis (RCWA) method, as a type of Fourier modal method which is optimal for monochromatic situations, has been proven to be an extremely fast and efficient tool for studying reflectance, transmittance and diffraction efficiencies of multilayered periodic structures[25]. RCWA is not typically used in the field of nanophotonic inverse design due to its slow convergence behaviour in dealing with structures much smaller than a wavelength and high refractive index (RI) contrast materials. Although several approaches based on RCWA[26, 27, 28] have recently demonstrated that these difficulties can be overcome, the suitability of performing optical force calculations using RCWA has, to the best of our knowledge, not been tested.
In this article, we present an optical force simulation approach for periodic structures suitable for
nanophotonic inverse design of optical actuators based on the RCWA method. To illustrate our approach, we simulate an experimentally studied structure[18] which acts as a benchmark for comparing force calculations using the RCWA and FDTD methods. After describing the structure studied and outlining the principle of optical force calculation using RCWA and FDTD, we compare optical force calculations of the two methods using two scenarios that are most commonly found in inverse design: changing of incident wavelength and structural parameters. We then investigate the convergence and accuracy for the calculation of optical forces using the two methods. Finally, we demonstrate that the computational time usage of RCWA is a fraction of that of FDTD whilst its hardware requirements are those within the availability of consumer grade laptops.
## 2 Benchmark
In order to compare the computation performance of optical forces calculated using RCWA and FDTD, a series of structures derived from the meta-vehicle study of Andren _et al[18]_ were chosen as a benchmark. These structures consist of an asymmetric repeating motif, as shown in figure 1, giving rise to a component of optical force perpendicular to the incident field direction.
All the structures were illuminated with a normally incident plane-wave propagating in the \(z\) direction and linearly polarized in the \(x\) direction as shown schematically in figure 1(a). Due to the lack of mirror symmetry in the \(x\) direction of the system, the light deflected by the structure is asymmetric and as a result exerts an optical force \(F_{\mathrm{opt}}\) with a significant lateral component \(F_{\mathrm{lof}}\) in the \(x\) direction and a force component related to the radiation pressure \(F_{\mathrm{rad}}\) in the \(z\) direction. As the structures of interest are periodic arrays of nanoantenna inclusions, periodic boundary conditions in the \(x\) and \(y\) directions of the unit cell are used in the simulation.
Figure 1(b) and figure 1(c) show the unit cell structure used in the simulation. The unit cell has a periodicity of \(\Lambda_{x}=950\,\mathrm{nm}\) and \(\Lambda_{y}=600\,\mathrm{nm}\) and a total thickness of \(H=1000\,\mathrm{nm}\), and is immersed in water (RI 1.33). The dimer nanoantenna inclusion is \(200\,\mathrm{nm}\) in width (w), \(460\,\mathrm{nm}\) in height (h), \(720\,\mathrm{nm}\) in total length (L) and with a \(50\,\mathrm{nm}\) gap (g) that separates the two blocks. The inclusion is made of polycrystalline Si (RI 3.45), whereas other parts of the unit cell consist of SiO\({}_{2}\) (RI 1.45). The incident wavelength in water \(\lambda_{w}\) and the position of the gap centre \(x_{g}\) were modified and the corresponding optical forces exerted on the structure were studied in the simulations that were performed.
To simplify the simulation, and because the FDTD code, MEEP[29], currently can only calculate optical forces in a vacuum, we re-scaled all RIs and wavelengths used in this study by the RI of water, such that the background material is reduced to vacuum to allow a direct comparison between the two methods.
## 3 Principle of RCWA and FDTD
Both the FDTD and RCWA methods solve Maxwell's equations, either in the time-domain for FDTD:
\[\mathbf{\nabla}\times\mathbf{E}(\mathbf{r},t) = -\partial_{t}\mathbf{B}(\mathbf{r},t), \tag{1}\] \[\mathbf{\nabla}\times\mathbf{H}(\mathbf{r},t) = \partial_{t}\mathbf{D}(\mathbf{r},t)+\mathbf{J}(\mathbf{r},t), \tag{2}\]
or in the frequency-domain for RCWA:
\[\mathbf{\nabla}\times\mathbf{E}(\mathbf{r},\omega) = -\mathrm{i}\omega\mathbf{B}(\mathbf{r},\omega), \tag{3}\] \[\mathbf{\nabla}\times\mathbf{H}(\mathbf{r},\omega) = \mathrm{i}\omega\mathbf{D}(\mathbf{r},\omega)+\mathbf{J}(\mathbf{r},\omega), \tag{4}\]
where \(\mathbf{r}=(x,y,z)\) is the position vector, \(\omega\) is the angular frequency and \(\mathbf{J}\) is the electric current density. The frequency-domain equations are the Fourier transform of the time-domain equations. In both sets of equations, the complexity of solving electric and magnetic fields \(\mathbf{E}\) and \(\mathbf{H}\) comes from the electric and magnetic flux densities \(\mathbf{D}\) and \(\mathbf{B}\), as they rely on the complex relative permittivity \(\tilde{\epsilon}_{r}(\mathbf{r})\) and permeability \(\tilde{\mu}_{r}(\mathbf{r})\) distribution of the materials:
\[\mathbf{D}(\mathbf{r}) = \epsilon_{0}\tilde{\epsilon}_{r}(\mathbf{r})\mathbf{E}(\mathbf{r}), \tag{5}\] \[\mathbf{B}(\mathbf{r}) = \mu_{0}\tilde{\mu}_{r}(\mathbf{r})\mathbf{H}(\mathbf{r}), \tag{6}\]
Figure 1: (a) A schematic view of the optical system: linear \(x\)-polarized light propagating along the \(z\) direction gets deflected due to the asymmetric dimer inclusions. The resulting optical force \(\mathbf{F}_{\mathrm{opt}}\) is decomposed into a lateral optical force \(\mathbf{F}_{\mathrm{lof}}\) in \(x\) direction and a radiation force \(\mathbf{F}_{\mathrm{rad}}\) in \(z\) direction. The unit cell structure (b) and its top view (c) demonstrate the definition of coordinates and key structural parameters. For details on dimensions and materials see further details in the main text.
where \(\mu_{0}\) and \(\epsilon_{0}\) are the permeability and permittivity of vacuum, respectively.
FDTD solves (1) and (2) by discretising both space and time and updating the field values iteratively at each time step[30]. In order to obtain second-order accuracy for the discretisation, both time- and space-derivatives are approximated with central finite differences, and different field components are stored on different grid locations known as the Yee lattice [30]. The Yee lattice is designed to allow the \(\mathbf{E}(\mathbf{r})\) and \(\mathbf{H}(\mathbf{r})\) components to leap-frog each other in time and space on each iteration. During this process, the electric and magnetic fields are computed as a function of time, so that all the secondary properties, such as optical forces and fluxes, can be deduced from the primary fields. Since this is a time-evolution simulation, it is necessary to include the source current density in the simulation region, and generally to begin from a point in time where there are no fields in space. Consequently, any steady-state measurement has to wait for the system to settle, which can be a lengthy process.
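To make the leap-frog idea concrete, the sketch below shows a minimal one-dimensional Yee update loop in Python. It is purely illustrative: the natural units, grid sizes and Gaussian soft source are assumptions of this sketch, and it is not the MEEP implementation used later in this work.

```python
import numpy as np

# Minimal 1D FDTD leap-frog sketch in natural units (c = eps0 = mu0 = 1).
nz, nt, dz = 400, 2000, 1.0
dt = 0.5 * dz                     # satisfies the 1D Courant stability condition
Ex = np.zeros(nz)                 # E sampled on integer grid points
Hy = np.zeros(nz - 1)             # H sampled on the staggered half-grid (Yee lattice)

for step in range(nt):
    # Update H from the spatial finite difference of E (half a step later in time)
    Hy += dt / dz * (Ex[1:] - Ex[:-1])
    # Update E from the spatial finite difference of H
    Ex[1:-1] += dt / dz * (Hy[1:] - Hy[:-1])
    # Soft source: inject a Gaussian pulse at the centre of the grid
    Ex[nz // 2] += np.exp(-((step - 100) / 20.0) ** 2)
```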
The RCWA method, on the other hand, is a semi-analytical method that solves the frequency-domain equations (3) and (4). The current density term can be dropped if the region of interest has no current source in it. By considering a monochromatic incident plane-wave with angular frequency \(\omega_{0}\), or wavenumber \(k_{0}\), and substituting (5) and (6) as well as a reduced magnetic field \(\mathbf{H}_{\rm r}(\mathbf{r})=-{\rm i}\sqrt{\mu_{0}/\epsilon_{0}}\mathbf{H}(\mathbf{r})\), (3) and (4) are reduced to
\[\mathbf{\nabla}\times\mathbf{E}(\mathbf{r}) = k_{0}\tilde{\mu}_{\rm r}(\mathbf{r})\mathbf{H}_{\rm r}(\mathbf{r}), \tag{7}\] \[\mathbf{\nabla}\times\mathbf{H}_{\rm r}(\mathbf{r}) = k_{0}\tilde{\epsilon}_{\rm r}(\mathbf{r})\mathbf{E}(\mathbf{r}). \tag{8}\]
Here, the reduced magnetic field \(\mathbf{H}_{\rm r}(\mathbf{r})\) is introduced for improved numerical robustness as it shares the same order of magnitude with the electric field \(\mathbf{E}(\mathbf{r})\).
As a Fourier modal method, RCWA also takes the Fourier transform of space in the \(x\) and \(y\) directions, which are perpendicular to the beam propagation direction. Hence the fields as well as the permittivity and permeability distribution can be expressed as a Fourier series:
\[\mathbf{E}(x,y,z) = \sum_{m,n}\mathbf{e}^{mn}(z)\exp[-{\rm i}(k_{x}^{mn}x+k_{y}^{mn}y)], \tag{9}\] \[\mathbf{H}_{\rm r}(x,y,z) = \sum_{m,n}\mathbf{h}^{mn}(z)\exp[-{\rm i}(k_{x}^{mn}x+k_{y}^{mn}y)], \tag{10}\]
where \(\mathbf{e}^{mn}(z)\) and \(\mathbf{h}^{mn}(z)\) are \(z\)-dependent analytical amplitude functions of the corresponding Fourier mode. Taking the grating diffraction equation with periodicities \(\Lambda_{x}\) and \(\Lambda_{y}\) into account, the Fourier modes \(k_{x}^{mn}\) and \(k_{y}^{mn}\) are equivalent to the \(x\) and \(y\) components of the wavevector of the \(mn^{\rm th}\) order diffracted plane wave:
\[k_{x}^{mn}=k_{{\rm in},x}-2\pi m/\Lambda_{x}, \tag{11}\]
\[k_{y}^{mn}=k_{{\rm in},y}-2\pi n/\Lambda_{y}, \tag{12}\]
where \(k_{{\rm in},x}\) and \(k_{{\rm in},y}\) are the \(x\) and \(y\) components of the wavevector of the incident plane wave. Similarly, the relative permittivity and permeability can be expressed in reciprocal space as \(\tilde{\epsilon}_{r}^{mn}(z)\) and \(\tilde{\mu}_{r}^{mn}(z)\) such that their distribution only depends on the \(z\) coordinate:
\[\tilde{\epsilon}_{\rm r}(x,y,z) = \sum_{m,n}\epsilon_{r}^{mn}(z)\exp[-{\rm i}(k_{x}^{mn}x+k_{y}^{mn }y)], \tag{13}\] \[\tilde{\mu}_{\rm r}(x,y,z) = \sum_{m,n}\mu_{\rm r}^{mn}(z)\exp[-{\rm i}(k_{x}^{mn}x+k_{y}^{mn }y)]. \tag{14}\]
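Equations (11)-(13) translate directly into a few lines of array code. The sketch below (not the authors' code) tabulates the transverse wavevectors of the truncated diffraction orders and obtains the Fourier coefficients of one layer's permittivity with an FFT; normal incidence and the grid sizes are assumptions made only for this illustration.

```python
import numpy as np

# Truncated set of diffraction orders; N_m and N_n are odd, as in the text
Nm, Nn = 41, 41
m = np.arange(-(Nm // 2), Nm // 2 + 1)
n = np.arange(-(Nn // 2), Nn // 2 + 1)
Mg, Ng = np.meshgrid(m, n, indexing="ij")

Lx, Ly = 950e-9, 600e-9           # periodicities Lambda_x and Lambda_y
kin_x = kin_y = 0.0               # normal incidence assumed for this sketch

# Equations (11)-(12): transverse wavevectors of the mn-th diffracted order
KX = kin_x - 2 * np.pi * Mg / Lx  # arrays of shape (Nm, Nn)
KY = kin_y - 2 * np.pi * Ng / Ly

# Equation (13): Fourier coefficients of one layer's permittivity, obtained
# from a real-space sample eps_r(x, y); a uniform SiO2 layer is used here
eps_r = np.full((256, 256), 1.45 ** 2)
eps_mn = np.fft.fftshift(np.fft.fft2(eps_r)) / eps_r.size   # zero order at the centre
```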
Considering a structure can be divided into several layers which are uniform in the \(z\) direction, the distribution of \(\tilde{\epsilon}_{\rm r}\) and \(\tilde{\mu}_{\rm r}\) in real space depends only on the \(x\) and \(y\) coordinates for each layer. Then, within each layer, after taking the Fourier transform that eliminates the dependence on \(x\) and \(y\) coordinates, the relative permittivity and permeability in each Fourier mode become constant.
Consequently, the partial derivatives of the \(x\) and \(y\) components are reduced into algebraic products of a factor of \({\rm i}k_{x}^{mn}\) and \({\rm i}k_{y}^{mn}\) for the \(mn^{\rm th}\) plane wave mode, respectively. Meanwhile, after taking the Fourier transform, the product of the permittivity and the electric field (or the permeability and the magnetic field) becomes a convolution of their Fourier transforms. In numeric computation, the total number of plane wave modes is truncated by \(N_{\rm modes}=N_{m}\times N_{n}\), where \(N_{m}\) and \(N_{n}\) are the number of modes in the \(x\) and \(y\) directions, respectively. In doing so the convolution can be replaced by a matrix multiplication by converting the set of constants \(\tilde{\epsilon}_{r}^{mn}\) and \(\tilde{\mu}_{r}^{mn}\) into Toeplitz convolution matrices[31]\(\llbracket\epsilon_{\rm r}\rrbracket\) and \(\llbracket\mu_{\rm r}\rrbracket\); the size of the Toeplitz convolution matrix is \(N_{\rm modes}\times N_{\rm modes}\).
Similarly, the \(x\) and \(y\) components of the wavevector for each plane wave mode \(k_{x}^{mn}\) and \(k_{y}^{mn}\) can be expressed in the matrix forms \(\mathbf{K_{x}}\) and \(\mathbf{K_{y}}\). The \(x\) and \(y\) components of the amplitude functions for each Fourier mode defined in (9) and (10) can therefore be written into column vectors \(\mathbf{e}_{x}(z)\), \(\mathbf{e}_{y}(z)\) and \(\mathbf{h}_{x}(z)\), \(\mathbf{h}_{y}(z)\).
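The Toeplitz convolution matrix can be assembled from the centred Fourier coefficients as in the sketch below. This is a deliberately naive construction (dedicated RCWA codes use equivalent but more optimised indexing), shown only to make the bookkeeping explicit.

```python
import numpy as np

def convolution_matrix(eps_mn, Nm, Nn):
    """Toeplitz convolution matrix [[eps_r]] built from centred 2D Fourier
    coefficients eps_mn (zero order at the array centre); the result has
    size (Nm*Nn) x (Nm*Nn), one row/column per retained plane wave mode."""
    c0, c1 = eps_mn.shape[0] // 2, eps_mn.shape[1] // 2
    C = np.zeros((Nm * Nn, Nm * Nn), dtype=complex)
    orders = [(p, q) for p in range(Nm) for q in range(Nn)]
    for row, (p, q) in enumerate(orders):
        for col, (r, s) in enumerate(orders):
            C[row, col] = eps_mn[c0 + p - r, c1 + q - s]
    return C

# Example: a uniform silica layer, for which [[eps_r]] is diagonal
Nm = Nn = 5
eps_mn = np.zeros((2 * Nm - 1, 2 * Nn - 1), dtype=complex)
eps_mn[Nm - 1, Nn - 1] = 1.45 ** 2        # only the zero-order coefficient is non-zero
C = convolution_matrix(eps_mn, Nm, Nn)    # equals 1.45**2 times the identity
```

The matrices \(\mathbf{K_x}\) and \(\mathbf{K_y}\) are then simply diagonal matrices holding the flattened \(k_{x}^{mn}\) and \(k_{y}^{mn}\) values (conventionally normalised by \(k_{0}\)).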
Eventually, (7) and (8) reduce to:
\[\frac{{\rm d}}{{\rm d}z}\left[\begin{array}{c}\mathbf{e}_{x}(z)\\ \mathbf{e}_{y}(z)\end{array}\right] = \mathbf{M}\left[\begin{array}{c}\mathbf{h}_{x}(z)\\ \mathbf{h}_{y}(z)\end{array}\right], \tag{15}\] \[\frac{{\rm d}}{{\rm d}z}\left[\begin{array}{c}\mathbf{h}_{x}(z)\\ \mathbf{h}_{y}(z)\end{array}\right] = \mathbf{N}\left[\begin{array}{c}\mathbf{e}_{x}(z)\\ \mathbf{e}_{y}(z)\end{array}\right], \tag{16}\]
where \(\mathbf{M}\) and \(\mathbf{N}\) are matrices of size \(2N_{\rm modes}\times 2N_{\rm modes}\) defined by:
\[\mathbf{M} = k_{0}\left[\begin{array}{c}\mathbf{K_{x}}\llbracket\epsilon_{ \rm r}\rrbracket^{-1}\mathbf{K_{y}}\quad\llbracket\mu_{\rm r}\rrbracket- \mathbf{K_{x}}\llbracket\epsilon_{\rm r}\rrbracket^{-1}\mathbf{K_{x}}\\ \mathbf{K_{y}}\llbracket\epsilon_{\rm r}\rrbracket^{-1}\mathbf{K_{y}}- \llbracket\mu_{\rm r}\rrbracket-\mathbf{K_{y}}\llbracket\epsilon_{\rm r} \rrbracket^{-1}\mathbf{K_{x}}\end{array}\right], \tag{17}\] \[\mathbf{N} = k_{0}\left[\begin{array}{c}\mathbf{K_{x}}\llbracket\mu_{\rm r} \rrbracket^{-1}\mathbf{K_{y}}\quad\llbracket\epsilon_{\rm r}\rrbracket- \mathbf{K_{x}}\llbracket\mu_{\rm r}\rrbracket^{-1}\mathbf{K_{x}}\\ \mathbf{K_{y}}\llbracket\mu_{\rm r}\rrbracket^{-1}\mathbf{K_{y}}- \llbracket\epsilon_{\rm r}\rrbracket-\mathbf{K_{y}}\llbracket\mu_{\rm r} \rrbracket^{-1}\mathbf{K_{x}}\end{array}\right]. \tag{18}\]
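Equations (17)-(18) map almost verbatim onto block-matrix code. In the sketch below the convolution and wavevector matrices are taken as given (e.g. built as above, with \(\mathbf{K_x}\) and \(\mathbf{K_y}\) normalised by \(k_{0}\), which is an assumption of this sketch), and \(\llbracket\mu_{\rm r}\rrbracket\) is the identity for non-magnetic materials.

```python
import numpy as np

def build_MN(k0, Kx, Ky, Ceps, Cmu):
    """Matrices M and N of equations (17)-(18); Kx, Ky, Ceps and Cmu are
    square matrices of size N_modes x N_modes (Kx and Ky diagonal)."""
    eps_inv = np.linalg.inv(Ceps)
    mu_inv = np.linalg.inv(Cmu)
    M = k0 * np.block([
        [Kx @ eps_inv @ Ky,        Cmu - Kx @ eps_inv @ Kx],
        [Ky @ eps_inv @ Ky - Cmu, -Ky @ eps_inv @ Kx],
    ])
    N = k0 * np.block([
        [Kx @ mu_inv @ Ky,         Ceps - Kx @ mu_inv @ Kx],
        [Ky @ mu_inv @ Ky - Ceps, -Ky @ mu_inv @ Kx],
    ])
    return M, N

# Sanity check: for a single mode at normal incidence in vacuum
# (Kx = Ky = [[0.]], Ceps = Cmu = [[1.]]), M @ N = -k0**2 * I, so the
# solutions of (19) are plane waves exp(-1j*k0*z), as expected.
```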
By taking another derivative of (15) and (16) with respect to \(z\), the association between the electric and
magnetic terms is decoupled:
\[\frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}}\left[\begin{array}{c} \mathbf{e}_{x}(z)\\ \mathbf{e}_{y}(z)\end{array}\right]-\mathbf{M}\mathbf{N}\left[\begin{array}{c}\bm {e}_{x}(z)\\ \mathbf{e}_{y}(z)\end{array}\right]=0, \tag{19}\] \[\frac{\mathrm{d}^{2}}{\mathrm{d}z^{2}}\left[\begin{array}{c} \mathbf{h}_{x}(z)\\ \mathbf{h}_{y}(z)\end{array}\right]-\mathbf{M}\mathbf{N}\left[\begin{array}{c} \mathbf{h}_{x}(z)\\ \mathbf{h}_{y}(z)\end{array}\right]=0, \tag{20}\]
Therefore, within each layer, the two 3D partial differential equations in real space (7) and (8) are rewritten into two sets of 1D second order ordinary differential equations with respect to the \(z\) coordinate.
In practice, only one of the two sets of equations (19) and (20) needs to be solved, as the other can be calculated directly by matrix multiplication; here, we solve (19). Since it is a wave equation in matrix form, its solution can be found analytically by using eigendecomposition[32]:
\[\left[\begin{array}{c}\mathbf{e}_{x}(z)\\ \mathbf{e}_{y}(z)\end{array}\right]=\mathbf{W}e^{-\mathbf{\kappa}z}\mathbf{c}^{+}+\mathbf{ W}e^{+\mathbf{\kappa}z}\mathbf{c}^{-}, \tag{21}\]
where \(\mathbf{W}\) is the eigenvector matrix of \(\mathbf{M}\mathbf{N}\) and \(\mathbf{\kappa}\) is the (element-wise) square root of the corresponding diagonal eigenvalue matrix, so that \(\mathbf{M}\mathbf{N}\mathbf{W}=\mathbf{W}\mathbf{\kappa}^{2}\). \(\mathbf{c}^{+}\) and \(\mathbf{c}^{-}\) represent light that enters the given layer propagating in the forward and backward directions, respectively. The magnetic terms can be calculated from the electric terms by using (16), giving:
\[\left[\begin{array}{c}\mathbf{h}_{x}(z)\\ \mathbf{h}_{y}(z)\end{array}\right] = -\mathbf{N}\mathbf{W}\mathbf{\kappa}^{-1}\mathrm{e}^{-\mathbf{\kappa}z}\mathbf{c}^{+}+\mathbf{N}\mathbf{W}\mathbf{\kappa}^{-1}\mathrm{e}^{+\mathbf{\kappa}z}\mathbf{c}^{-} \tag{22}\] \[= -\mathbf{V}\mathrm{e}^{-\mathbf{\kappa}z}\mathbf{c}^{+}+\mathbf{V}\mathrm{e}^{+\mathbf{\kappa}z}\mathbf{c}^{-},\]
where \(\mathbf{V}=\mathbf{N}\mathbf{W}\mathbf{\kappa}^{-1}\). Therefore, the overall solution \(\mathbf{\Psi}_{i}(z)\) within the \(i^{\text{th}}\) layer with thickness \(L_{i}\) can be written as:
\[\mathbf{\Psi}_{i}(z) = \begin{bmatrix}\mathbf{e}_{x,i}(z)\\ \mathbf{e}_{y,i}(z)\\ \mathbf{h}_{x,i}(z)\\ \mathbf{h}_{y,i}(z)\end{bmatrix} \tag{23}\] \[= \begin{bmatrix}\mathbf{W}_{i}&\mathbf{W}_{i}\\ -\mathbf{V}_{i}&\mathbf{V}_{i}\end{bmatrix}\begin{bmatrix}\mathrm{e}^{-\mathbf{ \kappa}_{i}z}&0\\ 0&\mathrm{e}^{\mathbf{\kappa}_{i}z}\end{bmatrix}\begin{bmatrix}\mathbf{c}_{i}^{+}\\ \mathbf{c}_{i}^{-}\end{bmatrix}.\]
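A sketch of how one layer's modes might be obtained with PyTorch, in the spirit of the implementation described in section 5 (this is not the authors' code, and the decaying-branch convention for \(\mathbf{\kappa}\) is an assumption of the sketch):

```python
import torch

def layer_modes(M, N):
    """Eigendecomposition of M @ N for one layer, cf. equations (19)-(23).
    M and N are complex-valued square torch tensors of the same size."""
    vals, W = torch.linalg.eig(M @ N)       # M N W = W kappa**2
    kappa = torch.sqrt(vals)
    flip = kappa.real < 0                   # choose the branch with Re(kappa) >= 0,
    kappa[flip] = -kappa[flip]              # so exp(-kappa*z) decays for evanescent modes
    V = N @ W @ torch.diag(1.0 / kappa)     # magnetic-field eigenmodes: V = N W kappa^-1
    return W, torch.diag(kappa), V
```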
The scattering matrix (S-matrix) can be constructed by considering the boundary between the \(i^{\text{th}}\) and \((i+1)^{\text{th}}\) layers:
\[\mathbf{\Psi}_{i}(z=z_{i}+L_{i})=\mathbf{\Psi}_{i+1}(z=z_{i+1}), \tag{24}\]
where \(z_{i}\) and \(z_{i+1}\) are the lower boundary coordinates of the corresponding layer. The multiple reflection between each layer can be handled by using the Redheffer star product to stack the S-matrices of each layer to form a global S-matrix that represents the entire structure[32]. Since the solution is expressed in Fourier modes and satisfies the plane-wave decomposition, the diffraction efficiency of each plane wave mode can be found directly by applying the corresponding (reflection and transmission) S-matrix block on the incident light.
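The Redheffer star product used to stack the per-layer S-matrices has a compact closed form; the sketch below follows the standard formulation, and the block names and dict layout are choices of this illustration rather than of the paper.

```python
import numpy as np

def redheffer_star(SA, SB):
    """Combine scattering matrices SA and SB (dicts with blocks '11', '12',
    '21', '22') into the S-matrix of the two systems stacked in sequence,
    accounting for the multiple reflections between them."""
    I = np.eye(SA["11"].shape[0])
    D = np.linalg.inv(I - SB["11"] @ SA["22"])
    F = np.linalg.inv(I - SA["22"] @ SB["11"])
    return {
        "11": SA["11"] + SA["12"] @ D @ SB["11"] @ SA["21"],
        "12": SA["12"] @ D @ SB["12"],
        "21": SB["21"] @ F @ SA["21"],
        "22": SB["22"] + SB["21"] @ F @ SA["22"] @ SB["12"],
    }
```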
## 4 Optical force calculation
Optical forces \(\mathbf{F}_{\text{opt}}\) typically originate from absorption and scattering of light momentum[33]. Direct calculation of optical forces requires knowledge of the electromagnetic nearfield distribution in the vicinity of the structure, which is usually performed by a surface integral of the Maxwell stress tensor \(\mathbf{\sigma}\) over a surface \(S\) enclosing the object or region of interest:
\[\mathbf{F}_{\text{opt}}=\oint_{S}\mathbf{\sigma}\cdot\mathrm{d}\mathbf{A}. \tag{25}\]
The components of the Maxwell stress tensor \(\sigma_{\alpha\beta}\) in a vacuum can be found easily by using the local electric and magnetic field components:
\[\sigma_{\alpha\beta}=\epsilon_{0}E_{\alpha}E_{\beta}+\mu_{0}H_{\alpha}H_{\beta }-\tfrac{1}{2}\delta_{\alpha\beta}(\epsilon_{0}|\mathbf{E}|^{2}+\mu_{0}|\mathbf{H}|^{2 })\, \tag{26}\]
where \(\alpha,\beta\in\{x,y,z\}\). This is the force calculation method used in the FDTD simulations, as the near-field information is readily available from the discretisation.
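Schematically, the surface integral (25) with the tensor components (26) amounts to summing \(\mathbf{\sigma}\cdot\hat{\mathbf{n}}\,\mathrm{d}A\) over a closed box of grid points around the object. The helper below is illustrative only (instantaneous real fields at a single point); MEEP performs the equivalent bookkeeping internally.

```python
import numpy as np

eps0 = 8.854187817e-12
mu0 = 4e-7 * np.pi

def maxwell_stress_tensor(E, H):
    """Maxwell stress tensor of equation (26) for real field vectors E and H
    (length-3 arrays) at one point; returns a 3x3 array."""
    sigma = eps0 * np.outer(E, E) + mu0 * np.outer(H, H)
    sigma -= 0.5 * np.eye(3) * (eps0 * E @ E + mu0 * H @ H)
    return sigma

# Contribution of one surface patch with outward normal nhat and area dA:
# dF = maxwell_stress_tensor(E_point, H_point) @ nhat * dA
```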
Optical forces \(\mathbf{F}_{\text{opt}}\) can also be calculated indirectly by considering the conservation of energy and momentum. The average optical force exerted on a scattering object in one time period of oscillation \(T\) can be deduced from the momentum difference of the scattered and incident fields:
\[\langle\mathbf{F}_{\text{opt}}\rangle=-\frac{\mathbf{p}_{\text{scat}}-\mathbf{p}_{\text{ in}}}{T}. \tag{27}\]
where \(\mathbf{p}_{\text{scat}}\) and \(\mathbf{p}_{\text{in}}\) are the momentum of scattered and incident fields, respectively. Since the optical force is linear with the incident power for a given structure, a dimensionless coefficient \(\mathbf{\mathcal{F}}_{\text{opt}}\) is used to characterize the force response resulting from scattering:
\[\mathbf{\mathcal{F}}_{\text{opt}}=\frac{\langle\mathbf{F}_{\text{opt}}\rangle}{P_{ \text{in}}/c}, \tag{28}\]
where \(P_{\text{in}}\) is the incident power.
Since the total field calculated by the RCWA method is a summation over all the modes in the plane wave decomposition, the total momentum carried by the scattered field is simply the sum of the momenta of the individual modes. The linear momentum \(\mathbf{p}^{mn}\) of the \(mn^{\text{th}}\) plane wave mode can be found directly from its unit wave vector \(\hat{\mathbf{k}}^{mn}\) and radiation energy \(\mathcal{U}^{mn}\):
\[\mathbf{p}^{mn}=\frac{\mathcal{U}^{mn}n_{0}}{c}\hat{\mathbf{k}}^{mn}\, \tag{29}\]
where \(n_{0}\) is the RI of the surrounding medium. Therefore, the dimensionless force coefficient \(\mathbf{\mathcal{F}}_{\text{opt}}\) can be found by:
\[\mathbf{\mathcal{F}}_{\text{opt}} = \frac{(\mathcal{U}_{\text{in}}\hat{\mathbf{k}}_{\text{in}}-\sum_{m,n} \mathcal{U}^{mn}\hat{\mathbf{k}}^{mn})n_{0}/cT}{P_{\text{in}}/c} \tag{30}\] \[= n_{0}(\hat{\mathbf{k}}_{\text{in}}-\sum_{m,n}a^{mn}\hat{\mathbf{k}}^{mn}),\]
where \(a^{mn}=P^{mn}/P_{\text{in}}\) is the corresponding diffraction efficiency for each mode directly calculated from the
RCWA method, \(\mathcal{U}_{\mathrm{in}}\) is the radiation energy of incident light, and \(P^{mn}\) is the power of the \(mn^{\mathrm{th}}\) plane wave mode. Similar to the optical force \(\mathbf{F}_{\mathrm{opt}}\) shown in figure 1, the dimensionless force coefficient \(\mathbf{\mathcal{F}}_{\mathrm{opt}}\) can also be decomposed into a lateral component \(\mathcal{F}_{x}\) that corresponds to the lateral optical force and a forward component \(\mathcal{F}_{z}\) that corresponds to the radiation pressure.
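The momentum bookkeeping of (30) is straightforward to script once the diffraction efficiencies and unit wavevectors of the propagating orders are known; the function below is a sketch with placeholder inputs, not part of the benchmark code.

```python
import numpy as np

def force_coefficient(a_mn, khat_mn, khat_in, n0=1.0):
    """Dimensionless force coefficient of equation (30).

    a_mn    : diffraction efficiencies of the propagating orders, shape (M,)
    khat_mn : corresponding unit wavevectors, shape (M, 3)
    khat_in : unit wavevector of the incident beam, shape (3,)
    n0      : refractive index of the surrounding medium
    """
    return n0 * (khat_in - np.sum(a_mn[:, None] * khat_mn, axis=0))

# Example: all power transmitted into the (0, 0) order -> no momentum transfer,
# so both the lateral and the forward coefficients vanish.
F = force_coefficient(np.array([1.0]),
                      np.array([[0.0, 0.0, 1.0]]),
                      np.array([0.0, 0.0, 1.0]))
```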
## 5 Implementation
The FDTD method was performed using MEEP[29], an open-source software package implemented in C++ with a Python interface. It was parallelized using the Message Passing Interface (MPI). The RCWA method was implemented in Python using the PyTorch package for GPU acceleration of matrix operations with CUDA.
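A minimal sketch of the kind of device and precision selection this implies in PyTorch; the matrix size below corresponds to 41 modes per direction, and the snippet is illustrative rather than the authors' code.

```python
import torch

# Use the GPU when a CUDA device is available, in double (complex128) precision;
# complex64 halves the memory footprint at the cost of accuracy.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dtype = torch.complex128

size = 2 * 41 ** 2                       # size of M and N for 41 x 41 modes
A = torch.zeros((size, size), dtype=dtype, device=device)
B = A @ A                                # dense products like this dominate the run time
```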
Most of the simulations were carried out on a single node of BlueCrystal Phase4 (BC4), a high performance computing (HPC) platform at the University of Bristol. The FDTD simulations on BC4 were performed using two Intel E5-2860 v4 CPUs (28 physical cores in total), whereas the RCWA simulations on BC4 were executed using one of the CPUs and an Nvidia P100 GPU with 16 GB of graphical memory.
Two consumer grade laptops were also used to compare the computing performances of RCWA and FDTD with BC4. Laptop A has an 8 core Intel i7-11800H CPU and an Nvidia GeForce RTX3080 GPU with 8 GB graphical memory, while laptop B has a 12 core Intel i7-1270P CPU and an Nvidia GeForce MX550 GPU with 2 GB graphical memory.
## 6 Results and Discussion
### Force spectrum
In order to verify that our implementation of the RCWA method provides a correct calculation of the force coefficients, the force spectrum of the meta-vehicle structure[18] was calculated by varying the incident wavelength using both the RCWA and FDTD approaches. Simulations were performed using the structure shown in figure 1 with the parameters taken from the paper[18] and listed in section 2. Figure 2(a) and figure 2(b) show the lateral and forward optical force coefficients \(\mathcal{F}_{x}\) and \(\mathcal{F}_{z}\) versus incident wavelength in water, computed for wavelengths ranging from 300 nm to 1200 nm. Results are superimposed from the FDTD and RCWA approaches showing excellent agreement between the two methods, validating the RCWA implementation.
A number of features of the spectra deserve comment. Due to the periodicity of the structure, and taking the grating diffraction equation into account, in the region where the wavelength in water \(\lambda_{w}>950\) nm the lateral optical force coefficient \(\mathcal{F}_{x}\) is identically zero, as only the zero-order diffraction is permitted and there is therefore no lateral momentum transfer. This region is highlighted by the dark grey background on the right hand side of figure 2. It can be seen that the optical forces vary dramatically with the wavelength in water, requiring a high sampling density to resolve the details of the spectrum. Any local maxima and minima or narrow peaks and valleys appearing in the spectrum may indicate interference effects within the structure or possible resonance effects. For example, the two large peaks in the forward optical force coefficient \(\mathcal{F}_{z}\) spectrum at \(\lambda_{w}=970.5\) nm and \(\lambda_{w}=1006.5\) nm are due to total (or almost total) reflection. The two valleys located at \(\lambda_{w}=984.0\) nm and \(\lambda_{w}=1191.0\) nm, where \(\mathcal{F}_{z}\) is nearly zero, correspond to total transmission.
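The \(\lambda_{w}=950\) nm cut-off can be checked with a few lines: at normal incidence an order \((m,n)\) propagates only if its transverse wavevector is smaller than the wavenumber in the surrounding (rescaled) medium. The snippet below is a quick illustration of that counting, not part of the simulation code.

```python
import numpy as np

def propagating_orders(lam, Lx=950e-9, Ly=600e-9, max_order=3):
    """List the (m, n) diffraction orders that propagate (are not evanescent)
    at normal incidence for wavelength lam in the surrounding medium."""
    k = 2 * np.pi / lam
    orders = []
    for m in range(-max_order, max_order + 1):
        for n in range(-max_order, max_order + 1):
            kt_sq = (2 * np.pi * m / Lx) ** 2 + (2 * np.pi * n / Ly) ** 2
            if kt_sq < k ** 2:
                orders.append((m, n))
    return orders

print(propagating_orders(1000e-9))   # only (0, 0): no lateral momentum transfer
print(propagating_orders(900e-9))    # also (+-1, 0): a lateral force becomes possible
```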
Figure 2: Dimensionless force coefficients (a) \(\mathcal{F}_{x}\) and (b) \(\mathcal{F}_{z}\) calculated by FDTD and RCWA for the structure shown in figure 1 with \(x_{g}=65\) nm and different incident wavelengths. The dark grey background represents the region where only zero-order diffraction exists, whereas the lighter one represents the region where second-order diffraction exists. The red and blue dashed lines mark the two wavelengths in water \(\lambda_{w}=800\) nm and \(\lambda_{w}=900\) nm, respectively, to be investigated in the convergence study.
The wavelength used by Andren _et al_[18], \(1064\,\)nm (\(\lambda_{w}=800\,\)nm), marked by the red dashed line, appears to be a local minimum of \(\mathcal{F}_{z}\), which may result from an interference effect around the inclusion layer. This can be seen in terms of thin-film interference by taking into account the effective refractive index[34] (ERI) averaged by volume fraction. Since the inclusion layer has a height of \(h=460\,\)nm and an ERI of 2.86, the effective optical path length of the layer is \(1316\,\)nm, which is about \(5/4\) of the incident wavelength (\(1064\,\)nm). This indicates that the light reflected by the inclusion layer has a phase difference of \(\pi\) with the incident light. The blue dashed line at \(\lambda_{w}=900\,\)nm marks a place where the optical force coefficient spectrum is rather flat and shows no sign of strong interference or resonance effects. This point will be examined in more detail in the convergence studies (see below).
The force spectrum from FDTD can be calculated in a single simulation, provided that the light source is broadband and covers all wavelengths of interest. This follows from Fourier transforming the time sequence data [35]. However, one of the disadvantages of this technique is that when the wavelength of interest is too far from the central wavelength of the light source, the fraction of power at that wavelength is small and the error in the force coefficients can be amplified due to machine truncation error. Furthermore, as the time sequence of fields is truncated when the system reaches the criterion for steady state, taking a Fourier transform also produces a termination effect. This is the reason the FDTD curves in figure 2 show larger noise, especially in the zero-order diffraction region (dark grey region). Although these fluctuations can be reduced by running the simulation for a longer time or applying a window to the time sequence to reduce the termination error, the most reliable solution is to run the simulation multiple times, once for each wavelength individually.
### Varying gap centre
To further compare the consistency between RCWA and FDTD and their potential behaviour in inverse design, a set of simulations that varies the position of the gap centre \(x_{g}\) (see figure 1(c)) was performed, while the incident wavelength in water was kept unchanged at \(\lambda_{w}=800\,\)nm. These simulations, as an example of geometry optimization, were designed to find the optimal gap position for the largest lateral optical force coefficient \(\mathcal{F}_{x}\). Figure 3 shows the calculated optical force coefficients versus gap centre position ranging from \(x_{g}=-335\,\)nm to \(x_{g}=335\,\)nm. The total length of the inclusion and the width of the gap in between were kept unchanged at \(L=720\,\)nm and \(g=50\,\)nm.
The calculated optical force coefficients from RCWA and FDTD show no significant differences. The original structure (\(x_{g}=65\,\)nm) from Andren _et al_[18] is indicated by the red dashed line with \(\mathcal{F}_{x}=0.44\). However, the optimal gap position was found at \(x_{g}=100.5\,\)nm as indicated by the green dashed line with \(\mathcal{F}_{x}=0.55\).
In addition, an odd symmetry can be observed for \(\mathcal{F}_{x}\) and an even symmetry for \(\mathcal{F}_{z}\) about the gap position \(x_{g}=0\,\)nm, which is to be expected due to the fact that, at this gap position, the two blocks on either side of the gap have the same length of \(335\,\)nm and the mirror symmetry in the \(x\) direction is regained. Consequently, the lateral optical force coefficient \(\mathcal{F}_{x}\) is 0 at this point and, for similar reasons, at the left and right ends as well.
### Convergence
Since the force spectrum study and the varying gap centre study confirm that the optical force coefficients calculated from the RCWA and FDTD methods are in good agreement, it is important to find the conditions under which each method works most efficiently. In each case this will be a compromise between the accuracy of the calculated optical force coefficients and the computing resources required.
The computing resources required and the accuracy of the simulation are controlled directly by the number of lattice sites per \(\mu\mathrm{m}\) (resolution) for FDTD, and the total number of plane wave modes in each direction (number of modes) for RCWA. Higher resolution or a higher number of modes indicates more precise
Figure 3: Force coefficients \(\mathcal{F}_{x}\) and \(\mathcal{F}_{z}\) calculated by FDTD and RCWA methods with different gap centre position \(x_{g}\) while the total length of the inclusions and the gap width in between are kept unchanged as \(720\,\)nm and \(50\,\)nm. The red and green dashed lines indicate gap positions \(x_{g}=65\,\)nm and \(x_{g}=100.5\,\)nm, which correspond to the original structure[18] and a more optimal structure for lateral optical force coefficient \(\mathcal{F}_{x}\) response, respectively.
modelling of the structure, but require greater computing resources in terms of memory and time usage.
Figure 4 shows convergence of the lateral optical force coefficient \(\mathcal{F}_{x}\) and the forward optical force coefficient \(\mathcal{F}_{z}\) of three models as a function of the resolution for FDTD and the number of modes for RCWA. The key parameters of the three models are the incident wavelength in water \(\lambda_{w}\) and the \(x\) coordinate of the gap centre position \(x_{g}\), which are summarized in table 1. The "flat \(\lambda_{w}\)" model, as marked by the blue dashed line in figure 2 with \(\lambda_{w}=900\,\mathrm{nm}\) and \(x_{g}=65\,\mathrm{nm}\), represents a region of the force spectrum where possible resonance and interference effects within the structure are less significant and the force response versus wavelength is relatively flat. The "optimal \(x_{g}\)" model, as indicated by the green dashed line in figure 3 with \(\lambda_{w}=800\,\mathrm{nm}\) and \(x_{g}=100.5\,\mathrm{nm}\), corresponds to the optimal gap position with the largest lateral optical force coefficient \(\mathcal{F}_{x}\), as discussed previously in section 6.2. The "original" model, as highlighted by the red dashed line in both figures with \(\lambda_{w}=800\,\mathrm{nm}\) and \(x_{g}=65\,\mathrm{nm}\), refers to the parameters used by Andren _et al_[18].
All the optical force coefficients vary with a fractional difference between neighbouring data points of less than 1% at and beyond the \(100\,\mu\mathrm{m}^{-1}\) resolution for FDTD or 41 modes in each direction (\(41^{2}\) in total) for RCWA, as marked by the black dashed lines in figure 4. Therefore the values at \(100\,\mu\mathrm{m}^{-1}\) for FDTD and 41 modes for RCWA were taken to represent a standard criterion for convergence across all our simulations, representing a convenient compromise between possible accuracy and available computing resources. The number of modes includes both positive and negative modes and the zero-order mode, and hence will always be an odd integer. Table 2 compares the optical force coefficients taken at these standard points and their relative differences (\(|\mathrm{RCWA}-\mathrm{FDTD}|/|\mathrm{FDTD}|\times 100\%\)) as well. It shows that the calculated optical force coefficients from the two methods under standard conditions are in good agreement, and their differences are all no larger than 2%.
Close examination shows that the fluctuation of the RCWA results for the \(\mathcal{F}_{z}\) curve of the "original" model after the chosen convergence point is slightly larger than for the FDTD method. This variation cannot be removed by simply adding more modes and it appears to be due to the Gibbs phenomenon which occurs when taking Fourier transforms of discontinuous systems. In the present case we are dealing with
\begin{table}
\begin{tabular}{c c c} \hline Model & Wavelength \(\lambda_{w}\) (nm) & Gap centre \(x_{g}\) (nm) \\ \hline original (red) & 800 & 65.0 \\ flat \(\lambda_{w}\) (blue) & 900 & 65.0 \\ optimal \(x_{g}\) (green) & 800 & 100.5 \\ \hline \end{tabular}
\end{table}
Table 1: Parameters used in the three convergence studies. The total length of the inclusion \(L=720\,\mathrm{nm}\) and the gap width \(g=50\,\mathrm{nm}\) are kept unchanged.
\begin{table}
\begin{tabular}{c c c c} \hline Model & FDTD & RCWA & Differences \\ \hline \(\mathcal{F}_{x}\): original & 0.4378 & 0.4396 & 0.41\% \\ \(\mathcal{F}_{z}\): original & 0.7536 & 0.7454 & 1.09\% \\ \(\mathcal{F}_{x}\): flat \(\lambda_{w}\) & 0.2679 & 0.2643 & 1.33\% \\ \(\mathcal{F}_{z}\): flat \(\lambda_{w}\) & 0.6169 & 0.6069 & 1.62\% \\ \(\mathcal{F}_{x}\): optimal \(x_{g}\) & 0.5596 & 0.5519 & 1.39\% \\ \(\mathcal{F}_{z}\): optimal \(x_{g}\) & 0.6761 & 0.6852 & 1.36\% \\ \hline \end{tabular}
\end{table}
Table 2: Converged optical force coefficients computed using the FDTD and RCWA methods, taken at the standard points of \(100\,\mu\mathrm{m}^{-1}\) resolution or 41 modes as shown in figure 4. The differences between the two methods in the three cases are also shown and are all within 2%.
Figure 4: Convergence of the \(x\) and \(z\) components of the optical force coefficients, \(\mathcal{F}_{x}\) and \(\mathcal{F}_{z}\), using (a) the FDTD method and (b) the RCWA method for three models. The key parameters of the three models are the incident wavelength in water \(\lambda_{w}\) and the \(x\) coordinate of the gap centre position \(x_{g}\), which can be found in table 1. The black dashed lines represent the condition taken as a standard criterion for convergence, and the calculated optical force coefficients at these points are compared in table 2.
a discontinuous permittivity and permeability at the boundary of different materials in our test structure. This is the main difficulty for RCWA when dealing with structures with characteristic sizes which are much smaller than the incident wavelength. Resonance or interference effects in the system at certain wavelengths might also amplify such fluctuations, which could explain the \(\mathcal{F}_{z}\) curve of the "original" model in the RCWA method in figure 4.
Convergence of the near field in RCWA has previously been discussed and shown to be much more problematic than for the far field[28]. This is because the near field region relies on accurate calculation of both propagating diffraction orders and evanescent modes, which are particularly susceptible to the Gibbs phenomenon[28]. Thus the suitability of direct optical force calculation using the near field results from RCWA using, for example, the Maxwell stress tensor, is questionable. However, the influence of the Gibbs phenomenon on far field properties such as diffraction efficiencies can be controlled within a reasonable degree as seen in figure 4, and the consequent fluctuations in the optical forces calculated indirectly are within 1%. Therefore, we conclude that RCWA is suitable for indirect optical force calculation using momentum conservation, as in (27) and (28).
Concerning the resolution limit for FDTD, any resolution lower than \(100\,\mu\mathrm{m}^{-1}\) is not a reasonable choice for the converged point because the \(100\,\mu\mathrm{m}^{-1}\) resolution already gives only 5 lattice sites to represent the \(50\,\mathrm{nm}\) gap. Thus any lower resolution cannot describe the geometry of the structure adequately.
### Speed and computing resources
In order to test the suitability of the RCWA method for optical force optimization, aside from its ability to yield accurate results, its speed and computing resource requirements must also be considered. During the simulations mentioned above, we observed that the RCWA method is typically on the order of 10 times faster than the FDTD method on BC4 for a given force calculation, whilst its computing resource requirements are generally within the availability of a consumer grade laptop.
Figure 5 shows the time usage on BC4 for the "original" model in the convergence study with respect to memory usage. It shows that the time usage of the RCWA method for a single force evaluation is a fraction of that of the FDTD method. Simulations of the other cases perform similarly, _i.e._, the simulations at the standard points (\(100\,\mu\mathrm{m}^{-1}\) resolution for FDTD and 41 modes for RCWA) take roughly 400 seconds and 60 seconds on BC4, respectively.
In the force spectrum study, the time usage of the FDTD simulation with a broadband source is about 35 minutes on BC4, compared with roughly 201 minutes (3.5 hours) for the RCWA method. RCWA is much slower than FDTD in this situation because it cannot take advantage of Fourier transforming the time sequence data, as explained previously, and therefore it has to simulate each wavelength of interest individually. The 201 minute time usage is expected because there were 201 different wavelengths involved in figure 2, and each RCWA simulation takes about 1 minute. On the other hand, the FDTD time usage in figure 2 is greater than its typical value, _i.e._ 400 seconds, because with the broadband light source the simulation needs longer to reach a steady state. However, a disadvantage of Fourier transforming the time series data from the FDTD simulation is that it introduces termination effects in the force spectrum, and the simplest way to eliminate them is to run the FDTD simulation for each wavelength individually, which would take about 28 hours on BC4.
In the gap centre variation study (figure 3), for both RCWA and FDTD, each geometry was investigated with a single separate simulation. As there were 21 different geometries in the study, the time usage for the FDTD method was about 3.25 hours whilst the RCWA method only took about 21 minutes. Therefore the average time usage for a single simulation in this case is about 550 seconds for FDTD and 60 seconds for RCWA. The slowing of the FDTD method compared with the standard case appears to be because some geometries take longer than others to reach the steady-state criterion. Overall, these timings indicate that for a typical inverse design problem requiring force responses for a large number
Figure 5: Computational time usage (seconds) versus memory usage (MB) for the “original” model with parameters shown in table 1, using the FDTD method with different resolutions and the RCWA method with different numbers of modes, performed on BC4. The solid points represent the time and memory usage taken at the standard points of \(100\,\mu\mathrm{m}^{-1}\) and 41 modes as shown in figure 4. Arrows indicate increasing resolution or number of modes.
of different structures, the RCWA method is around an order of magnitude more efficient than the FDTD method.
Given the apparent speed benefit of RCWA when running on an HPC platform, we decided to test the feasibility of running the same code on more accessible, consumer grade equipment. Table 3 shows a time usage comparison of two laptops with BC4 for our "original" model at the standard point (\(100\,\mu\mathrm{m}^{-1}\) resolution for FDTD and \(41\times 41\) modes for RCWA). Single precision (64 bits) and double precision (128 bits) data types were tested because the graphics cards on BC4 are optimized for double precision, whereas the graphics cards of the most commonly available consumer grade laptops are configured predominantly for single precision arithmetic.
The single precision RCWA results show that the time usage on both laptops is about the same as BC4 and could be even faster if its GPU had better performance. Although the double precision RCWA method on Laptop A is 1.5 times slower, it is still about 20 times faster than the FDTD method on the laptop and more than 4 times faster than the FDTD method on BC4. Therefore, it is clear that the RCWA method can also work on consumer grade computers and laptops with a good performance. This strengthens the suitability of RCWA for optical force optimization, because access to HPC systems tends to be restricted or expensive, while CUDA capable graphics cards are readily available.
Single precision variables are not recommended for use in practice, as they bring higher truncation and rounding errors, which could potentially be amplified by the interference effects discussed above. Figure 5 also indicates that the RCWA method is more memory-intensive than the FDTD method, and its memory usage is mainly graphical memory. The missing data in table 3 are due to the fact that running 41 modes in double precision requires 2070 MB of graphical memory, which is larger than the graphical memory capacity (2 GB) of Laptop B. The available graphical memory of a GPU limits the largest number of modes in each direction that can be used on it, and it is important to estimate the maximum graphical memory usage during the computation. The graphical memory usage (in bytes) is \(768N^{4}\) for double precision and \(384N^{4}\) for single precision for a given number of modes, \(N\), in each direction. This suggests an upper limit for double precision work of \(N=39\) for the 2 GB GPU in Laptop B and \(N=57\) for the 8 GB GPU in Laptop A.
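The quoted scaling can be wrapped in a two-line helper to check which mode counts fit on a given card (the GiB conversion is the only assumption added here):

```python
def rcwa_gpu_memory_gib(N, double_precision=True):
    """Peak graphical memory estimate from the text: 768*N**4 bytes in double
    precision and 384*N**4 in single precision, for N modes per direction."""
    return (768 if double_precision else 384) * N ** 4 / 2 ** 30

print(round(rcwa_gpu_memory_gib(41), 2))   # ~2.02 GiB: exceeds a 2 GB card
print(round(rcwa_gpu_memory_gib(39), 2))   # ~1.65 GiB: fits on Laptop B's GPU
```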
Therefore, the RCWA method is more suitable for fast optical force simulation than FDTD on a consumer grade laptop with a CUDA capable graphics card and a graphical memory larger than 2 GB, which opens up opportunities for optical force based inverse design and geometry optimization without the need of HPC resources.
## 7 Conclusion
This paper shows that optical force coefficients can be calculated using the RCWA method, giving results which are consistent with the FDTD method for a fraction of the computer time. The FDTD method converged when the resolution reached \(100\,\mu\mathrm{m}^{-1}\), with a calculation time of about 400 seconds on the local HPC system. On the other hand, the RCWA method converged when the number of modes in each direction reached 41, and took about 60 seconds on the same HPC system. The differences in optical force coefficients for the benchmark case between the RCWA and FDTD methods are no larger than 2%.
The RCWA method is capable of running within a reasonable time on both HPC systems and consumer grade laptops, as long as the laptop has a CUDA-compatible GPU with graphical memory larger than 2 GB (for double precision and 41 modes in each direction). RCWA is typically about 10 times faster than FDTD on an HPC system such as BC4, and about 20 times faster on consumer grade laptops. The FDTD method could be made faster by using more HPC nodes, but such facilities are not generally available. Therefore, the RCWA method is very suitable for studies that involve calculating optical force responses for a large number of distinct structures. Furthermore, from the standpoint of green computing, using the RCWA method on personal laptops could save considerable resources and energy compared to using an HPC system, and is therefore the more sustainable option.
## Acknowledgments
BG acknowledges the China Scholarship Council (CSC) and University of Bristol for a postgraduate scholarship (award number 202008060217), and Advanced Computing Research Centre (ACRC) at the University of Bristol for High Performance Computing platform BlueCrystal phase4 (BC4) support. HG acknowledges support through an Impact Acceleration Account co-funded by EPSRC and Bristol Nano Dynamics (grant number EP/R511663/1).
|
2307.10450 | Electron Cloud Measurements in Fermilab Booster | Fermilab Booster synchrotron requires an intensity upgrade from 4.5x1012 to
6.5x1012 protons per pulse as a part of Fermilab's Proton Improvement Plan-II
(PIP-II). One of the factors which may limit the high-intensity performance is
the fast transverse instabilities caused by electron cloud effects. According
to the experience in the Recycler, the electron cloud gradually builds up over
multiple turns inside the combined function magnets and can reach final
intensities orders of magnitude greater than in a pure dipole. Since the
Booster synchrotron also incorporates combined function magnets, it is
important to measure the presence of electron cloud. The presence or apparent
absence of the electron cloud was investigated using two different methods:
measuring bunch-by-bunch tune shift by changing the bunch train structure at
different intensities and propagating a microwave carrier signal through the
beampipe and analyzing the phase modulation of the signal. This paper presents
the results of the two methods and corresponding simulation results conducted
using PyECLOUD software. | S. A. K. Wijethunga, N. Eddy, J. Eldred, C. Y. Tan, B. Fellenz, E. Pozdeyev, R. V. Sharankova | 2023-07-19T20:39:20Z | http://arxiv.org/abs/2307.10450v1 | # Electron Cloud Measurements in Fermilab Booster+
###### Abstract
Fermilab Booster synchrotron requires an intensity upgrade from 4.5\(\times\)10\({}^{12}\) to 6.5\(\times\)10\({}^{12}\) protons per pulse as a part of Fermilab's Proton Improvement Plan-II (PIP-II). One of the factors which may limit the high-intensity performance is the fast transverse instabilities caused by electron cloud effects. According to the experience in the Recycler, the electron cloud gradually builds up over multiple turns inside the combined function magnets and can reach final intensities orders of magnitude greater than in a pure dipole. Since the Booster synchrotron also incorporates combined function magnets, it is important to measure the presence of electron cloud. The presence or apparent absence of the electron cloud was investigated using two different methods: measuring bunch-by-bunch tune shift by changing the bunch train structure at different intensities and propagating a microwave carrier signal through the beampipe and analyzing the phase modulation of the signal. This paper presents the results of the two methods and corresponding simulation results conducted using PyECLOUD software.
## 1 Introduction
The formation of an electron cloud (EC) can severely limit the performance of high-intensity proton accelerators due to transverse instabilities, transverse emittance growth, particle losses, vacuum degradation, heating of the chamber's surface, etc. [1-4]. Studies conducted previously at the Fermilab Recycler facility have shown that combined function magnets can trap the EC due to their magnetic mirror effect. According to those simulations, the EC accumulates over many revolutions inside a combined function magnet and can reach final intensities orders of magnitude higher than inside a pure dipole [3].
In order to meet the demands of Fermilab's Proton Improvement Plan-II (PIP-II), the Fermilab Booster [5], a rapid-cycling (15 Hz) synchrotron which is equipped with 96 combined function magnets, will need to deliver a high-intensity beam of 6.5\(\times\)10\({}^{12}\) protons per pulse, representing a 44% increase in current intensity [6]. Therefore, it is important to find evidence of the presence of an EC in the PIP-II era Booster and evaluate if it has any impact on the desired performance.
Over the years, several techniques have been developed to measure the EC in accelerators [4, 7-11]. This paper presents two such techniques employed to explore the existence or absence of the EC in the Fermilab Booster. These techniques include measuring the bunch-by-bunch tune shift by changing the bunch train structure at different intensities and propagating a microwave carrier signal through the beampipe and analyzing the phase modulation of the signal.
## 2 Bunch-by-bunch tune shift
In order to trap electrons in a magnetic field, it is necessary to have a train of closely spaced bunches [3]. In the absence of a following bunch, the secondary electrons can go through a few elastic reflections before being absorbed by the vacuum chamber. Hence, a single bunch, _i.e.,_ a clearing bunch, following the main batch can be used to clear the EC, as it kicks the electrons into the vacuum chamber. Since an EC acts like a lens providing additional focusing or defocusing to the beam, the clearing of the EC can be observed as a shift of the betatron tune.
The existence of the EC in the Booster was first investigated by introducing different gaps in the bunch structure at varying beam intensities and analyzing the corresponding tune shifts. This paper presents measurements taken with vertical pings for four different beam intensities in protons per pulse (ppp): 1.7\(\times\)10\({}^{12}\), 4.5\(\times\)10\({}^{12}\), 5.0\(\times\)10\({}^{12}\), and 5.5\(\times\)10\({}^{12}\) near transition. According to both simulation and theory, the strongest EC in the Booster is expected to occur near transition, when the bunch length is at its shortest. Two different bunch structures were created by misaligning the laser notcher and the notcher kicker [12, 13].
Figure 1 presents the vertical tune variation of each bunch at a turn near transition for both bunch structures and all four intensities. Bunches with partial intensity due to the laser notcher show a positive shift compared to the rest of the beam. However, this may be due to impedance tune depression, as the low-intensity beam shows a smaller shift compared to the high-intensity beam. For the other bunches, no significant difference was observed between the opposite notch beam and the nominal beam.
Figure 2 depicts the vertical tune shifts of the opposite notch bunch structure with respect to the nominal notch bunch structure over the last \(\sim\)2500 turns before transition for the four intensities. Note that the tune difference at each turn was calculated by considering the bunches with typical tunes (unaffected by the nearby notches) and taking their average. According to Ref. [3], a positive tune shift in the horizontal direction indicates the presence of an EC at the beam center, and a negative tune shift in the vertical direction indicates the maximum density of the EC near the walls of the vacuum chamber. Adding a clearing bunch can reduce these tune shifts. However, the results show neither a significant nor a consistent tune shift near transition for any of the four intensities.
## 3 Simulations
The PyECLOUD code was used to simulate the EC build-up inside a combined function magnet located in the Booster synchrotron [14]. Table 1 lists the main input parameters used in the simulations. The cross-section of the combined function magnet was modelled as a rectangle with dipole and quadrupole magnetic fields.
The simulation was conducted for 3 turns near injection and transition for the same intensities as the measurements (ppp): 1.7\(\times\)10\({}^{12}\), 4.5\(\times\)10\({}^{12}\), 5.0\(\times\)10\({}^{12}\), 5.5\(\times\)10\({}^{12}\), and additionally the PIP-II intended intensity of 6.5\(\times\)10\({}^{12}\). PyECLOUD did not show meaningful EC accumulation near injection with an SEY of 1.8 and a 0.57 m (rms) bunch length. Figure 3 shows the simulations near transition.
Based on these simulations, an EC is expected to build up within the Booster. For the low-intensity beam (1.7\(\times\)10\({}^{12}\) ppp), the EC accumulation with the opposite notch bunch structure is slower than with the nominal bunch structure. Nevertheless, the EC saturation level is almost identical for all the high-intensity beams, irrespective of their bunch structure, _i.e._ these simulations show that the notch configurations that were used clear the EC only minimally.
The measurements and simulations were compared to a simple theoretical model that gives the relation between the tune shift \(\Delta Q\) and the corresponding EC density \(\rho\)[15, 16],
\[\Delta Q=\frac{r_{p}}{\gamma\beta^{2}}\langle\beta\rangle\rho C\frac{x^{2}}{(x+y)^{2}} \tag{1}\]
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
**Parameter** & **Transition** & **Injection** \\ \hline
Beam energy [GeV] & 4.2 & 0.4 \\ \hline
Bunch spacing [ns] & 19.2 & 26.4 \\ \hline
Bunch length, \(\sigma\) [m] & 0.25 & 0.57 \\ \hline
SEY, \(\delta\) & \multicolumn{2}{c|}{1.8} \\ \hline
\end{tabular}
\end{table}
Table 1: Input parameters in PyECLOUD simulations.
Figure 3: EC build-up for different beam intensities (a) nominal notch and (b) opposite notch near transition.
Figure 2: Vertical tune shift due to the gap near transition for the four different beam intensities (ppp): (a) 1.7\(\times\)10\({}^{12}\), (b) 4.5 \(\times\)10\({}^{12}\), (c) 5.0\(\times\)10\({}^{12}\), and (d) 5.5 \(\times\)10\({}^{12}\).The error was calculated by taking the standard errors of the mean. The discontinuity in some plots is due to the distorted tune bands.
where \(r_{p}\) is the classical proton radius, \(C\) is the circumference, \(\langle\beta\rangle\) is the average beta function, \(\beta\) is the relativistic beta, \(\gamma\) is the Lorentz factor, and \(x\) and \(y\) are the semi-aperture dimensions. According to these calculations, the PyECLOUD simulations imply a maximum tune shift near transition of about 0.001 (with an SEY of 1.8), which is considerably lower than what we observed in the measurements.
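For illustration only, the snippet below evaluates Equation (1) for a set of assumed values; the average beta function, the chamber semi-apertures and the EC density are not taken from the measurements or simulations above, and are chosen simply to show the order of magnitude of the conversion from \(\rho\) to a tune shift near transition.

```python
import numpy as np

# Illustrative evaluation of Eq. (1) only; the optics and cloud parameters
# below are assumed values, not results from this paper.
r_p      = 1.535e-18            # classical proton radius [m]
gamma    = 1.0 + 4.2 / 0.938    # assuming 4.2 GeV is kinetic energy (Table 1)
beta     = np.sqrt(1.0 - 1.0 / gamma ** 2)
beta_avg = 15.0                 # assumed average beta function [m]
C        = 474.2                # Booster circumference [m]
x, y     = 0.06, 0.02           # assumed chamber semi-apertures [m]
rho      = 1.0e12               # assumed electron cloud density [m^-3]

dQ = r_p / (gamma * beta ** 2) * beta_avg * rho * C * x ** 2 / (x + y) ** 2
print(f"tune shift for these assumed values: {dQ:.1e}")   # of order 1e-3
```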
## 4 Microwave Measurements
Our second method involved transmitting a microwave carrier signal through the beampipe and examining the phase modulation caused by the EC. This method can be applied in two distinct variations: transmission and resonance [17]. The transmission method requires a long section of beampipe with a uniform cross-section to avoid reflections, which can lead to undefined propagation lengths. Hence, we chose the resonance TE wave method [7, 8].
In order to couple microwaves in and out of the beampipe, beam position monitor (BPM) systems were used. After taking S\({}_{21}\) measurements at a few BPM locations (using both horizontal plates and both vertical plates) along the Booster ring, Short 15 (S15) was selected as it had a resonance at 1.355 GHz, which is also close to the pipe's cut-off frequency. At this frequency, the resonator would have an effective length of about 10 m.
Figure 4 illustrates the schematic diagram of the setup used for this experiment. Standing waves are expected to be set up between the ion pumps, creating a resonant cavity. The phase demodulator box, which was built specially for this experiment, consists of a mixer and a low-pass filter and measures the phase delay between the generated and received signals.
The measurements were carried out with two different beam intensities (ppp): 4.9 \(\times 10^{12}\) and 3.8 \(\times 10^{12}\) near transition and near extraction. Ideally, near transition, the bunch length reduces to its minimum, leading to a significant phase shift caused by the maximum EC density. However, the frequency sweep of the Booster introduces complications to this measurement. Hence, we repeated the measurement near extraction, where the RF frequency is nearly constant. The carrier frequency was set to 1.355 GHz and the amplitude to 10 dB.
Figure 5 shows the phase shift of the carrier with the beam, the carrier without the beam, and the beam structure near transition. Figure 6 shows the same results near the extraction. The data presented for intensity 3.8 \(\times 10^{12}\) is averaged over 512 beam cycles, and intensity 4.9 \(\times 10^{12}\) is averaged over 256 beam cycles.
These results show that there is no detectable phase shift due to the beam near transition or extraction for both 4.9\(\times 10^{12}\) and 3.8\(\times 10^{12}\) (ppp) intensities, and therefore no EC was measured. Further, there is no visible phase shift due to the gaps between the turns.
## 5 Conclusion
The presence or absence of the electron cloud (EC) in the PIP-II era Booster was investigated using two measurement techniques and PyECLOUD simulations. The bunch-by-bunch measurements near transition showed a larger tune shift than that calculated from the simulations, indicating the possibility of impedance tune depression. The average tune comparison near transition did not provide a clear indication of the presence of an EC. Furthermore, the microwave measurements also showed no detectable EC near either transition or extraction. Both the bunch-by-bunch tune shift method and the simulations agreed that the gap between the turns is not large enough to clear the EC; hence the resulting tune shift is not sensitive enough to make any predictions about the EC in the Booster.
To explore the impact of beampipe conditioning on EC, the microwave measurements will be repeated following the Booster shut-down period in the future. Additionally, SEY measurements of the laminations and epoxy used to build the combined function magnets will be used for more accurate simulation results.
## 6 Acknowledgment
We are grateful to Salah J Chaurize and Kent Triplett for their insights and support throughout these experiments.
Figure 4: Schematic diagram of the experimental setup.
Figure 5: Phase shift near transition for two different beam intensities (ppp): (a) 4.9 \(\times 10^{12}\) and (b) 3.8 \(\times 10^{12}\).
Figure 6: Phase shift near extraction for two different beam intensities (ppp): (a) 4.9 \(\times 10^{12}\) and (b) 3.8 \(\times 10^{12}\). |
2304.09451 | Exact solutions for diffusive transport on heterogeneous growing domains | From the smallest biological systems to the largest cosmological structures,
spatial domains undergo expansion and contraction. Within these growing
domains, diffusive transport is a common phenomenon. Mathematical models have
been widely employed to investigate diffusive processes on growing domains.
However, a standard assumption is that the domain growth is spatially uniform.
There are many relevant examples where this is not the case, such as the
colonisation of growing gut tissue by neural crest cells. As such, it is not
straightforward to disentangle the individual roles of heterogeneous growth and
diffusive transport. Here we present exact solutions to models of diffusive
transport on domains undergoing spatially non-uniform growth. The exact
solutions are obtained via a combination of transformation, convolution and
superposition techniques. We verify the accuracy of these solutions via
comparison with simulations of a corresponding lattice-based random walk. We
explore various domain growth functions, including linear growth, exponential
growth and contraction, and oscillatory growth. Provided the domain size
remains positive, we find that the derived solutions are valid. The exact
solutions reveal the relationship between model parameters, such as the
diffusivity and the type and rate of domain growth, and key statistics, such as
the survival and splitting probabilities. | Stuart T. Johnston, Matthew J. Simpson | 2023-04-19T06:46:28Z | http://arxiv.org/abs/2304.09451v2 | # Exact solutions for diffusive transport on heterogeneous growing domains
###### Abstract
From the smallest biological systems to the largest cosmological structures, spatial domains undergo expansion and contraction. Within these growing domains, diffusive transport is a common phenomenon. Mathematical models have been widely employed to investigate diffusive processes on growing domains. However, a standard assumption is that the domain growth is spatially uniform. There are many relevant examples where this is not the case, such as the colonisation of growing gut tissue by neural crest cells. As such, it is not straightforward to disentangle the individual roles of heterogeneous growth and diffusive transport. Here we present exact solutions to models of diffusive transport on domains undergoing spatially non-uniform growth. The exact solutions are obtained via a combination of transformation, convolution and superposition techniques. We verify the accuracy of these solutions via comparison with simulations of a corresponding lattice-based random walk. We explore various domain growth functions, including linear growth, exponential growth and contraction, and oscillatory growth. Provided the domain size remains positive, we find that the derived solutions are valid. The exact solutions reveal the relationship between model parameters, such as the diffusivity and the type and rate of domain growth, and key statistics, such as the survival and splitting probabilities.
keywords: Random walk, Diffusion, Survival probability, Splitting probability, Growing domain, Exact solutions. +
Footnote †: journal: arXiv
## 1 Introduction
The expansion and contraction of spatial domains occurs throughout nature [1; 2; 3; 4]. At the cosmological scale, the universe is undergoing an accelerating expansion [3]; at the organism scale, regular development requires tissue growth [4]; at the cellular scale, cardiomyocytes expand and contract with every heartbeat [1]. Diffusive processes regularly occur within or along such domains. Tissue growth involves the migration of cell populations, which can be considered as a diffusive process [2]. Throughout the intracellular cardiomyocyte environment, chemical species diffuse while driving cellular function [1]. Regulation of diffusive processes within expanding and contracting domains can be critical to healthy function and form. The failure of neural crest cells to colonise growing gut tissue during development leads to Hirschsprung's disease, which is a life-threatening birth defect characterised by a partial or full absence of the enteric nervous system [2].
While a common assumption, domain expansion and contraction is not necessarily a spatially uniform process [5; 6; 7]. Binder _et al._[5] demonstrate that the growth rate of quail gut tissue during development is different between the two midgut regions and the hindgut region. Accordingly, the migration of quail neural crest cells occurs on a domain that undergoes spatially non-uniform growth. Multilayered drug-loaded nanocapsules exhibit temperature-dependent swelling, which alters the domain through which drug molecules diffuse [8].
Mathematical models are widely used to investigate diffusive processes [9]. However, in the context of diffusive processes on multiple growing domains, previous investigations have typically focused on either a single uniformly growing domain [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21] or multiple non-growing domains [22; 23; 24]. For example, Crampin _et al._[10; 11] investigate the conditions sufficient for pattern formation to occur on a uniformly growing domain, which is relevant for biological morphogenesis. Simpson _et al._[25; 20], Ryabov _et al._[18] and Yuste _et al._[21] each present exact results for specific models of diffusive processes on a uniformly growing domain. The exact results allow key statistics such as the survival probability (the probability that an individual has yet to cross the boundary of an expanding domain) to be readily calculated [26; 25; 20; 21]. Approaches have been presented to investigate diffusive processes in spatially heterogeneous non-growing domains [24], including homogenisation techniques and exact solutions for multilayered piecewise homogeneous domains [23]. One approach for representing spatially non-uniform domain growth is via multiple domains that exhibit piecewise uniform growth [5; 6]. Numerical results have been presented in the context of pattern formation during non-uniform domain growth, where the non-uniform growth is a piecewise uniform process [6]. While single growing domain problems and multiple non-growing domain problems are both well-studied, there is a lack of exact results detailing the dynamics of diffusive processes on multiple growing domains. Accordingly, it is unclear how spatially non-uniform domain growth may inhibit or enhance diffusive transport. Understanding this interplay between diffusive transport and non-uniform domain growth may provide insight into why normal development processes succeed or fail, and allow the calculation of the effective release rate of therapeutics from nanocapsules.
Here we introduce new exact solutions for density profiles, survival probabilities and splitting probabilities for diffusive processes on multiple growing domains. The solutions are obtained via a combination of transformation, convolution and superposition techniques. We demonstrate that the solutions are valid through comparison with density profiles obtained via repeated realisations of a corresponding lattice-based random walk. We show that the solution profiles can exhibit jump discontinuities at inter-domain boundaries and investigate how model parameters, such as the domain growth rate, diffusivity and initial location, influence the survival and splitting probabilities. Critically, each of these parameters can vary between domains, which gives rise to rich behaviour that is not possible on a single uniformly growing domain. We investigate a suite of domain evolution functions, including linear growth [20], exponential growth and decay [27; 28; 20; 29], and oscillatory evolution [30] and show that the exact solutions are valid, provided that the domain size does not reduce to zero.
Figure 1: (a) Configuration of the domain in the discrete model. Agents exist on a lattice of width \(\Delta\). The pink line denotes the boundary between two domains. (b) Lattice configuration before and after possible movement events for agents in either domain. The agent that moves is highlighted in orange. (c) Lattice configuration before and after possible positive domain growth events. The location where the new lattice site is added is highlighted in orange. (d) Lattice configuration before and after possible negative domain growth events. The location where the lattice site is removed is highlighted in orange. (e) Space-time diagram of a growing domain, highlighting the expansion of space. The location of the boundaries are shown in grey. Characteristics for domain one, two and three are shown in cyan, orange and pink, respectively. For visual clarity we present characteristics for a linearly growing domain; in practice, other types of domain growth are possible.
## 2 Model
### Agent-based discrete model
We implement a one-dimensional position-jump random walk model on a growing domain [20]. Agents in the random walk are located on discrete lattice sites at \(x=i\Delta,\;i\in\{-\lfloor L(j\delta t)/\Delta\rfloor,\;\ldots,\;\lfloor L(j \delta t)/\Delta\rfloor\}\) where \(L(j\delta t)\) is the half-length of the growing domain after \(j\in\mathcal{N}_{0}\) timesteps of duration \(\delta t\) and \(\Delta\) is the lattice size. There are no restrictions on the number of agents that can occupy a single lattice site; we note other investigations impose such a restriction [17; 31]. Agents located on the site at \(x\) randomly move to one of the two nearest-neighbour sites at \(x\pm\Delta\) with probability \(P/2\) in a timestep. If selected, each agent will therefore move with probability \(P\) in a timestep. We select \(N(j\delta t)\) agents at random, with replacement, each timestep, where \(N(j\delta t)\) is the number of agents on the lattice. If an agent crosses the boundary at \(x=\pm\Delta\lfloor L(j\delta t)/\Delta\rfloor\) (i.e. the agent moves to a "ghost" lattice site at \(x=\pm\Delta\lfloor L(j\delta t)/\Delta+1\rfloor\)) then the agent is determined to have left the domain, and the agent is removed from the random walk at the end of the timestep.
We first consider a single domain undergoing spatially-uniform growth. That is, each location in space experiences uniform growth and there is a single growth rate across the domain [20]. We implement positive domain growth in the random walk as follows. When \(\lfloor L(j\delta t)/\Delta\rfloor>\lfloor L((j-1)\delta t)/\Delta\rfloor\), we randomly select a lattice site \(k\in\{1,\;\ldots,\;\lfloor L((j-1)\delta t)/\Delta\rfloor\}\). We insert a new lattice site at \(x=k\Delta\), and hence all agents located at sites \(i\geq k\) are displaced a distance \(\Delta\) in the positive direction [20; 29]. This implies that the \(i\)th lattice site translates a distance \(\Delta\) with probability \(i/\lfloor L((j-1)\delta t)/\Delta\rfloor\) after a domain growth event occurs. Domain growth events occur at a rate proportional to \(\mathrm{d}L/\mathrm{d}t\) and hence the product of the probability of translation, the translation distance, and the domain growth event rate can be considered as a discrete velocity field. We repeat this process for \(k\in\{-\lfloor L((j-1)\delta t)/\Delta\rfloor,\;\ldots,\;-1\}\), noting that this results in a displacement of \(\Delta\) in the negative direction for agents located at sites \(i\leq k\). New lattice sites initially contain zero agents. For negative domain growth (i.e. contraction or shrinkage), when \(\lfloor L(j\delta t)/\Delta\rfloor<\lfloor L((j-1)\delta t)/\Delta\rfloor\), we randomly select a lattice site \(k\in\{1,\;\ldots,\;\lfloor L((j-1)\delta t)/\Delta\rfloor\}\). We remove the lattice site at \(x=k\Delta\), and hence all agents located at sites \(i>k\) are displaced a distance \(\Delta\) in the negative direction [29]. Agents located at the removed site \(k\) are not displaced. This process is repeated for \(k\in\{-\lfloor L((j-1)\delta t)/\Delta\rfloor,\;\ldots,\;-1\}\), where there is a displacement of \(\Delta\) in the positive direction for agents located at sites \(i<k\). A schematic of movement and domain growth events is presented in Figure 1. We choose domain growth functions such that \(L(j\delta t)=0\) does not occur.
We next consider multiple growing domains, where the growth within each individual domain is spatially uniform. There is a series of adjacent domains (Figure 1), each of which grows at a (potentially distinct) uniform rate. Accordingly, the growth rate of the entire domain can be non-uniform. Here agents randomly move in the positive and negative directions with probability \(P_{g}/2\) where \(g\in\{1,\;\ldots,\;G\}\) and there are \(2G-1\) adjacent, non-overlapping domains in total. Here we always examine domains that are symmetric around \(x=0\) (Figure 1(e)). We now have \(L(j\delta t)=\sum_{g=1}^{G}L_{g}(j\delta t)\). New lattice sites are inserted into individual domains at random when \(\lfloor L_{g}(j\delta t)/\Delta\rfloor>\lfloor L_{g}((j-1)\delta t)/\Delta\rfloor\) and are removed from the individual domains at random when \(\lfloor L_{g}(j\delta t)/\Delta\rfloor<\lfloor L_{g}((j-1)\delta t)/\Delta\rfloor\) according to the processes described above.
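The following is a minimal single-domain sketch of the algorithm just described (not the authors' Supplementary code): agents are stored as integer site indices, each timestep inserts the required number of new sites on each side at randomly chosen locations, each agent then attempts one move (a common simplification of the select-with-replacement scheme above), and agents that land on a ghost site are removed. Only positive growth is handled, and the function and parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(sites, n_old, n_new, P):
    """Advance one timestep on a symmetric lattice of sites -n..n."""
    # domain growth: insert (n_new - n_old) sites on each side of x = 0
    for _ in range(n_new - n_old):
        k = rng.integers(1, n_old + 1)                 # new site inserted at +k
        sites = np.where(sites >= k, sites + 1, sites)
        k = -rng.integers(1, n_old + 1)                # new site inserted at -|k|
        sites = np.where(sites <= k, sites - 1, sites)
    # motility: each agent steps left or right with probability P/2
    moves = rng.choice([-1, 0, 1], size=sites.size, p=[P / 2, 1 - P, P / 2])
    sites = sites + moves
    return sites[np.abs(sites) <= n_new]               # absorb agents on ghost sites

# usage: L(t) = 10 + t (linear growth), dx = 0.1, dt = 0.01, P = 1, all agents at x0 = 0
dx, dt, P = 0.1, 0.01, 1.0
L = lambda t: 10.0 + t
sites = np.zeros(500, dtype=int)
for j in range(1, 5001):
    sites = step(sites, int(L((j - 1) * dt) / dx), int(L(j * dt) / dx), P)
print("estimated survival probability:", sites.size / 500)
```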
We calculate a number of key statistics from individual realisations of the random walk, and perform numerous realisations of the random walk to generate representative average behaviour. Specifically, we calculate the average agent density
\[\overline{C}(i\Delta,j\delta t)=\frac{1}{\Delta MN}\sum_{m=1}^{M}N_{m}(i,j), \;i\in\bigg{\{}-\sum_{g=1}^{G}\bigg{\lfloor}\frac{L_{g}(j\delta t)}{\Delta} \bigg{\rfloor},\;\ldots,\;\sum_{g=1}^{G}\bigg{\lfloor}\frac{L_{g}(j\delta t)} {\Delta}\bigg{\rfloor}\bigg{\}},\;j\in\mathcal{N}_{0},\]
where \(M\) is the number of identically-prepared realisations of the random walk performed, \(N\) is the initial number of agents in the random walk, and \(N_{m}(i,j)\) is the number of agents located at site \(i\) after
\(j\) timesteps in the \(m\)th realisation of the random walk. In practice, in the numerical implementation of the random walk, we track the position of each agent in the random walk, and calculate \(N_{m}(i,j)\) from this information (Supplementary Information). We again note that agents are removed from the domain at the end of a timestep if they are located at the "ghost" sites at \(x=\pm\Delta(1+\sum_{g=1}^{G}\lfloor L_{g}(j\delta t)/\Delta\rfloor)\). We calculate the average survival probability, which represents the proportion of the initial agents that remain on the domain after \(j\) timesteps,
\[\overline{S}(j\delta t)=1-\frac{1}{MN}\sum_{m=1}^{M}\sum_{i}N_{m}(i,j),\ i\in \bigg{\{}-\sum_{g=1}^{G}\bigg{\lfloor}\frac{L_{g}(j\delta t)}{\Delta}\bigg{\rfloor},\ \ldots,\ \sum_{g=1}^{G}\bigg{\lfloor}\frac{L_{g}(j\delta t)}{\Delta}\bigg{\rfloor} \bigg{\}},\ j\in\mathcal{N}_{0}.\]
From these time-dependent statistics we can calculate relevant long time statistics such as the average splitting probabilities, which are the proportion of the agents that ever cross the negative and positive boundaries
\[\overline{S}^{-}(j\delta t)=\frac{1}{MN}\sum_{k=1}^{j}\sum_{m=1}^{M}N_{m} \bigg{(}-1-\sum_{g=1}^{G}\bigg{\lfloor}\frac{L_{g}(k\delta t)}{\Delta}\bigg{ }\bigg{\rfloor},k\bigg{)},\]
and
\[\overline{S}^{+}(j\delta t)=\frac{1}{MN}\sum_{k=1}^{j}\sum_{m=1}^{M}N_{m} \bigg{(}1+\sum_{g=1}^{G}\bigg{\lfloor}\frac{L_{g}(k\delta t)}{\Delta}\bigg{ }\bigg{\rfloor},k\bigg{)}.\]
Note that here, due to the growing domain, there is no guarantee that all agents will leave the domain as \(j\to\infty\). This is in contrast to the non-growing case where all agents eventually leave the domain. The average fraction of agents remaining on the domain at steady state is
\[\overline{\psi}=\lim_{j\to\infty}\overline{S}(j\delta t)=1-\overline{\theta} ^{-}-\overline{\theta}^{+}=\lim_{j\to\infty}\Big{[}1-\overline{S}^{-}(j\delta t )-\overline{S}^{+}(j\delta t)\Big{]}\]
In practice, we must eventually terminate the simulation of the random walk, and hence \(j\in\{0,\,\ldots,\,j_{\max}\}\) where \(j_{\max}\) is chosen to be sufficiently large that the long-time observations approximate the \(t\to\infty\) steady state limit.
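As a hedged illustration (the array names are ours, not from the Supplementary code), the averaged statistics above can be computed directly from per-agent exit records: assume each of the \(M\) realisations stores, for every one of the \(N\) initial agents, the timestep at which it left the domain (\(\infty\) if it never left within \(j_{\max}\) steps) and the side through which it left (\(-1\), \(+1\), or \(0\) if it remained).

```python
import numpy as np

def averaged_statistics(exit_step, exit_side, j_max):
    """exit_step, exit_side: arrays of shape (M, N); returns S, S^-, S^+ per timestep."""
    steps = np.arange(j_max + 1)[:, None, None]          # shape (j_max+1, 1, 1)
    gone = exit_step[None, :, :] <= steps                # has the agent left by step j?
    S = 1.0 - gone.mean(axis=(1, 2))                     # averaged survival probability
    S_minus = (gone & (exit_side[None, :, :] == -1)).mean(axis=(1, 2))
    S_plus = (gone & (exit_side[None, :, :] == +1)).mean(axis=(1, 2))
    return S, S_minus, S_plus

# The steady-state fraction psi is approximated by S[-1] once j_max is large
# enough that the survival probability has plateaued.
```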
### Continuum model
Previous studies have derived the relationship between a velocity field \(v(x,t)\) that translates each point \(0\leq x\leq L(t)\) in a domain, and the overall growth rate of the domain due to the expansion of each infinitesimal width of space [12],
\[\frac{\mathrm{d}L(t)}{\mathrm{d}t}=\int_{0}^{L(t)}\frac{\partial v(x,t)}{ \partial x}\ \mathrm{d}x.\]
We note that in all cases here we consider a symmetric growing domain centred around \(x=0\), that is, \(v(x,t)=-v(-x,t)\). To account for the multiple growing domains we choose a velocity field that is piecewise defined for the separate domains. Accordingly,
\[\frac{\mathrm{d}L(t)}{\mathrm{d}t}=\sum_{g=1}^{G}\frac{\mathrm{d}L_{g}(t)}{ \mathrm{d}t}=\int_{0}^{L_{1}(t)}\frac{\partial v(x,t)}{\partial x}\ \mathrm{d}x+\sum_{g=1}^{G-1}\int_{B_{g}(t)}^{B_{g+1}(t)} \frac{\partial v(x,t)}{\partial x}\ \mathrm{d}x,\]
where, for ease of notation, we define the position of the \(g\)th moving boundary
\[B_{g}(t)=\sum_{i=1}^{g}L_{i}(t),\ g\in\{1,\ \ldots,\ G\}, \tag{1}\]
with \(B_{0}(t)=0\). Here we select spatially-uniform growth within each individual domain. That is, \(v(x,t)\) is piecewise and chosen such that \(\partial v/\partial x\) is independent of \(x\) within an individual domain, though we note there may be a temporal dependence,
\[\frac{\partial v(x,t)}{\partial x}=\sigma_{g}(t)=\frac{1}{L_{g}(t)}\frac{ \mathrm{d}L_{g}(t)}{\mathrm{d}t},\ B_{g-1}(t)<x<B_{g}(t),\ g\in\{1,\ \ldots,\ G\}.\]
The velocity field is therefore
\[v(x,t)=\frac{1}{L_{g}(t)}\Big{(}x-B_{g-1}(t)\Big{)}\frac{\mathrm{d}L_{g}(t)}{ \mathrm{d}t}+\sum_{i=1}^{g-1}\frac{\mathrm{d}L_{i}(t)}{\mathrm{d}t},\ \ B_{g-1}(t)<x<B_{g}(t),\ g\in\{1,\ \ldots,\ G\}, \tag{2}\]
and recalling that \(v(x,t)=-v(-x,t)\) with \(v(0,t)=0\). We observe that the velocity field at the boundary between domains is the sum of the domain growth rates for all domains to that point (Figure 1(e)).
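A short sketch of Equation (2) follows (the function below is ours, written for \(x\geq 0\); the field is odd, \(v(-x,t)=-v(x,t)\)): given the current half-lengths \(L_{g}(t)\) and growth rates \(\mathrm{d}L_{g}/\mathrm{d}t\), the velocity at a point is the local stretching term plus the accumulated growth of all inner domains.

```python
import numpy as np

def velocity(x, L, dLdt):
    """Piecewise velocity field of Eq. (2); L and dLdt are length-G sequences at time t."""
    L, dLdt = np.asarray(L, float), np.asarray(dLdt, float)
    B = np.concatenate(([0.0], np.cumsum(L)))            # boundaries B_0, ..., B_G
    inner = np.concatenate(([0.0], np.cumsum(dLdt)))     # accumulated inner growth rates
    xa = np.abs(x)
    g = np.clip(np.searchsorted(B, xa, side="right") - 1, 0, L.size - 1)
    return np.sign(x) * ((xa - B[g]) / L[g] * dLdt[g] + inner[g])

# e.g. three domains of unit half-length growing at rates 0.1, 0.2 and 0.3
print(velocity(np.array([0.5, 1.5, 2.5]), [1.0, 1.0, 1.0], [0.1, 0.2, 0.3]))
```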
We now consider the evolution of a mass that undergoes linear diffusion on this growing domain. The governing equation for the mass density \(C(x,t)\) is [20]
\[\frac{\partial C(x,t)}{\partial t}=\frac{\partial}{\partial x}\bigg{(}D(x) \frac{\partial C(x,t)}{\partial x}\bigg{)}-\frac{\partial}{\partial x}\bigg{(} v(x,t)C(x,t)\bigg{)},\ -L(t)<x<L(t), \tag{3}\]
where the diffusivity \(D(x)\) is piecewise defined
\[D(x)=D_{g}=\lim_{\Delta,\delta t\to 0}\frac{\Delta^{2}P_{g}}{2\delta t},\ \ B_{g-1}(t)<x<B_{g}(t)\cup-B_{g}(t)<x<-B_{g-1}(t),\ g\in\{1,\ \ldots,\ G\}, \tag{4}\]
on each individual domain in line with the agent-based model, and \(v(x,t)\) is defined in Equation (2). To be consistent with the agent-based model we impose
\[C(-L(t),t)=C(L(t),t)=0, \tag{5}\]
that is, any mass that reaches the external boundary is removed from the system. Other boundary conditions could be considered, such as a no-flux boundary condition at one domain boundary as in [20], but such extensions are left for future work. For the boundaries between internal domains, the flux across the boundaries must be conserved and hence
\[-D_{g}\frac{\partial C}{\partial x}\bigg{|}_{x=B_{g}(t)^{-}}=-D_{g+1}\frac{ \partial C}{\partial x}\bigg{|}_{x=B_{g}(t)^{+}}. \tag{6}\]
Equations (1)-(6) therefore define the general form of the model for a yet-to-be-specified choice of \(L(t)\). Here all functional forms for \(L(t)\) are chosen to be smooth functions where \(L_{g}(t)=0\) does not occur, thereby avoiding finite-time blow-up in the solutions.
#### 2.2.1 Fixed domain solutions
To aid in constructing exact solutions to the model for multiple growing domains, we first focus on the simplest case where there is a single domain of fixed size. Here the governing equation reduces to the classical diffusion equation on a fixed domain
\[\frac{\partial C(x,t)}{\partial t}=D\frac{\partial^{2}C(x,t)}{\partial x^{2}},\ -L<x<L,\ C(-L,t)=C(L,t)=0.\]
For a Dirac delta initial condition located at \(x=x_{0}\), the solution obtained via method of images [9] is
\[C(x,t)= \frac{1}{\sqrt{4\pi Dt}}\exp\bigg{(}-\frac{(x-x_{0})^{2}}{4Dt}\bigg{)}\] \[+\frac{1}{\sqrt{4\pi Dt}}\Bigg{[}\sum_{j=1}^{\infty}(-1)^{j}\exp \bigg{(}-\frac{(x-(-2jL+(-1)^{j}x_{0}))^{2}}{4Dt}\bigg{)}\] \[+(-1)^{j}\exp\bigg{(}-\frac{(x-(2jL+(-1)^{j}x_{0}))^{2}}{4Dt} \bigg{)}\Bigg{]}. \tag{7}\]
The fluxes at the two boundaries, which represent the instantaneous rate of mass leaving the domain, are
\[F^{-}(t)=-D\frac{\partial C(x,t)}{\partial x}\bigg{|}_{x=-L} =\sum_{j=0}^{\infty}(-1)^{j}\frac{(2j+1)L+(-1)^{j}x_{0}}{\sqrt{4 \pi Dt^{3}}}\exp\bigg{(}-\frac{((2j+1)L+(-1)^{j}x_{0})^{2}}{4Dt}\bigg{)}, \tag{8}\] \[F^{+}(t)=-D\frac{\partial C(x,t)}{\partial x}\bigg{|}_{x=L} =\sum_{j=0}^{\infty}(-1)^{j}\frac{(2j+1)L-(-1)^{j}x_{0}}{\sqrt{4 \pi Dt^{3}}}\exp\bigg{(}-\frac{((2j+1)L-(-1)^{j}x_{0})^{2}}{4Dt}\bigg{)}. \tag{9}\]
Note that while the solution is an infinite summation, in practice, we can approximate this solution via a truncated summation. In all cases we choose a summation with 10 terms and verify that the solution is not sensitive to the inclusion of additional terms.
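For concreteness, a sketch of the truncated solution, Equation (7), and the boundary flux, Equation (9), is given below; the summation is truncated at ten image terms, as in the text.

```python
import numpy as np

def C_fixed(x, t, x0, D, L, n_terms=10):
    """Truncated method-of-images solution, Eq. (7), on -L < x < L."""
    pref = 1.0 / np.sqrt(4.0 * np.pi * D * t)
    c = pref * np.exp(-((x - x0) ** 2) / (4.0 * D * t))
    for j in range(1, n_terms + 1):
        for sgn in (-1.0, 1.0):
            image = sgn * 2.0 * j * L + (-1.0) ** j * x0
            c += (-1.0) ** j * pref * np.exp(-((x - image) ** 2) / (4.0 * D * t))
    return c

def flux_plus(t, x0, D, L, n_terms=10):
    """Truncated flux through x = L, Eq. (9)."""
    total = 0.0
    for j in range(n_terms):
        a = (2 * j + 1) * L - (-1.0) ** j * x0
        total += (-1.0) ** j * a / np.sqrt(4.0 * np.pi * D * t ** 3) \
                 * np.exp(-(a ** 2) / (4.0 * D * t))
    return total
```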
#### 2.2.2 Single growing domain solutions
We next focus on exact solutions for the case with a single growing domain. Here the governing equation is [20]
\[\frac{\partial C(x,t)}{\partial t}=D\frac{\partial^{2}C(x,t)}{\partial x^{2}} -\frac{\partial}{\partial x}\bigg{(}v(x,t)C(x,t)\bigg{)},\ -L(t)<x<L(t),\ C(L(t),t)=C(-L(t),t)=0,\]
where
\[v(x,t)=\frac{x}{L(t)}\frac{\mathrm{d}L}{\mathrm{d}t},\]
and \(L(t)\) is yet to be specified. If we can transform this equation to a linear diffusion equation on a fixed domain, we can exploit the various properties of solutions to the linear diffusion equation. We consider transformations of \(C(x,t),\ x\), and \(t\) to \(C(\xi,T),\ \xi\), and \(T\), respectively. A natural spatial scale is the ratio of the spatial variable to the domain size and hence we introduce
\[\xi=\frac{x}{L(t)}L(0),\ -L(0)<\xi<L(0).\]
Applying this transformation, we obtain, after cancellation of the first order terms in \(\xi\),
\[\frac{\partial C(\xi,t)}{\partial t}=\frac{DL(0)^{2}}{L(t)^{2}}\frac{\partial ^{2}C(\xi,t)}{\partial\xi^{2}}-\frac{C(\xi,t)}{L(t)}\frac{\mathrm{d}L(t)}{ \mathrm{d}t}.\]
The new source term acts to dilute (concentrate) the mass density in the transformed coordinate system, reflecting the stretched (contracted) nature of the domain for \(\mathrm{d}L(t)/\mathrm{d}t>0\) (\(\mathrm{d}L(t)/\mathrm{d}t<0\)). Following previous approaches [9; 32], we now introduce a temporal scaling such that the coefficient of the second \(\xi\) derivative term is a constant,
\[T(t)=\int_{0}^{t}\left(\frac{L(0)}{L(s)}\right)^{2}\,\mathrm{d}s,\]
and hence
\[\frac{\partial C(\xi,T)}{\partial T}=D\frac{\partial^{2}C(\xi,T)}{\partial\xi^{2} }-f(T)C(\xi,T),\]
where
\[f(T)=\frac{L(t)}{L(0)^{2}}\frac{\mathrm{d}L(t)}{\mathrm{d}t}.\]
We now define
\[C(\xi,T)=U(\xi,T)\exp\bigg{(}-\int_{0}^{T}f(s)\ \mathrm{d}s\bigg{)}.\]
Via a change of variable in the integral we obtain
\[C(\xi,T)=U(\xi,T)\frac{L(0)}{L(t)},\]
and hence \(U(\xi,T)\) satisfies
\[\frac{\partial U(\xi,T)}{\partial T}=D\frac{\partial^{2}U(\xi,T)}{\partial\xi ^{2}},\ -L(0)<\xi<L(0),\ T>0,\]
which is the linear diffusion equation on a finite domain. This result is equivalent to the result derived in Simpson _et al._[20]. As such, there are various well-known methods of solution for \(U(\xi,T)\) and it remains to transform back from the fixed domain solution \(U(\xi,T)\) to the growing domain solution \(C(x,t)\). Consider the fundamental solution to the diffusion equation on an infinite domain with a Dirac delta initial condition at \(x=x_{0}\), which corresponds to \(\xi=x_{0}L(0)/L(t)\),
\[U(\xi,T)=\frac{1}{\sqrt{4\pi DT}}\exp\bigg{(}-\frac{\left(\xi-x_{0}\frac{L(0)} {L(t)}\right)^{2}}{4DT}\bigg{)},\ -\infty<\xi<\infty,\]
and hence
\[C(\xi,T)=\frac{1}{\sqrt{4\pi DT}}\exp\bigg{(}-\frac{\left(\xi-x_{0}\frac{L(0)} {L(t)}\right)^{2}}{4DT}\bigg{)}\frac{L(0)}{L(t)}.\]
Transforming back into the original space and time coordinates, we have
\[C(x,t)=\frac{1}{\sqrt{4\pi DT(t)}}\exp\bigg{(}-\frac{\left(x\frac{L(0)}{L(t)} -x_{0}\right)^{2}}{4DT(t)}\bigg{)}\frac{L(0)}{L(t)}.\]
To satisfy the homogeneous Dirichlet boundary conditions at \(x=\pm L(t)\), additional terms are required in the solution to account for the flux through the boundaries and hence
\[C(x,t)=\frac{1}{\sqrt{4\pi DT(t)}}\frac{L(0)}{L(t)}\Bigg{[}\exp\bigg{(}-\frac{\left(x\frac{L(0)}{L(t)}-x_{0}\right)^{2}}{4DT(t)}\bigg{)}\] \[+\sum_{j=1}^{\infty}(-1)^{j}\exp\bigg{(}-\frac{\left(x\frac{L(0)}{L(t)}-(-2jL(0)+(-1)^{j}x_{0})\right)^{2}}{4DT(t)}\bigg{)}\] \[+(-1)^{j}\exp\bigg{(}-\frac{\left(x\frac{L(0)}{L(t)}-(2jL(0)+(-1)^{j}x_{0})\right)^{2}}{4DT(t)}\bigg{)}\Bigg{]}. \tag{10}\]
We note that other solutions to the diffusion equation could be chosen. This is particularly relevant for initial conditions that are not a Dirac delta function, such as if \(C(x,0)\) is chosen to be a Heaviside function, or the difference between two Heaviside functions [9, 20]. However, as we detail below, the fundamental solution proves insightful for the case with multiple domains.
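A sketch of how Equation (10) can be evaluated for an arbitrary smooth \(L(t)>0\) is given below; \(T(t)\) is computed by numerical quadrature, although it has simple closed forms for common growth laws (for example, \(T(t)=t/(1+at)\) for \(L(t)=L(0)(1+at)\), and \(T(t)=(1-\mathrm{e}^{-2at})/(2a)\) for \(L(t)=L(0)\mathrm{e}^{at}\)).

```python
import numpy as np
from scipy.integrate import quad

def C_growing(x, t, x0, D, L, n_terms=10):
    """Evaluate Eq. (10) for a user-supplied half-length function L(t)."""
    T, _ = quad(lambda s: (L(0.0) / L(s)) ** 2, 0.0, t)   # transformed time T(t)
    scale = L(0.0) / L(t)
    xi = x * scale                                        # transformed position
    pref = scale / np.sqrt(4.0 * np.pi * D * T)
    c = pref * np.exp(-((xi - x0) ** 2) / (4.0 * D * T))
    for j in range(1, n_terms + 1):
        for sgn in (-1.0, 1.0):
            image = sgn * 2.0 * j * L(0.0) + (-1.0) ** j * x0
            c += (-1.0) ** j * pref * np.exp(-((xi - image) ** 2) / (4.0 * D * T))
    return c

# usage: linear growth L(t) = 10 + t, D = 1, Dirac delta initially at x0 = 2
print(C_growing(0.0, 5.0, 2.0, 1.0, lambda t: 10.0 + t))
```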
#### 2.2.3 Multiple fixed domain solutions
Before considering the solutions for the model of multiple growing domains, we now consider exact solutions for the simplified case of multiple fixed domains (Figure 2). This allows us to demonstrate the methods for accounting for the flux across internal boundaries, which will be employed to solve the equations for the multiple growing domains case. Here the governing equations are
\[\frac{\partial C_{1}(x,t)}{\partial t} =D_{1}\frac{\partial}{\partial x}\bigg{(}\frac{\partial C_{1}(x,t )}{\partial x}\bigg{)},\,\,-L_{1}<x<L_{1},\] \[\frac{\partial C_{2}(x,t)}{\partial t} =D_{2}\frac{\partial}{\partial x}\bigg{(}\frac{\partial C_{2}(x,t )}{\partial x}\bigg{)},\,\,-L_{2}-L_{1}<x<-L_{1},\] \[\frac{\partial C_{3}(x,t)}{\partial t} =D_{2}\frac{\partial}{\partial x}\bigg{(}\frac{\partial C_{3}(x,t )}{\partial x}\bigg{)},\,\,L_{1}<x<L_{1}+L_{2},\]
where the rate of diffusion in domain two is always the same as in domain three. The flux across the internal boundaries at \(x=\pm L_{1}\) is conserved and hence
\[-D_{1}\frac{\partial C_{1}}{\partial x}\bigg{|}_{x=-L_{1}}=-D_{2}\frac{ \partial C_{2}}{\partial x}\bigg{|}_{x=-L_{1}},\qquad-D_{1}\frac{\partial C_{ 1}}{\partial x}\bigg{|}_{x=L_{1}}=-D_{2}\frac{\partial C_{3}}{\partial x} \bigg{|}_{x=L_{1}}.\]
Individuals are absorbed at the far boundaries, such that \(C_{2}(-L_{2}-L_{1},t)=0\) and \(C_{3}(L_{1}+L_{2},t)=0\). We construct solutions by first solving for \(C_{1}(x,t)\) subject to \(C_{1}(\pm L_{1},t)=0\). We will denote this solution \(G(x,t)\), which is the solution to a related problem (with different boundary conditions) that provides the insight necessary to solve the problem for the original boundary conditions. We calculate the flux that leaves the inner domain according to \(G(x,t)\), and then re-introduce a fraction of that flux at \(x=\pm L_{1}\). This fraction represents the agents that cross the boundary but ultimately return to the original domain. Due to the linearity property of the diffusion equation, we are able to superimpose solutions that satisfy the governing equations. We have defined the boundary fluxes \(F^{-}(t)\) and \(F^{+}(t)\) in Equations (8)-(9). The relevant fraction to be added at the boundary can be determined from conservation of flux. Observing that the solution to \(C_{1}(x,t)\) is composed of the related solution \(G(x,t)\) (which, by definition, is zero at \(x=\pm L_{1}\)) and the re-introduced sources, the solution at the boundary at \(x=L_{1}\) is
\[C_{1}(L_{1},t)=\lim_{s\to t}\int_{0}^{s}\frac{w_{1}^{+}}{\sqrt{\pi D_{1}(s- \tau)}}F^{+}(\tau)\,\,\mathrm{d}\tau,\]
to first order; see below for a discussion on the timescales over which this solution is accurate. Here \(w_{1}^{+}\) is both the fraction of the flux that remains in domain one, and a weight that relates the flux at
Figure 2: (a) Illustration of a solution to the diffusion equation on multiple fixed domains and (b) its constituent components. The multiple solution profiles centred at the boundaries are an illustrative representation of the convolution terms in Equations (11), (13) and (14).
\(x=L_{1}\) in the positive \(x\) direction for \(G(x,t)\) to the flux for \(C_{1}(x,t)\), that is,
\[-D_{1}(1-w_{1}^{+})\frac{\partial G(x,t)}{\partial x}\bigg{|}_{x=L_{1}}=-D_{1} \frac{\partial C_{1}(x,t)}{\partial x}\bigg{|}_{x=L_{1}}.\]
The solution in domain three at the same boundary, which is only composed to the introduced sources, is (to first order)
\[C_{3}(L_{1},t)=\lim_{s\to t}\int_{0}^{s}\frac{w_{3}^{-}}{\sqrt{\pi D_{2}(s- \tau)}}F^{+}(\tau)\ \mathrm{d}\tau,\]
where \(w_{3}^{-}\) is the fraction of flux that enters domain three, subject to the restriction that \(w_{1}^{+}+w_{3}^{-}=1\). For conservation of flux to hold, we require
\[w_{1}^{+}=\frac{\sqrt{D_{2}}}{\sqrt{D_{1}}+\sqrt{D_{2}}},\qquad w_{3}^{-}= \frac{\sqrt{D_{1}}}{\sqrt{D_{1}}+\sqrt{D_{2}}}\]
at \(x=L_{1}\). Similarly, for the boundary at \(x=-L_{1}\), we require
\[w_{1}^{-}=\frac{\sqrt{D_{2}}}{\sqrt{D_{1}}+\sqrt{D_{2}}},\qquad w_{2}^{+}= \frac{\sqrt{D_{1}}}{\sqrt{D_{1}}+\sqrt{D_{2}}}.\]
If \(D_{1}=D_{2}\) then it is equally likely for an agent that crosses the boundary to be located in either domain. If \(D_{1}>D_{2}\) then agents reside in the outer domains for longer than in the inner domain, and the agent density will be higher in the outer domains; equally, if \(D_{1}<D_{2}\) then agents reside in the inner domain for longer, and the agent density will be higher in the inner domain. For a Dirac delta initial condition located at \(x=x_{0}\) the solution on \(-L_{1}<x<L_{1}\) is therefore (Figure 2)
\[C_{1}(x,t)=G(x,t)+\int_{0}^{t}w_{1}^{-}F^{-}(\tau)P_{1}^{-}(x,-L_{1},t-\tau)+ w_{1}^{+}F^{+}(\tau)P_{1}^{+}(x,L_{1},t-\tau)\ \mathrm{d}\tau, \tag{11}\]
where
\[G(x,t)= \frac{1}{\sqrt{4\pi D_{1}t}}\exp\bigg{(}-\frac{(x-x_{0})^{2}}{4D _{1}t}\bigg{)}\] \[+\frac{1}{\sqrt{4\pi D_{1}t}}\Bigg{[}\sum_{j=1}^{\infty}(-1)^{j} \exp\bigg{(}-\frac{(x-(-2jL_{1}+(-1)^{j}x_{0}))^{2}}{4D_{1}t}\bigg{)}\] \[+(-1)^{j}\exp\bigg{(}-\frac{(x-(2jL_{1}+(-1)^{j}x_{0}))^{2}}{4D_ {1}t}\bigg{)}\Bigg{]}, \tag{12}\]
is the solution to the diffusion equation with homogeneous Dirichlet boundary conditions, \(P_{i}^{k}(x,x_{0},t),\ k\in\{-,+\}\) is the fundamental solution to the diffusion equation on domain \(i\) with a Dirac delta initial condition at \(x=x_{0}\), a homogeneous Neumann boundary condition at the leftmost boundary, and a homogeneous Dirichlet boundary condition at the rightmost boundary (for \(k=-\); the boundary conditions are reversed for \(k=+\)), and
\[F^{-}(t)=-D_{1}\frac{\partial G}{\partial x}\bigg{|}_{x=-L_{1}},\qquad F^{+}(t )=-D_{1}\frac{\partial G}{\partial x}\bigg{|}_{x=L_{1}},\]
as before. For example,
\[P_{1}^{-}(x,x_{0},t)=\frac{1}{\sqrt{4\pi D_{1}t}}\Bigg{[}\exp\bigg{(}-\frac{(x-x_{0})^{2}}{4D_{1}t}\bigg{)}\] \[+\sum_{j=1}^{\infty}(-1)^{\lfloor(j+1)/2\rfloor}\exp\bigg{(}-\frac{(x-(-2jL_{1}+(-1)^{j}x_{0}))^{2}}{4D_{1}t}\bigg{)}\] \[+\sum_{j=1}^{\infty}(-1)^{\lfloor j/2\rfloor}\exp\bigg{(}-\frac{(x-(2jL_{1}+(-1)^{j}x_{0}))^{2}}{4D_{1}t}\bigg{)}\Bigg{]}.\]
Note that the mass contained in the solution components corresponding to the reintroduced flux will eventually reach the opposing boundary (i.e. mass reintroduced at \(x=L_{1}\) will reach \(x=-L_{1}\)). Due to the homogeneous Dirichlet boundary imposed at \(x=-L_{1}\) for the solution component \(P_{1}^{+}(x,L_{1},t-\tau)\), this implies that there will be an additional flux through the boundary at \(x=-L_{1}\). This could be accounted for in a similar way to the original boundary flux, where the appropriate fraction of the flux is reintroduced on either side of the boundary via a convolution term. The solution would therefore involve a double convolution component. However, here we consider problems where the average timescale for an individual to cross both internal boundaries is beyond the timescale of interest, so we neglect this additional term. The solutions for \(-L_{1}-L_{2}<x<-L_{1}\) and \(L_{1}<x<L_{1}+L_{2}\) are
\[C_{2}(x,t)=\int_{0}^{t}w_{2}^{+}F^{-}(\tau)P_{2}^{+}(x,-L_{1},t -\tau)\ \mathrm{d}\tau, \tag{13}\] \[C_{3}(x,t)=\int_{0}^{t}w_{3}^{-}F^{+}(\tau)P_{3}^{-}(x,L_{1},t- \tau)\ \mathrm{d}\tau. \tag{14}\]
The solution at the boundary exhibits the jump condition [33]
\[\frac{C_{1}(L_{1},t)}{C_{3}(L_{1},t)}=\frac{D_{2}}{D_{1}}.\]
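A hedged numerical sketch of Equations (13)-(14) follows. The convolution is evaluated by quadrature; for brevity the kernel used here is the reflecting-boundary (semi-infinite) Gaussian, which matches the boundary expressions above but is only adequate before appreciable mass reaches the outer boundary at \(x=L_{1}+L_{2}\). For quantitative work the full image series for \(P_{3}^{-}\) would be substituted.

```python
import numpy as np
from scipy.integrate import quad

def flux_plus(t, x0, D1, L1, n_terms=10):
    """Truncated flux through x = L1, Eq. (9) with L = L1."""
    total = 0.0
    for j in range(n_terms):
        a = (2 * j + 1) * L1 - (-1.0) ** j * x0
        total += (-1.0) ** j * a / np.sqrt(4.0 * np.pi * D1 * t ** 3) \
                 * np.exp(-(a ** 2) / (4.0 * D1 * t))
    return total

def C3(x, t, x0, D1, D2, L1):
    """Quadrature approximation of Eq. (14) with a semi-infinite kernel (early times only)."""
    w3_minus = np.sqrt(D1) / (np.sqrt(D1) + np.sqrt(D2))
    def integrand(tau):
        s = t - tau
        kernel = np.exp(-((x - L1) ** 2) / (4.0 * D2 * s)) / np.sqrt(np.pi * D2 * s)
        return w3_minus * flux_plus(tau, x0, D1, L1) * kernel
    value, _ = quad(integrand, 0.0, t, limit=200)
    return value

# usage: D1 = 1, D2 = 0.25, L1 = 5, source at x0 = 0, evaluated just outside x = L1
print(C3(5.5, 20.0, 0.0, 1.0, 0.25, 5.0))
```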
#### 2.2.4 Multiple growing domain solutions
We next focus on obtaining exact solutions for the case with multiple growing domains. For \(G=2\), the governing equations are
\[\frac{\partial C_{1}(x,t)}{\partial t} =D_{1}\frac{\partial}{\partial x}\bigg{(}\frac{\partial C_{1}(x,t )}{\partial x}\bigg{)}-\frac{\partial}{\partial x}\bigg{(}v_{1}(x,t)C_{1}(x,t )\bigg{)},\ -L_{1}(t)<x<L_{1}(t),\] \[\frac{\partial C_{2}(x,t)}{\partial t} =D_{2}\frac{\partial}{\partial x}\bigg{(}\frac{\partial C_{2}(x, t)}{\partial x}\bigg{)}-\frac{\partial}{\partial x}\bigg{(}v_{2}(x,t)C_{2}(x,t )\bigg{)},\ -L_{2}(t)-L_{1}(t)<x<-L_{1}(t),\] \[\frac{\partial C_{3}(x,t)}{\partial t} =D_{2}\frac{\partial}{\partial x}\bigg{(}\frac{\partial C_{3}(x, t)}{\partial x}\bigg{)}-\frac{\partial}{\partial x}\bigg{(}v_{3}(x,t)C_{3}(x,t )\bigg{)},\ L_{1}(t)<x<L_{1}(t)+L_{2}(t)\]
where \(L_{1}(t)\) and \(L_{2}(t)\) are yet to be specified and \(v_{3}(x,t)=-v_{2}(-x,t)\) such that the domain is symmetric. We consider transformations of \(C_{1}(x,t)\), \(C_{2}(x,t)\), \(C_{3}(x,t)\), \(x\), and \(t\) to \(C_{1}(\xi_{1},T_{1})\), \(C_{2}(\xi_{2},T_{2})\), \(C_{3}(\xi_{3},T_{2})\), \(\xi_{1}\), \(\xi_{2}\), \(\xi_{3}\), \(T_{1}\), and \(T_{2}\). The three spatial transformations are
\[\xi_{1}=\bigg{(}\frac{x}{L_{1}(t)}\bigg{)}L_{1}(0),\qquad\xi_{2}=\bigg{(}\frac{x+L_{1}(t)}{L_{2}(t)}\bigg{)}L_{2}(0)-L_{1}(0),\qquad\xi_{3}=\bigg{(}\frac{x-L_{1}(t)}{L_{2}(t)}\bigg{)}L_{2}(0)+L_{1}(0).\]
Here we have
\[v_{1}(x,t)=\bigg{(}\frac{x}{L_{1}(t)}\bigg{)}\frac{\mathrm{d}L_{1}}{\mathrm{d}t},\qquad v_{2}(x,t)=\bigg{(}\frac{x+L_{1}(t)}{L_{2}(t)}\bigg{)}\frac{\mathrm{d}L_{2}}{\mathrm{d}t}-\frac{\mathrm{d}L_{1}}{\mathrm{d}t},\] \[v_{3}(x,t)=\bigg{(}\frac{x-L_{1}(t)}{L_{2}(t)}\bigg{)}\frac{\mathrm{d}L_{2}}{\mathrm{d}t}+\frac{\mathrm{d}L_{1}}{\mathrm{d}t}.\]
Applying the transformations, we obtain
\[\frac{\partial C_{1}(\xi_{1},t)}{\partial t} =D_{1}\bigg{(}\frac{L_{1}(0)}{L_{1}(t)}\bigg{)}^{2}\frac{\partial^{ 2}C_{1}(\xi_{1},t)}{\partial\xi_{1}^{2}}-\frac{1}{L_{1}(t)}\frac{\mathrm{d}L_{ 1}}{\mathrm{d}t}C_{1}(\xi_{1},t),\ -L_{1}(0)<\xi_{1}<L_{1}(0),\] \[\frac{\partial C_{2}(\xi_{2},t)}{\partial t} =D_{2}\bigg{(}\frac{L_{2}(0)}{L_{2}(t)}\bigg{)}^{2}\frac{\partial^ {2}C_{2}(\xi_{2},t)}{\partial\xi_{2}^{2}}-\frac{1}{L_{2}(t)}\frac{\mathrm{d}L_ {2}}{\mathrm{d}t}C_{2}(\xi_{2},t),\ -L_{2}(0)-L_{1}(0)<\xi_{2}<-L_{1}(0),\] \[\frac{\partial C_{3}(\xi_{3},t)}{\partial t} =D_{2}\bigg{(}\frac{L_{2}(0)}{L_{2}(t)}\bigg{)}^{2}\frac{\partial ^{2}C_{3}(\xi_{3},t)}{\partial\xi_{3}^{2}}-\frac{1}{L_{2}(t)}\frac{\mathrm{d}L _{2}}{\mathrm{d}t}C_{3}(\xi_{3},t),\ L_{1}(0)<\xi_{3}<L_{1}(0)+L_{2}(0).\]
The time transformations are
\[T_{1}(t)=\int_{0}^{t}\left(\frac{L_{1}(0)}{L_{1}(s)}\right)^{2}\,\mathrm{d}s, \qquad T_{2}(t)=\int_{0}^{t}\left(\frac{L_{2}(0)}{L_{2}(s)}\right)^{2}\, \mathrm{d}s,\]
which give
\[\frac{\partial C_{1}(\xi_{1},T_{1})}{\partial T_{1}} =D_{1}\frac{\partial^{2}C_{1}(\xi_{1},T_{1})}{\partial\xi_{1}^{2 }}-\frac{L_{1}(t)}{L_{1}(0)^{2}}\frac{\mathrm{d}L_{1}}{\mathrm{d}t}C_{1}(\xi _{1},T_{1}),\ -L_{1}(0)<\xi_{1}<L_{1}(0),\] \[\frac{\partial C_{2}(\xi_{2},T_{2})}{\partial T_{2}} =D_{2}\frac{\partial^{2}C_{2}(\xi_{2},T_{2})}{\partial\xi_{2}^{2 }}-\frac{L_{2}(t)}{L_{2}(0)^{2}}\frac{\mathrm{d}L_{2}}{\mathrm{d}t}C_{2}(\xi _{2},T_{2}),\ -L_{2}(0)-L_{1}(0)<\xi_{2}<-L_{1}(0),\] \[\frac{\partial C_{3}(\xi_{3},T_{2})}{\partial T_{2}} =D_{2}\frac{\partial^{2}C_{3}(\xi_{3},T_{2})}{\partial\xi_{3}^{2 }}-\frac{L_{2}(t)}{L_{2}(0)^{2}}\frac{\mathrm{d}L_{2}}{\mathrm{d}t}C_{3}(\xi _{3},T_{2}),\ L_{1}(0)<\xi_{3}<L_{1}(0)+L_{2}(0).\]
If we define
\[f_{1}(T_{1})=\frac{L_{1}(t)}{L_{1}(0)^{2}}\frac{\mathrm{d}L_{1}}{\mathrm{d}t},\qquad f_{2}(T_{2})=\frac{L_{2}(t)}{L_{2}(0)^{2}}\frac{\mathrm{d}L_{2}}{ \mathrm{d}t},\]
and
\[C_{1}(\xi_{1},T_{1})=U_{1}(\xi_{1},T_{1})\frac{L_{1}(0)}{L_{1}(t)},\qquad C_{2 }(\xi_{2},T_{2})=U_{2}(\xi_{2},T_{2})\frac{L_{2}(0)}{L_{2}(t)},\qquad C_{3}( \xi_{3},T_{2})=U_{3}(\xi_{3},T_{2})\frac{L_{2}(0)}{L_{2}(t)},\]
it can be seen that \(U_{1}(\xi_{1},T_{1})\), \(U_{2}(\xi_{2},T_{2})\) and \(U_{3}(\xi_{3},T_{2})\) satisfy
\[\frac{\partial U_{1}(\xi_{1},T_{1})}{\partial T_{1}} =D_{1}\frac{\partial^{2}U_{1}(\xi_{1},T_{1})}{\partial\xi_{1}^{2 }},\ -L_{1}(0)<\xi_{1}<L_{1}(0),\] \[\frac{\partial U_{2}(\xi_{2},T_{2})}{\partial T_{2}} =D_{2}\frac{\partial^{2}U_{2}(\xi_{2},T_{2})}{\partial\xi_{2}^{2 }},\ -L_{2}(0)-L_{1}(0)<\xi_{2}<-L_{1}(0),\] \[\frac{\partial U_{3}(\xi_{3},T_{2})}{\partial T_{2}} =D_{2}\frac{\partial^{2}U_{3}(\xi_{3},T_{2})}{\partial\xi_{3}^{2 }},\ L_{1}(0)<\xi_{3}<L_{1}(0)+L_{2}(0),\]
which is the linear diffusion equation on multiple fixed domains. As for the previous multiple domain solutions, we impose the conservation of diffusive flux across the internal boundaries, noting that the velocity fields at the boundaries are continuous. We assume that there is a Dirac delta initial condition at \(x=x_{0}\) where \(-L_{1}(0)<x_{0}<L_{1}(0)\) and hence \(C_{1}(x,t)\) will have components arising from the initial mass and the re-introduced boundary flux, as before, while \(C_{2}(x,t)\) and \(C_{3}(x,t)\) will only have components from the internal boundary flux. Due to the growing domains, the weights for the boundary flux terms now depend on the transformations of \(t\) and \(C_{i}(x,t)\). We again require the weights sum to one and that, if \(D_{1}=D_{2}\), then \(C_{1}(L_{1},t)=C_{3}(L_{1},t)\) due to the continuity of the velocity fields. The weights are
\[w_{1}^{+}(t,\tau) =\frac{\sqrt{D_{2}}\sqrt{T_{1}(t,\tau)}L_{2}(\tau)/L_{2}(t)}{\sqrt{D _{2}}\sqrt{T_{1}(t,\tau)}L_{2}(\tau)/L_{2}(t)+\sqrt{D_{1}}\sqrt{T_{2}(t,\tau)}L _{1}(\tau)/L_{1}(t)},\] \[w_{3}^{-}(t,\tau) =\frac{\sqrt{D_{1}}\sqrt{T_{2}(t,\tau)}L_{1}(\tau)/L_{1}(t)}{\sqrt {D_{2}}\sqrt{T_{1}(t,\tau)}L_{2}(\tau)/L_{2}(t)+\sqrt{D_{1}}\sqrt{T_{2}(t,\tau)}L _{1}(\tau)/L_{1}(t)},\]
for the boundary at \(x=L_{1}(t)\), and
\[w_{1}^{-}(t,\tau) =\frac{\sqrt{D_{2}}\sqrt{T_{1}(t,\tau)}L_{2}(\tau)/L_{2}(t)}{\sqrt{D _{2}}\sqrt{T_{1}(t,\tau)}L_{2}(\tau)/L_{2}(t)+\sqrt{D_{1}}\sqrt{T_{2}(t,\tau)}L _{1}(\tau)/L_{1}(t)},\] \[w_{2}^{+}(t,\tau) =\frac{\sqrt{D_{1}}\sqrt{T_{2}(t,\tau)}L_{1}(\tau)/L_{1}(t)}{ \sqrt{D_{2}}\sqrt{T_{1}(t,\tau)}L_{2}(\tau)/L_{2}(t)+\sqrt{D_{1}}\sqrt{T_{2}(t, \tau)}L_{1}(\tau)/L_{1}(t)},\]
for the boundary at \(x=-L_{1}(t)\), where
\[T_{1}(t,\tau)=\int_{\tau}^{t}\left(\frac{L_{1}(\tau)}{L_{1}(s)}\right)^{2}\, \mathrm{d}s,\qquad T_{2}(t,\tau)=\int_{\tau}^{t}\left(\frac{L_{2}(\tau)}{L_{2} (s)}\right)^{2}\,\mathrm{d}s.\]
The solutions are therefore
\[C_{1}(x,t)=H(x,t)+\int_{0}^{t}w_{1}^{-}(t,\tau)F_{1}^{-}(\tau)P_{1}^{-}(x,-L_{ 1}(t),t-\tau)+w_{1}^{+}(t,\tau)F_{1}^{+}(\tau)P_{1}^{+}(x,L_{1}(t),t-\tau)\, \mathrm{d}\tau, \tag{15}\]
where
\[H(x,t)=\frac{1}{\sqrt{4\pi D_{1}T_{1}(t)}}\frac{L_{1}(0)}{L_{1}(t)}\Bigg{[}\exp\bigg{(}-\frac{\left(x\frac{L_{1}(0)}{L_{1}(t)}-x_{0}\right)^{2}}{4D_{1}T_{1}(t)}\bigg{)}\] \[+\sum_{j=1}^{\infty}(-1)^{j}\exp\bigg{(}-\frac{\left(x\frac{L_{1}(0)}{L_{1}(t)}-(-2jL_{1}(0)+(-1)^{j}x_{0})\right)^{2}}{4D_{1}T_{1}(t)}\bigg{)}\] \[+(-1)^{j}\exp\bigg{(}-\frac{\left(x\frac{L_{1}(0)}{L_{1}(t)}-(2jL_{1}(0)+(-1)^{j}x_{0})\right)^{2}}{4D_{1}T_{1}(t)}\bigg{)}\Bigg{]}, \tag{16}\]
and
\[F_{1}^{-}(t)=-D_{1}\frac{\partial H}{\partial x}\bigg{|}_{x=-L_{1}(t)},\qquad F _{1}^{+}(t)=-D_{1}\frac{\partial H}{\partial x}\bigg{|}_{x=L_{1}(t)}.\]
The solutions for domains two and three are
\[C_{2}(x,t) =\int_{0}^{t}w_{2}^{+}(t,\tau)F_{1}^{-}(\tau)P_{2}^{+}(x,-L_{1}(t ),t-\tau)\,\,\mathrm{d}\tau, \tag{17}\] \[C_{3}(x,t) =\int_{0}^{t}w_{3}^{-}(t,\tau)F_{1}^{+}(\tau)P_{3}^{-}(x,L_{1}(t ),t-\tau)\,\,\mathrm{d}\tau, \tag{18}\]
where \(P_{i}^{k}(x,x_{0},t)\) is defined as previously, albeit for the \(i\)th growing domain, rather than a fixed domain.
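The time-dependent weights can be evaluated numerically for any smooth choices of \(L_{1}(t)\) and \(L_{2}(t)\); the short sketch below (our notation) computes \(T_{1}(t,\tau)\) and \(T_{2}(t,\tau)\) by quadrature and returns \(w_{1}^{+}(t,\tau)\) and \(w_{3}^{-}(t,\tau)\).

```python
import numpy as np
from scipy.integrate import quad

def weights_plus(t, tau, D1, D2, L1, L2):
    """Weights w_1^+ and w_3^- at x = L_1(t) for flux released at time tau."""
    T1, _ = quad(lambda s: (L1(tau) / L1(s)) ** 2, tau, t)
    T2, _ = quad(lambda s: (L2(tau) / L2(s)) ** 2, tau, t)
    a = np.sqrt(D2) * np.sqrt(T1) * L2(tau) / L2(t)
    b = np.sqrt(D1) * np.sqrt(T2) * L1(tau) / L1(t)
    return a / (a + b), b / (a + b)

# equal relative growth rates in both domains give constant weights sqrt(D2):sqrt(D1)
print(weights_plus(5.0, 1.0, 1.0, 4.0, lambda t: 10.0 + t, lambda t: 5.0 + 0.5 * t))
```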
### Splitting and survival probabilities
The proportion of the individuals that have yet to cross either external boundary, known as the survival probability, is a key metric for diffusive processes on growing domains [25; 20; 21]. The proportion of the individuals that have crossed either the left or right external boundary, known as the left and right splitting probabilities, is similarly of interest. We can leverage the derived exact solutions to calculate these metrics, both for the single growing domain and multiple growing domain cases. Critically, we can obtain expressions for the time-varying splitting and survival probabilities, rather than just in the long-time limit.
For the case with a single growing domain, it is relatively straightforward to calculate the splitting and survival probabilities. The time-varying splitting probabilities for the boundaries at \(x=-L(t)\) and \(x=L(t)\), denoted \(S^{-}(t)\) and \(S^{+}(t)\), respectively, are simply the time integral of the appropriate boundary flux. The fluxes for the two boundaries are
\[F^{-}(t)=-D_{1}\frac{\partial C(x,t)}{\partial x}\bigg{|}_{x=-L(t )}= \sum_{j=0}^{\infty}(-1)^{j}\frac{(2j+1)L(0)+(-1)^{j}x_{0}}{\sqrt{4 \pi D_{1}T(t)^{3}}}\bigg{(}\frac{L(0)}{L(t)}\bigg{)}^{2}\] \[\times\exp\bigg{(}-\frac{((2j+1)L(0)+(-1)^{j}x_{0})^{2}}{4D_{1}T (t)}\bigg{)},\] \[F^{+}(t)=-D_{1}\frac{\partial C(x,t)}{\partial x}\bigg{|}_{x=L(t )}= \sum_{j=0}^{\infty}(-1)^{j}\frac{(2j+1)L(0)-(-1)^{j}x_{0}}{\sqrt{4 \pi D_{1}T(t)^{3}}}\bigg{(}\frac{L(0)}{L(t)}\bigg{)}^{2}\] \[\times\exp\bigg{(}-\frac{((2j+1)L(0)-(-1)^{j}x_{0})^{2}}{4D_{1}T (t)}\bigg{)}.\]
The time-varying splitting probabilities are therefore
\[S^{-}(t)=\int_{0}^{t}F_{1}^{-}(\tau)\ \mathrm{d}\tau=\sum_{j=0}^{ \infty}(-1)^{j}\mathrm{erfc}\bigg{(}\frac{(2j+1)L(0)+(-1)^{j}x_{0}}{\sqrt{4D _{1}T(t)}}\bigg{)}, \tag{19}\] \[S^{+}(t)=\int_{0}^{t}F_{1}^{+}(\tau)\ \mathrm{d}\tau=\sum_{j=0}^{ \infty}(-1)^{j}\mathrm{erfc}\bigg{(}\frac{(2j+1)L(0)-(-1)^{j}x_{0}}{\sqrt{4D _{1}T(t)}}\bigg{)}. \tag{20}\]
The survival probability, \(S(t)\), is simply the proportion of the population that has not crossed either boundary
\[S(t)=1-S^{-}(t)-S^{+}(t)= 1-\sum_{j=0}^{\infty}\Bigg{[}(-1)^{j}\mathrm{erfc}\bigg{(}\frac {(2j+1)L(0)+(-1)^{j}x_{0}}{\sqrt{4D_{1}T(t)}}\bigg{)}\] \[+(-1)^{j}\mathrm{erfc}\bigg{(}\frac{(2j+1)L(0)-(-1)^{j}x_{0}}{ \sqrt{4D_{1}T(t)}}\bigg{)}\Bigg{]}. \tag{21}\]
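Equations (19)-(21) can be evaluated directly once \(T(t)\) is known; a short sketch (our notation) using numerical quadrature for \(T(t)\) and a ten-term truncation is given below.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def splitting_survival(t, x0, D, L, n_terms=10):
    """Time-varying splitting probabilities S^-, S^+ and survival S, Eqs. (19)-(21)."""
    T, _ = quad(lambda s: (L(0.0) / L(s)) ** 2, 0.0, t)
    s_minus = sum((-1) ** j * erfc(((2 * j + 1) * L(0.0) + (-1) ** j * x0)
                                   / np.sqrt(4.0 * D * T)) for j in range(n_terms))
    s_plus = sum((-1) ** j * erfc(((2 * j + 1) * L(0.0) - (-1) ** j * x0)
                                  / np.sqrt(4.0 * D * T)) for j in range(n_terms))
    return s_minus, s_plus, 1.0 - s_minus - s_plus

# usage: exponentially growing domain, L(t) = 10 exp(0.05 t)
print(splitting_survival(20.0, 2.0, 1.0, lambda t: 10.0 * np.exp(0.05 * t)))
```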
For the case with multiple growing domains, the process to calculate the splitting and survival probabilities is more complicated. For an individual to cross an external boundary, it must first cross an internal boundary. We therefore first calculate the internal splitting probabilities, \(S_{1}^{-}(t)\) and \(S_{1}^{+}(t)\). Note that this is the probability that an individual crosses an internal boundary and does not re-enter the internal domain. This is calculated by weighting the internal boundary flux as before and taking the integral over time
\[S_{1}^{-}(t)=\int_{0}^{t}w_{2}^{+}(t,\tau)F_{1}^{-}(\tau)\mathrm{d}\tau,\qquad S _{1}^{+}(t)=\int_{0}^{t}w_{3}^{-}(t,\tau)F_{1}^{+}(\tau)\mathrm{d}\tau.\]
A closed-form expression for the internal splitting probabilities can be obtained if \(\sqrt{T_{1}(t,\tau)}L_{1}(t)/L_{1}(\tau)=\sqrt{T_{2}(t,\tau)}L_{2}(t)/L_{2}(\tau)\) holds for all \(t\) and \(\tau\), that is, that the ratio of the new domain size to the original domain size is the same for both the internal and external domains. Where this holds, the weights reduce to a constant, and hence
\[S_{1}^{-}(t)=\frac{\sqrt{D_{1}}}{\sqrt{D_{1}}+\sqrt{D_{2}}}\sum_ {j=0}^{\infty}(-1)^{j}\mathrm{erfc}\bigg{(}\frac{(2j+1)L_{1}(0)+(-1)^{j}x_{0}} {\sqrt{4D_{1}T(t)}}\bigg{)},\] \[S_{1}^{+}(t)=\frac{\sqrt{D_{1}}}{\sqrt{D_{1}}+\sqrt{D_{2}}}\sum_ {j=0}^{\infty}(-1)^{j}\mathrm{erfc}\bigg{(}\frac{(2j+1)L_{1}(0)-(-1)^{j}x_{0}} {\sqrt{4D_{1}T(t)}}\bigg{)}.\]
If this condition does not hold, we can calculate the splitting probabilities via numerical integration. As before, the internal survival probability can be calculated according to \(S_{1}(t)=1-S_{1}^{-}(t)-S_{1}^{+}(t)\).
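When the ratio condition does not hold, the weighted flux integral above can be approximated with standard quadrature; in the minimal sketch below, the callables `w2_plus(t, tau)` and `f1_minus(tau)` are placeholders standing in for the weight and internal-boundary flux defined earlier, not names from the released code.

```python
# Sketch: numerical fallback for S_1^-(t) = int_0^t w_2^+(t, tau) F_1^-(tau) dtau.
# The midpoint rule avoids evaluating the flux exactly at tau = 0 and tau = t.
def internal_splitting(w2_plus, f1_minus, t, n=2000):
    h = t / n
    return h * sum(w2_plus(t, (k + 0.5) * h) * f1_minus((k + 0.5) * h) for k in range(n))
```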
To calculate the external splitting probabilities, that is, the probability that an individual crosses the boundary at either \(x=-L_{1}(t)-L_{2}(t)\) or \(x=L_{1}(t)+L_{2}(t)\), we first require an expression for the flux through the appropriate boundary. Recall that the solutions in domains two and three correspond to an integral over solution kernels that represent mass initially located at \(x=\pm L_{1}(t)\). The flux through the boundary at \(x=L_{1}(t)+L_{2}(t)\) at time \(t\) due to mass that is placed at \(x=L_{1}(t)\) at \(t=\tau\) is
\[F_{3}^{+}(t,\tau)= -D_{2}\frac{\partial P_{3}(x,L_{1}(t),t-\tau)}{\partial x}\bigg{|}_{x=L_{1}(t)+L_{2}(t)}\] \[=\sum_{j=0}^{\infty}(-1)^{j}\frac{(2j+1)(L_{1}(\tau)+L_{2}(\tau))-(-1)^{j}L_{1}(\tau)}{\sqrt{4\pi D_{2}T_{2}(t,\tau)^{3}}}\bigg{(}\frac{L_{2}(\tau)}{L_{2}(t)}\bigg{)}^{2}\times\] \[\exp\bigg{(}-\frac{\bigg{(}(2j+1)(L_{1}(\tau)+L_{2}(\tau))-(-1)^{j}L_{1}(\tau)\bigg{)}^{2}}{4D_{2}T_{2}(t,\tau)}\bigg{)}.\]
The survival probability at time \(t\) in domain three for the mass that is placed at \(x=L_{1}(t)\) at \(t=\tau\) is obtained through the relationship
\[1-S_{3}(t,\tau)=\int_{\tau}^{t}F_{3}^{+}(s,\tau)\ \mathrm{d}s=\sum_{j=0}^{ \infty}(-1)^{j}\mathrm{erfc}\bigg{(}\frac{(2j+1)(L_{1}(\tau)+L_{2}(\tau))-(-1) ^{j}L_{1}(\tau)}{\sqrt{4D_{2}T_{2}(t,\tau)}}\bigg{)},\]
noting that there is no flux from domain three into domain one, as can be seen from the form of the solution kernel. While agents in the simulation do cross the internal boundaries multiple times, these are accounted for in the kernel via the weighting of the flux. These results are for a solution kernel corresponding to a mass that is placed in a single instant; however the solution in domain three involves an integral over the mass placed at each instant. Therefore, the overall splitting probability for the boundary at \(x=L_{1}(t)+L_{2}(t)\) is
\[S^{+}(t)=\int_{0}^{t}F_{1}^{+}(\phi)\int_{\phi}^{t}w_{3}^{-}(s,\phi)F_{3}^{+}( s,\phi)\ \mathrm{d}s\ \mathrm{d}\phi.\]
The three components in the integral are the flux across the internal boundary (\(F_{1}^{+}(\phi)\)), the fraction of that flux that is placed into domain three (\(w_{3}^{-}(s,\phi)\)) and the flux across the external boundary (\(F_{3}^{+}(s,\phi)\)), given that there was mass placed at \(x=L_{1}(\phi)\). In the case where \(\sqrt{T_{1}(t,\tau)}L_{1}(t)/L_{1}(\tau)=\sqrt{T_{2}(t,\tau)}L_{2}(t)/L_{2}(\tau)\) we can use the expressions for the internal boundary flux and the survival probability for domain three to obtain a single integral expression for the splitting probability for the boundary at \(x=L_{1}(t)+L_{2}(t)\)
\[S^{+}(t)= \frac{\sqrt{D_{1}}}{\sqrt{D_{1}}+\sqrt{D_{2}}}\int_{0}^{t}\bigg{[} \sum_{j=0}^{\infty}(-1)^{j}\frac{(2j+1)L_{1}(0)-(-1)^{j}x_{0}}{\sqrt{4\pi D_{ 1}T_{1}(\phi)^{3}}}\bigg{(}\frac{L_{1}(0)}{L(\phi)}\bigg{)}^{2}\] \[\times\exp\bigg{(}-\frac{((2j+1)L_{1}(0)-(-1)^{j}x_{0})^{2}}{4D_{ 1}T_{1}(\phi)}\bigg{)}\bigg{]}\times\] \[\bigg{[}\sum_{j=0}^{\infty}(-1)^{j}\mathrm{erfc}\bigg{(}\frac{(2j +1)(L_{1}(\phi)+L_{2}(\phi))-(-1)^{j}L_{1}(\phi)}{\sqrt{4D_{2}T_{2}(t,\phi)}} \bigg{)}\bigg{]}\ \mathrm{d}\phi. \tag{22}\]
As before, if this relationship does not hold, we can use numerical techniques to evaluate the integral. Following similar arguments, we obtain the splitting probability for the boundary at \(x=-L_{1}(t)-L_{2}(t)\)
\[S^{-}(t)=\int_{0}^{t}F_{1}^{-}(\phi)\int_{\phi}^{t}w_{2}^{+}(s,\phi)F_{2}^{-}( s,\phi)\ \mathrm{d}s\ \mathrm{d}\phi,\]
which, provided \(\sqrt{T_{1}(t,\tau)}L_{1}(t)/L_{1}(\tau)=\sqrt{T_{2}(t,\tau)}L_{2}(t)/L_{2}(\tau)\) is satisfied, can be expressed as
\[S^{-}(t)= \frac{\sqrt{D_{1}}}{\sqrt{D_{1}}+\sqrt{D_{2}}}\int_{0}^{t}\bigg{[} \sum_{j=0}^{\infty}(-1)^{j}\frac{(2j+1)L_{1}(0)+(-1)^{j}x_{0}}{\sqrt{4\pi D_{1 }T_{1}(\phi)^{3}}}\bigg{(}\frac{L_{1}(0)}{L(\phi)}\bigg{)}^{2}\] \[\times\exp\bigg{(}-\frac{((2j+1)L_{1}(0)+(-1)^{j}x_{0})^{2}}{4D_{ 1}T_{1}(\phi)}\bigg{)}\bigg{]}\times\] \[\bigg{[}\sum_{j=0}^{\infty}(-1)^{j}\text{erfc}\bigg{(}\frac{(2j+1) (L_{1}(\phi)+L_{2}(\phi))-(-1)^{j}L_{1}(\phi)}{\sqrt{4D_{2}T_{2}(t,\phi)}} \bigg{)}\bigg{]}\;\text{d}\phi. \tag{23}\]
The overall survival probability (i.e. that an agent remains in any domain) is given by
\[S(t)=1-S^{-}(t)-S^{+}(t). \tag{24}\]
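For completeness, the single-integral expressions such as Equation (22) can be evaluated with one-dimensional quadrature. The sketch below assumes the ratio condition holds and that the callables `L1`, `L2`, `T1` and `T2` implement the chosen growth laws and their time transformations; all names and truncation levels are illustrative rather than taken from the released code.

```python
# Sketch: direct numerical evaluation of Eq. (22) via the midpoint rule.
from math import erfc, exp, sqrt, pi

def external_splitting_plus(L1, L2, T1, T2, D1, D2, x0, t, n_terms=50, n_quad=400):
    prefactor = sqrt(D1) / (sqrt(D1) + sqrt(D2))

    def internal_flux(phi):                      # first bracketed series in Eq. (22)
        total = 0.0
        for j in range(n_terms):
            a = (2 * j + 1) * L1(0.0) - (-1) ** j * x0
            total += ((-1) ** j * a / sqrt(4 * pi * D1 * T1(phi) ** 3)
                      * (L1(0.0) / L1(phi)) ** 2
                      * exp(-a ** 2 / (4 * D1 * T1(phi))))
        return total

    def exit_probability(phi):                   # second bracketed series in Eq. (22)
        total = 0.0
        for j in range(n_terms):
            b = (2 * j + 1) * (L1(phi) + L2(phi)) - (-1) ** j * L1(phi)
            total += (-1) ** j * erfc(b / sqrt(4 * D2 * T2(t, phi)))
        return total

    h = t / n_quad                               # midpoint rule over phi in (0, t)
    return prefactor * h * sum(internal_flux((k + 0.5) * h) * exit_probability((k + 0.5) * h)
                               for k in range(n_quad))

# Example with linear growth (Eqs. (25)-(26)), L_1(0) = L_2(0) = 50 and beta = 0.01.
L1 = lambda t: 50.0 + 0.01 * t
L2 = lambda t: 50.0 + 0.01 * t
T1 = lambda t: (50.0 / L1(t)) * t
T2 = lambda t, tau: (L2(tau) / L2(t)) * (t - tau)
print(external_splitting_plus(L1, L2, T1, T2, 0.5, 0.5, 0.0, 2000.0))
```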
## 3 Results
For all results we choose unit lattice widths and timesteps such that \(\Delta=\delta t=1\). We consider three functional forms for domain evolution:
* _Linear growth_. Here an individual domain undergoes (positive) growth that is linear in time according to \[L_{i}(t)=L_{i}(0)+\beta_{i}t,\] (25) where \(\beta_{i}>0\). The corresponding time transformation is \[T_{i}(t,\tau)=\bigg{(}\frac{L_{i}(0)+\beta_{i}\tau}{L_{i}(0)+\beta_{i}t} \bigg{)}(t-\tau).\] (26) Note that the time transformation \(T_{i}(t)\) is obtained by setting \(\tau=0\). It is possible to select \(\beta_{i}<0\); however, this requires the restriction that \(t<-L_{i}(0)/\beta_{i}\) to ensure that the domain size remains positive and the solution does not experience finite-time blow-up.
* _Exponential growth_. Here an individual domain undergoes growth that is exponential in time according to \[L_{i}(t)=L_{i,\min}+(L_{i}(0)-L_{i,\min})\exp(-\beta_{i}t).\] (27) For exponential growth we do not have a restriction on the sign of \(\beta_{i}\) as the domain will approach the finite size \(L_{i,\min}>0\) when \(\beta_{i}>0\), and hence we can study examples of both positive and negative growth. The corresponding time transformation is \[T_{i}(t,\tau)= \frac{\bigg{[}\exp(-\beta_{i}\tau)\big{(}L_{i}(0)-L_{i,\min}\big{)} +L_{i,\min}\bigg{]}^{2}}{\beta_{i}L_{i,\min}^{2}}\] \[\times \bigg{[}\big{(}L_{i}(0)-L_{i,\min}\big{)}\bigg{(}\frac{1}{L_{i, \min}\big{[}1-\exp(\beta_{i}\tau)\big{]}-L_{i}(0)}+\frac{1}{L_{i}(0)+L_{i,\min} \big{[}\exp(\beta_{i}t)-1\big{]}}\bigg{)}\] \[-\log\Big{(}L_{i}(0)+L_{i,\min}\big{[}\exp(\beta_{i}\tau)-1\big{]} \bigg{)}+\log\Big{(}L_{i}(0)+L_{i,\min}\big{[}\exp(\beta_{i}t)-1\big{]}\Big{)} \bigg{]}.\] (28)
* _Oscillatory evolution_. Here an individual domain undergoes evolution that oscillates in time according to \[L_{i}(t)=L_{i}(0)+(L_{i}(0)-L_{i,\min})\sin(\beta_{i}t).\] (29) Again, for oscillatory evolution, there is no restriction on the sign of \(\beta_{i}\) as this simply dictates whether the domain initially experiences positive or negative growth. The domain size is bounded
according to \(L_{i,\min}\leq L_{i}(t)\leq 2L_{i}(0)-L_{i,\min}\), where \(0<L_{i,\min}\leq L_{i}(0)\). The corresponding time transformation is
\[T_{i}(t,\tau)= \Bigg{(}\frac{L_{i}(0)+\big{(}L_{i}(0)-L_{i,\min}\big{)}\sin(\beta_ {i}\tau)}{L_{i}(0)+(L_{i}(0)-L_{i,\min})\sin(\beta_{i}t)}\Bigg{)}\] \[\times\Bigg{(}\frac{1}{\beta_{i}L_{i,\min}\big{[}2L_{i}(0)-L_{i, \min}\big{]}\big{[}L_{i}(0)^{2}-(L_{i}(0)-L_{i,\min})^{2}\big{]}^{1/2}}\Bigg{)}\] \[\times\Bigg{[}\big{[}L_{i}(0)-L_{i,\min}\big{]}\big{[}L_{i}(0)^{ 2}-(L_{i}(0)-L_{i,\min})^{2}\big{]}^{1/2}\] \[\times\Big{(}\cos(\beta_{i}t)\big{[}L_{i}(0)+(L_{i}(0)-L_{i,\min} )\sin(\beta_{i}\tau)\big{]}\] \[-\cos(\beta_{i}\tau)\big{[}L_{i}(0)+(L_{i}(0)-L_{i,\min})\sin( \beta_{i}t)\big{]}\Big{)}\] \[+2L_{i}(0)\big{[}L_{i}(0)+(L_{i}(0)-L_{i,\min})\sin(\beta_{i} \tau)\big{]}\big{[}L_{i}(0)+(L_{i}(0)-L_{i,\min})\sin(\beta_{i}\tau)\big{]} \big{[}L_{i}(0)+(L_{i}(0)-L_{i,\min})\sin(\beta_{i}\tau)\big{]}\] \[\times\Bigg{[}\tan^{-1}\Bigg{(}\frac{L_{i}(0)\Big{[}\tan\big{(} \beta_{i}t/2)+1\Big{]}-L_{i,\min}}{\big{[}L_{i}(0)^{2}-(L_{i}(0)-L_{i,\min})^{ 2}\big{]}^{1/2}}\Bigg{)}\] \[-\tan^{-1}\Bigg{(}\frac{L_{i}(0)\Big{[}\tan\big{(}\beta_{i}\tau/2 \big{)}+1\Big{]}-L_{i,\min}}{\big{[}L_{i}(0)^{2}-(L_{i}(0)-L_{i,\min})^{2} \big{]}^{1/2}}\Bigg{)}\Bigg{]}\Bigg{]}\] \[+\Big{[}L_{i}(0)+(L_{i}(0)-L_{i,\min})\sin\big{(}\beta_{i}\tau \big{)}\Big{]}^{2}\Bigg{[}\frac{2\pi L_{i}(0)}{\beta_{i}\big{[}L_{i}(0)^{2}-( L_{i}(0)-L_{i,\min})^{2}\big{]}^{3/2}}\Bigg{]}\] \[\times\Bigg{[}\Bigg{[}\frac{\beta_{i}t+\pi}{2\pi}\Bigg{]}-\Bigg{[} \frac{\beta_{i}\tau+\pi}{2\pi}\Bigg{]}\Bigg{]}. \tag{30}\]
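As a simple illustration of how these growth laws feed into the expressions of the previous section (whose sketches take such callables as inputs), the linear case of Equations (25)-(26) can be wrapped as follows; the exponential and oscillatory cases follow the same pattern using Equations (27)-(30). The function names are illustrative only.

```python
# Sketch: the linear growth law of Eqs. (25)-(26) as callables that can be passed
# to the quadrature sketches above. beta > 0 gives positive linear growth.
def make_linear_growth(L0, beta):
    def L(t):
        return L0 + beta * t                                        # Eq. (25)
    def T(t, tau=0.0):
        return ((L0 + beta * tau) / (L0 + beta * t)) * (t - tau)    # Eq. (26)
    return L, T

# Example: a domain growing from L(0) = 50 at rate beta = 0.01.
L1, T1 = make_linear_growth(50.0, 0.01)
print(L1(1000.0), T1(1000.0))
```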
To ensure that our solutions are consistent with those derived in previous investigations [20], we compare the average individual density in the random walk against the exact solution for a single fixed domain and a single growing domain (Appendix A, Figures 9 and 10). As in previous investigations [20], we observe a close match between the density profiles obtained from repeated realisations of the random walk and the exact solutions. A formal derivation of a continuum model from a lattice-based random walk on a growing domain can be found in [34]. We also examine whether the derived exact solutions for multiple fixed domains are consistent with the average behaviour in the random walk. We consider both the case where \(D_{1}=D_{2}\) and where \(D_{1}\neq D_{2}\) (Appendix A, Figures 11 and 12). In both cases we observe that the exact solutions match the average random walk behaviour well, which suggests that the derived solutions are valid.
We now impose domain growth and examine whether the derived exact solutions are valid for the case with multiple growing domains. As before, we verify the solutions via comparison against the average random walk behaviour. We first consider examples with positive linear domain growth for three different rates of growth and present the results in Figure 3. In each case we observe that the exact solutions match the average random walk behaviour in each of the three domains. The exact solutions for \(C_{1}(x,t)\), \(C_{2}(x,t)\) and \(C_{3}(x,t)\) are shown in cyan, orange and pink, respectively. The location of the internal boundary is shown via the dashed grey line, which increases in intensity as time increases. We observe that as we increase the rate of domain growth a smaller proportion of individuals leaves the inner domain. We note that as \(D_{1}=D_{2}\) and the ratio \(L_{1}(t)/L_{1}(0)=L_{2}(t)/L_{2}(0)=L_{3}(t)/L_{3}(0)\) is consistent between domains this problem is comparable to the problem of domain growth on a single
domain.
If the relative rates of domain growth are different between domains, we expect different behaviour to the single domain problem. We consider three examples where \(D_{1}=D_{2}\) and \(\beta_{1}\neq\beta_{2}\) and present solution profiles obtained from repeated realisations of the random walk and enumerating the exact solutions for three different domain growth rate pairs in Figure 4. In each example \(\beta_{1}=0.01\), while \(\beta_{2}=0.02\) in Figure 4(a), \(\beta_{2}=0.04\) in Figure 4(b) and \(\beta_{2}=0.08\) in Figure 4(c). The exact solutions match the average random walk behaviour well. As \(\beta_{2}\) increases, we see fewer individuals reaching the external boundaries, though the solution in the inner domain is largely consistent. The additional domain growth serves to stretch the solution profile (via advection) in the outer domains. However, this additional advection is not sufficient for the individuals to reach the outer boundary at higher \(\beta_{2}\) values.
We highlight the impact of heterogeneous domain growth in Figure 5. We consider the three possible relationships between \(\beta_{1}\) and \(\beta_{2}\) to investigate the differences between homogeneous and heterogeneous domain growth. That is, we examine: (i) \(\beta_{1}=\beta_{2}\) (homogeneous growth); (ii) \(\beta_{1}>\beta_{2}\) (heterogeneous growth), and; (iii) \(\beta_{1}<\beta_{2}\) (heterogeneous growth), noting that we ensure the total amount of domain growth is consistent across all three examples. These results illustrate a marked difference between homogeneous and heterogeneous domain growth. If the heterogeneity manifests in slower growth in the inner domain we see that the spread of the population is inhibited relative to the homogeneous growth case. In contrast, if the inner domain experiences more rapid growth then the population spreads faster than in the case of homogeneous domain growth.
We now consider examples where \(D_{1}\neq D_{2}\). The form of the exact solution suggests that there will be a jump discontinuity at the internal boundary that satisfies \(D_{1}C_{1}(L_{1}(t),t)=D_{2}C_{3}(L_{1}(t),t)\). The presence of jump discontinuities in the density has an intuitive explanation, as the net movement between two sites that have different probabilities of movement is balanced by an equivalent (but proportionally opposite) difference in the number of agents occupying the sites. For the three examples considered, presented in Figure 6, we observe this jump discontinuity where if \(D_{1}>D_{2}\) then \(C_{1}(L_{1}(t),t)<C_{3}(L_{1}(t),t)\) and if \(D_{1}<D_{2}\) then \(C_{1}(L_{1}(t),t)>C_{3}(L_{1}(t),t)\), as expected. The jump discontinuity is accurately captured in both the exact solution and the average random walk behaviour, and we again see that the exact solution closely describes the average behaviour in the random walk. A detailed discussion of similar jump discontinuities that arise across internal boundaries can be found in [33].
Figure 3: Comparison between the average behaviour in the lattice-based random walk \(\overline{C}(x,t)\) (black, dashed) and the exact solutions \(C_{1}(x,t)\) (cyan), \(C_{2}(x,t)\) (orange) and \(C_{3}(x,t)\) (pink) as defined in Equations (15)-(18) for multiple linearly growing domains (Equations (25)-(26)) and domain-independent diffusivities. Parameters used are \(D_{1}=D_{2}=0.5\), \(L_{1}(0)=L_{2}(0)=50\), \(N=1000\), \(x_{0}=0\), (a) \(\beta_{1}=\beta_{2}=0.01\), (b) \(\beta_{1}=\beta_{2}=0.05\), (c) \(\beta_{1}=\beta_{2}=0.1\). Solution profiles are presented at \(t=200\), \(t=500\) and \(t=1000\). The arrow indicates the direction of increasing time. Dashed grey lines correspond to the position of the boundary. Average random walk behaviour is obtained from \(5000\) identically-prepared realisations of the random walk.
Figure 4: Comparison between the average behaviour in the lattice-based random walk \(\overline{C}(x,t)\) (black, dashed) and the exact solutions \(C_{1}(x,t)\) (cyan), \(C_{2}(x,t)\) (orange) and \(C_{3}(x,t)\) (pink) as defined in Equations (15)-(18) for multiple linearly growing domains (Equations (25)-(26)) and domain-independent diffusivities. Parameters used are \(D_{1}=D_{2}=0.5\), \(L_{1}(0)=L_{2}(0)=50\), \(N=1000\), \(x_{0}=0\), (a) \(\beta_{1}=0.01\), \(\beta_{2}=0.02\), (b) \(\beta_{1}=0.01\), \(\beta_{2}=0.04\), (c) \(\beta_{1}=0.01\), \(\beta_{2}=0.08\). Solution profiles are presented at \(t=400\), \(t=1000\) and \(t=2000\). The arrow indicates the direction of increasing time. Dashed grey lines correspond to the position of the boundary. Average random walk behaviour is obtained from 5000 identically-prepared realisations of the random walk.
Figure 5: Comparison between heterogeneous and homogeneous domain growth. Exact solutions \(C_{1}(x,t)\) (cyan), \(C_{2}(x,t)\) (orange) and \(C_{3}(x,t)\) (pink) as defined in Equations (15)-(18) for multiple linearly growing domains (Equations (25)-(26)) with domain-independent diffusivities. Parameters used are \(D_{1}=D_{2}=0.5\), \(L_{1}(0)=L_{2}(0)=50\), \(\beta_{1}=\beta_{2}=0.025\) (homogeneous growth, dashed lines), \(\beta_{1}=0.05\), \(\beta_{2}=0\) (heterogeneous growth, solid lines), \(\beta_{1}=0\), \(\beta_{2}=0.05\) (heterogeneous growth, dotted lines). Solution profiles are presented at \(t=2000\). Grey lines correspond to the position of the boundary.
Thus far, we have only considered examples where the domains have experienced linear growth. We now examine three cases of exponential growth, and present the average behaviour in the random walk and the profiles obtained from the exact solutions in Figure 7. In Figure 7(a), both domains decrease in size. In Figure 7(b), the inner domain experiences positive growth while the outer domains experience negative growth. Finally, in Figure 7(c), the inner domain undergoes negative growth and the outer domain undergoes positive growth. Critically, we select a positive value of \(L_{i,\min}\) so that even if the domain shrinks in size we do not experience finite-time blow-up in the solution due to the collision of the solution characteristics. In each case, there is close agreement between the average random walk behaviour and the exact solution, which confirms that our derived solutions are appropriate for describing negative domain growth.
One benefit of the exact solutions is that (near) steady state values of statistics of interest, such as the splitting and survival probabilities, can be enumerated without the need to perform simulations over long time periods. In Equations (19)-(24), we derived expressions for the time-varying left and right splitting probabilities and survival probability. We verify that these probabilities are correct for both the single and multiple growing domains by comparing the derived expressions against the probabilities obtained from repeated realisations of the random walk (Appendix A, Figure 13). We highlight the usefulness of the expressions by exploring how domain growth rate, initial location and domain-dependent diffusivity influence the splitting and survival probabilities in Figure 8. Uniformly increasing the diffusivity increases both the splitting probabilities while decreasing the survival probability. Interestingly, comparing the results where \(D_{1}=0.25\) and \(D_{2}=0.5\) (Figures 8(d)-(f)) against the results where \(D_{1}=0.5\) and \(D_{2}=0.25\) (Figures 8(g)-(i)) suggests that, in terms of an individual reaching the external boundary, an increased rate of movement in the outer domain is more important than in the inner domain.
Finally, we consider the question of survival probabilities on oscillatory evolving domains. That is, we consider domains that oscillate between a minimum (but positive) and maximum size as defined in Equation (29). Contractile cells, such as smooth muscle cells and cardiomyocytes, exhibit this behaviour [1]. Cardiomyocytes contract and relax with a regular rhythm as the heart beats, while
Figure 6: Comparison between the average behaviour in the lattice-based random walk \(\overline{C}(x,t)\) (black, dashed) and the exact solutions \(C_{1}(x,t)\) (cyan), \(C_{2}(x,t)\) (orange) and \(C_{3}(x,t)\) (pink) as defined in Equations (15)-(18) for multiple linearly growing domains (Equations (25)-(26)) and domain-dependent diffusivities. Inserts highlight the jump discontinuities (red). Parameters used are \(L_{1}(0)=L_{2}(0)=50\), \(N=1000\), \(x_{0}=0\), (a) \(D_{1}=0.5\), \(D_{2}=0.25\), \(\beta_{1}=0.01\), \(\beta_{2}=0.02\), (b) \(D_{1}=0.25\), \(D_{2}=0.5\), \(\beta_{1}=0.01\), \(\beta_{2}=0.01\), (c) \(D_{1}=0.5\), \(D_{2}=0.25\), \(\beta_{1}=0.02\), \(\beta_{2}=0.01\). Solution profiles are presented at (a) \(t=400\), \(t=1000\) and \(t=2000\), (b),(c) \(t=1000\), \(t=2000\) and \(t=5000\). The arrow indicates the direction of increasing time. Dashed grey lines correspond to the position of the boundary. Average random walk behaviour is obtained from 5000 identically-prepared realisations of the random walk.
various molecular species undergo diffusion within the cell [1]. We examine how the minimum domain size (\(L_{i,\min}\)) and the rate of oscillation (\(\beta_{i}\)) impact the survival probability, and present the results in Figure 9. We see that in all cases, after one oscillation has been completed, an individual is more likely to have reached the external boundary in comparison to the case where there is no oscillation (Figure 9(a)). This is interesting as the average length of the domain is not altered by the presence of oscillation. However, as the size of individuals' movement events is independent of the overall size of the domain, a single movement event on a domain that is at its minimum size is proportionally larger than if the domain is at its initial size. We see that for smaller minimum domain sizes the survival probability decreases more rapidly. We explore how changing the rates of oscillation and minimum domain sizes affects the survival probability at \(t=8\times 10^{3}\) and \(t=1\times 10^{4}\) (Figures 9(b),(c)). The times and oscillation rates are chosen such that at \(t=1\times 10^{4}\) all domains have completed an integer number of oscillations and are at the initial domain size. Interestingly, in Figure 9(c), we see that all oscillation rates except for \(\beta_{i}=0\) have the same survival probability at \(t=1\times 10^{4}\). This suggests that the survival probability is insensitive to the rate of oscillation; rather it is impacted by the current phase of oscillatory evolution and the magnitude of the oscillation. This dependence on the phase of oscillation has been observed for diffusive particles undergoing oscillatory forcing on fixed domains [35]. If we consider a fixed minimum domain size and vary the rate of oscillation, as in Figure 9(d), we can see this impact clearly. There is an increased sensitivity to the phase of growth if there have been few oscillations, which occurs for \(\beta_{i}\) values with a low magnitude. This sensitivity decreases as the magnitude of the oscillation rate increases. Again, when the oscillating domains align in size (at \(t=1\times 10^{4}\)) we see that the rate of oscillation does not affect the survival probability, provided there is any oscillation. In the context of contracting cardiomyocytes, this suggests that the traversal of chemical species across the cell may not be impacted by transient changes to heart rate.
## 4 Discussion and conclusions
Diffusive processes occur on expanding and contracting domains, from the smallest of biological scales [1; 8] to the largest of cosmological scales [3]. Mathematical investigations have yielded detailed insight into the dynamics of diffusive processes on both single uniformly growing domains and multiple non-growing domains [22; 23; 24; 10; 11; 12; 13; 14; 16; 17; 18; 19; 20; 21]. However, certain diffusive processes of interest, such as drug delivery and cell migration, occur on domains that can grow in a spatially non-uniform manner [5, 8].
Figure 8: (a),(d),(g),(j) Left splitting probability, (b),(e),(h),(k) survival probability and (c),(f),(i),(l) right splitting probability obtained from the exact solutions defined in Equations (22)-(24) for multiple linearly growing domains (Equations (25)-(26)) for a range of domain growth rates and initial locations. Parameters used are \(L_{1}(0)=L_{2}(0)=50\), (a)-(c) \(D_{1}=D_{2}=0.5\), (d)-(f) \(D_{1}=0.25\), \(D_{2}=0.5\), (g)-(i) \(D_{1}=0.5\), \(D_{2}=0.25\), (j)-(l) \(D_{1}=D_{2}=0.25\). Probabilities are presented at \(t=10^{5}\).
Figure 9: (a) Survival probability over time as defined in Equations (22)-(24) for multiple oscillating domains (Equations (29)-(30)). The grey line corresponds to no oscillation (\(\beta_{i}=0\)). The solid lines correspond to \(L_{i,\min}=50\). The dashed lines correspond to \(L_{i,\min}=25\). Oscillation rates are \(\beta_{i}=-8\pi\times 10^{-4}\) (cyan), \(-4\pi\times 10^{-4}\) (black), \(4\pi\times 10^{-4}\) (orange), \(8\pi\times 10^{-4}\) (pink). (b),(c) Survival probability at (b) \(t=8\times 10^{3}\) and (c) \(t=1\times 10^{4}\) for a range of oscillation rates and minimum domain sizes. (d) Survival probability at \(t=8\times 10^{3}\) (black) and \(t=1\times 10^{4}\) (pink) for \(L_{i,\min}=35\). The black line in (b) and the pink line in (c) correspond to the profiles in (d). Parameters used are \(L_{1}(0)=L_{2}(0)=75\), \(D_{1}=D_{2}=0.5\), and \(x_{0}=0\).
Here we have presented exact solutions to a mathematical model of a diffusive process on multiple growing domains. Each domain exhibits spatially-uniform growth and hence the overall domain can exhibit spatially non-uniform growth. We compare the enumerated exact solution profiles against profiles obtained from repeated realisations of a corresponding lattice-based random walk to ensure that the exact solutions are valid. In all cases, we observe that the derived exact solutions accurately describe the average random walk behaviour. From our exact solutions, we derive expressions for relevant time-varying statistics such as the survival probability and splitting probabilities. We reveal how domain-specific model parameters influence these statistics, including how the interplay between domain growth rates, domain-specific diffusivities and initial location drive long-term survival. Finally, we show how oscillating domain evolution enhances diffusion. Intriguingly, the rate of oscillation (provided it is non-zero) does not influence the survival probability, provided the comparison is made at a point of time such that the domains have completed an integer number of oscillations. Instead, the magnitude of the oscillation is the key factor influencing the survival probability. This result has interesting implications for biological phenomena that exhibit oscillatory domain evolution. For example, in the context of cardiomyocytes, this result suggests that the internal diffusion of chemical species should be robust to changes in heart rate, such as those due to exercise or stress.
The work presented here may be extended in several directions. We have only considered linearly diffusive processes (i.e. random motion of the agents). However, many processes exhibit characteristics attributable to nonlinear diffusion, such as compact support and a finite rate of spread [36, 37, 38]. These characteristics are particularly relevant for growing domains, as the relationship between the rate of spread and the rate of domain growth dictates whether it is possible for agents to reach the external domain boundary. It is instructive to determine which nonlinear diffusivity functions permit exact solutions following the transformation techniques presented here. Similarly, many biological processes require the inclusion of a reaction term to accurately capture the underlying population dynamics [36]. It is unclear which classes of reaction terms would admit exact solutions under the transformations examined here; however, it is possible that non-classical transformation techniques may yield analytical progress [39]. Here we have focused on one-dimensional processes. However, for diffusive processes on single growing domains, exact results have been derived for higher dimensions [25, 40]. An investigation into the domain geometries that permit exact solutions for multiple growing domains in higher dimensions would be instructive. In the model considered, there are two key timescales on each domain: the timescale of domain growth and the timescale of diffusive motion, which are accounted for via the transformation of the spatial and temporal variables. It would be of interest to investigate whether the approach developed here can be applied in the case where an additional timescale is present due to, for example, chemotactic motion [41].
#### Code availability
The code used to generate the results presented here can be found on Github at:
[https://github.com/DrStuartJohnston/heterogeneous-growing-domains](https://github.com/DrStuartJohnston/heterogeneous-growing-domains).
#### Acknowledgements
The authors would like to acknowledge the organisers of the MATRIX workshop "The mathematics of tissue dynamics," where this research project commenced.
#### Funding
This work was in part supported by the Australian Research Council (STJ: DE200100988, MJS: DP200100177). |
2305.09778 | Shortest Path to Boundary for Self-Intersecting Meshes | We introduce a method for efficiently computing the exact shortest path to
the boundary of a mesh from a given internal point in the presence of
self-intersections. We provide a formal definition of shortest boundary paths
for self-intersecting objects and present a robust algorithm for computing the
actual shortest boundary path. The resulting method offers an effective
solution for collision and self-collision handling while simulating deformable
volumetric objects, using fast simulation techniques that provide no guarantees
on collision resolution. Our evaluation includes complex self-collision
scenarios with a large number of active contacts, showing that our method can
successfully handle them by introducing a relatively minor computational
overhead. | He Chen, Elie Diaz, Cem Yuksel | 2023-05-16T20:05:10Z | http://arxiv.org/abs/2305.09778v1 | # Shortest Path to Boundary for Self-Intersecting Meshes
###### Abstract.
We introduce a method for efficiently computing the exact shortest path to the boundary of a mesh from a given internal point in the presence of self-intersections. We provide a formal definition of shortest boundary paths for self-intersecting objects and present a robust algorithm for computing the actual shortest boundary path. The resulting method offers an effective solution for collision and self-collision handling while simulating deformable volumetric objects, using fast simulation techniques that provide no guarantees on collision resolution. Our evaluation includes complex self-collision scenarios with a large number of active contacts, showing that our method can successfully handle them by introducing a relatively minor computational overhead.
Key words and phrases: Collision response, Computational geometry, geodesics, shortest path
We demonstrate that one important application of our method is solving arbitrary self-intersections after they appear in deformable simulations, allowing the use of cheaper integration techniques that do not guarantee complete collision resolution.
Our method is based on the realizations that (1) the shortest path must be fully contained within the geodesic embedding of the mesh and (2) it must be a line segment under Euclidean metrics. Based on these, given a candidate boundary point, our method quickly checks if the line segment to this point is contained within the mesh. Combined with a spatial acceleration structure, we can efficiently find and test the candidate closest boundary points until the shortest path is determined. We also describe a fast and robust tetrahedral traversal algorithm that avoids infinite loops, needed for checking if a path is within the mesh. Furthermore, we propose an additional acceleration that can quickly eliminate candidate boundary points based on local geometry without the need for checking their paths.
One application of our method is resolving intersections between separate objects and self-intersections alike within a fast physics-based simulation system that cannot guarantee intersection-free states. It can be used alone or as a backup for continuous collision detection to handle cases when the simulation system fails to resolve a previously-detected collision. In either case, we achieve a robust collision handling method that can solve extremely challenging cases, involving numerous deep self-intersections, using a fast simulation system that does not provide any guarantees about collision resolution. As a result, we can simulate highly complex scenarios with a large number of self-collisions and rest-in-contact conditions, as shown in Figure 1.
## 2. Related Work
One important application of our method is collision handling (Section 2.1), though we actually introduce a method for certain types of geodesic distances and paths (Section 2.2). A core part of our method is tetrahedral ray traversal (Section 2.3). In this section, we overview the prior in these areas and briefly present how our approach compares to them.
### Collision Handling
Collision handling is directly related to how they are detected, which can be done using either _continuous collision detection_ (CCD) or _discrete collision detection_ (DCD).
Starting with an intersection-free state, CCD can detect the first time of contact between elements (Canny, 1986), but requires maintaining an intersection-free state. Through the use of a strong barrier function, _incremental potential contact_ (IPC) (Li et al., 2020) provides guaranteed collision resolution combined with a CCD-aware line search. This idea was later extended to rigid (Ferguson et al., 2021) and almost rigid bodies (Lan et al., 2022). Incorporating projective dynamics into IPC offers performance improvement (Lan et al., 2022), but resolving all collisions still remains expensive. Even when the simulation system is able to resolve all collisions, CCD itself can fail due to numerical issues, in which case, it can no longer help with resolving the collision, resulting in objects linking together (Wang et al., 2021).
In contrast, DCD allows the simulation framework to start and recover from a state with existing intersections. DCD detects collisions at a single point in time, after they happen. That is why, extra computation is needed to determine how to resolve the collisions.
Collisions can be resolved by minimizing the penetration volume (Allard et al., 2010; Wang et al., 2012) or by applying constraints (Bouaziz et al., 2014; Macklin et al., 2016; Muller et al., 2007; Verschoor and Jalba, 2019), **penalty forces** (Belytschko and Neal, 1991; Ding and Schroeder, 2019; Drumwright, 2007; Hunek, 1993), or impulses (Kavan, 2003; Mirtich and Canny, 1995; O'Sullivan and Dingliana, 1999) that involve computing the _penetration depth_, the minimum translational distance to resolve the penetration (Hirota et al., 2000; Platt and Barr, 1988; Terzopoulos et al., 1987). The exact penetration depth can be computed using analytical methods based on geometric information of polygonal meshes (Baraff, 1994; Cameron, 1997; Hahn, 1988; Moore and Wilhelms, 1988), or it can be approximated using a volumetric mesh (Fisher and Lin, 2001), mesh partitioning (Redon and Lin, 2006), tracing rays (Hermann et al., 2008), or solving an optimization problem (Je et al., 2012). Heidelberger et al. (2004) proposed a consistent penetration depth by propagating penetration depth through the volumetric mesh. These methods, however, struggle with handling self-intersections. Starting with a self-intersecting shape, Li and Barbic (2018) proposed a method to separate the overlapping parts and create a bounding case mesh that represents the underlying geometry to allow "un-glued" simulation.
Using a _signed distance field_ (SDF) is a more popular alternative for recent methods. SDFs can be defined either on a volumetric mesh (Fisher and Lin, 2001) or a regular grid (Gascuel, 1993; Koschier et al., 2017; Macklin et al., 2020). Once built, both the penetration depth and the shortest path to the surface can be directly queried from the volumetric data structure. This provides an efficient solution at run time as long as the SDF does not need updating, though the returned penetration depth and shortest path are approximations (formed by interpolating pre-computed values). Also, the SDF is not well defined when there are self-intersections, as it cannot represent immersion, so it must be built using an intersection-free pose.
For handling self-intersections, SDFs of an intersection-free pose can be used (McAdams et al., 2011). This can provide sufficient accuracy for handling minor deformations, but quickly becomes inaccurate with large deformations and deep penetrations. Using a deformable embedding helps (Macklin et al., 2020), but requires splitting the object into pieces (Fisher and Lin, 2001, 2001, 2002; Macdlin et al., 2011; Teng et al., 2014). An alternative approach is bifurcating the SDF nodes during construction when a volumetric overlap, which can be formed by self-intersection, is detected (Mitchell et al., 2015). These solutions entirely circumvent the self-intersection problem by only considering intersections of separate pieces and self-intersections within a piece are ignored. Such approaches are particularly problematic with complex models and in cases when determining where to split is unclear ahead of time, since the splitting or bifurcation is usually pre-computed and expensive to update at run time. Also, the closest boundary point found within a piece is not necessarily the actual one for the entire mesh, as it might be contained in a separate piece. Even for cases
they can handle with sufficient accuracy, SDFs have a significant pre-computation and storage cost.
In comparison, our solution can find the _exact_ penetration depth for models with arbitrary complexity and the accurate shortest path to the boundary regardless of the type or severity of self-intersections. In addition, we do not require costly pre-computations or volumetric storage.
### Geodesic Path and Distances
Following the categorization of Crane et al. (2020), our method falls into the category of _multiple source geodesic distance/shortest path_ (MSGD/MSSP) problems. Actually, the problem we solve is a special case of MSSP, where the set of sources is the collection of all the boundary points of the mesh. Also, ours is an exact polygonal method that can compute global geodesic paths. The MMP algorithm (Mitchell et al., 1987) is the first practical algorithm that can compute the geodesic path between any two points on a polygonal surface. Succeeding methods (Chen and Han, 1990; Liu, 2013; Surazhsky et al., 2005; Xin and Wang, 2009) focus on optimizing its computation time and memory requirements. Yet, all of these methods only aim at solving the _single source geodesic distance/shortest path_ (SSGD/SSSP) problems. For solving the _all-pairs geodesic distances/shortest paths_ (APGD/APSP) problem, a vertex graph that encodes the minimal geodesic distances between all pairs of vertices on the mesh can be built (Balasubramanian et al., 2008). These methods are general enough for handling 2D manifolds in 3D, but they do not offer an efficient solution for our MSSP problem. Our solution for MSSP, however, is limited to planar (2D, triangular) or volumetric (3D, tetrahedral) meshes, where we can rely on Euclidean metrics.
### Tetrahedral Ray Traversal
For handling tetrahedral meshes in 3D, our method uses a topological ray traversal. Tetrahedral ray traversal has been used in volumetric rendering (Marmitt and Slusallek, 2006; Parker et al., 2005; Sahistan et al., 2021). Methods that improve their computational cost include using scalar triple products (Lagae and Dutre, 2008) and Plucker coordinates (Maria et al., 2017). More recently, Aman et al. (2022) introduced a highly-efficient dimension reduction approach.
A common problem with tetrahedral ray traversal is that numerical inaccuracies can lead to infinite loops when a ray passes near an edge or vertex. Many rendering problems can safely terminate when an infinite loop is detected. In our case, however, we must detect and resolve such cases, because failing to do so would result in returning an incorrect shortest path, which can have catastrophic effects in simulation. Therefore, we introduce a robust variant of tetrahedral ray traversal.
## 3. Shortest Path to Boundary
A typical solution for resolving intersections (detected via DCD) is finding the closest boundary point for each intersecting point and then applying corresponding forces/constraints along the line segment toward this point, i.e. the shortest path to boundary. The length of this path is the penetration depth.
When two separate objects intersect, finding the closest boundary point is a trivial problem: it is the closest boundary point on the other object. In the case of self-intersections, however, even the definition of the shortest path to boundary is somewhat ambiguous.
Consider a point on the boundary and also inside the object due to self-intersections. Since this point is already on the boundary, its Euclidean closest boundary point would be itself. Yet, this information is not helpful for resolving the self-intersection.
In this section, we provide a formal definition of the shortest path to boundary based on the geodesic path of the object in the presence of self-intersections (Section 3.1). Then, we present an efficient algorithm to compute it for triangular/tetrahedral meshes in 2D/3D, respectively, (Section 3.2). We also describe how to handle meshes that contain some inverted elements, (Section 3.5). The resulting method provides a robust solution for handling self-collisions that can be used with various simulation methods and collision resolution techniques (using forces or constraints).
### Shortest Path to Boundary
Consider a self-intersecting model \(M\), such that a boundary point \(\mathbf{s}\) coincides with an internal point \(\mathbf{p}\). Figure 2(b) shows a 2D illustration, though the concepts we describe here apply to 3D (and higher dimensions) as well. In this case, \(\mathbf{s}\) and \(\mathbf{p}\) have the same geometric positions, but topologically they are different points. In fact, to fix the self-intersection, we need to apply a force/constraint that would move \(\mathbf{s}\) along \(\mathbf{p}\)'s geodesic shortest path to boundary.
To provide a formal definition of this geodesic shortest path, we consider a self-intersection-free form of this model as \(\overline{M}\), which we call the undeformed pose, and a deformation \(\Psi\) that maps all points in \(\overline{M}\) to its current shape \(M\), such that \(M=\Psi(\overline{M})\). Note that our algorithm (explained in Section 3.2) does not actually need to compute \(\overline{M}\) or \(\Psi\). For any point \(\overline{\mathbf{p}}\) in \(\overline{M}\), we represent its image under \(\Psi\) as \(\mathbf{p}\in M\), such that \(\mathbf{p}=\Psi(\overline{\mathbf{p}})\). In the following, we assume that \(\overline{M}\) is a path-connected (i.e. a single piece) manifold, though the concepts below can be trivially extended to models with multiple separate pieces.
To cause self-intersection, \(\Psi\) should not be injective. In this case, \(\Psi\) is an immersion of \(\overline{M}\) but not embedding, meaning multiple points from \(\overline{M}\) are mapped to the same position \(\mathbf{p}\) inside \(M\). To differentiate such points that coincide in \(M\), we label them using their unique
Figure 2. Illustrations of the notations. (a) Notations on the undeformed pose. (b) Notations on the deformed model. The image of the undeformed pose boundary \(\Psi(\partial\overline{M})\) is marked as the red curve.
positions in \(\overline{M}\). For simplicity, we say \(\mathbf{p}\)_as_\(\overline{\mathbf{p}}\), when we are referring \(\mathbf{p}\) as the image of \(\overline{\mathbf{p}}\).
For simplicity, let us consider non-degenerate \(\Psi\) that forms no inversion, i.e. \(\det(\nabla\Psi)>0\). We discuss inversions later in Section 3.5. Note that under this \(\Psi\), the boundary of the undeformed model \(\partial\overline{M}\) does not completely overlap with the boundary of the deformed model \(\partial M\), i.e. \(\Psi(\partial\overline{M})\neq\partial M\), see Figure 2b. We use \(\overline{M}^{\circ}\) to denote the set of interior points of \(\overline{M}\), such that \(\overline{M}=\partial\overline{M}\cup\overline{M}^{\circ}\).
Let \(\mathbf{s}\) be a point on the boundary, i.e. \(\mathbf{s}\in\Psi(\partial\overline{M})\), and we refer to it as an undeformed pose boundary point \(\overline{\mathbf{s}}\). For a given point \(\mathbf{p}\) (as \(\overline{\mathbf{p}}\)), we can construct a path \(\mathbf{c}(t):[0,1]\mapsto M\) as a continuous curve that connects \(\mathbf{p}=\mathbf{c}(0)\) to \(\mathbf{s}=\mathbf{c}(1)\).
Definition 1 (Valid path).: _The path \(\mathbf{c}(t)\) from \(\mathbf{p}\) (as \(\overline{\mathbf{p}}\)) to \(\mathbf{s}\) (as \(\overline{\mathbf{s}}\)) is a valid path if there exists a continuous curve \(\overline{\mathbf{c}}(t):[0,1]\mapsto\overline{M}\) such that \(\mathbf{c}(t)=\Psi(\overline{\mathbf{c}}(t))\), \(\overline{\mathbf{c}}(0)=\overline{\mathbf{p}}\), \(\overline{\mathbf{c}}(1)=\overline{\mathbf{s}}\)._
Based on this definition, a _valid path_ must be the image of a path that is fully contained within \(\overline{M}\), which connects the two points on the undeformed pose we are referring to. Any path that moves outside of \(\overline{M}\) is considered an _invalid path_, see Figure 3(d). Our goal is to find the shortest valid path from a given point \(\mathbf{p}\) (as \(\overline{\mathbf{p}}\)) to the boundary.
Definition 2 (Shortest path to boundary).: _For an interior point \(\mathbf{p}\) (as \(\overline{\mathbf{p}}\)), the shortest path to boundary is the shortest curve \(\mathbf{c}(t)\) in \(M\) that connects \(\mathbf{p}\) to a boundary point \(\mathbf{s}\) (as \(\overline{\mathbf{s}}\)) that is a valid path between \(\mathbf{p}\) and \(\mathbf{s}\)._
Definition 3 (Closest boundary point).: _For an interior point \(\mathbf{p}\) (as \(\overline{\mathbf{p}}\)), the closest boundary point is the boundary point \(\mathbf{s}\) (as \(\overline{\mathbf{s}}\)) at the other end of \(\mathbf{p}\)'s shortest path to boundary \(\mathbf{c}(t)=\Psi(\overline{\mathbf{c}}(t))\), such that \(\mathbf{s}=\mathbf{c}(1)\) and \(\overline{\mathbf{s}}=\overline{\mathbf{c}}(1)\)._
Here we must emphasize that the definition of the shortest path is dependent on the pre-image point we are referring to. For a point located at the overlapping part of \(M\), referring to it as a different point on the undeformed pose may lead to a different shortest path to the boundary (see Figure 3c). Also, this definition is equivalent to the image of \(\overline{\mathbf{p}}\)'s global geodesic path to boundary in \(\overline{M}\) evaluated under the metrics pulled back by \(\Psi\). Thus the shortest path we defined is a special class of geodesics.
To construct an efficient algorithm for finding the shortest path, we rely on two properties:
* First, by definition, the shortest path must be a continuous curve that is fully contained inside undeformed model \(\overline{M}\).
* Second, the shortest path (under the Euclidean distance metrics) that connects two points in the deformed model \(M\) must be a line segment.
Based on these properties, we can construct and prove the fundamental theorem of our algorithm:
Theorem 1 ().: _For any point \(\mathbf{p}\in M\) (as \((\overline{\mathbf{p}})\), its shortest path to the boundary is the shortest line segment from \(\mathbf{p}\) to a boundary point \(\mathbf{s}\in\Psi(\partial\overline{M})\) (as \(\overline{\mathbf{s}}\)), that is a valid path._
Here we verbally prove the theorem; a formal proof is provided in the appendix. If the shortest path is not a line segment, we can continuously deform it into a line segment, while keeping the end points fixed. This procedure can induce a deformation on the undeformed pose, which continuously deforms the pre-image of that curve to the pre-image of the line segment, while keeping the end points fixed. This is always achievable because the curve cannot touch the boundary of the undeformed pose during the deformation; otherwise, we would form an even shorter path to the boundary. Thus the line segment is also a valid path.
Based on these properties, our algorithm investigates a set of candidate boundary points \(\mathbf{s}\) and checks if the line segment from the interior point \(\mathbf{p}\) to \(\mathbf{s}\) is a valid path. This is accomplished without having to construct \(\overline{M}\) or determine the deformation \(\Psi\) by relying on the topological connections of the given discretized model.
### Shortest Path to Boundary for Meshes
In practice, models we are interested in are discretized in a piecewise linear form. These are triangular meshes in 2D and tetrahedral meshes in 3D. We refer to each piecewise linear component as an _element_ (i.e. a triangle in 2D and a tetrahedron in 3D) and the one-dimension-lower-simplex shared by two topologically-connected elements as a _face_ (i.e. an edge between two triangles in 2D and a triangular face between two tetrahedra in 3D). This discretization makes it easy to test the validity of a given path, without constructing a self-intersection-free \(\overline{M}\) or the related deformation \(\Psi\).
We propose the concept of _element traversal_ for meshes, as a sequence of topologically connected elements:
**Definition 4** (Element traversal).: _For a mesh \(M\) and two points \(\mathbf{a}\in\epsilon_{\mathbf{a}},\mathbf{b}\in\epsilon_{\mathbf{b}}\), we define an element traversal from \(\mathbf{a}\) to \(\mathbf{b}\) as a list of elements \(\mathcal{T}\left(\mathbf{a},\mathbf{b}\right)=(\epsilon_{0},\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{k})\), where \(\epsilon_{i}\) is an element of \(M\), \(\epsilon_{0}=\epsilon_{\mathbf{a}}\), \(\epsilon_{k}=\epsilon_{\mathbf{b}}\), and \(\epsilon_{i}\cap\epsilon_{i+1}\) must be a face._
Specifically, we call it _tetrahedral traversal_ for 3D meshes, and _triangular traversal_ for 2D meshes.
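To make the notion of an element traversal concrete, the face adjacency that such a traversal follows can be assembled directly from the element list. The small sketch below (ours, not taken from the paper's implementation) does this for a tetrahedral mesh given as vertex-index 4-tuples; faces owned by a single element are the boundary faces.

```python
# Sketch: build the face adjacency used by an element traversal from a tet list.
from collections import defaultdict
from itertools import combinations

def build_face_adjacency(tets):
    """Map each tet index to its face-neighbors; unmatched faces are boundary faces."""
    face_to_tets = defaultdict(list)
    for ti, tet in enumerate(tets):
        for face in combinations(sorted(tet), 3):   # the 4 triangular faces of the tet
            face_to_tets[face].append(ti)
    neighbors = defaultdict(dict)
    boundary_faces = []
    for face, owners in face_to_tets.items():
        if len(owners) == 2:
            a, b = owners
            neighbors[a][face] = b
            neighbors[b][face] = a
        else:
            boundary_faces.append((face, owners[0]))
    return neighbors, boundary_faces

# Example: two tets sharing the face (1, 2, 3).
print(build_face_adjacency([(0, 1, 2, 3), (1, 2, 3, 4)]))
```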
Let \(\mathbf{c}(t)\) be a line segment from a point \(\mathbf{p}\) inside an element \(\epsilon_{\mathbf{p}}\) to a boundary point \(\mathbf{s}\) of a boundary element \(\epsilon_{\mathbf{s}}\) (with a boundary face that contains \(\mathbf{s}\)). If \(\mathbf{c}(t)\) is a valid path, there must be a corresponding piecewise linear path \(\mathbf{\bar{c}}(t)\) in \(\mathbf{\bar{M}}\) from \(\mathbf{\bar{p}}\) to \(\mathbf{\bar{s}}\) that passes through an element traversal of \(\mathbf{\bar{M}}\). In fact, the existence of an element traversal containing \(\mathbf{c}(t)\) is a necessary and sufficient condition for \(\mathbf{c}(t)\) to be a valid path. Please see the appendix for a rigorous proof.
Thus, evaluating whether \(\mathbf{c}(t)\) is a valid path is equivalent to searching for an element traversal from \(\mathbf{\bar{s}}\) to \(\mathbf{\bar{p}}\), and a piecewise linear curve \(\mathbf{\bar{c}}(t):I\mapsto\mathbf{\bar{M}}\) defined on it, such that \(\mathbf{c}(t)=\Psi(\mathbf{\bar{c}}(t))\). Such an element traversal and piecewise linear curve can be efficiently constructed in \(M\).
Going through the element traversal, \(\mathbf{\bar{c}}(t)\) must pass through faces shared by neighboring elements at points \(\mathbf{\bar{\tau}}_{i}\in e_{i}\cap e_{i+1}\), where \(i=0,1,2,\ldots,k-1\). When \(\Psi\) forms no inversion, corresponding face points \(\mathbf{\tau}_{i}\) must be along the line segment \(\mathbf{c}(t)\), i.e. \(\mathbf{\tau}_{i}=\mathbf{c}(t_{i})\) for some \(t_{i}\in[0,1]\), see Figure 3(a). If we can form such an element traversal using the topological connections of the model, we can safely conclude that the path is valid.
This gives us an efficient mechanism for testing the validity of the shortest path from \(\mathbf{p}\) to \(\mathbf{s}\). Starting from \(\epsilon_{\mathbf{p}}\), we trace a ray from \(\mathbf{p}\) towards \(\mathbf{s}\) and find the first face point \(\mathbf{r}_{0}\). If \(\mathbf{r}_{0}\) is not on the boundary, this face must connect \(\epsilon_{\mathbf{p}}\) to a neighboring element \(\epsilon_{1}\). Then, we enter \(\epsilon_{1}\) from \(\mathbf{r}_{0}\) and trace the same ray to find the exit point \(\mathbf{r}_{1}\) on another face. We continue traversing until we reach \(\epsilon_{\mathbf{s}}\), in which case we can conclude that this is a valid path, see Figure 3(a). This also includes the case \(\epsilon_{\mathbf{p}}=\epsilon_{\mathbf{s}}\). If we reach a face point \(\mathbf{r}_{i}\) that is on the boundary (see Figure 3(b)) or we pass through \(\mathbf{s}\) without entering \(\epsilon_{\mathbf{s}}\), \(\mathbf{s}\) cannot be the closest boundary point to \(\mathbf{p}\).
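A compact transcription of this validity test is given below. The mesh queries `exit_face`, `neighbor` and `is_boundary_face` are placeholders for the topological operations described above rather than an actual API, and the robustness measures of Section 3.3 are omitted.

```python
# Sketch: test whether the straight segment from p (in element elem_p) to the
# candidate boundary point s (in boundary element elem_s) is a valid path.
# exit_face is assumed to return the face through which the ray p + t*(s - p)
# leaves elem, together with the ray parameter t_exit of the exit point.
def is_valid_path(mesh, p, elem_p, s, elem_s):
    direction = s - p
    elem, entry_face = elem_p, None
    while elem != elem_s:                      # stop once we enter s's element
        face, t_exit = mesh.exit_face(elem, p, direction, entry_face)
        if t_exit > 1.0:                       # passed s without entering elem_s
            return False
        if mesh.is_boundary_face(face):        # segment leaves the mesh before s
            return False
        elem, entry_face = mesh.neighbor(elem, face), face
    return True
```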
This process allows us to efficiently test the validity of a path to a given boundary point, but we have infinitely many points on the boundary to test. Fortunately, we are only interested in the shortest path and we can use the theorem below to test only a single point per boundary face.
**Theorem 2**.: _For each interior point \(\mathbf{p}\) (as \(\mathbf{\bar{p}}\)), if its closest boundary point \(\mathbf{s}\) (as \(\mathbf{\bar{s}}\)) is on the boundary face \(f\), \(\mathbf{s}\) must also be the Euclidean closest point to \(\mathbf{p}\) on \(f\)._
The proof is similar to that of Theorem 1 and is included in the appendix. Based on Theorem 2, we only need to check a single point (the Euclidean closest point) on each boundary face to find the closest boundary point. If we test these boundary points in the order of increasing distance from the interior point \(\mathbf{p}\), as soon as we find a valid path to one of them, we can terminate the search by returning it as the closest boundary point. In practice, we use a BVH (bounding volume hierarchy) to test these points, which allows testing them approximately (though not strictly) in the order of increasing distance and, once a valid path is found, quickly skipping the further away bounding boxes.
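Putting Theorem 2 and the validity test together, the overall query can be sketched as follows. For clarity, the BVH is replaced here by a plain sort over all boundary-face candidates, and `closest_point_on_face`, `dist` and the mesh queries are assumed helpers, not the actual interface of our implementation.

```python
# Sketch: find the closest boundary point of p (inside element elem_p) by testing,
# in order of increasing Euclidean distance, the closest point on each boundary face.
def closest_boundary_point(mesh, p, elem_p):
    candidates = []
    for face in mesh.boundary_faces():
        s = closest_point_on_face(face, p)       # Euclidean projection onto the face
        candidates.append((dist(p, s), s, face))
    for d, s, face in sorted(candidates, key=lambda c: c[0]):
        if d == 0.0:
            continue                             # zero-length segment: p lies on this face
        if is_valid_path(mesh, p, elem_p, s, mesh.owner_element(face)):
            return s, d                          # first valid candidate gives the shortest path
    return None                                  # not reached for a point inside the mesh
```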
### Robust Topological Ray Traversal
The process we describe above for testing the validity of the linear path to a candidate boundary point involves traversing a ray through the mesh. This ray traversal is significantly simpler than typical ray traversal algorithms used for rendering with ray tracing. This is because it directly follows the topological connections of the mesh.
At each step, the ray enters an element through one of its faces and must exit from one of its other faces. Therefore, we do not need to rely on an acceleration structure to quickly determine which faces to test ray intersections, as they are directly known from the mesh topology. In fact, we do not need to check each one of the other faces individually, since the ray exits from exactly one of them. Therefore, we can quickly test all possible exit faces together.
For example, Aman et al. (2022) present such a tetrahedral traversal algorithm in 3D. Yet, due to limited numerical precision, this algorithm is prone to forming infinite loops. Such infinite loops are easy to detect and terminate (e.g. using a maximum iteration count), but such premature terminations are entirely unacceptable in our case. This is because incorrectly deciding on the validity of a path would force our algorithm to pick an incorrect shortest path to boundary, which can be arbitrarily far from the correct one. Therefore, the simulation system that relies on this shortest path to boundary can place strong and arbitrarily incorrect forces/constraints in an attempt to resolve the self-intersection.
Our solution for properly resolving such cases that arise from limited numerical precision is three fold:
1. We allow ray intersections with more than one face by effectively extending the faces using a small tolerance parameter \(\epsilon_{i}\) in the intersection test. This forms branching paths when a ray passes between multiple faces and, therefore, intersects (within \(\epsilon_{i}\)) with more than one of them.
2. We keep a list of traversed elements and terminate a branch when the ray enters an element that was previously entered.
3. We keep a stack containing all the candidate intersecting faces from the intersection test. After a loop is detected, we pick the latest element from it and continue the process.
Figure 3: (a) An example of a triangular traversal, marked by red triangles. A line segment connecting \(\mathbf{p}\) and \(\mathbf{s}\) is included in this triangular traversal. (b) An example of a line segment being an invalid path when there are self-intersections, the triangular traversal (marked by the red triangles) stops at the boundary of the mesh but the line segment penetrates the boundary and continues going.
Please see our appendix for the pseudo-code and more detailed explanations of our algorithm.
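The following is a minimal sketch of the branching traversal described by the three steps above, with the geometric predicates abstracted behind assumed callables; it illustrates the visited-set and branching logic rather than reproducing our exact pseudo-code.

```python
def path_is_valid(start_elem, target_elem, exit_candidates, neighbor_through):
    # exit_candidates(elem): faces of `elem` the segment p -> s exits through,
    #   allowing intersections within the tolerance eps_i (possibly more than one).
    # neighbor_through(elem, face): element on the other side of `face`, or
    #   None if `face` is on the mesh boundary.
    # The path is valid if some branch reaches `target_elem` (the element
    # incident to the boundary face containing s) without first leaving the
    # mesh through another boundary face.
    stack = [start_elem]       # branches still to be explored
    visited = {start_elem}     # a branch terminates when an element repeats
    while stack:
        elem = stack.pop()
        if elem == target_elem:
            return True
        for face in exit_candidates(elem):
            nxt = neighbor_through(elem, face)
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                stack.append(nxt)
    return False
```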
In practice such branching happens rarely, but this solution ensures that we never incorrectly terminate the ray traversal. Note that \(\epsilon_{i}\) is a conservative parameter for extending the ray traversal through branching to prevent numerical accuracy issues. It does not introduce any error into the final shortest paths we find. Using an unnecessarily large \(\epsilon_{i}\) would only have negative, though mostly imperceptible, performance consequences. We verified this by making \(\epsilon_{i}\) ten times larger, which did not result in a measurable performance difference.
One error case is when the internal point \(\mathbf{p}\) (as \(\overline{\mathbf{p}}\)) and the boundary point \(\mathbf{s}\) (as \(\overline{\mathbf{s}}\)) coincide, such that \(\mathbf{p}=\mathbf{s}\) (within numerical precision). This forms a line segment with zero length and, therefore, does not provide a direction for the traversal. This happens when testing self-intersections of boundary points, which pick themselves as their first candidate for the closest boundary point. This zero-length line segment cannot be a valid path. Fortunately, since we know we are testing self-intersection for \(\mathbf{s}\), when the BVH query returns a boundary face that includes \(\mathbf{s}\), we can directly reject it.
### Intersections of Different Objects
Although our method is mainly designed for resolving self-intersections, it is still needed for handling intersections between different objects when those objects may have self-intersections as well. As shown in Figure 5, an object \(M_{2}\) intersects with a self-intersecting object \(M_{1}\), where a surface point of \(M_{2}\) is overlapping with an interior point \(\mathbf{p}_{1}\in M_{1}\). Simply querying for \(\mathbf{p}_{1}\)'s Euclidean closest boundary point in \(M_{1}\) will give us \(\mathbf{s}^{\prime}\), which does not help resolve the penetration. This is because \(\mathbf{p}_{1}\mathbf{s}^{\prime}\) is not a valid path between \(\mathbf{p}_{1}\) (as \(\overline{\mathbf{p}}_{1}\in\overline{M_{1}}\)) and \(\mathbf{s}^{\prime}\) (as \(\overline{\mathbf{s}^{\prime}}\in\partial\overline{M_{1}}\)). What is actually needed is \(\mathbf{p}_{1}\)'s shortest path to boundary as \(\overline{\mathbf{p}}_{1}\), which is the same problem as in the self-intersection case, where a surface point of \(M_{1}\) is overlapping with an interior point \(\mathbf{p}_{1}\in M_{1}\).
### Inverted Elements
Our derivations in Section 3.1 assume that \(\det(\nabla\Psi)>0\) everywhere. For a discrete mesh, this would mean no inverted or degenerate elements. Unfortunately, though inverted elements are often highly undesirable, they are not always avoidable. Fortunately, the algorithm we describe above can be slightly modified to work in the presence of certain types of inverted elements.
If the inverted elements are not a part of the mesh boundary, we can still test the validity of paths by allowing the ray traversal to go backward along the ray. This is because the ray would need to traverse backward within inverted elements. In addition, we cannot simply terminate the traversal once the ray passes through the target point, because an inverted element further down the path may cause backward traversal to reach (or pass through) the target point, see Figure 6(b). Therefore, ray traversal must continue until a boundary point is reached. We also need to allow the ray to go behind the starting point, see Figure 6(c).
A consequence of this simple modification to our algorithm is that, when we begin from an internal point \(\mathbf{p}\) toward a boundary point \(\mathbf{s}\), it is unclear if we would reach \(\mathbf{s}\) by beginning the traversal toward \(\mathbf{s}\) or in the opposite direction. While one may be more likely, both are theoretically possible.
To avoid this decision, in our implementation we start the traversal from the target boundary point \(\mathbf{s}\). In this case, there is no ambiguity, since there is only one direction we can traverse along the ray. This also allows using the same traversal routine for the first element and the other elements along the path by always entering an element from a face. Therefore, it is advisable even in the absence of inverted elements.
Nonetheless, our algorithm is not able to handle all possible inverted elements. For example, if the inverted element is on the boundary, as shown in Figure 7, the inversion itself can cause self-intersection. In such a case, a surface point \(\mathbf{s}\) is overlapping with an interior point \(\mathbf{p}\) (as \(\overline{\mathbf{p}}\)). Our algorithm will not be able to construct a tetrahedral traversal between those two points, because we cannot determine a ray direction for a zero-length line segment. In fact, in this case, the very definition of the closest boundary point can be ambiguous.
Our solution is to skip the self-intersection detection of inverted boundary elements. As a result, the only way for us to solve such self-intersections caused by inverted boundary elements is to resolve the inversion itself. Fortunately, inverted elements are undesirable for most simulation scenarios, and they are often easier to fix for boundary elements. Unfortunately, if the inverted boundary elements have global self-intersections with other parts of the mesh, our solution ignores them. Though this is not a complete solution, inverted boundary elements are rare, and the other boundary elements surrounding them are often enough to resolve the global self-intersection.
### Infeasible Region Culling
In many cases, it is possible to determine that a given candidate boundary point \(\mathbf{s}\) cannot be the closest boundary point to an interior point \(\mathbf{p}\), purely based on local information about the mesh around \(\mathbf{s}\), without performing any ray traversal. For this test we construct a particular region of space, i.e. the _feasible region_, around \(\mathbf{s}\). When \(\mathbf{p}\) is outside of this region, and thus in the _infeasible region_ of \(\mathbf{s}\), we can safely conclude that \(\mathbf{s}\) is not the closest boundary point.
Figure 5. _An object \(M_{2}\) intersects with a self-intersecting object \(M_{1}\). A surface point of \(M_{2}\) is overlapping with an interior point \(\mathbf{p}_{1}\in M_{1}\). \(\mathbf{s}\) and \(\mathbf{s}^{\prime}\) are \(\mathbf{p}_{1}\)’s closest boundary point by our definition and Euclidean closest boundary point, respectively._
The formulation of the feasible region can be viewed as a discrete application of the well-known Hilbert projection theorem. The construction of this feasible region depends on whether \(\mathbf{s}\) is on a vertex, edge, or face.
Vertex Feasible Region. In 2D, when \(\mathbf{s}\) is on a vertex, the feasible region is bounded by the two lines passing through the vertex and perpendicular to its two boundary edges, as shown in Figure 8a. For a neighboring boundary edge of \(\mathbf{s}\) and its perpendicular line that passes through \(\mathbf{s}\), if \(\mathbf{p}\) is on the same side of the line as the edge, based on Theorem 2, there must be a closer boundary point on that face. More specifically, for any neighboring boundary vertex \(\mathbf{v}_{i}\) connected to \(\mathbf{s}\) by an edge, if the following inequality is true, \(\mathbf{p}\) is in the infeasible region:
\[(\mathbf{p}-\mathbf{s})\cdot(\mathbf{s}-\mathbf{v}_{i})<0 \tag{1}\]
The same inequality holds in 3D for all neighboring boundary vertices \(\mathbf{v}_{i}\) connected to \(\mathbf{s}\) by an edge (Figure 8b). The 3D version of the vertex feasible region is actually the space bounded by a group of planes perpendicular to its neighboring edges.
Edge Feasible Region. In 3D, when \(\mathbf{s}\) is on the edge of a triangle, its feasible region is the intersection of four half-spaces defined by four planes: two planes that contain the edge and are perpendicular to its two adjacent faces, and two others that are perpendicular to the edge and pass through its two vertices, as shown in Figure 8c. Let \(\mathbf{v}_{0}\) and \(\mathbf{v}_{1}\) be the two vertices of the edge and \(\mathbf{n}_{0}\) and \(\mathbf{n}_{1}\) be the two neighboring face normals (pointing to the interior of the mesh). \(\mathbf{p}\) is in the infeasible region if any of the following is true:
\[(\mathbf{p}-\mathbf{v}_{0})\cdot(\mathbf{v}_{1}-\mathbf{v}_{0})<0 \tag{2}\]
\[(\mathbf{p}-\mathbf{v}_{1})\cdot(\mathbf{v}_{0}-\mathbf{v}_{1})<0 \tag{3}\]
\[(\mathbf{p}-\mathbf{s})\cdot\big(\mathbf{n}_{0}\times(\mathbf{v}_{1}-\mathbf{v}_{0})\big)<0 \tag{4}\]
\[(\mathbf{p}-\mathbf{s})\cdot\big(\mathbf{n}_{1}\times(\mathbf{v}_{0}-\mathbf{v}_{1})\big)<0 \tag{5}\]
Note that \(\mathbf{n}_{0}\) is the normal of the face whose orientation agrees with \(\mathbf{v}_{0}\rightarrow\mathbf{v}_{1}\).
Face Feasible Region. We can similarly construct the feasible region when \(\mathbf{s}\) is on the interior of a face as well. Nonetheless, this particular feasible region test is unnecessary, because when \(\mathbf{s}\) is the closest point on the face to \(\mathbf{p}\), which is how we pick our candidate boundary points (based on Theorem 2), \(\mathbf{p}\) is guaranteed to be in the feasible region.
Our _infeasible region culling_ technique performs the tests above and skips the ray traversal if \(\mathbf{p}\) is determined to be in the infeasible region, quickly determining that \(\mathbf{s}\) cannot be the closest boundary
Figure 8: The feasible region, shaded in blue, for (a) a boundary vertex in 2D, (b) a boundary vertex in 3D, and (c) a boundary edge in 3D. Note that the 3D meshes in (b) and (c) are observed from the inside.
Figure 6: (a) A part of the undeformed pose of a triangular mesh \(\overline{M}\), which is inversion free. \(\overline{\mathbf{p}}\in\overline{M}\), \(\overline{\mathbf{s}}\in\partial\overline{M}\). A surface edge is marked with red color. (b) The image of \(\overline{M}\) under \(\Psi\); the tetrahedron \(t_{2}\) (colored gray) is inverted by \(\Psi\). The green line illustrates \(\mathbf{p}\)'s global geodesic to the surface; it has a self-overlapping part, which is marked by the two-sided arrow. (c) An interior tetrahedron is inverted and comes out of the surface. In this case, the global geodesic path to the surface can go backward.
Figure 7: (a) A part of the undeformed pose of a triangular mesh \(\overline{M}\), which is inversion free. The surface edges are marked with red color. (b) After deformation, a triangle (marked by gray color) is inverted and folded into the interior of the mesh. A deformed surface point \(\mathbf{s}\) overlaps with the interior point \(\mathbf{p}\).
point. Due to limited numerical precision, the feasible region check can return incorrect results when \(\mathbf{p}\) is close to the boundary of the feasible region. There are two types of errors: false positives and false negatives. A false positive is not a big problem: it only results in an extra traversal. But if a false negative happens, there is a risk of discarding the actual closest surface point. Therefore, we replace the zeros on the right-hand sides of the inequalities above with a small negative number \(\epsilon_{r}\) to avoid false negatives due to numerical precision limits. In our tests, we have observed that infeasible region culling can provide more than an order of magnitude faster shortest path queries.
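For reference, the vertex and edge tests above translate directly into the following checks (a sketch with our own naming); the neighboring face normals are assumed to point into the mesh, as stated above, and the small negative \(\epsilon_{r}\) (set to \(-0.01\) in our results) replaces the zero right-hand sides.

```python
import numpy as np

def vertex_candidate_infeasible(p, s, neighbor_vertices, eps_r=-0.01):
    # Eq. (1): for a candidate s on a boundary vertex, p is infeasible if it lies
    # past the plane through s perpendicular to any incident boundary edge.
    return any(np.dot(p - s, s - v) < eps_r for v in neighbor_vertices)

def edge_candidate_infeasible(p, s, v0, v1, n0, n1, eps_r=-0.01):
    # Eqs. (2)-(5): half-space tests for a candidate s on the boundary edge
    # (v0, v1) with inward-pointing adjacent face normals n0, n1.
    tests = (
        np.dot(p - v0, v1 - v0),
        np.dot(p - v1, v0 - v1),
        np.dot(p - s, np.cross(n0, v1 - v0)),
        np.dot(p - s, np.cross(n1, v0 - v1)),
    )
    return any(t < eps_r for t in tests)
```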
## 4. Collision Handling Application
As mentioned above, an important application of our method is collision handling with DCD. When DCD finds a penetration, we can use our method to find the closest point on the boundary and apply forces or constraints that would move the penetrating point towards this boundary point.
In our tests with tetrahedral meshes, we use two types of DCD: vertex-tetrahedron and edge-tetrahedron collisions. For vertex-tetrahedron collisions, we find the closest surface point for the colliding vertex. For edge-tetrahedron collisions, we find the center of the part of the edge that intersects with the tetrahedron and then use our method to find the closest surface point to that center point. If an edge intersects with multiple tetrahedra, we choose the intersection center that is closest to the center of the edge. The idea is to keep pushing the center of the edge-tetrahedron intersection towards the surface, which eventually resolves the intersection.
This provides an effective collision handling method with XPBD (Macklin et al., 2016). Once we find the penetrating point \(\mathbf{x}\), we use the standard PBD collision constraint (Muller et al., 2007)
\[c(\mathbf{x},\mathbf{s})=(\mathbf{x}-\mathbf{s})\cdot\mathbf{n} \tag{6}\]
where \(\mathbf{s}\) is the closest surface point computed by our method when this collision is from DCD, or the colliding point when it is from CCD, and \(\mathbf{n}\) is the surface normal at \(\mathbf{s}\). If \(\mathbf{s}\) is on a surface edge or vertex, we use the area-weighted average of its neighboring face normals. The XPBD integrator applies projections on each collision constraint \(c\) to satisfy \(c(\mathbf{x},\mathbf{s})\geq 0\). We also apply friction, following Bender et al. (2015). Please see the supplementary material for the pseudocode of our XPBD framework.
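As a simplified sketch, the projection of this constraint for a single movable point against a locally static surface looks as follows; the XPBD compliance, mass weighting, and friction terms are omitted here.

```python
import numpy as np

def project_collision_constraint(x, s, n):
    # Eq. (6): c(x, s) = (x - s) . n, enforced as an inequality c >= 0.
    # If the point is below the tangent plane at s (c < 0), move it along the
    # surface normal n until c = 0; otherwise leave it unchanged.
    c = float(np.dot(x - s, n))
    return x if c >= 0.0 else x - c * n
```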
Unlike CCD alone, DCD with our method significantly improves the robustness of collision handling when using a simulation system like XPBD that does not guarantee resolving all collision constraints. This is demonstrated in Figure 9, comparing different collision detection approaches with XPBD. Using only CCD leads to missed collisions when XPBD fails to resolve the collisions detected in previous steps, because CCD can no longer detect them. This quickly results in objects completely penetrating through each other (Figure 9a). Our method with only DCD effectively resolves the majority of collisions (Figure 9b), but it inherits the limitations of DCD. More specifically, when using only DCD with sufficiently large time steps and fast enough motion, some collisions can be missed, and deep penetrations can be resolved by moving the objects in incorrect directions, again resulting in object parts passing through each other. Furthermore, our method only provides the closest path to the boundary, and properly resolving the collisions is left to the simulation system. Unfortunately, XPBD cannot provide any guarantees in collision resolution, so detected penetrations may remain unresolved.
Figure 10. _Dropping 6 octopi into a box simulated using implicit Euler. This simulation contains 30K vertices and 88K tetrahedra and it takes an average of 15s to simulate each frame. Please see the supplementary material for the details of our implicit Euler framework._
Figure 9. _Dropping 8 octopi into a box simulated with (a) CCD only, (b) DCD with our shortest path query only, and (c) CCD and DCD with our shortest path query. The bottom row shows the bottom view of the final state. The blue tint highlights the intersecting geometry. The octopus model is from Zhou and Jacobson (2016)._
We recommend a hybrid solution that uses both CCD and DCD with our method. This hybrid solution performs DCD in the beginning of the time step to identify the preexisting penetrations or collisions that were not properly resolved in the previous time step. The rest of the collisions are detected by CCD without requiring our method to find the closest surface point. The same simulation with this hybrid approach is shown in Figure 9c. Since all penetrations are first detected by CCD and proper collision constraints are applied immediately, deep penetrations become much less likely even with large time steps and fast motion. Yet, this provides no theoretical guarantees. The addition of CCD allows the simulation system to apply collision constraints immediately, before the penetrations become deep, and DCD with our method allows it to continue applying collision constraints when it fails to resolve the initial collision constraints. Note that, while this significantly reduces the likelihood of failed collisions, they can still occur if the simulation system keeps failing to resolve the detected collisions.
The collision handling application of our method is not exclusive to PBD. Our method can also be used with force-based simulation techniques for defining a penalty force with penetration potential energy
\[E_{c}=\frac{1}{2}\,k\,\big{(}(\mathbf{p}-\mathbf{s})\cdot\mathbf{n}\big{)}^{2} \tag{7}\]
where \(k\) is the collision stiffness. An example of this is shown in Figure 10.
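Treating \(\mathbf{s}\) and \(\mathbf{n}\) as fixed during differentiation, the corresponding penalty force on \(\mathbf{p}\) follows directly from this energy:

\[\mathbf{f}_{c}=-\frac{\partial E_{c}}{\partial\mathbf{p}}=-k\,\big((\mathbf{p}-\mathbf{s})\cdot\mathbf{n}\big)\,\mathbf{n}\]

Since \((\mathbf{p}-\mathbf{s})\cdot\mathbf{n}<0\) for a penetrating point, this force pushes \(\mathbf{p}\) along \(\mathbf{n}\), toward its closest boundary point \(\mathbf{s}\).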
## 5. Results
We use XPBD (Macklin et al., 2016) to evaluate our method, because it is one of the fastest simulation methods for deformable objects, providing a good baseline for demonstrating the minor computation overhead introduced by our method. We use mass-spring or NeoHookean (Macklin and Muller, 2021) material constraints implemented on the GPU. We perform collision detection and handling on the CPU, including the position updates for the collision constraints. We use the hybrid collision detection approach that combines CCD and DCD, as explained above.
We implement both collision detection and closest point query on the CPU using Intel's Embree framework (Wald et al., 2014) to create the BVHs. We generate our timing results on a computer with an AMD Ryzen 5950X CPU, an NVIDIA RTX 3080 Ti GPU, and 64GB of RAM. We acknowledge that our timings are affected by the fact that we copy memory from GPU to CPU every iteration in order to do collision detection and handling, and the whole framework can be further accelerated by implementing the collision detection and shortest path querying on the GPU. As for the parameters of the algorithm, we set \(\epsilon_{r}\) to \(-0.01\). \(\epsilon_{i}\) is related to the scale and units of the object; when the object is at a scale of a few meters, we set \(\epsilon_{i}\) to \(10^{-10}\).
### Stress Tests
Figure 11 shows a squishy ball with thin tentacles compressed on two sides and flattened to a thickness that is only \(1/20\) of its original radius. Notice that all collisions, including self-intersections of tentacles, are properly resolved even under such extreme compression. Also, the model was able to revert to its original state after the two planes compressing it were removed.
Figure 12 shows a high-speed head-on collision of two squishy balls. Though the tentacles initially get tangled with frictional-contact right after the collision, all collisions are properly resolved and the two squishy balls bounce back, as expected.
Figure 13 shows two challenging examples of self-collisions caused by twisting a thin beam and two elastic rods. Both instances have shown notable buckling after the twisting. A different
Figure 11. _Flattening a squishy ball (774K vertices, 2.81M tetrahedra) using two planes. (a-c) The flattening process, (d) side view of the ball flattened to \(1/20\) of its radius, and (e) the other side of the flattened squishy ball._
Figure 12. _Simulation of two squishy balls in head-on collision that come to contact at a relative speed of \(50m/s\). Both self-collisions and collisions between the two squishy balls are handled using our method._
frame for the same thin beam is also included in Figure 1. Such self-collisions are particularly challenging for prior self-collision handling methods that pre-split the model into pieces, since it is unclear where the self-collisions might occur before the simulation.
Another challenging self-collision case is shown in Figure 14, where nested knots were formed by pulling an elastic rod from both sides. In this case, there is a significant amount of sliding frictional self-contact, changing pairs of elements that collide with each other. Though a substantial amount of force is applied near the end, the simulation is able to form a stable knot.
Figure 15(a) shows the same experiment using a naive closest boundary point computation for the collisions between the two squishy balls (by picking the closest boundary point on the other object purely based on Euclidean distances), while only handling self-collisions with our method. Notice that it includes (temporarily) entangled tentacles between the two squishy balls and visibly more deformation of tentacles elsewhere, as compared to using our method (Figure 15(b)). This is because, in the presence of self-collisions, naively handling closest boundary point queries between different objects is prone to picking incorrect boundary points that do not resolve the collision, resulting in prolonged contact and inter-locking.
### Solving Existing Intersections
Our method can successfully resolve existing self-collisions. A demonstration of this is provided in Figure 17. In this example, the initial state (Figure 17(a)) is generated by dropping a noodle model without handling self-collisions. When we turn on self-collisions, all existing self-intersections are quickly resolved within 10 substeps (Figure 17(b)), resulting in numerous inverted elements due to strong collision constraints. Then, the simulation resolves them (Figure 17(c)) and finally the model comes to a rest with self-contact (Figure 17(d)). In this experiment, we perform collision projections for all vertices (not only for surface vertices) and the centroids of all tetrahedra to resolve the intersection in the completely overlapping parts. Note that we do not provide a theoretical guarantee to resolve all the existing intersections. In practice, however, in all our tests all collisions are resolved after just a few iterations/substeps.
Another example is shown in Figure 16, generated by compressing a squishy ball with two planes on either side, similar to Figure 11 but without handling self-collisions. This results in a significant number of complex unresolved self-collisions (Figure 16(a)), which are quickly resolved within a few substeps when self-collision handling is turned on (Figure 16(b)).
### Large-Scale Experiments
An important advantage of our method is that, by providing a robust collision handling solution, we can use fast simulation techniques
Figure 16. _Simulation of a squishy ball compressed from either side, as in Figure 11, (a) with self-collisions turned off to produce a state with many complex self-collisions, where the blue tint highlights deeper penetrations caused by subsurface scattering artifacts due to intersecting geometry, and (b) a few frames after self-collisions are turned on, showing that our method with XPBD quickly recovers them._
Figure 13. _Twisting (top-middle) a thin beam with 400K vertices \(\&\) 1.9M tetrahedra, and (bottom) two elastic rods with 281K vertices \(\&\) 1.3M tetrahedra, demonstrating unpredictable self-collisions and buckling._
Figure 15. _Two squishy balls in head-on collision comparing collision handling between two different objects (a) by naively accepting the Euclidean closest point as the closest boundary point and (b) with our method. All self-collisions are handled using our method in both cases._
for scenarios involving a large number of objects and complex collisions. An example of this is demonstrated in Figure 18, showing 600 deformable octopus models forming a pile. Due to its complex geometry, the octopus model can cause numerous self-collisions and inter-object collisions. Both collision types are handled using
Figure 19: _Simulation of 16 squishy balls (a total of 11.2 million tetrahedra) dropped into a bowl, forming a stable pile with active collisions._
Figure 20: _Simulation of a long noodle (a) presenting unpredictable complex self-collisions and (b) forming a large pile with self-collisions._
Figure 17: _Solving existing collisions. Starting from (a) an initial state with many self-collisions, (b) after collision handling is enabled, (c) our method can quickly resolve them, and (d) achieve a self-collision-free final state._
Figure 18: _600 deformable octopus models (3.1\(M\) vertices and 8.88\(M\) tetrahedra in total) dropped into a container, forming a pile with collisions._
our method. At the end of the simulation, a stable pile is formed with 185K active collisions per time step.
Figure 19 shows another large-scale experiment involving 16 squishy balls. Another frame from this simulation is also included in Figure 1. At the end of the simulation, the squishy balls form a stable pile and remain in rest-in-contact with active self-collisions (12K) and inter-object collisions (125K) between neighboring squishy balls.
We also include an experiment with a single long noodle piece in Figure 20 that is dropped into a bowl. This simulation forms numerous complex and unpredictable self-collisions (Figure 20a). At the end of the simulation, we achieve a stable pile with 104K active self-collisions per time step in this example. Figure 1 includes a rendering of this final pile without the bowl and a cross-section view, showing that the interior self-collisions are properly resolved.
### Performance
We provide the performance numbers for the experiments above in Table 1. Notice that, even though we are using a highly efficient material solver that is parallelized on the GPU, our method introduces a relatively small overhead. This includes some highly challenging experiments involving a large number of complex collisions. The highest overhead of our method is in experiments in which we deliberately disabled self-collisions to form a large number of complex self-collisions. Note that all collision detection and handling computations are performed on the CPU, and a GPU implementation would likely result in a smaller overhead.
We demonstrate the effect of our infeasible region culling by simulating a squishy ball dropped to the ground with and without this acceleration. The computation time breakdown of all frames is visualized in Figure 21. In this example, using our infeasible region culling, the shortest path query gains a speed-up of 10-30\(\times\) for some frames, providing identical results. Additionally, the accelerated shortest path query results in a more uniform computation time, avoiding the peaks visible in the graph.
### Comparisons to Rest Shape Shortest Paths
A popular approach in prior work for handling self-collisions is using the rest shape of the model that does not contain self-collisions
Figure 21. _The computation time statistics of each simulation component on stacked bar charts: (top) without infeasible region culling and (bottom) with infeasible region culling. The middle row shows the simulation at frames 0, 75, 125, 175 respectively._
Figure 22. _Comparison between the rest pose closest boundary point and our closest boundary point. (a) The rest pose of the cuboid model. (b) We deform the cuboid to a certain shape, then drop a cube on top of it. (c) In the simulation using the rest pose closest boundary point, the cube is incorrectly pulled up. (d) Using our exact closest point, the cube successfully slides down. (e) The shortest path to the surface for an example point, showing that using the closest surface point queried from the rest shape results in an incorrect and longer path._
for performing the shortest path queries. This makes the computation much simpler, but obviously results in incorrect shortest boundary paths. With sufficient deformations, these incorrect boundary paths can lead to large enough errors and instabilities.
Figure 22 shows a simple example, where a small cube is dropped onto a deformed object. Notice that the rest shape of the object (Figure 22a) is sufficiently different from the deformed shape (Figure 22b). With collision handling using this rest shape, the cube moves against gravity and eventually bounces back (Figure 22c), instead of sliding down the surface, as simulated using our method (Figure 22d). Figure 22e shows a 2D illustration of example shortest paths generated by both methods. Notice that using the rest shape results in a longer path to the surface that corresponds to higher collision energy. In contrast, our method minimizes the collision energy by using the actual shortest path to the boundary.
Figure 23 shows a more complex example with self-collisions that is initially simulated using our method (Figure 13) until complex self-collisions are formed. When we switch to using the rest shape to find the boundary paths, the simulation explodes following a number of incorrectly-handled self-collisions.
In general, using the rest shape not only generates incorrect shortest boundary paths, but also injects energy into the simulation. This is because an incorrect shortest boundary path is, by definition, longer than the actual shortest boundary path and therefore corresponds to higher potential energy.
## 6. Discussion
An important advantage of our method is that it can work with simulation systems that do not provide any guarantees about resolving collisions. Therefore, we can use fast simulation techniques like XPBD to handle complex scenarios involving numerous self-collisions, as demonstrated above.
Yet, our method cannot handle all types of self-collisions and it requires a volumetric mesh. We cannot handle collisions of codimensional objects, such as cloth or strands. Our method would also have difficulties handling meshes with thin volumes or no interior elements.
Our method is essentially a shortest boundary path computation method. It is based on the fact that an interior point's shortest path to the boundary is always a line segment. This assumption always holds for objects like tetrahedral meshes in 3D or triangular meshes in 2D Euclidean space. Therefore, our method cannot handle shortest boundary paths in non-Euclidean spaces, such as geodesic paths on surfaces in 3D.
Using our method for collision handling with DCD inherits the limitations of DCD. For example, with large time steps and sufficiently fast motion, penetrations can get too deep, and the shortest boundary path may be on the other side of the penetrated model, causing undesirable collision handling. In practice, this problem can be efficiently solved by coupling CCD and DCD, as we demonstrate with our results above.
\begin{table}
\begin{tabular}{|l|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|} \hline & & \multicolumn{2}{c|}{Number of} & \multicolumn{2}{c|}{Avrg. Collisions} & \multicolumn{2}{c|}{Avrg. Operations} & \multicolumn{2}{c|}{Time Step} & \multicolumn{2}{c|}{Frame Time} & \multicolumn{2}{c|}{Average Time \%} \\ & & \multicolumn{1}{c|}{Vert.} & \multicolumn{1}{c|}{Tet.} & CCD & \multicolumn{1}{c|}{DCD} & \multicolumn{1}{c|}{Q} & \multicolumn{1}{c|}{Tz.} & \multicolumn{1}{c|}{Tet.} & \multicolumn{1}{c|}{Size \% Iter.} & \multicolumn{1}{c|}{Avrg.} & \multicolumn{1}{c|}{Max.} & \multicolumn{1}{c|}{XpBD} & CCD & \multicolumn{1}{c|}{DCD} & \multicolumn{1}{c|}{**Ours**} \\ \hline Flattened Squishy Ball & (Fig. 11) & 774 K & 2.81 M & 16.8 K & 7.1 K & 56 & 6.6 & 5.2 & 3.3e-4 x 3 & 10.89 & 18.04 & 30.9 \% & 29.0 \% & 31.7 \% & **8.3 \%** \\ Twisted Thin Beam & (Fig. 13) & 400 K & 1.9 M & 8.3 K & 3.1 K & 45 & 5.7 & 7.2 & 3.3e-4 x 3 & 8.16 & 15.92 & 29.7 \% & 31.3 \% & 32.1 \% & **6.9 \%** \\ Twisted Rods & (Fig. 13) & 281 K & 1.3 M & 4.8 K & 2.6 K & 33 & 5.3 & 4.7 & 3.3e-4 x 3 & 5.25 & 11.17 & 42.1 & 26.9 \% & 27.0 \% & **4.0 \%** \\ Nested Knots & (Fig. 14) & 38.1 K & 103 K & 3.1 K & 0.6 K & 31 & 4.2 & 4.4 & 5.5e-4 x 3 & 0.25 & 0.32 & 61.8 & 23.3 \% & 9.5 \% & **5.2 \%** \\
2 Squishy Balls & (Fig. 12) & 418 K & 1.4 M & 22.4 K & 1.3 K & 36 & 9.6 & 11.3 & 3.5e-4 x 3 & 1.96 & 2.87 & 52.2 \% & 28.4 \% & 18.5 \% & **10.9 \%** \\ Pre-Intersect. Noodle & (Fig. 17) & 40 K & 110 K & N/A & 15.2 K & 65 & 12.0 & 13.6 & 8.3e-4 x 3 & 0.21 & 0.45 & 51.6 \% & 17.6 \% & 18.4 \% & **12.4 \%** \\ Pre-Intersect. Squishy Ball & (Fig. 16) & 219 K & 704 K & N/A & 45.8 K & 89 & 12.0 & 14.0 & 3.4e-4 x 3 & 1.54 & 2.63 & 44.3 \% & 18.2 \% & 19.1 \% & **18.4 \%** \\
600 Octopi & (Fig. 18) & 3.1 M & 8.88 M & 104.0 K & 6.4 K & 12 & 3.6 & 4.1 & 8.3e-4 x 3 & 16.40 & 17.90 & 68.3 \% & 15.4 \% & **13.4 \%** & **2.9 \%** \\
16 Squishy Balls & (Fig. 19) & 3.5 M & 11.2 M & 118.5 K & 8.5 K & 29 & 4.5 & 6.6 & 3.3e-4 x 3 & 18.50 & 22.0 & 49.3 \% & 25.0 \% & 21.8 \% & **3.9 \%** \\ Long Noodle & (Fig. 20) & 860 K & 2.29 M & 102.6 K & 6.1 K & 11 & 3.6 & 3.2 & 8.3e-4 x 3 & 4.10 & 4.50 & 67.6 \% & 14.8 \% & 14.9 \% & **2.7 \%** \\
8 Octopi CCD Only & (Fig. 9a) & 40 K & 118 K & 2.1 K & N/A & N/A & N/A & N/A & 3.3e-3 x 5 & 0.028 & 0.036 & 86.6 \% & 13.4 \% & N/A & N/A \\
8 Octopi DCD Only & (Fig. 9b) & 40 K & 118 K & N/A & 2.4 K & 13 & 3.7 & 4.1 & 3.3e-3 x 5 & 0.038 & 0.045 & 79.2 \% & N/A & 10.7 \% & **10.1 \%** \\ 8 Octopi Hybrid & (Fig. 9c) & 40 K & 118 K & 2.3 K & 0.2 K & 11 & 3.3 & 3.9 & 3.3e-3 x 5 & 0.035 & 0.038 & 79.3 \% & 10.1 \% & 9.6 \% & **1.0 \%** \\ \hline \end{tabular}
\end{table}
Table 1. _Performance results. Time step size and frame times are given in seconds, where frame times are measured at 60 FPS. Operations Q, Tr., and Tet. represent the number of BVH queries, traversals, and total tetrahedra visited on average per time step, respectively._
Figure 23. _Simulation of twisting a thin beam, shown in Figure 13, soon after replacing our method with using the rest pose for finding the closest boundary point: (a) instabilities caused by incorrect closest boundary points found using this approach, and (b) exploded simulation after a few frames._
## 7. Conclusion
We have presented a formal definition of the shortest path to boundary in the context of self-intersections and introduced an efficient and robust algorithm for finding the exact shortest boundary paths for meshes. We have shown that this approach provides an effective solution for handling both self-collisions and inter-object collisions using DCD in combination with CCD, using a simulation system that does not provide any guarantees about resolving the collision constraints. Our results show highly complex simulation scenarios involving collisions and rest-in-contact conditions that are properly handled with our method with a relatively small computational overhead.
###### Acknowledgements.
We thank Alper Sahistan, Yin Yang, and Kui Wu for their helpful comments and suggestions. We also thank Alec Jacobson and Tiantian Liu for providing online volumetric mesh datasets. This project was supported in part by NSF grant #1764071.
|
2304.06762 | Shall We Pretrain Autoregressive Language Models with Retrieval? A
Comprehensive Study | Large decoder-only language models (LMs) can be largely improved in terms of
perplexity by retrieval (e.g., RETRO), but its impact on text generation
quality and downstream task accuracy is unclear. Thus, it is still an open
question: shall we pretrain large autoregressive LMs with retrieval? To answer
it, we perform a comprehensive study on a scalable pre-trained
retrieval-augmented LM (i.e., RETRO) compared with standard GPT and
retrieval-augmented GPT incorporated at fine-tuning or inference stages. We
first provide the recipe to reproduce RETRO up to 9.5B parameters while
retrieving a text corpus with 330B tokens. Based on that, we have the following
novel findings: i) RETRO outperforms GPT on text generation with much less
degeneration (i.e., repetition), moderately higher factual accuracy, and
slightly lower toxicity with a nontoxic retrieval database. ii) On the LM
Evaluation Harness benchmark, RETRO largely outperforms GPT on
knowledge-intensive tasks, but is on par with GPT on other tasks. Furthermore,
we introduce a simple variant of the model, RETRO++, which largely improves
open-domain QA results of original RETRO (e.g., EM score +8.6 on Natural
Question) and significantly outperforms retrieval-augmented GPT in both
fine-tuning and zero-shot evaluation settings. Our findings highlight the
promising direction of pretraining autoregressive LMs with retrieval as future
foundation models. We release our code and model at:
https://github.com/NVIDIA/Megatron-LM/blob/main/tools/retro/README.md | Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, Anima Anandkumar, Bryan Catanzaro | 2023-04-13T18:04:19Z | http://arxiv.org/abs/2304.06762v3 | # Shall We Pretrain Autoregressive Language Models with Retrieval?
###### Abstract
Large decoder-only language models (LMs) can be largely improved in terms of perplexity by retrieval (_e.g._, Retro), but its impact on text generation quality and downstream task accuracy is unclear. Thus, it is still an open question: _shall we pretrain large autoregressive LMs with retrieval?_ To answer it, we perform a comprehensive study on a _scalable pretrained_ retrieval-augmented LM (i.e., Retro) compared with standard GPT and retrieval-augmented GPT incorporated at fine-tuning or inference stages. We first provide the recipe to reproduce Retro up to 9.5B parameters while retrieving a text corpus with 330B tokens. Based on that, we have the following novel findings: _i)_ Retro outperforms GPT on text generation with much less degeneration (i.e., repetition), moderately higher factual accuracy, and slightly lower toxicity with a nontoxic retrieval database. _ii)_ On the LM Evaluation Harness benchmark, Retro largely outperforms GPT on knowledge-intensive tasks, but is on par with GPT on other tasks. Furthermore, we introduce a simple variant of the model, Retro++, which largely improves open-domain QA results of original Retro (e.g., EM score \(+8.6\) on Natural Question) and significantly outperforms retrieval-augmented GPT across different model sizes. Our findings highlight the promising direction of pretraining autoregressive LMs with retrieval as future foundation models. We release our implementation at: [https://github.com/NVIDIA/Megatron-LM#retro](https://github.com/NVIDIA/Megatron-LM#retro).
## 1 Introduction
Large language models (LMs), including masked LMs (e.g., BERT Devlin et al. (2018)), autoregressive LMs (e.g., GPT Brown et al. (2020)), and encoder-decoder LMs (e.g., T5 Raffel et al. (2020), BART Lewis et al. (2020)), have obtained state-of-the-art results for various NLP tasks. Among them, the autoregressive LMs like GPT-3 Brown et al. (2020) and GPT-4 OpenAI (2023) demonstrate noticeable in-context learning ability and excellent long-form text generation results. Due to its importance, the community has spent considerable efforts to scale up such autoregressive generative LMs with more data and parameters and observed significant breakthroughs in a variety of real-world applications (e.g., Brown et al., 2020), including open-ended text generation and various downstream tasks (e.g., question answering). The successful public examples include GPT-3 (w/ 170B parameters) Brown et al. (2020), Gopher (280B) Rae et al. (2021), Megatron-Turing (530B) Smith et al. (2022), and PaLM (540B) Chowdhery et al. (2022).
Although large-scale autoregressive LMs have achieved huge successes, they also suffer from several weaknesses. First, it requires a huge number of model parameters to memorize the world knowledge, which makes it costly for deployment. Second, it is difficult to safeguard factual accuracy, which may provide users with incorrect information Lee et al. (2022). Third, it is expensive to update the model knowledge learned during pretraining with up-to-date facts Meng et al. (2022), yielding outdated answers Lewis et al. (2020).
To mitigate the problems above, one line of research proposes to improve language models with retrieval. The retrieval process can be integrated into LMs at: _i)_ fine-tuning stage Karpukhin et al. (2020); Lewis et al. (2020); Guu et al. (2020), or _ii)_ pretraining stage Borgeaud et al. (2022); Izacard et al. (2022). Most previous work augments BERT or encoder-decoder LMs with retrieval at fine-tuning stage, demonstrating successes for knowledge-intensive NLP tasks Guu et al. (2020); Karpukhin et al. (2020); Lewis et al. (2020); Khandelwal et al. (2020). However, it remains
relatively underexplored to pretrain _autoregressive_ (decoder-only) LMs _with retrieval_, especially considering the noticeable success of ChatGPT (OpenAI, 2022) that underscores the extreme importance of autoregressive LMs.
Most recently, Retro (Borgeaud et al., 2022) proposes to pretrain autoregressive LMs with a retrieval module, which is practically scalable to large-scale pretraining from scratch by retrieving billions of tokens and largely reduces model parameters while achieving lower perplexity than standard GPT. It also provides the flexibility to update the knowledge stored in LMs (Petroni et al., 2019) by updating the retrieval database without training LMs again. The success of pretraining LMs with retrieval raises an important question for the community if we want to pretrain autoregressive LMs in the future: _Shall we pretrain autoregressive (decoder-only) LMs with retrieval by default or not?_ However, previous work (Borgeaud et al., 2022) misses the important evaluation on whether a model like Retro could obtain comparable or even better results in terms of open-ended text generation and various NLP downstream tasks, apart from lower perplexity on the held-out dataset compared to standard GPT.
To answer the above _question_ and bridge the missing gap, we perform an extensive study on Retro, as, to the best of our knowledge, Retro is the only retrieval-augmented autoregressive LM that supports large-scale pretraining with retrieval on a massive pretraining corpus with hundreds of billions or trillions of tokens. Our comprehensive study sheds light on the promising direction of pretraining autoregressive LMs with retrieval to serve as future foundation models, as they overall outperform standard GPT models in terms of perplexity, text generation quality, and downstream task performance, especially for knowledge-intensive tasks, including open-domain QA.
## 2 Key Findings
We successfully reproduce and pretrain Retro(Borgeaud et al., 2022) from scratch 1, with parameter sizes ranging from 148M up to 9.5B by retrieving from a text corpus with over 330B tokens. In addition, we discuss the inference strategy of Retro for text generation that is not covered in Borgeaud et al. (2022), and perform a large-scale evaluation in different scenarios.
Footnote 1: The official implementation and pretrained checkpoints are not open-sourced.
To minimize confounding variables between Retro and GPT, we use the same decoder architecture, the same hyper-parameters, and the same pre-training corpus to pre-train Retro and GPT for the same number of pre-training steps. We highlight our novel findings for Retro and GPT as follows:
### Text Generation
We conduct a systematic study (see §5) to understand and analyze Retro by evaluating its open-ended text generation quality via human and automatic evaluations. Retro exhibits better performance than GPT with considerably less _repetition_, moderately higher _factual accuracy_, and slightly lower _toxicity_ levels. Retro is on par with GPT in terms of _fluency_ and _coherence_.
### LM Evaluation Harness Benchmark
In terms of zero-shot evaluation on the standard benchmark, Retro can overall improve upon GPT across different tasks, significantly outperforming GPT on knowledge-intensive tasks such as Hellaswag and BoolQ while achieving similar performance on other tasks. Specifically, we evaluate the zero-shot capabilities of Retro and GPT on nine representative NLP downstream classification tasks (see §6). Additionally, our findings demonstrate that Retro can leverage retrieved neighbors and significantly improve accuracy for knowledge-intensive tasks in zero-shot evaluations. In contrast, incorporating these retrieved neighbors directly during the inference stage can hurt GPT's performance. These results further substantiate the potential of Retro, which is pre-trained with retrieval capabilities, as a promising approach.
### Open-domain QA
For open-domain QA tasks, Retro achieves considerably better performance than retrieval-augmented GPT that incorporates retrieval during fine-tuning, across different model sizes and datasets. Specifically, we propose a variant of the model, Retro++, for open-domain QA that feeds the most relevant evidence into the decoder and more evidence into its encoder, which is different from the original version (Borgeaud et al., 2022). Retro++ can largely improve the exact match (EM) score on Natural Question from 40.9% to 54.1%, which is significantly higher than the 45.5% reported by the original Retro.
## 3 Related Work
Retrieval has been applied in various NLP tasks for years, including question answering (QA) (e.g., Bilotti et al., 2007), machine translation (e.g., Zhang et al., 2018), and conversation (Shuster et al., 2021; Thoppilan et al., 2022; Komeili et al., 2021). In particular, language models have been augmented with retrieval at different stages, including inference time (Khandelwal et al., 2020; Yogatama et al., 2021), fine-tuning stage (Karpukhin et al., 2020; Lewis et al., 2020; Guu et al., 2020), and pretraining stage (Borgeaud et al., 2022; Izacard et al., 2022).
LMs have been augmented with retrieval at the fine-tuning stage for downstream tasks, primarily for open-domain QA. DPR (Karpukhin et al., 2020) finetunes one BERT to encode questions and the other BERT to encode answers within a dual encoder framework, using a contrastive loss to align the hidden representations of question and corresponding answer. RAG (Lewis et al., 2020) studies the fine-tuning recipe for retrieval-augmented generation models, especially on open-domain QA tasks. FiD (Izacard and Grave, 2021) improves RAG with a better LM backbone T5, and fuses multiple retrieved passages to the decoder during fine-tuning to further improve QA accuracy. WebGPT (Nakano et al., 2021) leverages web search engine and fine-tunes GPT using reinforcement learning with human feedback (RLHF) for reference generation and factuality improvement, which is orthogonal to our work that focuses on pretraining with retrieval. The proposed RLHF can be applied to Retro as well.
REALM (Guu et al., 2020) performs both unsupervised pretraining and supervised fine-tuning strategies for retrieval-augmented BERT model in open-domain QA. Their pretraining involves asynchronous re-embedding and re-indexing all documents every several hundred training steps, which quickly becomes impractical for training corpus with trillion tokens. Atlas (Izacard et al., 2022) uses a similar approach but augments the T5 architecture (Raffel et al., 2020) with retrieval at both pre-training and fine-tuning. Before pretraining, it first initializes the encoder-decoder LM backbone with pretrained T5, and the dense retriever with pretrained Contriever (Izacard et al.). During pre-training, it also applies asynchronous index refresh every 1000 steps.
In contrast, Retro (Borgeaud et al., 2022) embeds and indexes the whole training corpus at the chunk level (e.g., chunk size = 64) with a frozen BERT before pretraining. During pretraining, the model relies on a trainable bidirectional encoder to embed the retrieved chunks of raw text. The GPT decoder further "selects" the relevant piece of evidence from the encoder side by a chunk-wise cross-attention. This architecture design enables LM pretraining on hundreds of billions of tokens by retrieving from trillions of tokens. See Table 1 for a complete comparison of retrieval-augmented LMs.
## 4 Model and Implementation
In this section, we first introduce preliminaries of Retro, then provide detailed recipe of our implementation, including retrieval database, pretraining, and retrieval-augmented finetuning and generation.
### Preliminaries of Retro
Retro is an autoregressive language model enhanced with a retrieval module that utilizes chunk-wise retrieval, enabling it to scale up to trillions of tokens. The model splits both the input sequence and retrieval datastore into sequences of chunks.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model Name & \#/ Retrieval Tokens & When to Involve Retrieval & Backbone LM & Initialization & Re-indexing \\ \hline Retro (Borgeaud et al.) & \(O(10^{12})\) & Pretraining & decoder-only & From Scratch / Pretrained GPT & No \\ Atlas (Izacard et al.) & \(O(10^{9})\) & Pretraining & encoder-decoder & Pretrained T5 & Yes \\ REALM (Guu et al.) & \(O(10^{9})\) & Pretraining & encoder-only & Pretrained BERT & Yes \\ \hline RAG (Lewis et al.) & \(O(10^{9})\) & Fine-tuning & encoder-decoder & Pretrained BART & No \\ DPR (Karpukhin et al.) & \(O(10^{9})\) & Fine-tuning & encoder-only & Pretrained BERT & No \\ FiD (Izacard and Grave) & \(O(10^{9})\) & Fine-tuning & encoder-decoder & Pretrained T5 & No \\ \hline KNN-LM (Khandelwal et al.) & \(O(10^{9})\) & Inference & decoder-only & Pretrained GPT & No \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of different retrieval-augmented models in terms of #/ retrieval tokens, which stage to incorporate retrieval into LMs, the architecture of the backbone LM, whether it requires initialization from the existing LM checkpoint, and whether it requires expensive re-indexing. Retro is the most scalable retrieval-augmented LM due to its chunk-level retrieval and scalable decoder-only autoregressive LM backbone (Thoppilan et al., 2022; Brown et al., 2020; Smith et al., 2022; Chowdhery et al., 2022) without expensive retrieval index refresh.
RETRO retrieves nearest neighbor chunks from the retrieval database using the previous chunk and fuses this information with the context from preceding chunks to guide the generation of the next chunk. To maintain causality, the model can only use the nearest neighbors of the previous chunk for the autoregressive generation.
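As an illustration of this chunk-wise scheme, the sketch below splits an input token sequence into chunks and pairs each chunk with neighbors retrieved using the previous chunk; `embed` and `knn` are assumed callables standing in for the frozen BERT encoder and the nearest-neighbor index.

```python
def chunkwise_neighbors(tokens, chunk_size, k, embed, knn):
    # Split the input into consecutive chunks (e.g., chunk_size = 64).
    chunks = [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]
    # Neighbors for chunk i are retrieved with chunk i-1, so they can only
    # influence the generation of chunk i and later (preserving causality).
    neighbors = [None]  # the first chunk has no previous chunk to retrieve with
    for chunk in chunks[:-1]:
        neighbors.append(knn(embed(chunk), k))
    return chunks, neighbors
```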
### Implementation
As Retro has no official open-source implementation and pretrained checkpoints, we reproduce and pretrain Retro from scratch on our own.
#### 4.2.1 Retrieval Database
We build the retrieval database with the whole pretraining dataset mentioned in §B. In this way, Retro and standard GPT of similar size form a fair comparison, as they are pretrained using the same information from the pretraining corpus. The retrieval database is a key-value database, where values are chunks split from the pretraining corpus, and the keys are the corresponding BERT embeddings. Our pretraining dataset with 330B tokens yields a retrieval database consisting of 5.3B chunks in total with chunk size \(m=64\).
**Retrieval Index.** We use the Faiss index Johnson et al. (2019) as the implementation for the dense retriever to search for approximate nearest neighbors in the BERT embedding space. We configure the Faiss index to cluster the dense embeddings into \(2^{22}\) centroids accelerated with Hierarchical Navigable Small World graphs Malkov and Yashunin (2018) to speed up the query. We also encode the embeddings with optimized product quantization Gray and Neuhoff (1998); Ge et al. (2014) to compress the memory overhead and further improve the query throughput. As a result, we can achieve 4 ms per query over the whole pretraining corpus averaged for each chunk on a DGX-2H node. One may find more details in Appendix §A.
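A sketch of how such an index can be assembled with Faiss is shown below; the embedding dimension and the OPQ/PQ code sizes in the factory string are illustrative assumptions rather than our exact configuration (see Appendix §A).

```python
import faiss

d = 768          # dimensionality of the frozen BERT chunk embeddings (assumed here)
nlist = 2 ** 22  # number of IVF centroids; use a much smaller value for toy experiments

# IVF index with an HNSW coarse quantizer for fast centroid lookup, plus OPQ and
# product quantization to compress the stored codes and memory footprint.
index = faiss.index_factory(d, f"OPQ32_128,IVF{nlist}_HNSW32,PQ32")

# In practice the index is trained on a large sample of the 5.3B chunk embeddings
# and then all embeddings are added:
#   index.train(train_embeddings)
#   index.add(all_embeddings)
```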
#### 4.2.2 Pretraining Retro Models
We use the same transformer configurations (#/ layers, hidden size, attention heads) for both Retro and standard GPT. Specifically, we pretrain Retro across different parameter sizes, ranging from 148M (Small) and 410M (Medium) to 1.5B (XL) and 9.5B (XXL). We also use the same pretraining schedules to pretrain Retro and GPT given the same number of steps. We list the validation perplexity of GPT and Retro after pretraining in Table 2. We present more details in Appendix §B.
#### 4.2.3 Retrieval-augmented Generation
We discuss the generation and inference recipe in the batch-processing mode for Retro, which is missing from the previous literature.
**"Left Padding" Rule.** The chunk-wise retrieval of Retro improves scalability but enforces chunk-wise alignment constraints, leading to issues in conditional generations with short contexts. When the sequence length is less than the chunk size, Retro cannot utilize its retrieval capability as there is no previous chunk for retrieval. Instead, Retro adds padding tokens to the left of the context, allowing Retro to leverage the retrieved neighbors from the previous context to guide the generation of the next token (Figure 0(a)). We summarize this general principle in Retro as the "left padding" rule, as it can leverage the contextual information for retrieval to the most. This rule remains preferable for input sequences larger than the chunk size, as it ensures the closest and rightmost context is used for retrieval, making it more relevant for next token prediction (see Figure 0(b)).
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & Small & Medium & XL & XXL \\ \hline GPT & 17.76 & 13.18 & 10.18 & 7.86 \\ Retro (\(k=2\)) & 12.99 & 10.06 & 8.10 & 6.72 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Validation perplexity of pretrained GPT and Retro on the held-out dataset. We report the results with \(k=2\) neighbors in this Table, and we observe the same trend of improvements with larger \(k\) as in Borgeaud et al. (2022).
Figure 1: Visualization of padding design for Retro.
**Frequency of Retrieval.** In order to efficiently generate long sequences with Retro, we note a flexible trade-off between retrieval-augmented generation and computation overhead. The direct method involves retrieval at every decoding step, maximizing the use of the retrieval module but increasing computational overhead. Another approach retrieves neighbors at the frequency of the chunk size, reducing overhead but sacrificing accuracy (Figure 1(b), retrieval step \(=64\)). To balance these factors, we introduce a flexible retrieval step, which allows model practitioners to choose how many tokens to generate with the current retrieved neighbors before updating the context. Smaller retrieval steps are preferred for downstream tasks with short answers to ensure accurate neighbors, while larger steps are used for efficient generation of long passages. We provide more details in §C.
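The trade-off can be expressed as a simple generation loop with a configurable retrieval step; the callables below are illustrative placeholders for one decoding step of the model and the neighbor retrieval query.

```python
def generate_with_retrieval(prompt, n_new_tokens, retrieval_step, lm_step, retrieve):
    # retrieval_step = 1 refreshes neighbors before every token (most accurate);
    # retrieval_step = chunk_size refreshes once per chunk (cheapest).
    tokens = list(prompt)
    neighbors = retrieve(tokens)
    for i in range(n_new_tokens):
        if i > 0 and i % retrieval_step == 0:
            neighbors = retrieve(tokens)  # update neighbors with the newest context
        tokens.append(lm_step(tokens, neighbors))
    return tokens
```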
#### 4.2.4 Batched Training for Downstream Tasks
When fine-tuning Retro for downstream tasks (e.g., QA), it is crucial to separate the context or question from the candidate answer chunk to maintain causality in autoregressive modeling. This leads to a modified "left padding" rule: pad _context chunks_ from the left and _answer chunks_ from the right (Figure 1(c)). Padding aligns input sequences with the chunk size, enabling batch-mode training and inference for faster evaluation. By adding padding chunks to the right, sequences with varying chunk numbers can be processed together, further improving efficiency.
## 5 Open-ended Text Generation
In this section, we delve into the problem of open-ended text generation, which refers to tasks of generating coherent continuation given the preceding prompt. Given that this problem for Retro has never been studied before, we manage to bridge the gap and evaluate the open-ended text generation of Retro compared to GPT from three aspects: \(a)\) text quality, \(b)\) factuality, and \(c)\) toxicity.
### Text Quality
We perform both automatic and human evaluations.
#### 5.1.1 Automatic Evaluation
**Evaluation Metrics.** We follow prior work Holtzman et al. (2019); Zhu et al. (2018) and consider the following metrics: **Repetition %** measures the percentage of generations containing repetitive phrases, **Self-BLEU** evaluates the diversity of the generations, and **Zipf Coefficient** measures the use of vocabulary. See detailed definitions and the evaluation setup in Appendix §D.1.
**Experimental Results.** Our results are shown in Table 3. We note that Retro can reduce the percentage of repetition compared with GPT by a large margin across different sizes. Specifically, Retro reduces repetition by 21% on average compared with GPT across different sizes. This suggests the retrieval module can help reduce text degeneration by referencing retrieved human text. Regarding vocabulary use and generation diversity, we do not observe major differences between GPT and Retro, which implies these properties are primarily dependent on the decoder component of LMs.
#### 5.1.2 Human Evaluation
We also conduct human evaluations to further verify the quality of the generated text.
**Evaluation Metrics.** We ask human annotators to annotate each generation with fluency scores, which measure the human readability and grammatical errors from 1 (Not human-readable) to 5 (Very fluent), and coherence scores, which measure the relevance between the prompt and the corresponding continuations from 1 (Not Relevant) to 5 (Very Relevant). More details can be found in §D.2.
**Experimental Results.** We present the human vote histogram in Appendix Figure 4. We observe that most votes concentrate on the regime of scores \(>=3\) for both relevance and fluency, which indicates that our generated text from both models is of high quality and closely related to the prompts. The differences between GPT and Retro are subtle, with average relevance (3.726) and fluency (3.826) scores of Retro slightly outperforming the average relevance score (3.715) and fluency (3.818) scores of GPT.
From both automatic and human evaluation, we can conclude that although the generation of
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Metrics**} & \multicolumn{2}{c|}{**Small**} & \multicolumn{2}{c|}{**Medium**} & \multicolumn{2}{c|}{**XL**} & \multicolumn{2}{c}{**XXL**} \\ & GPT & Retro & GPT & Retro & GPT & Retro & GPT & Retro \\ \hline \hline \multirow{2}{*}{Repetition \%} & \(2.86\%\) & \(\mathbf{2.26\%}\) & \(1.70\%\) & \(\mathbf{1.50\%}\) & \(1.44\%\) & \(\mathbf{0.96\%}\) & \(1.40\%\) & \(\mathbf{1.12\%}\) \\ Self-BLEU & \(0.29\) & \(0.3\) & \(0.29\) & \(0.3\) & \(0.29\) & \(0.29\) & \(0.31\) & \(0.31\) \\ Zipf Coefficient & \(0.98\) & \(0.98\) & \(0.96\) & \(0.98\) & \(0.97\) & \(0.98\) & \(0.96\) & \(0.96\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Automatic evaluation on text generation quality for Retro and GPT across different sizes.
Retro adds some complexity, we see no sign of degeneration in Retro compared to GPT. Moreover, Retro reduces repetition and slightly improves text generation quality.
### Factuality
Factuality refers to the consistency of generated text with ground-truth knowledge sources. We leverage two well-established benchmarks (Lee et al., 2022; Lin et al., 2021) to evaluate the factual accuracy of Retro and GPT. The detailed experimental setup and metrics are given in Appendix E.
**Factuality Prompts.** We present the results on the FactualityPrompts benchmark in Table 4(a), where we follow the protocol of Lee et al. (2022) and use different decoding strategies for evaluation. Overall, Retro demonstrates better factual accuracy and produces fewer hallucinations. Our observations also align with Lee et al. (2022) in that, unlike top-\(p\) sampling with its injected randomness, greedy decoding gives much better factual accuracy at the cost of sample diversity. Retro shows a significant improvement over GPT under both top-\(p\) and greedy decoding.
**TruthfulQA.** We also evaluate truthfulness in Table 4(b), where we draw the consistent conclusion that Retro tends to be more resilient to human falsehoods than GPT across different prompt formats (shown in Appendix Tables 10 and 11).
Since the null prompt format yields consistently better performance, we fix the prompt and conduct ablation studies with different retrieval databases. We observe that the pretraining database and the DPR database yield the best factual accuracy. This suggests that (\(i\)) a finetuned retriever or (\(ii\)) retrieval from a larger corpus helps improve factual accuracy. In addition, the improvement margin for Retro on TruthfulQA is not as significant as on FactualityPrompts. This is likely because TruthfulQA is an adversarial dataset containing difficult question types, such as logical falsehoods and indexical errors (e.g., "Are you an artificial intelligence?"; more examples can be found in Appendix Table 12). In these cases, retrieval from the database does not effectively aid in answering the questions.
### Toxicity
The toxicity of LMs refers to the risk that an LM produces toxic generations. In this study, we follow the RealToxicityPrompts benchmark (Gehman et al., 2020) to evaluate the potential toxicity of Retro and GPT.
**Evaluation Metrics.** Following Gehman et al. (2020), we report the _Expected Maximum Toxicity_, which evaluates the toxicity of the worst-case generation, and the _Toxicity Probability_, which estimates the empirical frequency of generating toxic language. See more details and the setup in Appendix F.
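A minimal sketch of these two metrics is shown below, assuming each prompt has a list of per-continuation toxicity scores (e.g., 25 continuations per prompt, following the benchmark's protocol).

```python
import numpy as np

def toxicity_metrics(scores_per_prompt, threshold=0.5):
    """scores_per_prompt: list of lists, one toxicity score per generated continuation."""
    max_per_prompt = np.array([max(scores) for scores in scores_per_prompt])
    expected_max_toxicity = max_per_prompt.mean()                 # worst-case toxicity, averaged over prompts
    toxicity_probability = (max_per_prompt >= threshold).mean()   # fraction of prompts with any toxic continuation
    return expected_max_toxicity, toxicity_probability
```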
**Experimental Results.** The toxicity of the LMs is shown in Table 5. Compared to GPT, Retro with the pretraining corpus actually increases the toxicity of the generations, and the increase is larger for toxic prompts than for nontoxic prompts. This suggests that when Retro is prompted with toxic contexts, it is more likely to retrieve toxic evidence and thus amplify the problem. To confirm this toxicity amplification, we conduct two ablation studies: (\(i\)) We save the retrieval evidence and compute the _Expected Mean Toxicity_ of both the generations and the retrieval evidence. The toxicity of the retrieval evidence is \(0.177\), higher than the toxicity of the generations (\(0.146\)). (\(ii\)) We switch the retrieval database to the Wikipedia database, whose retrieval evidence shows lower toxicity (\(0.132\)). As a result, Retro with the Wikipedia retrieval database helps mitigate the toxicity of GPT, as shown in Table 5, with the toxicity probability dropping from \(37\%\) to \(35\%\). We also note that using a larger \(N\) for the nearest neighbors and filtering the retrieval evidence by toxicity is not very helpful. We hypothesize that with larger \(N\) the similarity between the input and the retrieval evidence is limited, yielding low cross-attention on the retrieval evidence.
\begin{table}
\end{table}
Table 4: Evaluation of factuality and truthfulness of Retro (XL).
## 6 LM Evaluation Harness Benchmark
Besides open-ended text generation, it is also important to examine the generalization of Retro to various downstream tasks, which is likewise missing from the literature. We therefore use the LM Evaluation Harness Benchmark Gao et al. (2021) and consider the following nine representative NLP downstream tasks. See more details in Appendix G.
**Zero-shot evaluation.** We present the zero-shot evaluation results in Table 6. On average, Retro improves downstream task accuracy across the different tasks. Moreover, we observe larger improvements on knowledge-intensive tasks such as HellaSwag and BoolQ (6 of 8 cases), which require factual knowledge to guide reasoning. Note that zero-shot results are sensitive to prompt formats, so they exhibit some variance.
**Retrieval-augmented GPT at inference time.** We have seen that retrieval significantly improves Retro across different downstream tasks in the zero-shot setting. In this ablation study, we prepend the retrieval evidence of Retro to the context to see whether retrieval can also help GPT at inference time. We evaluate the zero-shot accuracy after prepending the top-\(1\) retrieval evidence. The results are shown in Appendix Table 14. Directly prepending the evidence from the retrieval database disrupts the GPT context in the zero-shot setting, yielding a low accuracy of around \(24.5\%\). We hypothesize that the retrieval evidence can be noisy, and without pretraining or proper fine-tuning, zero-shot GPT puts too much attention on this noisy evidence, resulting in low downstream accuracy.
## 7 Open-domain Question Answering
In this section, we study two widely used open-domain QA datasets, Natural Questions (NQ) and TriviaQA.
### Experimental Setup
**Retrieved evidence as context.** The original Retro work leverages the retrieved evidence (i.e., passages) by feeding all of it into the encoder. We
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Tasks**} & \multicolumn{2}{c|}{**Small**} & \multicolumn{2}{c|}{**Medium**} & \multicolumn{2}{c|}{**XL**} & \multicolumn{2}{c}{**XXL**} \\ & GPT & Retro & GPT & Retro & GPT & Retro & GPT & Retro \\ \hline \multicolumn{9}{l}{_Knowledge-intensive Tasks_} \\ \hline \multicolumn{9}{c}{[Per-task accuracies are not recoverable from the extracted source.]} \\ \hline \hline \end{tabular}
\end{table}
Table 6: Zero-shot evaluation of Retro and GPT across different sizes on the LM Evaluation Harness tasks (per-task numbers omitted).
argue that the top most relevant evidence is more important than others and should be used as the context for the question. Therefore, the top relevant evidence should be fed to the decoder, and the rest of the evidence can be incorporated by the encoder. For the implementation in our experiments, we append the top-1 relevant passage at the beginning of the decoder input, and reformat the input with Template A: "title: {title}, source: {source} \(\backslash\)n question: {question} \(\backslash\)n answer: {answer}". For the models without retrieved evidence in the context, we follow Borgeaud et al. (2022) to format the input with Template B: "question: {question} \(\backslash\)n answer: {answer}".
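The two input formats can be summarized by the small sketch below; the field names mirror the templates above, and the function is illustrative rather than our exact preprocessing code.

```python
def format_qa_input(question, answer, evidence=None):
    """Format a QA example with (Template A) or without (Template B) the top-1 evidence."""
    if evidence is not None:
        # Template A: prepend the top-1 retrieved passage to the decoder input.
        return (f"title: {evidence['title']}, source: {evidence['source']} \n"
                f"question: {question} \n answer: {answer}")
    # Template B: question and answer only.
    return f"question: {question} \n answer: {answer}"
```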
In addition to several baseline methods in Table 7, we compare the following models: 1) **GPT (closed-book)** simply finetunes a pretrained GPT model with input Template B, without using any retrieved documents. 2) **RAG\({}_{GPT}\)** applies RAG finetuning Lewis et al. (2020) to GPT, placing retrieved evidence in its context. It uses the top documents retrieved by DPR with input Template A and finetunes a pretrained GPT model, representing the incorporation of retrieval into GPT at the fine-tuning stage. 3) **Retro** encodes the retrieved evidence with the encoder and finetunes a pretrained Retro model with input Template B. 4) **Retro++** finetunes a pretrained Retro model with the top retrieved evidence included in input Template A, while leaving the rest of the evidence to the encoder. More details can be found in Appendix H.
### Results and Analysis
Table 7 shows the results on NQ and TriviaQA. Our Retro++ achieves an Exact Match (EM) score of 54.1 on NQ, which is 8.6 points higher than the original Retro paper. We find that the key to the success of Retro is to feed the top retrieved document from DPR to the decoder as context, which yields a 13.2-point absolute improvement when comparing our Retro and Retro++. Note that our Retro has a lower EM score (40.91) than the original paper (45.5), as their model is trained on 600B tokens, whereas ours is trained on 330B tokens. By comparing RAG\({}_{GPT}\) with Retro++, we show that pretraining an autoregressive LM with retrieval (i.e., Retro++) yields better QA accuracy than only fine-tuning an autoregressive LM with retrieval (i.e., RAG\({}_{GPT}\)). Appendix H.3 gives qualitative studies on NQ.
**Scaling of model sizes.** Figure 2 shows the EM scores of RAG\({}_{GPT}\) and Retro++ on NQ and TriviaQA as the model size is scaled up. The performance of all models increases monotonically with model size, and Retro++ achieves the best performance across all tasks and model sizes.
## 8 Conclusion
In this work, we perform a comprehensive study of retrieval-augmented LMs to answer the question: _Shall we pretrain decoder-only LMs with retrieval?_ We observe consistent improvements in text generation quality, factual accuracy, and downstream task accuracy, as well as lower toxicity, with the gains being especially pronounced for knowledge-intensive tasks, including open-domain QA. Given the \(\sim 25\%\) additional GPU hours for pretraining, we argue that pretraining generative language models with retrieval is a promising direction.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline Method & NQ & TriviaQA \\ \hline GPT (closed-book) & 36.1 & 45.1 \\ REALM Guu et al. (2020) & 40.4 & - \\ DPR Karpukhin et al. (2020) & 41.5 & 56.8 \\ RAG Lewis et al. (2020) & 44.5 & 56.1 \\ RAG\({}_{GPT}\) & 50.9 & 60.9 \\ FiD\({}_{\textit{large}}\) Izacard and Grave (2021) & 51.4 & 67.6 \\ Retro (Ours) & 40.9 & 59.9 \\ Retro Borgeaud et al. (2022) & 45.5 & - \\ Retro++ (Ours) & **54.1** & 66.7 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Comparison of our Retro variants with existing QA models. For each method we report the best results with its largest model configuration.
Figure 2: Comparison of RAG\({}_{GPT}\) and Retro++ on NQ and TriviaQA. Larger models achieve better performance, and Retro++ is consistently better than RAG\({}_{GPT}\) |
2305.11137 | Parallel development of social preferences in fish and machines | What are the computational foundations of social grouping? Traditional
approaches to this question have focused on verbal reasoning or simple
(low-dimensional) quantitative models. In the real world, however, social
preferences emerge when high-dimensional learning systems (brains and bodies)
interact with high-dimensional sensory inputs during an animal's embodied
interactions with the world. A deep understanding of social grouping will
therefore require embodied models that learn directly from sensory inputs using
high-dimensional learning mechanisms. To this end, we built artificial neural
networks (ANNs), embodied those ANNs in virtual fish bodies, and raised the
artificial fish in virtual fish tanks that mimicked the rearing conditions of
real fish. We then compared the social preferences that emerged in real fish
versus artificial fish. We found that when artificial fish had two core
learning mechanisms (reinforcement learning and curiosity-driven learning),
artificial fish developed fish-like social preferences. Like real fish, the
artificial fish spontaneously learned to prefer members of their own group over
members of other groups. The artificial fish also spontaneously learned to
self-segregate with their in-group, akin to self-segregation behavior seen in
nature. Our results suggest that social grouping can emerge from three
ingredients: (1) reinforcement learning, (2) intrinsic motivation, and (3)
early social experiences with in-group members. This approach lays a foundation
for reverse engineering animal-like social behavior with image-computable
models, bridging the divide between high-dimensional sensory inputs and social
preferences. | Joshua McGraw, Donsuk Lee, Justin Wood | 2023-05-18T17:32:59Z | http://arxiv.org/abs/2305.11137v1 | # Parallel development of social preferences in fish and machines
###### Abstract
What are the computational foundations of social grouping? Traditional approaches to this question have focused on verbal reasoning or simple (low-dimensional) quantitative models. In the real world, however, social preferences emerge when high-dimensional learning systems (brains and bodies) interact with high-dimensional sensory inputs during an animal's embodied interactions with the world. A deep understanding of social grouping will therefore require embodied models that learn directly from sensory inputs using high-dimensional learning mechanisms. To this end, we built artificial neural networks (ANNs), embodied those ANNs in virtual fish bodies, and raised the artificial fish in virtual fish tanks that mimicked the rearing conditions of real fish. We then compared the social preferences that emerged in real fish versus artificial fish. We found that when artificial fish had two core learning mechanisms (reinforcement learning and curiosity-driven learning), artificial fish developed fish-like social preferences. Like real fish, the artificial fish spontaneously learned to prefer members of their own group over members of other groups. The artificial fish also spontaneously learned to self-segregate with their in-group, akin to self-segregation behavior seen in nature. Our results suggest that social grouping can emerge from three ingredients: (1) reinforcement learning, (2) intrinsic motivation, and (3) early social experiences with in-group members. This approach lays a foundation for reverse engineering animal-like social behavior with image-computable models, bridging the divide between high-dimensional sensory inputs and social preferences.
**Keywords:** social preferences, social grouping, reinforcement learning, curiosity-driven learning, collective behavior, fish
## Introduction
Social preferences are widespread across the animal kingdom. Individuals spontaneously organize into social groups, including bird flocks, fish shoals, insect swarms, and human crowds. Many animals, including humans, develop social preferences early in life, rapidly learning to favor "us" over "them" during social interactions (e.g., Al-Imari & Gerlai 2008; Engeszer et al. 2004; Kinzler et al., 2007; Molenberghs, 2013). What are the origins of such social preferences? Despite significant interest in social preferences across psychology and neuroscience, we know relatively little about the core learning mechanisms that drive social preferences in human and nonhuman animals. What is the nature of the learning mechanisms--present in newborn animals--that cause social preferences to emerge so rapidly and flexibly?
It has not generally been possible to address this question because the field lacked experimental platforms for directly comparing the development of social preferences across newborn animals and computational models formalizing candidate learning algorithms. As a result, researchers could not test whether particular algorithms can actually learn animal-like social preferences. To evaluate whether a computational model learns like a newborn animal, we argue that an experimental platform must have two features. First, the animals and models must be raised in the same environments. This is essential because social preferences emerge both from the _learning algorithms_ and the _training data_ (experiences) acquired by the agent. Any observed differences in social preferences across animals and models could be due to differences in the algorithms, training data, or some combination of the two factors. Thus, evaluating whether computational models learn like animals requires 'raising' models in the same environments as newborn animals, giving the models and animals access to the same training data. Second, the animals and models must be tested with the same tasks. Psychologists have long recognized that preference behavior is task-dependent. To confirm that animals and models learned the same social preferences, the animals and models must be tested with the same tasks, thereby ensuring that any observed differences across the animals and models are not due to differences in the tasks themselves.
Here we introduce "digital twin" experiments that meet both requirements, allowing newborn animals and artificial animals (embodied learning algorithms) to be raised in the same environments and tested with the same tasks. Digital twin experiments involve first selecting a target animal study, and then creating digital twins (virtual replicas) of the animal environments in a video game engine. Artificial animals are then raised and tested in those virtual environments and their behavior is compared with the behavior of the real animals in the target study. By raising and testing real and artificial animals in the same environments, we can test whether animals and models spontaneously learn common social preferences.
As a starting point, we focused on the social preferences of newborn fish. We chose fish because they can be reared in controlled visual environments, are mobile on the first day after hatching, and rapidly learn social preferences based on visual information (Engeszer et al., 2004; Geng & Peterson, 2019; Hinz & de Polavieja, 2017; Ogi et al., 2021; Tallafuss & Bally-Cuif, 2003). For the target animal study, we focused on Engeszer et al. (2004). In the study, newly-hatched zebrafish (_Danio rerio_) were reared for several months in controlled visual environments that
contained social partners (other fish) with one of two possible pigment types. Once the fish had been reared with social partners of one pigment type, the researchers used a two-alternative forced-choice (2AFC) task to test whether the fish had developed a preference for the familiar pigment type over the novel pigment type. The fish developed a strong preference for the familiar pigment type over the novel pigment type - independently of the fishes' own pigment type. This experiment thus reveals an important role of visual learning in the development of social preferences. To explore which learning algorithms can drive these early-emerging social preferences, we performed two experiments with artificial fish whose behavior was learned through intrinsically motivated reinforcement learning.
## Experiment 1: 2AFC Task
We first explored whether artificial fish learn similar social preferences as real fish when they are reared in the same environments and tested with the same task. To match the 2AFC testing conditions of the fish in Engeszer et al. (2004), we tested the artificial fish in virtual fish tanks that mimicked the real fish tank proportions (Figure 1).
To develop fish-like social preferences, the artificial fish needed to spontaneously learn to prefer other social agents, in the absence of explicit rewards or supervision. The artificial fish also needed to learn how to move through space, developing knowledge of their location and direction (ego-motion) so that they could navigate to their preferred social group. None of these abilities (ego-motion nor social preferences) were hardcoded into the artificial fish.
## Methods
Using a video game engine (Unity), we raised (trained) artificial fish in realistic virtual environments akin to the fish tanks described in Engeszer et al. (2004). Due to its flexibility and power, Unity is an ideal testbed for AI simulation. In particular, Unity's development team actively supports a package known as 'ML-Agents Toolkit' (Juliani et al., 2018), which allows researchers to train artificial agents in virtual worlds. We used the _Unity ML-Agents_ Package version 2.0.1 with Python 3.8.10 and the Python _mlagents_ library version 0.26.0.
**Artificial Fish.** Real fish learn from raw sensory inputs and perform actions in 3D environments, driven by self-supervised learning objectives. Thus, to directly compare the real and artificial fish, we used 'pixels-to-actions' artificial fish that learn from raw sensory inputs and perform actions in 3D environments, driven by self-supervised learning objectives (intrinsic motivation).
We generated the artificial fish by embodying self-supervised learning algorithms in virtual animated fish bodies. The fish bodies measured 1.2 units (length) by 0.7 units (height) and the fish received visual input through an invisible forward-facing camera attached to its head (64\(\times\)64 pixel resolution). The artificial fish could move themselves around the 3D environment by moving either forward, left,
Figure 1: **Experimental Design.****(A)** Rearing conditions of the artificial fish. Each group (orange & blue) was reared in a small white virtual cup—akin to the rearing conditions from Engeszer et al. (2004). **(B)** Testing conditions of the artificial fish. We tested the fish in two tasks. In Experiment 1, we used the 2AFC task from the target animal study (Engeszer et al., 2004). In Experiment 2, we used a self-segregation task that measured whether the fish spontaneously group into “us” versus “them” during social interactions. **(C)** The artificial neural network used in the artificial fish. The blue boxes denote visual encoders (CNNs), the green box denotes the curiosity module (intrinsic reward), and the orange box denotes the policy network used to select actions.
or right on every step. The plane of motion was restricted to a single flat plane, mimicking the action space of a thin, shallow layer of water commonly used in zebrafish research to prevent motion along the height axis.
To construct the artificial fish brains (Figure 1C), we used two biologically-inspired learning mechanisms: (a) deep reinforcement learning and (b) curiosity-driven learning. Deep reinforcement learning provides a computational framework for learning adaptive behaviors from high-dimensional sensory inputs. In reinforcement learning, agents maximize their long-term rewards by performing actions in response to their environment and internal state. To succeed in environments with real-world complexity, agents must learn abstract and generalizable features to represent the environment. Deep reinforcement learning combines reinforcement learning with deep neural networks in order to transform raw sensory inputs into efficient representations that support adaptive behavior. Artificial agents trained through deep reinforcement learning can develop human- and animal-like abilities. For example, artificial agents can learn to play simple and complex video games (e.g., Atari: Mnih et al., 2015; Quake III: Jaderberg et al., 2019), develop advanced motor behaviors (e.g., walking, running, and jumping: Haarnoja et al., 2019), and learn to navigate 3D environments (Banino et al., 2018). We used a standard reinforcement learning algorithm called Proximal Policy Optimization (PPO) (Schulman et al., 2017).
The second mechanism--curiosity-driven learning--can drive the development of complex behaviors through self-supervised learning objectives (e.g., Haber et al., 2018; Oudeyer & Smith, 2016; Schmidhuber, 2010). Curiosity is a popular approach for endowing artificial agents with intrinsic motivation and involves rewarding agents based on how surprised they are by the world. Curiosity promotes learning by motivating agents to seek out informative experiences. By seeking out less predictable (and more informative) experiences, agents can gradually expand their knowledge of the world, continuously acquiring useful experiences for improving perception and cognition. This type of self-supervised learning is thought to resemble learning in real animals, who often learn about the world not by pursuing any specific goal, but rather by playing and exploring for the novelty of the experience (Gopnik et al., 2017).
We implemented curiosity-driven learning using a popular algorithm called Intrinsic Curiosity Module (ICM) (Pathak et al., 2017; Burda et al., 2018). The ICM module contains two separate neural networks: a forward and an inverse model. The inverse model is trained to take the current and next visual observation received by the agent, encode both observations using a single encoder, and use the result to predict the action that was taken between the two observations. The forward model is then trained to take the encoded current observation and action to predict the encoded next observation. The difference between the predicted and real encodings is then used as the intrinsic reward, and fed to the PPO algorithm. Bigger differences mean bigger surprise, which in turn means bigger intrinsic reward. By linking these two models together, the reward captures not only surprising things, but surprising things that the agent has control over (based on the agent's actions). This artificial curiosity allows machines to be trained without any extrinsic rewards from the environment, with learning driven entirely by intrinsic motivation signals.
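To make the mechanism concrete, the following is a minimal PyTorch sketch of the ICM idea described above (a shared encoder, an inverse model, a forward model, and a surprise-based intrinsic reward). It is a simplified illustration rather than the off-the-shelf ML-Agents implementation used in our experiments, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICM(nn.Module):
    """Minimal Intrinsic Curiosity Module (after Pathak et al., 2017)."""
    def __init__(self, obs_dim, n_actions, feat_dim=128):
        super().__init__()
        # Shared encoder phi(s); an MLP stands in for the agent's visual CNN.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                     nn.Linear(256, feat_dim))
        # Inverse model: predict the action taken between phi(s_t) and phi(s_{t+1}).
        self.inverse = nn.Sequential(nn.Linear(2 * feat_dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_actions))
        # Forward model: predict phi(s_{t+1}) from phi(s_t) and the action.
        self.forward_model = nn.Sequential(nn.Linear(feat_dim + n_actions, 256), nn.ReLU(),
                                           nn.Linear(256, feat_dim))

    def forward(self, obs, next_obs, action_onehot):
        phi, phi_next = self.encoder(obs), self.encoder(next_obs)
        action_logits = self.inverse(torch.cat([phi, phi_next], dim=-1))
        phi_next_pred = self.forward_model(torch.cat([phi, action_onehot], dim=-1))
        # Intrinsic reward: the forward model's prediction error ("surprise").
        intrinsic_reward = 0.5 * (phi_next_pred - phi_next.detach()).pow(2).sum(dim=-1)
        inverse_loss = F.cross_entropy(action_logits, action_onehot.argmax(dim=-1))
        icm_loss = inverse_loss + intrinsic_reward.mean()
        return intrinsic_reward.detach(), icm_loss
```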
We used an off-the-shelf ICM algorithm implemented in ML-Agents. ML-Agents allows artificial agents to be configured according to several hyperparameters, including the learning policy, the learning rate, and other common neural network configuration settings (e.g. batch size, layer width, etc.). We used the hyperparameters listed in Table 1. All of the artificial fish had the same network architecture: 2-layer CNN connected to a multilayer perceptron (see Table 1 for hyperparameters).
**Rearing Conditions.** To simulate the rearing conditions of real fish, we created two groups of differently-pigmented artificial fish: four blue fish and four orange fish. The orange fish were reared together in one group, and the blue fish were reared together in a separate group. Each group was reared in a small white virtual cup--akin to the rearing conditions from Engeszer et al. (2004).
During training (Figure 1A), the artificial fish experienced 1,000 training episodes. At the beginning of each episode, the fish were spawned at the center of the environment and rotated in a random direction. Each episode lasted for 1,000 actions ('steps') in the environment. The artificial fish were provided with a 140° field of view and could take one full action on every frame of the simulation. One full action was the combination of two discrete movement choices: (a) [forward or stay] and (b) [rotate left, rotate right, or no rotation]. For example, an agent might decide to move [forward] + [left] on one frame, and then move [forward] + [no rotation] on the next frame. A sharp right turn would then be the result of [stay] + [rotate right] for several frames. Each rotation was \(\sim\)2 degrees along the Y-axis per step.
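For illustration, the sketch below shows how such a two-branch discrete action could be applied to a fish's pose; the step size is an arbitrary placeholder, and only the ~2° rotation increment comes from the description above.

```python
import math

def apply_action(x, y, heading_deg, move_branch, turn_branch,
                 step_size=0.1, turn_deg=2.0):
    """move_branch: 0 = stay, 1 = forward; turn_branch: 0 = none, 1 = left, 2 = right."""
    if turn_branch == 1:
        heading_deg += turn_deg   # rotate left (sign convention is arbitrary here)
    elif turn_branch == 2:
        heading_deg -= turn_deg   # rotate right
    if move_branch == 1:
        rad = math.radians(heading_deg)
        x += step_size * math.cos(rad)
        y += step_size * math.sin(rad)
    return x, y, heading_deg
```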
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline
**trainer\_type: ppo** & **reward\_signals: curiosity** \\ \hline num\_layers: 3 & num\_layers: 3 \\ \hline hidden\_units: 512 & hidden\_units: 128 \\ \hline learning\_rate: 0.0003 & learning\_rate: 0.0003 \\ \hline batch\_size: 256 & strength: 1.0 \\ \hline buffer\_size: 2048 & gamma: 0.99 \\ \hline beta: 0.01 & vis\_encode\_type: simple (2-layer CNN) \\ \hline epsilon: 0.2 & normalize: false \\ \hline lambd: 0.95 & max\_steps: 1000000 \\ \hline learning\_rate\_schedule: linear & time\_horizon: 128 \\ \hline \end{tabular}
\end{table}
Table 1: ML Agents Trainer configuration.
All of the artificial fish had the same learning algorithms, hyperparameters, and network architectures. However, each artificial fish started with a different random initialization of connection weights, and each fish's connection weights were shaped by its own particular experiences during the training phase. The artificial fish were trained for 1 million time steps, using a rack of eight NVIDIA A10 GPUs on a single Linux server environment.
**Testing Conditions.** After the training phase, the brain weights of the artificial fish were frozen for the test phase (i.e., the algorithms did not receive any rewards during the test phase and learning was discontinued). To mimic the testing conditions of the real fish, we tested the artificial fish using a 2AFC task (Figure 1B). Each of the eight artificial fish agents was tested separately across 1,000 test trials. At the start of each test trial, the test fish was placed in the center of the chamber in a random orientation. The chamber contained two shoaling groups (n = 11 fish): one group had a familiar pigment type (i.e., "orange" for fish reared in the orange group and "blue" for fish reared in the blue group), while the other group had a novel pigment type (i.e., "orange" for fish reared in the blue group and "blue" for fish reared in the orange group).
The fish in the two shoals generated the same swimming motion as the fish in the training phase. However, the shoaling fish remained in a stationary configuration across the test trials to prevent the movements of the shoaling fish from influencing the behavior of the focal (test) fish. As a result, the behavior of the fish in the blue and orange groups were identical. Each test trial consisted of 3,000 actions ('steps'). At every time step, we recorded the position of the test fish in (X, Y) coordinates. As with the real fish, we measured the proportion of time that the artificial fish spent in proximity to the group with the familiar pigment type versus the novel pigment type (measured as the distance to the center of the shoal).
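A minimal sketch of this preference measure is given below; it assumes that "proximity" is operationalized as being closer to the familiar shoal's center than to the novel shoal's center at a given time step, which is one simple way to implement the distance-based measure described above.

```python
import numpy as np

def familiar_preference(positions, familiar_center, novel_center):
    """positions: (n_steps, 2) array of the test fish's (X, Y) coordinates per step."""
    pos = np.asarray(positions, dtype=float)
    d_familiar = np.linalg.norm(pos - np.asarray(familiar_center, dtype=float), axis=1)
    d_novel = np.linalg.norm(pos - np.asarray(novel_center, dtype=float), axis=1)
    # Proportion of time steps spent nearer to the familiar shoal (chance = 0.5).
    return float(np.mean(d_familiar < d_novel))
```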
### Results & Discussion
**Group performance.** Figure 2A shows the social preferences of the artificial fish. On the group level, the artificial fish spent significantly more time near the group with the familiar pigment type versus the novel pigment type (\(t(7)=5.27\), \(p<.00001\)). On average, the group spent 85.9% (SEM = 8%) of their time with in-group versus out-group fish (chance = 50%), indicating that artificial fish can develop fish-like social preferences when reared in similar environments as real fish.
**Individual-subject performance.** Since we collected a large number of test trials from each artificial fish, we also explored whether individual differences emerged across the
Figure 2: **(A)** Results on the 2AFC task. On the group level, the fish reared in the orange group spent more time with orange fish versus blue fish, and the fish reared in the blue group spent more time with blue fish versus orange fish. On the individual level, all of the artificial fish developed statistically significant social preferences for in-group members. The black dashed line denotes chance performance, and the red line denotes the fish performance from Engeszer et al. (2004). **(B)** Results on the self-segregation task. On the group level, the fish reared in the orange group self-segregated with orange fish versus blue fish, and the fish reared in the blue group self-segregated with blue fish versus orange fish. On the individual level, all eight of the artificial fish were more likely to self-segregate with in-group members. Lighter colors indicate smaller distances between fish. Significance levels are indicated: (*p \(\leq\).05, **p \(\leq\).01, ***p \(\leq\).001).
fish. To test for the presence of individual differences, we examined whether the identity of the subject was a predictor of social grouping behavior. A one-way ANOVA showed that the identity of the subject was a strong predictor of performance: \(F(7,\,999)=8.923,\,p<.00001\). All eight of the artificial fish showed a statistically significant preference for the in-group versus the out-group (all \(P\)s \(<.00001\)). Five of the artificial fish developed a strong preference for in-group members, spending more than 90% of their time with the familiar group. The other three fish developed weaker social preferences, spending 55% to 63% of their time with in-group members. Despite being trained in identical visual environments, the artificial fish developed different social behaviors from one another.
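The subject-identity test can be sketched as a one-way ANOVA over per-trial preference scores grouped by subject; the snippet below, using scipy, illustrates the analysis (the variable names are illustrative).

```python
from scipy import stats

def subject_identity_effect(scores_by_subject):
    """scores_by_subject: dict mapping subject id -> list of per-trial preference scores."""
    f_stat, p_value = stats.f_oneway(*scores_by_subject.values())
    return f_stat, p_value

# Example call (hypothetical data):
# subject_identity_effect({"fish_1": [0.93, 0.95, 0.91], "fish_2": [0.58, 0.61, 0.60]})
```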
## Experiment 2: Self-Segregation Task
A defining signature of social preferences in the real world is that they drive animals to self-segregate into social groups. To test whether our artificial fish developed this signature of social grouping, we created a self-segregation task that involved placing all of the artificial fish in the same environment and measuring whether the fish spontaneously self-segregate into groups based on their pigment type. To be clear, Exp. 2 was not a digital twin experiment of a prior animal study, but rather a validity check that the 2AFC task used in Exp. 1 reliably captures the core construct under investigation: social grouping. If so, then the artificial fish from Exp. 1 should self-segregate into groups.
**Rearing Conditions.** The rearing conditions were identical to those used in Experiment 1.
**Testing Conditions.** The self-segregation task (Figure 1B) involved placing all of the 8 trained artificial fish (4 orange fish and 4 blue fish) into a single environment. At the start of each trial, the fish were centered in the environment, oriented randomly, and then allowed to freely move and interact with other fish. To measure whether the fish self-segregated, we measured the Euclidean distance between each fish and every other fish at each time step, then computed the in-group distance (i.e., the average distance to fish of the same color) and the out-group distance (i.e., the average distance to fish of unfamiliar color). We tested the fish across 1,000 trials. Each trial lasted 3,000 time steps.
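The in-group and out-group distance measures can be sketched as follows, assuming per-time-step positions and group labels for all eight fish.

```python
import numpy as np

def segregation_distances(positions, groups):
    """positions: (n_fish, 2) array at one time step; groups: length-n_fish list of labels."""
    positions = np.asarray(positions, dtype=float)
    in_group, out_group = [], []
    for i in range(len(groups)):
        for j in range(len(groups)):
            if i == j:
                continue
            d = np.linalg.norm(positions[i] - positions[j])
            (in_group if groups[i] == groups[j] else out_group).append(d)
    # Smaller in-group than out-group distance indicates self-segregation.
    return float(np.mean(in_group)), float(np.mean(out_group))
```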
### Results & Discussion
**Group performance.** Figure 2B shows the self-segregation behavior of the artificial fish. On the group level, the artificial fish spent significantly more time near fish with familiar versus novel colors, \(t(7)=15.5,\,p<.0001\). The distance to in-group members was significantly smaller than the distance to out-group members, indicating that the artificial fish spontaneously learned to self-segregate into social groups.
**Individual-subject performance.** As illustrated in Figure 2B, all eight of the artificial fish showed a statistically significant preference for in-group versus out-group members (all \(P\)s \(<.00001\)). To test for the presence of individual differences, we examined whether the identity of the subject was a predictor of self-segregation behavior. A one-way ANOVA showed that the identity of the subject was a strong predictor of performance: \(F(7,\,999)=720.5,\,p<.00001\). These results extend the results from Experiment 1, showing that artificial fish can spontaneously develop self-segregation behavior, favoring "us" over "them."
## Discussion
We performed digital twin experiments, in which newborn fish and artificial fish were raised and tested in the same visual environments. This approach permits direct comparison of whether animals and machines learn common social preferences when exposed to the same experiences (training data). In this paper, we explored whether artificial fish can spontaneously learn fish-like social preferences, leading to social grouping and a preference for "us" over "them." We found that fish-like social grouping spontaneously develops in artificial fish who are equipped with reinforcement learning and curiosity-driven learning. These social preferences emerged when artificial fish were reared in similar visual (and social) environments as real fish [11].
**Origins of social grouping.** Biologists and psychologists have proposed a range of theories about the origins and computational foundations of social behavior [e.g., 14, 15]. Some theorists have argued that social behavior emerges from innate, domain-specific learning mechanisms for representing social partners and for categorizing the social world into groups [e.g., 13], whereas others have argued that social behavior emerges from domain-general learning mechanisms that become specialized for social cognition during development [e.g., 12]. It has not been possible to distinguish between these theories because researchers could not test whether candidate learning mechanisms are sufficient to produce animal-like social behavior. Do embodied agents actually need innate, domain-specific mechanisms to learn about the social world? Or can social knowledge emerge from domain-general mechanisms shaped by embodied social experiences?
Our experiments provide an existence proof that social grouping can emerge from domain-general mechanisms. Neither reinforcement learning nor curiosity-driven learning were designed to produce social grouping in embodied agents. Nevertheless, when these learning algorithms are embodied and trained in realistic social environments, animal-like social preferences spontaneously develop. Thus, we hypothesize that social grouping is an _emergent property_ of three ingredients: reinforcement learning, intrinsic motivation (e.g., curiosity-driven learning), and embodied social experiences with in-group members.
Consequently, evolution would not have needed to discover innate, domain-specific mechanisms to produce social behavior. Rather, once evolution discovered domain-general mechanisms, animals could have rapidly
learned social behavior during early development. Thus, we speculate that the computational foundations of social behavior are _domain-general learning mechanisms_ (e.g., reinforcement learning and intrinsic motivation), which allow animals to rapidly learn domain-specific social knowledge through early social experiences.
Why do social preferences emerge from domain-general learning mechanisms? Perhaps counterintuitively, we found that curiosity (i.e., a preference for the unpredictable) during early stages of life drove social grouping (i.e., a preference for familiar groupmates). This preference for familiar groupmates developed in the artificial fish because the other fish were the most unpredictable things in the environment. Accordingly, curiosity-driven learning motivates agents to learn about groupmates, leading to the development of collective behavior (Lee, Wood, & Wood, 2021) and social grouping. We emphasize that curiosity-driven learning must be subject to a learning window to produce social grouping; if learning remains permanently "active," then agents will continue to seek unpredictable experiences and will likely not develop a preference for in-group members. Critically, there is ample evidence for learning windows in nature. For example, filial imprinting occurs during a short sensitive period immediately after birth, during which animals develop a lasting attachment to groupmates seen early in life (Horn, 2004; Patterson & Bigler, 2006; Tinbergen, 1951). We speculate that reverse engineering animal behavior and intelligence will require not only finding the right learning mechanisms, but will also require raising animals and machines in the same environments and constraining machine learning to similar learning windows as real animals.
**Image computable models of social grouping.** Lee et al. (2021) introduced pixels-to-actions models of collective behavior that spontaneously develop animal-like grouping behavior. Our study extends this approach to the domain of social preferences. Pixels-to-actions models are valuable because they formalize the mechanisms underlying the entire learning process, from sensory inputs to behavioral outputs. As a result, these image computable models can be directly compared against real animals, falsified, and refined.
These results set the stage for many exciting future directions. With the discovery of a model class that spontaneously learns similar social preferences as real animals, we can now search through the model class to find particularly strong models, via a continuous cycle of model creation, model prediction, and model testing against experimental results. This approach can move the field forward by encouraging the creation of neurally mechanistic pixels-to-actions models that learn from the same embodied data streams as newborn animals. Over time, we can cull models that are less accurate and focus attention on improving and extending the most accurate models.
The most promising models can then be investigated in richer detail, leading to greater intuitive understanding of the underlying mechanisms. Controlled comparisons with different architectures, objective functions, and learning rules could define the necessary and sufficient learning algorithms for animal-like social behavior. Likewise, controlled comparisons using the same learning algorithms, but different training data (from different controlled-rearing experiments), could reveal which experiences are necessary and sufficient to develop animal-like social behavior.
Finally, pixels-to-actions models are valuable because they allow integration of research findings across diverse fields. For instance, while it was not our primary goal, we observed robust evidence for 'personality differences' in the artificial fish, with some fish strongly preferring to spend most of their time with groupmates and other fish developing less strong social preferences. We emphasize that the artificial fish had identical learning algorithms and were raised in the same environment. The only difference between the fish was the initial random configuration of their brain weights and the particular experiences that the fish acquired during the training phase. Nevertheless, these differences were sufficient to produce personality differences in machines. We suspect that the complex group dynamics that emerged during the training phase resulted in different learning opportunities (training data) for each fish, which led each fish to learn different policies (personalities) from one another. Future research could explore this idea directly by characterizing how the particular experiences encountered by artificial fish lead to various personality differences (e.g., sociality, boldness, novelty seeking).
Importantly, pixels-to-actions models allow researchers to study different phenomena (i.e., social preferences and personality differences) in parallel, using the same integrative pixels-to-actions model. Future studies might further expand the reach of these integrative models, by extending this approach to other collective behavior topics, including leadership, foraging, and navigation.
## Conclusion
In sum, we present a pixels-to-actions model of social preferences, which indicates that we have isolated a set of learning mechanisms that are sufficient for learning social preferences in embodied agents. When trained in naturalistic environments, domain-general learning mechanisms are sufficient to drive social grouping. We thus see these results as a starting point for building an engineering-level understanding of the mechanisms that underlie social behavior. Our results complement a growing body of work using deep neural networks to model the visual (Yamins et al., 2014), auditory (Kell et al., 2018), and motor (Michaels et al., 2020) systems, and extend this "reverse engineering" approach to the study of social grouping: a behavior with deep historical roots in psychology, neuroscience, and biology.
## Acknowledgments
Funded by NSF CAREER Grant BCS-1351892 and a James S. McDonnell Foundation Understanding Human Cognition Scholar Award. |
2304.11578 | A non-singular bouncing cosmology in $ f(R,T) $ gravity | We investigate a bounce realization in the framework of higher order
curvature in $ f(R,T) $ modified theory of gravity. We perform a detailed
analysis of the cosmological parameters to explain the contraction phase, the
bounce phase, and the expansion phase. Furthermore, we observe a violation of
the null energy condition, instability of the model, and a singularity upon
deceleration at the bouncing point, which are the supporting results for a
bouncing cosmology. The outcome of the slow roll parameters is satisfactory to
understand the inflation era and the equation of state parameter exhibits a
ghost condensate behavior of the model near the bounce. Additionally, we
discuss the stability of the model using linear perturbations in the Hubble
parameter as well as the energy density. | J. K. Singh, Shaily, Akanksha Singh, Aroonkumar Beesham, Hamid Shabani | 2023-04-23T08:23:44Z | http://arxiv.org/abs/2304.11578v1 | # A non-singular bouncing cosmology in \(f(R,T)\) gravity
###### Abstract
We investigate a bounce realization in the framework of higher order curvature in \(f(R,T)\) modified theory of gravity. We perform a detailed analysis of the cosmological parameters to explain the contraction phase, the bounce phase, and the expansion phase. Furthermore, we observe a violation of the null energy condition, instability of the model, and a singularity upon deceleration at the bouncing point, which are the supporting results for a bouncing cosmology. The outcome of the slow roll parameters is satisfactory to understand the inflation era and the equation of state parameter exhibits a ghost condensate behavior of the model near the bounce. Additionally, we discuss the stability of the model using linear perturbations in the Hubble parameter as well as the energy density.
PACS numbers: 98.80 Cq.
Keywords: Bouncing cosmology, \(f(R,T)\) gravity, FLRW metric, Parametrization.
## I Introduction
To study the different eras of the Universe, it is important to analyze the underlying scenario from inflationary cosmology to structure formation, which is useful for probing the initial structure of the Universe. Standard cosmology raises several issues related to inflationary cosmology, such as the initial singularity problem, and to avoid this problem non-singular bounce cosmology has been discussed by various authors. Although models of bouncing cosmology face serious challenges like the ghost and gradient instability problems [1; 2; 3; 4; 5], it is believed that the Universe can change its phase from contraction to expansion without encountering a singularity of the kind present in standard cosmology. As a result, the big bang singularity can be replaced by a non-singular cosmic bounce, which provides a smooth transition from contraction to expansion of the Universe. In a number of toy models, it is observed that the null energy condition (NEC) is violated in non-singular bouncing cosmology. Indeed, violation of the NEC can occur in, e.g., generalized Galileon theories, which support the possibility of non-singular cosmology. In recent times, big bounce cosmology has become a very interesting topic in the field of modified theories of gravity, and various kinds of bouncing cosmological models have been discussed in different modified theories. Bounce cosmology has been studied in, inter alia, \(f(R)\) gravity [6; 7; 8; 9; 10; 11; 12; 13], \(f(R,T)\) gravity [14; 15; 16; 17; 18], \(f(R,G)\) gravity [19; 20; 21; 22; 23], and \(f(T,B)\) gravity [24]. Smooth and slow phase transitions have also been studied by various authors.
In order to explain the bouncing scenario of the Universe, the following conditions will be examined. (1) The NEC should be violated as the Universe changes its phase from contraction to expansion (\(H\) changes sign), _i.e._, the NEC should be violated in the neighborhood of the bouncing point [15; 25; 26]. (2) From the bouncing point (\(H=0\)), the scale factor starts increasing as the Universe enters its expansion phase. The deceleration parameter is singular at \(H=0\), since \(q=-1-\dot{H}/H^{2}\); at late times, the deceleration parameter indicates cosmic acceleration. (3) In a successful bouncing model, the EoS parameter (\(w\)) crosses the phantom line \(w=-1\) near the transition point [27; 28; 29]. For a successful bouncing model, these three conditions are required to be fulfilled. However, in some bouncing models, like the loop quantum bounce, the NEC need not be violated [30]. Other theories of this type are the non-singular matter bounce scenarios, i.e., cosmological models with an initial state of matter-dominated contraction followed by a non-singular bounce [31]. These models provide an alternative to inflationary cosmology for creating the observed spectrum of cosmological fluctuations [32; 33; 34; 35]. In these theories, some matter fields are introduced in such a way that the WEC is violated in order to make \(\dot{H}>0\) at the bounce. It can be seen that, setting aside the correction term, the time derivative of the Hubble parameter remains negative for all fluids that respect the WEC. Therefore, in order to obtain a bouncing cosmology it is necessary either to go beyond the GR framework or to introduce new forms of matter which violate the key energy conditions, i.e., the null energy condition (NEC) and the WEC.
For a successful bounce, it can be shown that, within the context of the standard cosmological model (SCM), the NEC and thus the WEC are violated for a period of time around the bouncing point. In the context of matter bounce scenarios, many authors have studied quintom matter [36], Lee-Wick matter [37], the ghost condensate field [38], and the Galileon and phantom fields [39; 40; 41]. Bouncing models have also been constructed using various approaches to modified gravity, such as \(f(R)\) gravity [42; 43; 44], teleparallel \(f(T)\) gravity [45; 46], Einstein-Cartan theory [47; 48; 49; 50; 51; 52] and others [53]. Some other cosmological models, such as the ekpyrotic model [54] and string cosmology [55; 56], which are alternatives to both inflation and matter bounce scenarios, have also been discussed. To improve the consistency of the model with observations, a potential function is included in the action. For the flat FLRW space-time metric, a scalar field is associated with the potential function. From the background dynamics of the inflaton field, it is realized that, during a sufficiently long inflationary phase, the slow roll parameters turn out to be much less than unity [57; 58].
There are different motivations for including the trace of the energy-momentum tensor in the action: i) the existence of some exotic matter which affects the evolution of the Universe [59; 60]; ii) possible unknown matter-gravity interactions; iii) quantum effects in the form of conformal anomalies [2]. Cai _et al._[4] discuss a non-singular bounce cosmology with a single scalar field and matter, explaining the role of the slow roll parameters as well as the spectral index and the tensor-to-scalar ratio. Odintsov _et al._[61] also discuss many parameters describing inflaton cosmology in a bouncing scenario. Singh _et al._ and Mishra _et al._ study the dynamical parameters of the bouncing Universe in \(f(R,T)\) gravity [14; 15]. S. Nojiri _et al._[62] place emphasis on bouncing cosmology in various modified theories of gravity. Within the theory of bouncing cosmologies, an exactly scale-invariant power spectrum of primordial curvature perturbations can be obtained, along with a detailed description of the evolution: during the contraction era, the matter bounce scenario is realized; during the expansion era, entropy is conserved and the perturbation modes grow with cosmic time. Moreover, the continuous cycle of cosmological bounces can be stopped if a crushing singularity takes place at the end of the expanding era. Odintsov _et al._ studied a deformed matter bounce scenario [63], where it was observed that the infinitely repeating evolution of the Universe stops at the final attractor of the theory, which is a Big Rip singularity.
In this article, we discuss our work in the following order: Sec. II starts with the action principle and basic formulas to discuss the Einstein field equations (EFE) for higher-order Ricci curvature. Further, to examine the bouncing scenario, we discuss some physical criteria, e.g., the cosmological parameters, the null energy condition, scalar field theory, and the stability of the model in Sec. III. And afterward, in Sec. IV, we conclude our work with physically admissible results for a non-singular bouncing cosmology. In the following, we use units \(c=h=1\).
## II Einstein field equations in \(f(R,T)\) gravity
For modified \(f(R,T)\) gravity theory, the total gravitational action is given as
\[\mathcal{A}=\int\Bigg{[}\frac{f(R,T)}{16\pi G}+\mathcal{A}_{m}\Bigg{]}\sqrt{-g} d^{4}x, \tag{1}\]
where \(\mathcal{A}_{m}\) is the matter Lagrangian, \(R\) is the Ricci curvature, \(T\) is the trace of the energy momentum tensor (EMT), \(G\) is the gravitational constant and the function \(f(R,T)\) is taken to be the following coupling of \(R\) and \(T\), _i.e._, \(f(R,T)=f_{1}(R)+2f_{2}(T)\)[64]. By varying the action (1) _w.r.t._ the metric tensor \(g_{ij}\), the gravitational field equations read [65]
\[f_{1}^{R}(R)R_{ij}-\frac{1}{2}g_{ij}(f_{1}(R)+2f_{2}(T))+(g_{ij}\Box-\nabla_{i }\nabla_{j})f_{1}^{R}(R)=8\pi GT_{ij}-2f_{2}^{T}(T)(T_{ij}+\Theta_{ij}), \tag{2}\]
where \(f_{1}^{R}(R)\) and \(f_{2}^{T}(T)\) represent the derivatives of functions \(f_{1}\), \(f_{2}\)_w.r.t._\(R\) and \(T\), respectively. \(\nabla_{i}\) denotes the covariant derivative _w.r.t_\(g_{ij}\), and \(\Box\) is the D'Alembert operator. \(\Theta_{ij}\) can be written as
\[\Theta_{ij}=g^{\mu\nu}\frac{\delta T_{\mu\nu}}{\delta g_{ij}}=-2T_{ij}+g_{ij} \mathcal{A}_{m}-2g^{\mu\nu}\frac{\delta^{2}\mathcal{A}_{m}}{\delta g_{ij} \delta g_{\mu\nu}}. \tag{3}\]
Now, we assume the matter Lagrangian \(\mathcal{A}_{m}=-p\) for a perfect fluid whose EMT is given as \(T_{ij}=(\rho+p)u_{i}u_{j}-pg_{ij}\), where \(\rho\) and \(p\) denote the energy density and pressure in the Universe. Using these relations, we obtain \(\Theta_{ij}=-2T_{ij}-pg_{ij}\). Therefore, Eq. (2) can be written as
\[f_{1}^{R}(R)R_{ij}-\frac{1}{2}g_{ij}(f_{1}(R)+2f_{2}(T))+(g_{ij}\Box-\nabla_{i }\nabla_{j})f_{1}^{R}(R)=8\pi GT_{ij}+2f_{2}^{T}(T)(T_{ij}+pg_{ij}). \tag{4}\]
Eq. (4) can be expressed in the standard form as
\[G_{ij}=R_{ij}-\frac{1}{2}Rg_{ij}=\frac{8\pi G}{f_{1}^{R}(R)}(T_{ij}+T_{ij}^{*}). \tag{5}\]
where
\[T_{ij}^{*}=\tfrac{1}{8\pi G}\Big{[}\tfrac{1}{2}g_{ij}((f_{1}(R)+2f_{2}(T))-Rf_ {1}^{R}(R))+(\nabla_{i}\nabla_{j}-g_{ij}\Box)f_{1}^{R}(R)+2f_{2}^{T}(T)(T_{ij} +pg_{ij})\Big{]}\]
Next, we consider a homogeneous and isotropic flat Friedmann-Lemaitre-Robertson-Walker (FLRW) metric
\[ds^{2}=dt^{2}-a^{2}(t)(dx^{2}+dy^{2}+dz^{2}). \tag{6}\]
Here \(a(t)\) is the scale factor. From the EMT expression, the trace \(T\) can be calculated as
\[T=\rho-3p, \tag{7}\]
and the value of the Ricci scalar is
\[R=-6(2H^{2}+\dot{H}). \tag{8}\]
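As a quick consistency check of Eq. (8), the following sketch (assuming `sympy` is available; the symbol names are ours) verifies that \(-6(2H^{2}+\dot{H})\) agrees with the Ricci scalar of the flat FLRW metric written directly in terms of the scale factor, under the sign conventions adopted here:

```python
import sympy as sp

t = sp.symbols('t')
a = sp.Function('a')(t)            # generic scale factor a(t)
H = sp.diff(a, t) / a              # Hubble parameter H = a'/a

# Ricci scalar of the flat FLRW metric (6) in terms of a(t), with this sign convention
R_from_a = -6 * (sp.diff(a, t, 2) / a + (sp.diff(a, t) / a) ** 2)
# Eq. (8): R = -6 (2 H^2 + Hdot)
R_from_H = -6 * (2 * H ** 2 + sp.diff(H, t))

print(sp.simplify(R_from_a - R_from_H))   # prints 0
```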
By considering \(f_{1}(R)=R+\zeta R^{m}\) and \(f_{2}(T)=\lambda T^{k}\), we can find the Einstein field equations (EFE) as
\[3H^{2}=\frac{1}{1+\zeta mR^{m-1}}[8\pi\rho+\frac{1}{2}(\zeta(1-m)R^{m}+2 \lambda T^{k})-3H\zeta m(m-1)R^{m-2}\dot{R}+(\rho+p)2\lambda kT^{k-1}],\]
or
\[3H^{2}=\frac{1}{1+\zeta mR^{m-1}}[8\pi\rho+\frac{1}{2}(\zeta(1-m )(-6(2H^{2}+\dot{H}))^{m}+2\lambda T^{k})]+\] \[\frac{1}{1+\zeta mR^{m-1}}[-3H\zeta m(m-1)(-6(2H^{2}+\dot{H}))^{m -2}(-6(4H\dot{H}+\ddot{H}))+(\rho+p)2\lambda kT^{k-1}],\]
\[2\dot{H}+3H^{2}=\frac{-1}{a^{2}(1+\zeta mR^{m-1})}[8\pi pa^{2}+\frac{1}{2}(-a ^{2})(\zeta(1-m)R^{m}+2\lambda T^{k})+a\zeta m(m-1)R^{m-3}(a((m-2)\dot{R}^{2}+ R\ddot{R})+3\dot{a}R\dot{R})].\]
or
\[2\dot{H}+3H^{2}=\frac{-1}{a^{2}(1+\zeta mR^{m-1})}[8\pi pa^{2}+ \frac{1}{2}(-a^{2})(\zeta(1-m)(-6(2H^{2}+\dot{H}))^{m}+2\lambda T^{k})+\] \[a\zeta m(m-1)(-6(2H^{2}+\dot{H}))^{m-3}(a((m-2)(-6(4H\dot{H}+ \ddot{H}))^{2}+(-6(2H^{2}+\dot{H}))(-6(4H\ddot{H}+4\dot{H}^{2}+\dddot{H})))+\] \[3\dot{a}(-6(2H^{2}+\dot{H}))(-6(4H\dot{H}+\ddot{H})))]\]
Since the above equations are not easy to solve, to simplify the calculations we take \(m=2\) and \(k=1\). Now, with \(f_{1}(R)=R+\zeta R^{2}\), as introduced by Starobinsky [66], and \(f_{2}(T)=\lambda T\), the Einstein field equations for the model become
\[(8\pi+3\lambda)\rho-\lambda p=3H^{2}+18\zeta(\dot{H}^{2}-6H^{2}\dot{H}-2H\ddot {H}), \tag{9}\]
\[(8\pi+3\lambda)p-\lambda\rho=-2\dot{H}-3H^{2}+6\zeta(26\dot{H}H^{2}+2\dddot{H}+14H\ddot{H}+9\dot{H}^{2}). \tag{10}\]
The energy density and pressure can be obtained from Eqs. (9) and (10), but for further discussion a parametrization technique is needed. Thus, to construct a bouncing cosmological model satisfying the conditions mentioned in Sec. I, we consider the bouncing scale factor \(a(t)=(\alpha+\beta t^{2})^{\frac{1}{n}}\)[67], where \(\alpha\), \(\beta\), and \(n\) are positive constants. Thereafter, using the relations \(H=\dot{a}/a\) and \(q=-1-\dot{H}/H^{2}\), we obtain the Hubble parameter and the deceleration
parameter as follows:
\[H = \frac{2\beta t}{n(\alpha+\beta t^{2})}, \tag{11}\] \[q = \frac{1}{2}\left(-\frac{\alpha n}{\beta t^{2}}+n-2\right). \tag{12}\]
From Eqs. (11) and (12) one finds that at early and late times the Hubble parameter vanishes and the deceleration parameter approaches the limiting value \((n-2)/2\). Also, \(H=0\) and \(q\) diverges at the bouncing point. Since \(\alpha\), \(\beta\), and \(n\) take positive values, to investigate the cosmological parameters we fix \(\alpha\) at \(0.887\) and consider small variations of the constants \(\beta\) and \(n\), as indicated in the figures. In Fig. 1, all three plots of the Hubble parameter, scale factor, and deceleration parameter are consistent with the prescribed bouncing conditions. In the upper left panel of Fig. 1, the Hubble parameter starts with a negative value (contracting phase), then crosses the bouncing point (\(H=0\)) and enters the expanding phase (\(H>0\)). At the bouncing point, the scale factor attains its minimum value and thereafter starts increasing (see the upper right panel of Fig. 1). The graphical behavior of the deceleration parameter can be seen in the lower panel of Fig. 1: \(q\) diverges at the bouncing point and approaches the asymptotic value of \(-1\) at late times. Furthermore, to analyze the dynamical behavior of the Universe, we proceed with this parametrized form of \(a(t)\).
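The limiting behavior of Eqs. (11) and (12) can be verified numerically with a short script; this is only an illustrative sketch, and the values chosen below for \(\beta\) and \(n\) are placeholders rather than the ones used in the figures:

```python
alpha, beta, n = 0.887, 0.5, 1.2   # illustrative parameter values

def hubble(t):
    """Hubble parameter of Eq. (11) for a(t) = (alpha + beta t^2)^(1/n)."""
    return 2.0 * beta * t / (n * (alpha + beta * t ** 2))

def deceleration(t):
    """Deceleration parameter of Eq. (12)."""
    return 0.5 * (-alpha * n / (beta * t ** 2) + n - 2.0)

print(hubble(0.0))                        # H = 0 at the bounce
print(hubble(-1.0) < 0.0 < hubble(1.0))   # contraction before, expansion after
print(deceleration(1.0e6))                # approaches (n - 2)/2 at late times
print(deceleration(1.0e-6))               # diverges as t -> 0
```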
## III Dynamical analysis of the bouncing evolution
The EoS parameter plays a pivotal role in the investigation of bounce cosmology. For this purpose, the energy density and pressure can be calculated from Eqs. (9), (10) as
\[\rho=\beta\frac{a_{1}t^{6}+a_{2}t^{4}+a_{3}t^{2}+a_{4}}{2n^{3}( \lambda+2\pi)(\lambda+4\pi)(\alpha+\beta t^{2})^{4}}, \tag{13}\]
where
\[a_{1} = n\beta^{3}\big{[}\lambda(n+6)+24\pi\big{]},\] \[a_{2} = \beta^{2}\Big{\{}48\beta\zeta(7\lambda+36\pi)+\lambda n^{2}( \alpha-36\beta\zeta)+12n\big{[}\lambda(\alpha+5\beta\zeta)+4\pi(\alpha-9\beta \zeta)\big{]}\Big{\}},\] \[a_{3} = \alpha\beta\Big{\{}-48\beta\zeta(7\lambda+36\pi)-\lambda n^{2}( \alpha-216\beta\zeta)+6n\big{[}\lambda(\alpha-12\beta\zeta)+4\pi(\alpha+60 \beta\zeta)\big{]}\Big{\}},\] \[a_{4} = n\alpha^{2}\Big{\{}36\beta\zeta\big{[}4\pi-\lambda(n-3)\big{]}- \alpha\lambda n\Big{\}},\]
and
\[p=\beta\frac{b_{1}t^{6}+b_{2}t^{4}+b_{3}t^{2}+b_{4}}{2\lambda n^{3}(\lambda+2 \pi)(\lambda+4\pi)(\alpha+\beta t^{2})^{4}}, \tag{14}\]
where
\[b_{1} =n\lambda\beta^{3}\big{[}3\lambda(n-2)+8\pi(n-3)\big{]},\] \[b_{2} =\beta^{2}\Bigg{\{}3\lambda^{2}(n-4)\big{[}\alpha(144\beta+n)-12 \beta\zeta(3n+7)\big{]}+8\pi\lambda\big{[}324\alpha\beta(n-4)+\alpha(n-6)n-6\beta \zeta(n-4)(6n+41)\big{]},\] \[+3456\pi^{2}\beta(n-4)(\alpha-\zeta)\Bigg{\}},\]
Figure 1: The graphical behavior of \(H\), \(a\) and \(q\). Note that the values \(\alpha=0.887\), \(\zeta=0.1\) and \(\lambda=2.5\) have been used in Figs. 1-6. Also, we have used the same graphical options in these figures.
\[b_{3}=\alpha\beta\Bigg{\{}-3\lambda^{2}\big{[}24\beta\zeta\left(-9n^{2 }+3n+14\right)+96\alpha\beta(5n-6)+\alpha n(n+2)\big{]}-2304\pi^{2}\beta(5n-6)( \alpha-\zeta),\] \[-8\pi\lambda\big{[}984\beta\zeta+216\alpha\beta(5n-6)+\alpha n(n+ 3)-36\beta\zeta n(6n+13)\big{]}\Bigg{\}},\] \[b_{4}=\alpha^{2}n\Bigg{\{}-1152\pi^{2}\beta(\alpha-\zeta)-3 \lambda^{2}\big{[}\alpha(48\beta+n)+36\beta\zeta(n-3)\big{]}-8\pi\lambda\big{[} \alpha(108\beta+n)+18\beta\zeta(2n-9)\big{]}\Bigg{\}},\]
and the EoS parameter (\(w\)) can be obtained from the equation of state as,
\[w=\frac{p}{\rho}. \tag{15}\]
By considering Eqs. (13)-(15) the following limiting results can be obtained
\[\lim_{t\to 0}\rho=\beta\frac{a_{4}}{d}, \lim_{t\rightarrow\infty}\rho=0, \tag{16}\] \[\lim_{t\to 0}p=\frac{\beta}{\lambda}\frac{b_{4}}{d}, \lim_{t\rightarrow\infty}p=0,\] (17) \[\lim_{t\to 0}w=\frac{1}{\lambda}\frac{b_{4}}{a_{4}}, \lim_{t\rightarrow\infty}w=\frac{1}{\lambda}\frac{b_{1}}{a_{1}},\] (18) \[d=2\alpha^{4}(\lambda+2\pi)(\lambda+4\pi)n^{3}. \tag{19}\]
Therefore, by fixing the model parameters \(n\), \(\alpha\), \(\beta\), \(\zeta\) and \(\lambda\), one can construct a bouncing scenario consistent with the theoretical requirements. In this regard, without loss of generality, the model parameters \(\zeta\) and \(\lambda\) are fixed to the values \(0.1\) and \(2.5\), respectively. The graphical representation of the energy density (\(\rho\)), matter pressure (\(p\)) and the EoS parameter (\(w\)) can be seen in Fig. 2. The energy density attains its maximum value at the bouncing point and drops just before and after it (see the upper left panel of Fig. 2). After a while, the value of \(\rho\) increases for a period and then decreases; at late times, the energy density decreases monotonically. In the upper right panel of Fig. 2, the matter pressure \(p\) attains its minimum (most negative) value at the bouncing time. After a while, it starts increasing and becomes positive for a short interval of time. At late times, \(p\) remains negative, which indicates the accelerating nature of the Universe. In the lower panel of Fig. 2, it can be observed that the EoS parameter is well behaved near the bouncing point: \(w\) crosses the phantom line \(w=-1\) near the bounce, and at late times it crosses the \(\Lambda\)CDM line (\(w=-1\)) again and enters the quintessence region, where it remains. At the bouncing point, our model behaves like a ghost condensate model.
### Violation of NEC
Energy conditions arise when one studies the Raychaudhuri equation for the expansion, which is a purely geometric statement and, in essence, makes no commitment to any particular gravitational field equations [68; 69; 70]. The condition for attractive gravity gives \(R_{ij}u^{i}u^{j}\geq 0\). In general relativity (GR), this condition can be written in terms of the energy momentum tensor as \(T_{ij}u^{i}u^{j}\geq 0\). However, in a non-singular bouncing model, the quintom scenario is associated with the violation of the null energy condition (\(\rho+p\geq 0\)), which is typically accompanied by various quantum instabilities [4].
In our model, the left panel of Fig. 3 shows that \(\rho+p<0\), _i.e._, the NEC is violated near the bounce, and the right panel of Fig. 3 shows that \(\dot{H}\) is positive near the bounce, which supports the standard bouncing conditions [15; 25; 26].
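This NEC check can be reproduced by solving Eqs. (9) and (10) for \(\rho\) and \(p\) symbolically for the parametrized scale factor and evaluating \(\rho+p\) at the bounce. The snippet below is a sketch of this procedure (assuming `sympy`; the symbol names are ours), not the code used to produce the figures:

```python
import sympy as sp

t = sp.symbols('t')
alpha, beta, n, zeta, lam = sp.symbols('alpha beta n zeta lambda', positive=True)
rho, p = sp.symbols('rho p')

a = (alpha + beta * t ** 2) ** (1 / n)          # bouncing scale factor
H = sp.diff(a, t) / a
H1, H2, H3 = sp.diff(H, t), sp.diff(H, t, 2), sp.diff(H, t, 3)

# Field equations (9) and (10) for f1(R) = R + zeta R^2 and f2(T) = lambda T
eq9 = sp.Eq((8 * sp.pi + 3 * lam) * rho - lam * p,
            3 * H ** 2 + 18 * zeta * (H1 ** 2 - 6 * H ** 2 * H1 - 2 * H * H2))
eq10 = sp.Eq((8 * sp.pi + 3 * lam) * p - lam * rho,
             -2 * H1 - 3 * H ** 2
             + 6 * zeta * (26 * H1 * H ** 2 + 2 * H3 + 14 * H * H2 + 9 * H1 ** 2))

sol = sp.solve([eq9, eq10], [rho, p], dict=True)[0]
nec_at_bounce = sp.simplify((sol[rho] + sol[p]).subs(t, 0))
print(nec_at_bounce)    # negative for suitable parameter choices: NEC violated
```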
### Scalar field analysis
In GR, it is interesting to study bouncing cosmology within the quintom model, in which quintessence-like and phantom-like scalar fields are generally discussed. To make our model observationally consistent, it is required to take \(w\approx-1\), which means the kinetic energy should be much smaller than the potential energy (\(\dot{\phi}^{2}<<V(\phi)\)).
It is tempting to reinterpret the matter source as a scalar field minimally coupled to gravity with positive kinetic and potential terms; such a representation has also been used by various authors [71; 72; 73]. Therefore,
the present model can also be expressed as a Friedmann universe containing a scalar field. Now, to discuss a non-singular bouncing cosmological model using scalar fields in \(f(R,T)\) gravity, let us consider a universe described by the FLRW metric and a scalar field \(\phi\), with the action
\[\mathcal{A}=\int\Bigg{[}\frac{R}{16\pi G}-\frac{1}{2}\partial_{i}\phi\partial^{ i}\phi-V(\phi)\Bigg{]}\sqrt{-g}d^{4}x, \tag{20}\]
where the scalar field \(\phi\) can be taken as quintessence-like or phantom-like. For the scalar field action, the EMT can be written as
\[T_{ij}=\partial_{i}\phi\partial_{j}\phi-g_{ij}\left(\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V(\phi)\right), \tag{21}\]
and for the flat FLRW metric, we have \(T_{0}^{0}=\rho\) and \(T_{j}^{i}=-p\delta_{j}^{i}\). Therefore the energy density and total pressure in the Universe filled by scalar fields (both quintessence-like and phantom-like), can be obtained as:
\[\rho_{qu}=\frac{1}{2}\dot{\phi}_{qu}^{2}+V(\phi_{qu}),\qquad p_{qu}=\frac{1}{2 }\dot{\phi}_{qu}^{2}-V(\phi_{qu}) \tag{22}\]
and
\[\rho_{ph}=-\frac{1}{2}\dot{\phi}_{ph}^{2}+V(\phi_{ph}),\qquad p_{ph}=-\frac{1}{2}\dot{\phi}_{ph}^{2}-V(\phi_{ph}), \tag{23}\]
where the suffixes "\(ph\)" and "\(qu\)" denote phantom-like and quintessence-like scalar fields, respectively.
Figure 3: The presentation of NEC as well as time variation of the Hubble parameter. One observes that \(\dot{H}>0\) and \(\rho+p<0\) in the neighbourhood of bouncing point.
From Eqs. (22) and (23), the kinetic and potential energies can be evaluated as:
\[\dot{\phi}_{qu}^{2}=\rho_{qu}+p_{qu},\qquad\dot{\phi}_{ph}^{2}=-(\rho_{ph}+p_{ph}), \tag{24}\] \[V(\phi_{qu})=\frac{1}{2}(\rho_{qu}-p_{qu}),\qquad V(\phi_{ph})=\frac{1}{2}(\rho_{ph}-p_{ph}). \tag{25}\]
Now, using the value of energy density and pressure from Eqs. (13) and (14), we can draw the plots for kinetic and potential energies (see Fig. 4).
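Given symbolic expressions for \(\rho\) and \(p\) (for instance the `sol` dictionary from the sketch in the previous subsection, which this snippet assumes, together with the symbol `t`), the scalar-field quantities of Eqs. (22)-(25) follow directly:

```python
# Reusing sol[rho], sol[p], the symbol t and sympy (sp) from the previous sketch.
kinetic_qu = sol[rho] + sol[p]                 # phi_dot_qu^2, Eq. (24)
kinetic_ph = -(sol[rho] + sol[p])              # phi_dot_ph^2, Eq. (24)
potential = (sol[rho] - sol[p]) / 2            # V(phi), Eq. (25), same for both fields

# Near the bounce rho + p < 0, so the quintessence-like kinetic term is negative there
print(sp.simplify(kinetic_qu.subs(t, 0)))
```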
In Fig. 4, we depict the kinetic and potential energies, and analyse their variation for both quintessence-like and phantom-like scalar fields. Negative kinetic energy indicates the presence of dark energy. The upper left and right panels of Fig. 4 show that kinetic energy is negative for the quintessence-like scalar field, and positive for the phantom
Figure 4: The plots of kinetic and potential energies.
like scalar field in the vicinity of the bouncing point. The potential energy for both scalar fields is equal and can be seen in the lower panel of Fig. 4.
As is well-known, \(w<-1\) for phantom-like scalar fields and \(-1<w<0\) for quintessence-like scalar fields. Also, for the quintom nature of the model, the kinetic energy for quintessence-like scalar fields (\(\dot{\phi}_{qu}^{2}/2\)) will be numerically equal to the kinetic energy for phantom-like scalar fields (\(\dot{\phi}_{ph}^{2}/2\)) when \(w\) crosses the quintom line \(w=-1\), and this is the obligatory condition for having a bouncing model [74; 75]. From the upper left and right panels of Fig. 4, it can be observed that this condition holds good in the present model.
To study the inflation era, we can also analyse the behavior of the slow roll indices in scalar theories. Slow roll indices are defined as
\[\epsilon=-\frac{\dot{H}}{H^{2}},\hskip 28.452756pt\eta=\frac{\ddot{\phi}}{H \dot{\phi}}. \tag{26}\]
The slow roll parameters for a scalar field should be much less than unity, which means that \(\frac{1}{2}\dot{\phi}^{2}<<V(\phi)\). The first slow-roll condition ensures the onset of the inflationary era, while the second ensures that inflation lasts for a sufficient amount of time [62]. It is observed in Fig. 5 that \(\epsilon\) and \(\eta\) obey the required condition for most of the time period.
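A rough numerical estimate of the first slow roll index in Eq. (26) can be obtained directly from the Hubble parameter; the sketch below reuses the `hubble(t)` function defined earlier, while evaluating \(\eta\) would additionally require the scalar field \(\phi(t)\):

```python
def epsilon(t_val, dt=1.0e-4):
    """First slow roll index epsilon = -Hdot / H^2 of Eq. (26),
    with Hdot estimated by a central finite difference."""
    h_dot = (hubble(t_val + dt) - hubble(t_val - dt)) / (2.0 * dt)
    return -h_dot / hubble(t_val) ** 2

print(abs(epsilon(20.0)))   # below unity away from the bounce for small n
```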
### Stability of the model
We analyze cosmological perturbations to study the evolution of the Universe from the Big Bang. Several authors have carried out perturbation analyses for a homogeneous background cosmology [76; 77; 78; 79; 80; 81]. It is believed that even a tiny perturbation in the energy density can make the cosmic fluid unstable. This concept offers additional insight within general relativity, since it directly involves the interplay between gravitational and pressure forces: whenever the pressure balances the gravitational force, a density perturbation is produced. Thus, to study the structure of the Universe,
Figure 5: The illustration of the behavior of the slow roll parameters.
we are led to the analysis of small perturbations. In this work, we investigate linear perturbations to assess the stability of the model. We consider linear perturbations of the Hubble parameter and the energy density as
\[H_{p}(t)=H(t)(1+\delta(t)) \tag{27}\]
and
\[\rho_{p}(t)=\rho(t)(1+\delta_{m}(t)), \tag{28}\]
where \(H_{p}(t)\) and \(\rho_{p}(t)\) denote the perturbed Hubble parameter and energy density, respectively, and \(\delta(t)\) and \(\delta_{m}(t)\) are the corresponding perturbation terms. Using the conservation equation for the matter field, the perturbation equation is obtained as
\[\dot{\delta_{m}}(t)+3H(t)\delta(t)=0. \tag{29}\]
The expression that relates the geometrical and the matter perturbations is given as
\[b_{m}\delta_{m}(t)=-6\left[H(t)\right]^{2}\delta(t), \tag{30}\]
where \(b_{m}=\kappa\rho_{m0}\). The matter perturbation measures the total perturbation of a cosmological solution in GR. Now, by eliminating \(\delta(t)\) from Eqs. (29) and (30), we obtain the first-order perturbation equation
\[\dot{\delta_{m}}(t)-\frac{b_{m}}{2H(t)}\delta_{m}(t)=0. \tag{31}\]
Integrating Eq. (31), we obtain
\[\delta_{m}(t)=D\ exp\left[\frac{1}{2}\int\frac{b_{m}}{H(t)}dt\right]. \tag{32}\]
Similarly, the evolution of perturbation \(\delta(t)\) is given as
\[\delta(t)=-\frac{b_{m}}{6\left[H(t)\right]^{2}}D\ exp\left[\frac{1}{2}\int \frac{b_{m}}{H(t)}dt\right]. \tag{33}\]
The terms \(\delta_{m}(t)\) and \(\delta(t)\) can be calculated as a function of redshift \(z\) as
\[\delta_{m}(z)=D\ exp\left[\frac{1}{2}\int\frac{b_{m}}{\left(1+z\right)\ \left[H(z)\right]^{2}}dz\right] \tag{34}\]
and
\[\delta(z)=-\frac{b_{m}}{6\left[H(z)\right]^{2}}D\ exp\left[\frac{1}{2}\int \frac{b_{m}}{\left(1+z\right)\ \left[H(z)\right]^{2}}dz\right]. \tag{35}\]
where \(D\) is an arbitrary integration constant.
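Equations (32) and (33) can be evaluated numerically for the parametrized background; the sketch below reuses the `hubble(t)` function defined earlier and treats \(b_{m}\) (i.e. \(\kappa\rho_{m0}\)) and \(D\) as illustrative constants:

```python
import numpy as np
from scipy.integrate import quad

b_m, D = 0.3, 1.0          # illustrative values for b_m = kappa * rho_m0 and D

def delta_m(t_val, t_ref=1.0):
    """delta_m(t) of Eq. (32), integrating b_m / H from a reference time t_ref."""
    integral, _ = quad(lambda s: b_m / hubble(s), t_ref, t_val)
    return D * np.exp(0.5 * integral)

def delta(t_val):
    """delta(t) of Eq. (33)."""
    return -b_m / (6.0 * hubble(t_val) ** 2) * delta_m(t_val)

print(delta_m(5.0), delta(5.0))
```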
In Fig. 6, we see that the perturbation parameters \(\delta(z)\) and \(\delta_{m}(z)\) decrease monotonically and tend to zero as \(z\rightarrow-1\). Therefore, our model appears to be stable under linear perturbations at late times.
## IV Conclusion
To realize non-singular bouncing cosmologies, one must evade a series of singularity theorems established by many authors; one route is the violation of the NEC within the framework of GR. In the literature, a non-singular bouncing cosmology in the early Universe has been related to the quintom scenario, as motivated by dark energy studies of the late-time acceleration [82].
In this paper, we have presented a modified-gravity model for a flat FLRW space-time based on a parametrization of the scale factor, which yields a non-singular bouncing cosmology; taking \(\lambda=0\) or \(\zeta=0\) does not change the bouncing scenario. The behavior of the dynamical parameters shows that our model undergoes a cosmic bounce at \(t=0\), where the Hubble parameter marks the transition from contraction to expansion. The required conditions for a successful bouncing model are also examined. The proposed model violates the NEC, and the deceleration parameter diverges at the bouncing point. The behavior of the EoS parameter also supports the bouncing cosmological model: \(w\) crosses the phantom line \(w=-1\) near \(t=0\) and shows ghost condensate behavior near the bounce. At early and late times, the EoS parameter crosses the quintom line again.
In the scalar field analysis, we observe that the kinetic energies are equal at the time when \(w\) crosses the quintom line, which further supports the bounce. The slow roll parameters behave satisfactorily in the proposed model. The calculated values of the tensor-to-scalar ratio and the spectral index are studied in the \(f(R,T)\) theory of gravity, and both values lie away from the recently improved limit on the tensor-to-scalar ratio presented by the Planck team. For a number of e-folds \(N=60\), the value of \(\delta_{m}=1-n_{s}=2(3\epsilon-\eta)\) is positive and very close to zero, while the value of \(r=16\epsilon\) is negative in our model even though \(r\) should be a positive quantity. This shows that the Planck data alone falsify such polynomial inflationary models independently of the observational restriction on \(r\) [83]. Finally, we examine the stability of the model via linear perturbations and find that the model is stable (see Fig. 6). Thus, we conclude that the model shows a non-singular bounce and is stable at late times.
Figure 6: Plots for perturbation \(\delta(z)\) in Hubble parameter and \(\delta_{m}(z)\) in energy density vs. redshift \(z\)
**Acknowledgements** Shaily and Akanksha Singh express their thanks to Prof. J. P. Saini, Hon'ble Vice Chancellor, NSUT, New Delhi for the fellowship under TRFs scheme.
**Data Availability Statement** No new data were created or analysed in this study.
|
2307.08579 | Scale-Aware Modulation Meet Transformer | This paper presents a new vision Transformer, Scale-Aware Modulation
Transformer (SMT), that can handle various downstream tasks efficiently by
combining the convolutional network and vision Transformer. The proposed
Scale-Aware Modulation (SAM) in the SMT includes two primary novel designs.
Firstly, we introduce the Multi-Head Mixed Convolution (MHMC) module, which can
capture multi-scale features and expand the receptive field. Secondly, we
propose the Scale-Aware Aggregation (SAA) module, which is lightweight but
effective, enabling information fusion across different heads. By leveraging
these two modules, convolutional modulation is further enhanced. Furthermore,
in contrast to prior works that utilized modulations throughout all stages to
build an attention-free network, we propose an Evolutionary Hybrid Network
(EHN), which can effectively simulate the shift from capturing local to global
dependencies as the network becomes deeper, resulting in superior performance.
Extensive experiments demonstrate that SMT significantly outperforms existing
state-of-the-art models across a wide range of visual tasks. Specifically, SMT
with 11.5M / 2.4GFLOPs and 32M / 7.7GFLOPs can achieve 82.2% and 84.3% top-1
accuracy on ImageNet-1K, respectively. After pretrained on ImageNet-22K in
224^2 resolution, it attains 87.1% and 88.1% top-1 accuracy when finetuned with
resolution 224^2 and 384^2, respectively. For object detection with Mask R-CNN,
the SMT base trained with 1x and 3x schedule outperforms the Swin Transformer
counterpart by 4.2 and 1.3 mAP on COCO, respectively. For semantic segmentation
with UPerNet, the SMT base test at single- and multi-scale surpasses Swin by
2.0 and 1.1 mIoU respectively on the ADE20K. | Weifeng Lin, Ziheng Wu, Jiayu Chen, Jun Huang, Lianwen Jin | 2023-07-17T15:47:48Z | http://arxiv.org/abs/2307.08579v2 | # Scale-Aware Modulation Meet Transformer
###### Abstract
This paper presents a new vision Transformer, Scale-Aware Modulation Transformer (SMT), that can handle various downstream tasks efficiently by combining the convolutional network and vision Transformer. The proposed Scale-Aware Modulation (SAM) in the SMT includes two primary novel designs. Firstly, we introduce the Multi-Head Mixed Convolution (MHMC) module, which can capture multi-scale features and expand the receptive field. Secondly, we propose the Scale-Aware Aggregation (SAA) module, which is lightweight but effective, enabling information fusion across different heads. By leveraging these two modules, convolutional modulation is further enhanced. Furthermore, in contrast to prior works that utilized modulations throughout all stages to build an attention-free network, we propose an Evolutionary Hybrid Network (EHN), which can effectively simulate the shift from capturing local to global dependencies as the network becomes deeper, resulting in superior performance. Extensive experiments demonstrate that SMT significantly outperforms existing state-of-the-art models across a wide range of visual tasks. Specifically, SMT with **11.5M / 2.4GFLOPs** and **32M / 7.7GFLOPs** can achieve **82.2%** and **84.3%** top-1 accuracy on ImageNet-1K, respectively. After pretrained on ImageNet-22K in 224\({}^{2}\) resolution, it attains **87.1%** and **88.1%** top-1 accuracy when finetuned with resolution 224\({}^{2}\) and 384\({}^{2}\), respectively. For object detection with Mask R-CNN, the SMT base trained with 1\(\times\) and 3\(\times\) schedule outperforms the Swin Transformer counterpart by **4.2** and **1.3** mAP on COCO, respectively. For semantic segmentation with UPerNet, the SMT base tested at single- and multi-scale surpasses Swin by **2.0** and **1.1** mIoU respectively on the ADE20K. Our code is available at [https://github.com/AFeng-x/SMT](https://github.com/AFeng-x/SMT).
## 1 Introduction
Since the groundbreaking work on Vision Transformers (ViT) [11], Transformers have gained significant attention from both industry and academia, achieving remarkable success in various computer vision tasks, such as image classification [10], object detection [30, 12], and semantic segmentation [75, 7]. Unlike convolutional networks, which only allow for interactions within a local region using a shared kernel, ViT divides the input image into a sequence of patches and updates token features via self-attention (SA), enabling global interactions. However, self-attention still faces challenges in downstream tasks due to the quadratic complexity in the number of visual tokens, particularly for high-resolution inputs.
To address these challenges, several efficient spatial attention techniques have been proposed. For example, Swin Transformer [32] employs window attention to limit the number of tokens and establish cross-window connections via shifting. PVT [56, 57] and Focal [65] reduce the cost of self-attention by combining token merging with spatial reduction. Shunted [42] effectively models objects at multiple scales simultaneously while performing spatial reduction. Other techniques such as dynamic token selection [38, 40, 66] have also proven to be effective improvements.
Rather than directly improving self-attention, several works [9, 27, 37, 26] have investigated hybrid CNN-Transformer architectures that combine efficient convolutional blocks with powerful Transformer blocks. We observed that most hybrid networks replace shallow Trans
Figure 1: Top-1 accuracy on ImageNet-1K of recent SOTA models. Our proposed SMT outperforms all the baselines.
former blocks with convolution blocks to reduce the high computational cost of self-attention in the early stages. However, these simplistic stacking strategies hinder them from achieving a better balance between accuracy and latency. Therefore, one of the objectives of this paper is to present a new perspective on the integration of Transformer and convolution blocks.
Based on the research conducted in [11, 4], which performed a quantitative analysis of different depths of self-attention blocks and discovered that shallow blocks tend to capture short-range dependencies while deeper ones capture long-range dependencies, we propose that substituting convolution blocks for Transformer blocks in shallow networks offers a promising strategy for two primary reasons: \((1)\) self-attention induces significant computational costs in shallow networks due to high-resolution input, and \((2)\) convolution blocks, which inherently possess a capacity for local modeling, are more proficient at capturing short-range dependencies than SA blocks in shallow networks. However, we observed that simply applying the convolution directly to the feature map does not lead to the desired performance. Taking inspiration from recent convolutional modulation networks [15, 18, 64], we discovered that convolutional modulation can aggregate surrounding contexts and adaptively self-modulate, giving it a stronger modeling capability than using convolution blocks alone. Therefore, we proposed a novel convolutional modulation, termed Scale-Aware Modulation (SAM), which incorporates two new modules: Multi-Head Mixed Convolution (MHMC) and Scale-Aware Aggregation (SAA). The MHMC module is designed to enhance the receptive field and capture multi-scale features simultaneously. The SAA module is designed to effectively aggregate features across different heads while maintaining a lightweight architecture. Despite these improvements, we find that SAM falls short of the self-attention mechanism in capturing long-range dependencies. To address this, we propose a new hybrid Modulation-Transformer architecture called the Evolutionary Hybrid Network (EHN). Specifically, we incorporate SAM blocks in the top two stages and Transformer blocks in the last two stages, while introducing a new stacking strategy in the penultimate stage. This architecture not only simulates changes in long-range dependencies from shallow to deep layers but also enables each block in each stage to better match its computational characteristics, leading to improved performance on various downstream tasks. Collectively, we refer to our proposed architecture as Scale-Aware Modulation Transformer (SMT).
As shown in Fig. 1, our SMT significantly outperforms other SOTA vision Transformers and convolutional networks on ImageNet-1K [10]. It is worth noting that our SMT achieves top-1 accuracy of 82.2% and 84.3% with the tiny and base model sizes, respectively. Moreover, our SMT consistently outperforms other SOTA models on COCO [30] and ADE20K [75] for object detection, instance segmentation, and semantic segmentation tasks.
Overall, the contributions of this paper are as follows.
* We introduce the Scale-Aware Modulation (SAM) which incorporates a potent Multi-Head Mixed Convolution (MHMC) and an innovative, lightweight Scale-Aware Aggregation (SAA). The SAM facilitates the integration of multi-scale contexts and enables adaptive modulation of tokens to achieve more precise predictions.
* We propose a new evolutionary hybrid network that effectively models the transition from capturing local to global dependencies as the network increases in depth, leading to improved performance and high efficiency.
* We evaluated our proposed Scale-Aware Modulation Transformer (SMT) on several widely used benchmarks, including classification, object detection, and segmentation. The experimental results indicated that SMT consistently outperformed the SOTA Vision Transformers while requiring fewer parameters and incurring lower computational costs.
## 2 Related Work
### Vision Transformers
The Transformer [54] was initially developed for natural language processing tasks and has since been adapted for computer vision tasks through the introduction of the Vision Transformer (ViT) [11]. Further improvements to ViT have been achieved through knowledge distillation or more intricate data augmentation, as demonstrated by DeiT [52]. However, Transformers do not consider the quadratic complexity of high-resolution images or the 2D structure of images, which are challenges in vision tasks. To address these issues and improve the performance of vision Transformers, various methods have been proposed, including multi-scale architectures [3, 32, 56, 63], lightweight convolution layers [14, 28, 60], and local self-attention mechanisms [32, 6, 65, 71].
### Convolutional Neural Networks
Convolutional neural networks (CNNs) have been the main force behind the revival of deep neural networks in computer vision. Since the introduction of AlexNet [25], VGGNet [44], and ResNet [17], CNNs have rapidly become the standard framework for computer vision tasks. The design principles of CNNs have been advanced by subsequent models such as Inception [47, 48], ResNeXt [62], Res2Net [13] and MixNet [51], which promote the use of building blocks with multiple parallel convolutional paths. Other works such as MobileNet [20] and ShuffleNet [73]
have focused on the efficiency of CNNs. To further improve the performance of CNNs, attention-based models such as SE-Net [21], Non-local Networks [58], and CBAM [59] have been proposed to enhance the modeling of channel or spatial attention. EfficientNets [49, 50] and MobileNetV3 [19] have employed neural architecture search (NAS) [77] to develop efficient network architectures. ConvNeXt [33] adopts the hierarchical design of Vision Transformers to enhance CNN performance while retaining the simplicity and effectiveness of CNNs. Recently, several studies [15, 18, 64] have utilized convolutional modulation as a replacement for self-attention, resulting in improved performance. Specifically, FocalNet [64] utilizes a stack of depth-wise convolutional layers to encode features across short to long ranges and then injects the modulator into the tokens using an element-wise affine transformation. Conv2Former [18] achieves good recognition performance using a simple \(11\times 11\) depth-wise convolution. In contrast, our scale-aware modulation also employs depth-wise convolution as a basic operation but introduces multi-head mixed convolution and scale-aware aggregation.
### Hybrid CNN-Transformer Networks
A popular topic in visual recognition is the development of hybrid CNN-Transformer architectures. Recently, several studies [14, 45, 60, 76] have demonstrated the effectiveness of combining Transformers and convolutions to leverage the strengths of both architectures. CvT [60] first introduced depth-wise and point-wise convolutions before self-attention. CMT [14] proposed a hybrid network that utilizes Transformers to capture long-range dependencies and CNNs to model local features. MobileViT [37], EdgeNeXt [36], MobileFormer [5], and EfficientFormer [27] reintroduced convolutions to Transformers for efficient network design and demonstrated exceptional performance in image classification and downstream applications. However, the current hybrid networks lack the ability to model range dependency transitions, making it challenging to improve their performance. In this paper, we propose an evolutionary hybrid network that addresses this limitation and showcases its importance.
## 3 Method
### Overall Architecture
The overall architecture of our proposed Scale-Aware Modulation Transformer (SMT) is illustrated in Fig. 2. The network comprises four stages, each with downsampling rates of \(\{4,8,16,32\}\). Instead of constructing an attention-free network, we first adopt our proposed Scale-Aware Modulation (SAM) in the top two stages, followed by a penultimate stage where we sequentially stack one SAM block and one Multi-Head Self-Attention (MSA) block to model the transition from capturing local to global dependencies. For the last stage, we solely use MSA blocks to capture long-range dependencies effectively. For the Feed-Forward Network (FFN) in each block, we adopt the detail-specific feedforward layers as used in Shunted [42].
### Scale-Aware Modulation
**Multi-Head Mixed Convolution.** We propose the Multi-Head Mixed Convolution (MHMC), which introduces multiple convolutions with different kernel sizes, enabling it to capture various spatial features across multiple scales. Furthermore, MHMC can expand the receptive field using a large convolutional kernel, enhancing its ability to model long-range dependencies. As depicted in Fig. 3(b), MHMC partitions input channels into N heads and applies distinct depth-wise separable convolutions to each head, which reduces the parameter size and computational cost. To simplify our design process, we initialize the kernel size with 3\(\times\)3 and gradually increase it by 2 per head. This approach enables us to regulate the range of receptive fields and multi-granularity information by merely adjusting the
Figure 2: (a) The architecture of the Scale-Aware Modulation Transformer (SMT); (b) Mix Block: a series of SAM blocks and MSA blocks that are stacked successively (as presented in Sec. 3.3). SAM and MSA denote the scale-aware modulation module and multi-head self-attention module, respectively.
number of heads. Our proposed MHMC can be formulated as follows:
\[MHMC(X)=Concat(DW_{k_{1}\times k_{1}}(x_{1}),\dots,DW_{k_{n}\times k_{n}}(x_{n})) \tag{1}\]
where \(x=[x_{1},x_{2},...,x_{n}]\) means to split up the input feature \(x\) into multiple heads in the channel dimension and \(k_{i}\in\{3,5,\dots,K\}\) denotes the kernel size increases monotonically by 2 per head.
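For concreteness, a minimal PyTorch sketch of the MHMC in Eq. (1) is given below; it is not the released implementation (for simplicity it uses plain depth-wise convolutions, and the class and argument names are ours):

```python
import torch
import torch.nn as nn

class MultiHeadMixedConv(nn.Module):
    """Split the channels into N heads, apply a depth-wise convolution with a
    different kernel size (3, 5, 7, ...) to each head, then concatenate."""

    def __init__(self, dim: int, n_heads: int = 4):
        super().__init__()
        assert dim % n_heads == 0
        head_dim = dim // n_heads
        self.convs = nn.ModuleList([
            nn.Conv2d(head_dim, head_dim, kernel_size=3 + 2 * i,
                      padding=(3 + 2 * i) // 2, groups=head_dim)   # depth-wise
            for i in range(n_heads)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:            # (B, C, H, W)
        chunks = torch.chunk(x, len(self.convs), dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.convs, chunks)], dim=1)
```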
As shown in Fig. 4(a), each distinct convolution feature map learns to focus on different granularity features in an adaptive manner, as expected. Notably, when we compare the single-head and multi-head by visualizing modulation maps in Fig. 4(b), we find that the visualization under multi-head depicts the foreground and target objects accurately in stage 1, while filtering out background information effectively. Moreover, it can still present the overall shape of the target object as the network becomes deeper, while the information related to the details is lost under the single-head convolution. This indicates that MHMC has the ability to capture local details better than a single head at the shallow stage, while maintaining detailed and semantic information about the target object as the network becomes deeper.
**Scale-Aware Aggregation.** To enhance information interaction across multiple heads in MHMC, we introduce a new lightweight aggregation module, termed Scale-Aware Aggregation (SAA), as shown in Fig. 3(c). The SAA involves an operation that shuffles and groups the features of different granularities produced by the MHMC. Specifically, we select one channel from each head to construct a group, and then we utilize the inverse bottleneck structure to perform an up-down feature fusion operation within each group, thereby enhancing the diversity of multi-scale features. However, a well-designed grouping strategy enables us to introduce only a small amount of computation while achieving desirable aggregation results. Notably, let the input \(X\in\mathbb{R}^{H\times W\times C}\), \(Groups=\frac{C}{Heads}\), which means the number of groups is inversely proportional to the number of heads. Subsequently, we perform cross-group information aggregation for all features using point-wise convolution to achieve cross-fertilization of global information. The process of SAA can be formulated as follows:
\[\begin{split} M&=W_{inter}([G_{1},G_{2},\dots,G_{M}]), \\ G_{i}&=W_{intra}([H_{1}^{i},H_{2}^{i},\dots,H_{N}^{ i}]),\\ H_{j}^{i}&=DWConv_{k_{j}\times k_{j}}(x_{j}^{i}) \in\mathbb{R}^{H\times W\times 1}.\end{split} \tag{2}\]
where \(W_{inter}\) and \(W_{intra}\) are weight matrices of point-wise convolution. \(j\in\{1,2,\dots,N\}\) and \(i\in\{1,2,\dots,M\}\), where \(N\) and \(M=\frac{C}{N}\) denote the number of heads and groups, respectively. Here, \(H_{j}\in\mathbb{R}^{H\times W\times M}\) represents the \(j\)-th head with depth-wise convolution, and \(H_{j}^{i}\) represents the \(i\)-th channel in the \(j\)-th head.
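A simplified reading of Eq. (2) can likewise be sketched in PyTorch (reusing the imports above); the expansion ratio of the inverted bottleneck and the activation function are assumptions of this sketch rather than details taken from the paper:

```python
class ScaleAwareAggregation(nn.Module):
    """Shuffle channels so each group gathers one channel per head, fuse within
    groups via an inverted bottleneck of grouped 1x1 convs (W_intra), then mix
    across groups with a plain point-wise convolution (W_inter)."""

    def __init__(self, dim: int, n_heads: int = 4, expand: int = 2):
        super().__init__()
        self.n_heads = n_heads
        n_groups = dim // n_heads
        self.intra = nn.Sequential(
            nn.Conv2d(dim, dim * expand, kernel_size=1, groups=n_groups),
            nn.GELU(),
            nn.Conv2d(dim * expand, dim, kernel_size=1, groups=n_groups),
        )
        self.inter = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:            # (B, C, H, W)
        b, c, h, w = x.shape
        # channel shuffle: reorder from head-major to group-major layout
        x = x.view(b, self.n_heads, c // self.n_heads, h, w)
        x = x.transpose(1, 2).reshape(b, c, h, w)
        return self.inter(self.intra(x))
```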
Fig. 5 shows that our SAA module explicitly strengthens the semantically relevant low-frequency signals and precisely focuses on the most important parts of the target object. For instance, in stage 2, the eyes, head and body are clearly highlighted as essential features of the target object, resulting in significant improvements in classification performance. Compared to the convolution maps before aggregation, our SAA module demonstrates a better ability to capture and represent essential features for visual recognition tasks. (More visualizations can be found in Appendix E).
**Scale-Aware Modulation.** As illustrated in Fig. 3(a), after capturing multi-scale spatial features using MHMC and ag
Figure 4: (a) Visualization of the output values of different heads in the MHMC in the first stage. (b) Visualization of the modulation values (corresponding to the left side of \(\odot\) in Eq. 3) under single-head and multi-head mixed convolution in the last layer during the top two stages. All maps are upsampled for display.
Figure 3: (a) The schematic illustration of the proposed scale-aware modulation (SAM). (b) and (c) are the module descriptions of multi-head mixed convolution (MHMC) and scale-aware aggregation (SAA), respectively.
gregating them with SAA, we obtain an output feature map, which we refer to as the modulator M. We then adopt this modulator to modulate the value V using the scalar product. For the input features \(X\in\mathbb{R}^{H\times W\times C}\), we compute the output Z as follows:
\[\begin{split} Z&=M\odot V,\\ V&=W_{v}X,\\ M&=SAA(MHMC(W_{s}X)).\end{split} \tag{3}\]
where \(\odot\) is the element-wise multiplication, \(W_{v}\) and \(W_{s}\) are weight matrices of linear layers. Since the modulator is calculated via Eq. 3, it changes dynamically with different inputs, thereby achieving adaptively self-modulation. Moreover, unlike self-attention, which computes an \(N\times N\) attention map, the modulator retains the channel dimension. This feature allows for spatial- and channel-specific modulation of the value after element-wise multiplication, while also being memory-efficient, particularly when processing high-resolution images.
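Putting the pieces together, Eq. (3) gates a value branch with the aggregated multi-scale context. The following compact sketch builds on the `MultiHeadMixedConv` and `ScaleAwareAggregation` sketches above, with 1\(\times\)1 convolutions standing in for the linear layers \(W_{v}\) and \(W_{s}\); the trailing output projection is an extra assumption:

```python
class ScaleAwareModulation(nn.Module):
    """Z = M * V (element-wise), with V = W_v X and M = SAA(MHMC(W_s X)), as in Eq. (3)."""

    def __init__(self, dim: int, n_heads: int = 4):
        super().__init__()
        self.to_v = nn.Conv2d(dim, dim, kernel_size=1)    # W_v
        self.to_s = nn.Conv2d(dim, dim, kernel_size=1)    # W_s
        self.mhmc = MultiHeadMixedConv(dim, n_heads)
        self.saa = ScaleAwareAggregation(dim, n_heads)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)    # output projection (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # (B, C, H, W)
        modulator = self.saa(self.mhmc(self.to_s(x)))
        return self.proj(modulator * self.to_v(x))


# e.g. ScaleAwareModulation(64)(torch.randn(2, 64, 56, 56)) keeps the (2, 64, 56, 56) shape
```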
### Scale-Aware Modulation Transformer
**Evolutionary Hybrid Network.** In this section, we propose to reallocate the appropriate computational modules according to how the network's range of captured dependencies varies with depth, in order to achieve better computational performance. We propose using MSA blocks only from the penultimate stage onward to reduce the computational burden. Furthermore, to effectively simulate the transition pattern, we put forth two hybrid stacking strategies for the penultimate stage: \((i)\) sequentially stacking one SAM block and one MSA block, which can be formulated as \((SAM\times 1+MSA\times 1)\times\frac{N}{2}\), depicted in Fig. 6(i); \((ii)\) using SAM blocks for the first half of the stage and MSA blocks for the second half, which can be formulated as \((SAM\times\frac{N}{2}+MSA\times\frac{N}{2})\), depicted in Fig. 6(ii).
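To make the difference between the two strategies concrete, they can be written as a small builder function; this is only a schematic sketch (reusing `torch.nn` from the sketches above, with hypothetical factory arguments), not the actual model definition:

```python
from typing import Callable, List

def build_penultimate_stage(depth: int,
                            make_sam: Callable[[], nn.Module],
                            make_msa: Callable[[], nn.Module],
                            strategy: str = "interleave") -> nn.Sequential:
    """Strategy (i): alternate one SAM block and one MSA block.
    Strategy (ii): SAM blocks in the first half, MSA blocks in the second half."""
    blocks: List[nn.Module] = []
    if strategy == "interleave":            # (SAM x 1 + MSA x 1) x N/2
        for _ in range(depth // 2):
            blocks += [make_sam(), make_msa()]
    else:                                    # SAM x N/2 + MSA x N/2
        blocks += [make_sam() for _ in range(depth // 2)]
        blocks += [make_msa() for _ in range(depth // 2)]
    return nn.Sequential(*blocks)
```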
To assess the efficacy of these hybrid stacking strategies, we evaluated their top-1 accuracy on the ImageNet-1K, as shown in Table 9. Moreover, as depicted in Fig. 7, we calculate the relative receptive field of the MSA blocks in the penultimate stage, followed by the approach presented in [4]. It is noteworthy that there is a slight downward trend in the onset of the relative receptive field in the early layers. This decline can be attributed to the impact of the SAM on the early MSA blocks, which emphasize neighboring tokens. We refer to this phenomenon as the adaptation period. As the network becomes deeper, we can see a smooth and steady upward trend in the receptive field, indicating that our proposed evolutionary hybrid network effectively simulates the transition from local to global dependency capture.
## 4 Experiments
To ensure a fair comparison under similar parameters and computation costs, we construct a range of SMT variants. We validate our SMTs on ImageNet-1K [10] image classification, MS COCO [30] object detection, and ADE20K [75] semantic segmentation. Besides, extensive ablation studies provide a close look at different components of the SMT. (The detailed model settings are presented in Appendix A)
Figure 5: (a) Visualization of the modulation values before SAA. (b) Visualization of the modulation values after SAA.
Figure 6: Two proposed hybrid stacking strategies.
Figure 7: The receptive field of SMT-B’s relative attention across depth, with error bars representing standard deviations across various attention heads.
### Image Classification on ImageNet-1K
**Setup.** We conduct an evaluation of our proposed model and compare it with various networks on ImageNet-1K classification [10]. To ensure a fair comparison, we follow the same training recipes as previous works [52, 32, 42]. Specifically, we train the models for 300 epochs with an image size of \(224\times 224\) and report the top-1 validation accuracy. The batch size used is 1024, and we employ the AdamW optimizer [24, 34] with a weight decay of 0.05 and a learning rate of \(1\times 10^{-3}\). In addition, we investigate the effectiveness of SMTs when pretrained on ImageNet-22K. (Further details regarding the training process can be found in Appendix B.)
**Results.** Tab. 1 presents a comparison of our proposed SMT with various models, and the results demonstrate that our models outperform various architectures with fewer parameters and lower computation costs. Specifically, concerning the tiny-sized model, SMT achieves an impressive top-1 accuracy of 82.2%, surpassing PVTv2-b1 [57] and Shunted-T [42] by significant margins of 3.5% and 2.4%, respectively. Furthermore, when compared to small-sized and base-sized models, SMT maintains its leading position. Notably, SMT-B achieves a top-1 accuracy of 84.3% with only 32M parameters and 7.7GFLOPs of computation, outperforming many larger models such as Swin-B [32], ConvNeXt-B [33], and FocalNet-B [64], which have over 70M parameters and 15GFLOPs of computation. Additionally, to evaluate the scalability of the SMT, we have also created smaller and larger models, and the experimental results are presented in Appendix C.
We also report the ImageNet-22K pre-training results here in Tab. 2. When compared to the previously best results, our models achieve significantly better accuracy with a reduced number of parameters and FLOPs. SMT-L attains an 88.1% top-1 accuracy, surpassing InternImage-XL by 0.1% while utilizing significantly fewer parameters (80.5M vs. 335M) and exhibiting lower FLOPs (54.6G vs. 163G). This highly encouraging outcome underscores the impressive scalability capabilities of SMT.
\begin{table}
\begin{tabular}{c|c c c|c} \hline \multicolumn{5}{c}{**(a) Tiny Models**} \\ method & image size & \#param. & FLOPs & ImageNet top-1 acc. \\ \hline RegNetY-1-6G [39] & \(224^{2}\) & 11.2M & 1.6G & 78.0 \\ EffNet-B3 [49] & \(300^{2}\) & 12M & 1.8G & 81.6 \\ PVTv2-b1 [57] & \(224^{2}\) & 13.1M & 2.1G & 78.7 \\ Efficientformer-L1 [27] & \(224^{2}\) & 12.3M & 1.3G & 79.2 \\ Shunted-T [42] & \(224^{2}\) & 11.5M & 2.1G & 79.8 \\ Conv2Former-N [18] & \(224^{2}\) & 15M & 2.2G & 81.5 \\ \hline
**SMT-T(Ours)** & \(224^{2}\) & 11.5M & 2.4G & **82.2** \\ \hline \multicolumn{5}{c}{**(b) Small Models**} \\ method & image size & \#param. & FLOPs & ImageNet top-1 acc. \\ \hline RegNetY-4G [39] & \(224^{2}\) & 21M & 4.0G & 80.0 \\ EffNet-B4 [49] & \(380^{2}\) & 19M & 4.2G & 82.9 \\ DeiT-S [52] & \(224^{2}\) & 22M & 4.6G & 79.8 \\ Swin-T [32] & \(224^{2}\) & 29M & 4.5G & 81.3 \\ ConvNeXt-T [33] & \(224^{2}\) & 29M & 4.5G & 82.1 \\ PVTv2-b2 [57] & \(224^{2}\) & 25.0M & 4.0G & 82.0 \\ Focal-T [65] & \(224^{2}\) & 29.1M & 4.9G & 82.2 \\ Shunted-S [42] & \(224^{2}\) & 22.4M & 4.9G & 82.9 \\ CMT-S [14] & \(224^{2}\) & 25.1M & 4.0G & 83.5 \\ FocalNet-T [64] & \(224^{2}\) & 28.6M & 4.5G & 82.3 \\ Conv2Former-T [18] & \(224^{2}\) & 27M & 4.4G & 83.2 \\ HorNet-T [41] & \(224^{2}\) & 23M & 4.0G & 83.0 \\ InternImage-T [55] & \(224^{2}\) & 30M & 5.0G & 83.5 \\ MaxViT-T [53] & \(224^{2}\) & 31M & 5.6G & 83.6 \\ \hline
**SMT-S(Ours)** & \(224^{2}\) & 20.5M & 4.7G & **83.7** \\ \hline \multicolumn{5}{c}{**(c) Base Models**} \\ method & image size & \#param. & FLOPs &
\begin{tabular}{c} ImageNet top-1 acc. \\ \end{tabular} \\ \hline RegNetY-SG [39] & \(224^{2}\) & 39M & 8.0G & 81.7 \\ EffNet-B5 [49] & \(456^{2}\) & 30M & 9.9G & 83.6 \\ Swin-S [32] & \(224^{2}\) & 49.6M & 8.7G & 83.0 \\ CoAtNet-1 [9] & \(224^{2}\) & 42M & 8.0G & 83.3 \\ PVTv2-b4 [57] & \(224^{2}\) & 63M & 10.0G & 83.6 \\ SwinV2-S/8 [31] & \(256^{2}\) & 50M & 12.0G & 83.7 \\ PoolFormer-m36 [67] & \(224^{2}\) & 56.2M & 8.8G & 82.1 \\ Shunted-B [42] & \(224^{2}\) & 39.6M & 8.1G & 84.0 \\ InternImage-S [55] & \(224^{2}\) & 50.0M & 8.0G & 84.2 \\ Conv2Former-S [18] & \(224^{2}\) & 50.0M & 8.7G & 84.1 \\ Swin-B [32] & \(224^{2}\) & 87.8M & 15.4G & 83.4 \\ ConvNeXt-B [33] & \(224^{2}\) & 89M & 15.4G & 83.8 \\ Focal-B [65] & \(224^{2}\) & 89.8M & 16.4G & 83.8 \\ FocalNet-B [64] & \(224^{2}\) & 88.7M & 15.4G & 83.9 \\ HorNet-B [41] & \(224^{2}\) & 87M & 15.6G & 84.2 \\ \hline
**SMT-B(Ours)** & \(224^{2}\) & 32.0M & 7.7G & **84.3** \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of different backbones on ImageNet-1K classification.
\begin{table}
\begin{tabular}{c|c c c|c} \hline \multicolumn{5}{c}{**ImageNet-22K pre-trained models**} \\ method & image size & \#param. & FLOPs &
\begin{tabular}{c} ImageNet top-1 acc. \\ \end{tabular} \\ \hline ViT-B/16 [11] & \(384^{2}\) & 86.0M & 55.4G & 84.0 \\ ViT-L/16 [11] & \(384^{2}\) & 307.0M & 190.7G & 85.2 \\ \hline Swin-Large [32] & \(224^{2}/224^{2}\) & 196.5M & 34.5G & 86.3 \\ Swin-Large [32] & \(384^{2}/384^{2}\) & 196.5M & 104.0G & 87.3 \\ \hline FocalNet-Large [64] & \(224^{2}/224^{2}\) & 197.1M & 34.2G & 86.5 \\ FocalNet-Large [64] & \(224^{2}/384^{2}\) & 197.1M & 100.6G & 87.3 \\ \hline InterImage-L [55] & \(224^{2}/384^{2}\) & 223M & 108G & 87.7 \\ InterImage-XL [55] & \(224^{2}/384^{2}\) & 335M & 163G & 88.0 \\ \hline
**SMT-L(Ours)** & \(224^{2}/224^{2}\) & 80.5M & 17.7G & **87.1** \\
**SMT-L(Ours)** & \(224^{2}/384^{2}\) & 80.5M & 54.6G & **88.1** \\ \hline \end{tabular}
\end{table}
Table 2: ImageNet-1K finetuning results with models pretrained on ImageNet-22K. Numbers before and after “/” are resolutions used for pretraining and finetuning, respectively
### Object Detection and Instance Segmentation
**Setup.** We make comparisons on object detection with COCO 2017 [30]. We use SMT-S/B pretrained on ImageNet-1K as the foundation for three well-known object detectors: Mask R-CNN [16], Cascade Mask R-CNN [2], and RetinaNet [29]. To ensure a consistent comparison, two training schedules (\(1\times\) schedule with 12 epochs and \(3\times\) schedule with 36 epochs) are adopted in Mask R-CNN. In the \(3\times\) schedule, we use a multi-scale training strategy by randomly resizing the shorter side of an image to between [480, 800]. We use the AdamW optimizer with a weight decay of 0.05 and an initial learning rate of \(2\times 10^{-4}\). Both models are trained with batch size 16. To further showcase the versatility of SMT, we conducted a performance evaluation of SMT with three other prominent object detection frameworks, namely Sparse RCNN [46], ATSS [72], and DINO [70]. We initialize the backbone with weights pretrained on ImageNet-1K and fine-tune the model using a 3\(\times\) schedule for Sparse RCNN and ATSS.
**Results.** Tab. 3 presents the superior performance of SMT over other networks with Mask R-CNN [16] under various model sizes. Specifically, SMT demonstrates a significant improvement in box mAP of 5.6 and 4.2 over the Swin Transformer in the 1\(\times\) schedule under small and base model sizes, respectively. Notably, with the 3\(\times\) schedule and multi-scale training, SMT still consistently outperforms various backbones.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline Method & Backbones & \#Params & FLOPs & \(AP^{b}\) & \(AP^{b}_{50}\) & \(AP^{b}_{75}\) & \(AP^{b}_{75}\) & \(AP^{b}_{75}\) \\ \hline \multirow{5}{*}{Cascade [2]} & ResNet50 [17] & 82.0M & 7396 & 46.3 & 64.3 & 50.5 & 40.1 & 61.7 & 43.4 \\ & Swin-T [32] & 86.0M & 7456 & 50.5 & 69.3 & 54.9 & 43.7 & 66.6 & 47.1 \\ & Conv-Net [33] & -741 & 50.4 & 50.1 & 54.8 & 43.7 & 66.5 & 47.3 \\ & Shuffle-T [23] & 86.0M & 7466 & 50.8 & 69.5 & 55.1 & 44.1 & 66.9 & 48.0 \\ & FocalNet-T [64] & 87.1M & 7516 & 51.5 & 71.5 & 73.0 & 56.0 & - & - \\ & **SMTS** & **77.9M** & **74.5M** & **70.5M** & **76.3** & **43.7** & **67.8** & **48.6** \\ \hline \hline Method & Backbones & \#Params & FLOPs & \(AP^{b}\) & \(AP^{b}_{75}\) & \(AP^{b}_{75}\) & \(AP^{b}_{75}\) & \(AP^{b}_{75}\) \\ \hline \multirow{5}{*}{ATSS [72]} & ResNet50 [17] & 37.7M & 2405 & 39.0 & 58.4 & 41.8 & 22.4 & 24.8 & 51.6 \\ & Swin-T [32] & 85.8M & 2456 & 45.0 & 65.9 & 48.4 & 37.2 & 49.8 & 58.1 \\ \cline{1-1} & Focal-T [65] & 39.4M & 2566 & 45.5 & 66.3 & 48.8 & 31.2 & 49.2 & 58.7 \\ \cline{1-1} & Shunted-S [42] & 32.1M & - & 46.4 & 66.7 & 50.4 & 31.0 & 51.0 & 60.8 \\ \cline{1-1} & **SMTS** & **30.1M** & 247G & **47.3** & **67.8** & **50.5** & **32.5** & **51.1** & **62.3** \\ \hline \hline \end{tabular}
\end{table}
Table 4: COCO detection and segmentation with the **Cascade Mask R-CNN** and **RetinaNet**. The performances are reported on the COCO _val_ dataset under the \(3\times\) schedule.
\begin{table}
\begin{tabular}{c|c|c|c c c c c|c c c c c c} \hline \hline \multirow{2}{*}{Backbone} & \multirow{2}{*}{Params} & FLOPs & \multicolumn{6}{c|}{Mask R-CNN \(1\times\) schedule} & \multicolumn{6}{c}{Mask R-CNN \(3\times\) schedule + MS} \\ & (M) & (G) & \(AP^{b}\) & \(AP^{b}_{50}\) & \(AP^{b}_{75}\) & \(AP^{m}\) & \(AP^{m}_{50}\) & \(AP^{m}_{75}\) & \(AP^{b}\) & \(AP^{b}_{50}\) & \(AP^{b}_{75}\) & \(AP^{m}\) & \(AP^{m}_{50}\) & \(AP^{m}_{75}\) \\ \hline ResNet50 [17] & 44.2 & 260 & 38.0 & 58.6 & 41.4 & 34.4 & 55.1 & 36.7 & 41.0 & 61.7 & 44.9 & 37.1 & 58.4 & 40.1 \\ Twins-SVT-S [6] & 44.0 & 228 & 43.4 & 66.0 & 47.3 & 40.3 & 63.2 & 43.4 & 46.8 & 69.2 & 51.2 & 42.6 & 66.3 & 45.8 \\ Swin-T [32] & 47.8 & 264 & 42.2 & 64.6 & 46.2 & 39.1 & 61.6 & 42.0 & 46.0 & 68.2 & 50.2 & 41.6 & 65.1 & 44.8 \\ PVTv2-B2 [56] & 45.0 & - & 45.3 & 67.1 & 49.6 & 41.2 & 64.2 & 44.4 & - & - & - & - & - & - \\ Focal-T [65] & 48.8 & 291 & 44.8 & 67.7 & 49.2 & 41.0 & 64.7 & 44.2 & 47.2 & 69.4 & 51.9 & 42.7 & 66.5 & 45.9 \\ CMT-S [14] & 44.5 & 249 & 44.6 & 66.8 & 48.9 & 40.7 & 63.9 & 43.4 & - & - & - & - & - \\ FocalNet-T [64] & 48.9 & 268 & 46.1 & 68.2 & 50.6 & 41.5 & 65.1 & 44.5 & 48.0 & 69.7 & 53.0 & 42.9 & 66.5 & 46.1 \\
**SMT-S** & **40.0** & 265 & **47.8** & **69.5** & **52.1** & **43.0** & **66.6** & **46.1** & **49.0** & **70.1** & **53.4** & **43.4** & **67.3** & **46.7** \\ \hline ResNet101 [17] & 63.2 & 336 & 40.4 & 61.1 & 44.2 & 36.4 & 57.7 & 38.8 & 42.8 & 63.2 & 47.1 & 38.5 & 60.1 & 41.3 \\ Swin-S [32] & 69.1 & 354 & 44.8 & 66.6 & 48.9 & 40.9 & 63.4 & 44.2 & 48.5 & 70.2 & 53.5 & 43.3 & 67.3 & 46.6 \\ Swin-B [32] & 107.1 & 497 & 46.9 & 69.2 & 51.6 & 42.3 & 66.0 & 45.5 & 48.5 & 69.8 & 53.2 & 43.4 & 66.8 & 46.9 \\ Twins-SVT-B [6] & 76.3 & 340 & 45.2 & 67.6 & 49.3 & 41.5 & 64.5 & 44.8 & 48.0 & 69.5 & 52.7 & 43.0 & 66.8 & 46.6 \\ PVTv2-B4 [56] & 82.2 & - & 47.5 & 68.7 & 52.0 & 42.7 & 66.1 & 46.1 & - & - & - & - & - \\ Focal-S [65] & 71.2 & 401 & 47.4 & 69.8 & 51.9 & 42.8 & 66.6 & 46.1 & 48.8 & 70.5 & 53.6 & 43.8 & 67.7 & 47.2 \\ FocalNet-S [64] & 72.3 & 365 & 48.3 & **70.5** & 53.1 & 43.1 & 67.4 & 46.2 & 49.3 & 70.7 & 54.2 & 43.8 & 67.9 & 47.4 \\
**SMT-B** & **51.7** & **328** & **49.0** & 70.2 & **53.7** & **44.0** & **67.6** & **47.4** & **49.8** & **71.0** &
For instance segmentation, the results also demonstrate that our SMT achieves higher mask mAP in comparison to previous SOTA networks. In particular, for small and base models in the 1\(\times\) schedule, we achieve 1.5 and 0.9 points higher than FocalNet, respectively. Furthermore, to assess the generality of SMT, we trained two additional detection models, Cascade Mask R-CNN [2] and RetinaNet [29], using SMT-S as the backbone. The results, presented in Tab. 4, show clear improvements over various backbones in both box and mask mAPs. The resulting box mAPs for Sparse R-CNN, ATSS and DINO are presented in Tab. 5, which indicate that SMT outperforms other networks consistently across all detection frameworks, highlighting its exceptional performance in downstream tasks.
### Semantic Segmentation on ADE20K
**Setup.** We evaluate SMT for semantic segmentation using the ADE20K dataset. To conduct the evaluation, we use UperNet as the segmentation method and closely follow the training settings proposed by [32]. Specifically, we train UperNet [61] for 160k iterations with an input resolution of \(512\times 512\). We employ the AdamW optimizer with a weight decay of 0.01 and set the learning rate to \(6\times 10^{-5}\).
**Results.** The results are presented in Tab. 6, which shows that our SMT significantly outperforms Swin, FocalNet, and the Shunted Transformer under all settings. Specifically, SMT-B achieves 1.5 and 0.9 mIoU gains over Swin-B and 0.6 and 0.1 mIoU improvements over Focal-B at single- and multi-scale, respectively, while consuming significantly fewer FLOPs and reducing the model size by more than 50%. Even the small-sized SMT achieves accuracy comparable to previous SOTA models of larger size.
### Ablation Study
**Number of heads in Multi-Head Mixed Convolution.** Table 7 shows the impact of the number of convolution heads in the Multi-Head Mixed Convolution (MHMC) on our model's performance. The experimental results indicate that while increasing the number of diverse convolutional kernels is advantageous for modeling multi-scale features and expanding the receptive field, adding more heads introduces larger convolutions that may negatively affect network inference speed and reduce throughput. Notably, we observed that the top-1 accuracy on ImageNet-1K peaks when the number of heads is 4, and further increasing the number of heads does not improve the model's performance. These findings suggest that introducing an excessive number of distinct convolutions, or using a single convolution, is not suitable for our SMT, emphasizing the importance of choosing an appropriate number of convolution heads to model a specific degree of multi-scale spatial features.
**Different aggregation strategies.** After applying the MHMC, we introduce an aggregation module to achieve information fusion. Table 8 presents a comparison of different aggregation strategies, including a single linear layer, two linear layers, and an inverted bottleneck (IBN) [43]. Our proposed scale-aware aggregation (SAA) consistently outperforms the other fusion modules, demonstrating its effectiveness in modeling multi-scale features with fewer parameters and lower computational costs. Notably, as the model size increases, our SAA exhibits more substantial benefits while utilizing a small number of parameters and low computational resources.
Different hybrid stacking strategies. In Sec. 3.3, we propose two hybrid stacking strategies to enhance the modeling of the transition from local to global dependencies. The results shown in Table 9 indicate that the first strategy, which sequentially stacks one scale-aware modulation block and
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Backbone & \#Param(M) & FLOPs(G) & \(mIoU_{ss}\) & \(mIoU_{ms}\) \\ \hline ResNet-101 [17] & 86 & 1029 & 44.9 & - \\ DeiT-S [52] & 52 & 1099 & 44.0 & - \\ Swin-T [32] & 60 & 941 & 44.5 & 45.8 \\ Focal-T [65] & 62 & 998 & 45.8 & 47.0 \\ FocalNet-T [65] & 61 & 949 & 46.8 & 47.8 \\ Swin-S [32] & 81 & 1038 & 47.6 & 49.5 \\ ConvNeXt-S [33] & 82 & 1027 & 49.6 & - \\ Shunted-S [42] & 52 & 940 & 48.9 & 49.9 \\ FocalNet-S [64] & 84 & 1044 & 49.1 & 50.1 \\ Focal-S [65] & 85 & 1130 & 48.0 & 50.0 \\ Swin-B [32] & 121 & 1188 & 48.1 & 49.7 \\ Twins-SVT-L [6] & 133 & - & 48.8 & 50.2 \\ Focal-B [65] & 126 & 1354 & 49.0 & 50.5 \\ \hline
**SMT-S** & 50.1 & 935 & 49.2 & 50.2 \\
**SMT-B** & 61.8 & 1004 & **49.6** & **50.6** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Semantic segmentation on ADE20K [75]. All models are trained with UperNet [61]. \(mIoU_{ms}\) means multi-scale evaluation.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Heads Number & Params(M) & FLOPs(G) & top-1 (\%) & throughput (images/s) \\ \hline
1 & 11.5 & 2.4 & 81.8 & 983 \\
2 & 11.5 & 2.4 & 82.0 & 923 \\
4 & 11.5 & **2.4** & **82.2** & 833 \\
6 & 11.6 & 2.5 & 81.9 & 766 \\
8 & 11.6 & 2.5 & 82.0 & 702 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Model performance with number of heads in MHMC. We analyzed the model’s performance for the number of heads ranging from 1 to 8. Throughput is measured using a V100 GPU, following [32].
one multi-head self-attention block, performs better, achieving a 0.3% gain over the other strategy. Furthermore, the strategy that stacks only MSA blocks achieves comparable performance as well, which suggests that retaining the MSA blocks in the last two stages is crucial.
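The two stacking orders can be written down schematically as follows; `SAMBlock` and `MSABlock` are hypothetical placeholders for the scale-aware modulation and multi-head self-attention blocks, and the helper names are ours.

```python
# Sketch of the two hybrid stacking orders compared in Table 9 for one stage of N
# blocks. SAMBlock / MSABlock are placeholder classes, not the paper's code.
import torch.nn as nn

def interleaved_stage(dim, depth, SAMBlock, MSABlock):
    # (SAM x 1 + MSA x 1) * N/2 : alternate the two block types
    blocks = [SAMBlock(dim) if i % 2 == 0 else MSABlock(dim) for i in range(depth)]
    return nn.Sequential(*blocks)

def split_stage(dim, depth, SAMBlock, MSABlock):
    # SAM x N/2 + MSA x N/2 : all modulation blocks first, then all attention blocks
    half = depth // 2
    blocks = [SAMBlock(dim) for _ in range(half)] + \
             [MSABlock(dim) for _ in range(depth - half)]
    return nn.Sequential(*blocks)
```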
Component Analysis. In this section, we investigate the individual contribution of each component by conducting an ablation study on SMT. Initially, we employ a single-head convolution module and no aggregation module to construct the modulation. On this basis, we build an attention-free network, which achieves 80% top-1 accuracy on the ImageNet-1K dataset. The effects of all the proposed methods on the model's performance are given in Tab. 10 and can be summarized as follows.
* **Multi-Head Mixed Convolution (MHMC)** To enhance the model's ability to capture multi-scale spatial features and expand its receptive field, we replaced the single-head convolution with our proposed MHMC. This module proves to be effective for modulation, resulting in a 0.8% gain in accuracy.
* **Scale-Aware Aggregation (SAA)** We replace the single linear layer with our proposed scale-aware aggregation. The SAA enables effective aggregation of the multi-scale features captured by MHMC. Building on the previous modification, the replacement leads to a 1.6% increase in performance.
* **Evolutionary Hybrid Network (EHN)** We incorporate the self-attention module in the last two stages of our model, while also implementing our proposed hybrid stacking strategy in the penultimate stage, which improves the modeling of the transition from local to global dependencies as the network becomes deeper, resulting in a significant gain of 2.2% in performance based on the aforementioned modifications.
## 5 Conclusion
In this paper, we introduce a new hybrid ConvNet and vision Transformer backbone, namely Scale-Aware Modulation Transformer (SMT), which can effectively simulate the transition from local to global dependencies as the network becomes deeper, resulting in superior performance. To satisfy the requirement of foundation models, we propose a new Scale-Aware Modulation that includes a potent multi-head mixed convolution module and a lightweight scale-aware aggregation module. Extensive experiments demonstrate the efficacy of SMT as a backbone for various downstream tasks, achieving comparable or better performance than well-designed ConvNets and vision Transformers, with fewer parameters and FLOPs. We anticipate that the exceptional performance of SMT on diverse vision problems will encourage its adoption as a promising new generic backbone for efficient visual modeling.
## Acknowledgement
This research is supported in part by NSFC (Grant No.: 61936003), Alibaba DAMO Innovative Research Foundation (20210925), Zhuhai Industry Core, Key Technology Research Project (no. 2220004002350) and National Key Research and Development Program of China (2022YFC3301703). We thank the support from the Alibaba-South China University of Technology Joint Graduate Education Program.
\begin{table}
\begin{tabular}{c c c|c c c} \hline MHMC & SAA & EHN & Params(M) & FLOPs(G) & top-1 (\%) \\ \hline & & & 11.1 & 2.3 & 80.0 (\(\uparrow\)0.0) \\ ✓ & & & 11.2 & 2.3 & 80.8 (\(\uparrow\)0.8) \\ ✓ & ✓ & & 12.1 & 2.5 & 81.6 (\(\uparrow\)1.6) \\ ✓ & ✓ & ✓ & 11.5 & 2.4 & 82.2 (\(\uparrow\)2.2) \\ \hline \end{tabular}
\end{table}
Table 10: Component analysis for SMT. Three variations are gradually added to the original attention-free network.
\begin{table}
\begin{tabular}{c|c c c} \hline Aggregation Strategy & Params (M) & FLOPs (G) & top-1 (\%) \\ \hline No aggregation & 10.9 & 2.2 & 81.5 \\ \hline Single Linear (\(c\to c\)) & 11.2 & 2.3 & 81.6 \\ Two Linears (\(c\to c\to c\)) & 11.5 & 2.4 & 81.9 \\ IBN (\(c\to 2c\to c\)) & 12.1 & 2.6 & 82.1 \\ SAA(\(c\to 2c\to c\)) & 11.5 & 2.4 & **82.2** \\ \hline \end{tabular}
\end{table}
Table 8: Model performance for different aggregation methods.
\begin{table}
\begin{tabular}{c|c|c c c} \hline Stacking Strategy & Hybrid & Params (M) & FLOPs (G) & top-1 (\%) \\ \hline \((SAM\times N)\) & ✗ & 11.8 & 2.5 & 81.4 \\ \((MSA\times N)\) & ✗ & 11.2 & 2.3 & 81.8 \\ \hline \((SAM\times 1+MSA\times 1)\times\frac{N}{2}\) & ✓ & 11.5 & 2.4 & **82.2** \\ \((SAM\times\frac{N}{2}+MSA\times\frac{N}{2})\) & ✓ & 11.5 & 2.4 & 81.9 \\ \hline \end{tabular}
\end{table}
Table 9: Top-1 accuracy on ImageNet-1K of different stacking strategies. |
2307.10336 | Constraints on the variable nature of the slow solar wind with the
Wide-Field Imager on board the Parker Solar Probe | In a previous work we analysed the white-light coronal brightness as a
function of elongation and time from Wide-Field Imager (WISPR) observations on
board the Parker Solar Probe (PSP) mission when PSP reached a minimum
heliocentric distance of ~ 28 Rs. We found 4-5 transient outflows per day over
a narrow wedge in the PSP orbital plane, which is close to the solar equatorial
plane. However, the elongation versus time map (J-map) analysis supplied only
lower limits on the number of released density structures due to the small
spatial-scales of the transient outflows and line-of-sight integration effects.
In this work we place constraints on the properties of slow solar wind
transient mass release from the entire solar equatorial plane. We simulated the
release and propagation of transient density structures in the solar equatorial
plane for four scenarios: (1) periodic release in time and longitude with
random speeds; (2) corotating release in longitude, periodic release in time
with random speeds; (3) random release in longitude, periodic release in time
and speed; and (4) random release in longitude, time, and speed. The
simulations were used in the construction of synthetic J-maps, which are
similar to the observed J-map. The four considered scenarios have similar
ranges (35-45 for the minimum values and 96-127 for the maximum values) of
released density structures per day from the solar equatorial plane and
consequently from the streamer belt, given its proximity to the solar
equatorial plane during the WISPR observation. Our results also predict that
density structures with sizes in the range 2-8 Rs, covering 1-20 % of the
perihelion could have been detectable by PSP in situ observations during that
interval. | Spiros Patsourakos, Angelos Vourlidas, Alexander Nindos | 2023-07-19T15:15:03Z | http://arxiv.org/abs/2307.10336v1 | Constraints on the variable nature of the slow solar wind with the Wide-Field Imager on board the Parker Solar Probe
###### Abstract
Context:The formation of the slow solar wind remains unclear as we lack a complete understanding of its transient outflows.
Aims:In a previous work we analysed the white-light coronal brightness as a function of elongation and time from Wide-Field Imager (WISPR) observations on board the Parker Solar Probe (PSP) mission when PSP reached a minimum heliocentric distance of \(\approx\) 28 R\({}_{\odot}\). We found 4-5 transient outflows per day over a narrow wedge in the PSP orbital plane, which is close to the solar equatorial plane. However, the elongation versus time map (J-map) analysis supplied only lower limits on the number of released density structures due to the small spatial-scales of the transient outflows and line-of-sight integration effects. In this work we place constraints on the properties of slow solar wind transient mass release from the entire solar equatorial plane.
Methods:We simulated the release and propagation of transient density structures in the solar equatorial plane for four scenarios: (1) periodic release in time and longitude with random speeds; (2) corotating release in longitude, periodic release in time with random speeds; (3) random release in longitude, periodic release in time and speed; and (4) random release in longitude, time, and speed.
Results:The simulations were used in the construction of synthetic J-maps, which are similar to the observed J-map. The scenarios with periodic spatial and temporal releases are consistent with the observations for periods spanning 3\({}^{\circ}\)-45\({}^{\circ}\) in longitude and 1-25 hours. The four considered scenarios have similar ranges (35-45 for the minimum values and 96-127 for the maximum values) of released density structures per day from the solar equatorial plane and consequently from the streamer belt, given its proximity to the solar equatorial plane during the WISPR observation. Our results also predict that density structures with sizes in the range 2-8 R\({}_{\odot}\), covering 1-20 % of the perihelion, could have been detectable by PSP in situ observations during that interval.
Conclusions:Our estimates of the release rates of density structures from the streamer belt represent a first major step towards assessing their contribution to the slow solar wind mass budget and their potential connection with in situ detections of density structures by PSP.
## 1 Introduction
Only a few years after the theoretical prediction of the dynamic expansion of the solar corona into the heliosphere in the form of the solar wind by Parker (1958) was its existence confirmed by the first in situ measurements in the interplanetary space (Gringauz et al., 1960; Neugebauer & Snyder, 1962). At the most basic level, the solar wind at 1 au is classified into fast (\(>\) 500 km/s) quasi-steady and slow (\(<~{}500\) km/s) variable wind streams (e.g., McComas et al., 2000). Although it is now clear that fast winds originate from polar coronal holes, the origins of the slow streams are more elusive (see recent reviews Antiochos et al., 2012; Abbo et al., 2016; Cranmer et al., 2017; Viall & Borovsky, 2020, and references therein). The difference in variability between fast and slow streams seems to indicate differences in the source region and/or release mechanism(s) behind these streams. Naturally, much research on slow wind focuses on how much it deviates from a steady release regime and what spatio-temporal scales and physical mechanisms are involved (e.g. Viall & Borovsky, 2020). Recent observational studies by Antonucci et al. (2023), Baker et al. (2023), and Chitta et al. (2023) supplied evidence in favour of the S-web slow-solar wind model (Antiochos et al., 2011). This model postulates that the slow wind emanates from a network of narrow open-field corridors in the low corona that are connected to a web of separatrices and quasi-separatrix layers in the heliosphere.
The research is focused at the cradle of the solar wind, the solar corona. Coronagraphs and heliospheric imagers provide the main observational means for the study of the near-Sun solar wind. These instruments typically record the visible corona (K-corona) that results from the scattering of the photospheric radiation from the coronal free electrons. Because of the optically thin nature of this emission, the observed signals correspond to the line-of-sight (LoS) integral of the electron density. Therefore, the features seen in a coronagraph or a heliospheric image represent the overlap of density structures along the LoS.
The time series of images from the Large Angle and Spectroscopic Coronagraphs (LASCO; Brueckner et al., 1995) on board the Solar and Heliospheric Observatory mission (SOHO; Domingo et al., 1995) and from the Sun-Earth Connection Coronal and Heliospheric Investigation (SECCHI; Howard et al., 2008) instrument suite on board the twin spacecraft of the Solar TErrestrial RElations Observatory mission (STEREO; Kaiser et al., 2008) have been providing ample evidence of intermittent density outflows in and around streamers. Some of the larger outflows have the shape of a blob, hence they are called streamer blobs (Sheeley et al., 1997; Wang et al., 1998). Streamer blobs
have been shown to be magnetic flux ropes resulting from intermittent magnetic reconnection between closed and open magnetic field lines (e.g. Wang et al., 1998; Sheeley et al., 2009; Antiochos et al., 2011; Higginson & Lynch, 2018). About two to six streamer blobs per day are recorded during solar minimum and maximum conditions, respectively, with typical speeds of a few hundred km/s (e.g. Wang et al., 1998; Sheeley et al., 2009; Rouillard et al., 2010).
The blobs, and more generally transient slow solar wind density structures, exhibit quasi-periodic behaviour with periods ranging from \(\approx\) 1 hour to 19 hours, and with radial sizes from \(\approx\) 12 to 1 R\({}_{\odot}\) or less (e.g. Viall et al., 2010; Viall & Vourlidas, 2015; Sanchez-Diaz et al., 2017; Stansby & Horbury, 2018). Magnetohydrodynamic (MHD) modelling suggests that tearing mode instability may be a candidate mechanism as it can give rise to periodic mass release in streamers with periods from \(\approx\) 20 hours to 1-2 hours (Reville et al., 2020). On the other hand, it should be remembered that these statistics are based on 1 au measurements that suffer considerable projection effects that tend to smear structure and confuse its origins. Specialized campaigns of high-cadence (5 min) and deep exposure (36 sec) SECCHI/COR2 observations have revealed higher spatio-temporal intermittency embedded in streamer flows (DeForest et al., 2018). However, the observing distance and long LoS limit the amount of information that can be extracted from 1 au imaging observations. Outflows in coronagraphic and heliospheric imaging data are often analysed with J-maps (Sheeley et al., 1997). These are maps of elongation versus time of the brightness along a given position angle; they offer an efficient method to assess coronal activity over long time periods.
The Parker Solar Probe (PSP; Fox et al., 2016) mission offers an unprecedented opportunity to break these constraints thanks to its unique design and payload. PSP's orbit brings it ever deeper into the corona, starting at 35 R\({}_{\odot}\) in 2018 and ending at 9.8 R\({}_{\odot}\) in 2025, making it the first mission to measure and image the origins of the solar wind from within the corona. The imaging payload consists of the Wide-field Imager for Parker Solar Probe (WISPR; Vourlidas et al., 2016) whose two telescopes have a combined radial field of view (FoV) of 13.5\({}^{\circ}\)-108\({}^{\circ}\) elongation from the Sun centre. Its vantage point within the corona removes much of the LoS confusion intrinsic to 1 au imagers and allows unprecedented observations of the fine-scale structures of coronal mass ejections and background streamer structures (Howard et al., 2019; Rouillard et al., 2020). On the other hand, the rapidly changing viewpoint introduces a set of novel challenges in the analysis of imaging data that do not pertain to the SOHO or STEREO observations from 1 au.
Nindos et al. (2021, hereafter Paper 1) reported the first kinematic analysis of outflows in the WISPR images by developing a methodology to account for the rapidly varying viewpoint. They identified four to five quasi-linear tracks of outflowing features per day during the fourth solar encounter of PSP (E4), from 23 January 2020 00:44:26 UT to 3 February 2020 23:04:26 UT. The measurements were taken over a one-degree wide wedge along the PSP orbit plane. The heliospheric neutral line marking the streamer belt, during this period, was within 10\({}^{\circ}\) of the solar equatorial plane (Fig. 1), and hence it was close to the PSP orbital plane (the PSP orbit is inclined by 3.4\({}^{\circ}\) relative to the equatorial plane). The detected outflows are therefore associated with streamers. Paper 1, however, did not investigate the implications of these results.
The core goal of our paper is to supply constraints on various properties of transient slow solar wind density structures, such as their spatio-temporal periods and total number. This is achieved via Monte Carlo simulations of synthetic WISPR-I J-maps, spanning E4, and for different scenarios, encapsulating plausible properties of density structure release motivated by pertinent observations and modelling, which are then juxtaposed with the observed J-map in Paper 1. Our paper has the following outline. We describe the simulations in Sect. 2 and the construction of the synthetic J-maps in Sect. 3. We present the results in Sect. 4 and discuss them in Sect. 5. We conclude in Sect. 6.
## 2 Simulations of transient solar wind density structures
### The need for simulations
In J-maps, any outward propagating density structure will create a track with either an upward (accelerating) or a downward (decelerating) curve (see Fig. 2). The slope is a function of the feature's own kinematics and of projection effects. In other words, a given feature's kinematic profile cannot be taken at face value, but requires analysis and the use of various constraints, as has been described in numerous publications (e.g. Sheeley et al., 1997; Davies et al., 2009; Rouillard et al., 2011; Lugaz et al., 2009; Liu et al., 2010; Sheeley & Rouillard, 2010). Depending on the particulars, the analysis results in speed, direction, and solar origin estimates.
Projection effects are always a concern in the analysis of coronal observations, but are especially important in the PSP case due to the rapidly varying heliocentric distance. A key property of Thomson scattering is that the location of maximum scattering efficiency lies on the so-called Thomson surface (TS) (Vourlidas & Howard, 2006), which is a sphere with a diameter equal to the Sun-observer distance. The TS dependence on the Sun-observer distance implies that the recorded intensity in inner heliospheric observations is more heavily weighted towards the near-spacecraft environment than a similar observation from 1 au. In the case of WISPR, this effect turns the instrument into a local imager (Vourlidas et al., 2016); however, it has a rapidly varying sensitivity due to the orbit. The temporal variation of both the LoS integration and the distance between observer and
Figure 1: Synoptic map of the radial component of the heliospheric magnetic field (in nT) at 30 R\({}_{\odot}\), on 29 January 2020, 12 UT, from a medium-resolution run of the MAS code (e.g. Mikic et al., 1999; Riley et al., 2012) corresponding to Carrington rotation 2226. The green line corresponds to the heliospheric neutral line. The simulation data are from [http://www.predsci.com/](http://www.predsci.com/).
feature will affect the shape of the tracks in J-maps to various degrees, and therefore will influence the interpretation. In addition, the small size of transient outflows and the LoS effects discussed result in lower limits for the actual numbers of released density structures from the streamer belt. Therefore, we construct synthetic observations under a variety of scenarios to further explore the potential of WISPR-based analysis.
### Constructing simulated J-maps
With our simple simulations we emulate transient releases of slow solar wind parcels. The simulations consist of idealized density structures released from the solar equatorial plane during the PSP fourth solar encounter (23 January to 3 February 2020, \(\approx 11.9\) days). We consider four scenarios to encapsulate a wide range of plausible properties of density structure release. The scenarios are motivated by the observations and modelling discussed in the Introduction, and are described below. The structures are released at a heliocentric distance of 5 R\({}_{\odot}\) for all four scenarios (e.g. Sheeley et al. 1999). They propagate radially at a constant speed randomly drawn from the interval [100,400] km/s, consistent with the observations of streamer blobs.
* **Scenario 1, periodic release longitudes and release times and random speeds:** In Scenario 1 \(n_{beta}\) density structures are released every \(dt\) minutes from equally spaced longitudes in the equatorial plane. Hence, a total of \(N_{tot}=\frac{n_{beta}T_{E4}}{dt}\) structures are released during the duration \(T_{E4}\) of E4. We consider nine \(dt\) values, equally spaced logarithmically in the interval [1, 25.6] hours, and six \(n_{beta}\) values, equally spaced logarithmically in the interval [4, 128]. This scenario corresponds to density structures released every \(\approx 3\) to 90 degrees across the solar equatorial plane.
* **Scenario 2, corotating release longitudes, periodic release times and random speeds:** In Scenario 2 \(n_{beta}\) equatorial density structures are released every \(dt\) minutes during E4, from equally spaced Carrington (i.e. corotating) longitudes. This scenario corresponds to fixed release sites on a rotating Sun.
* **Scenario 3, random release longitudes and speeds, periodic release times:** In Scenario 3 \(n_{beta}\) density structures are released every \(dt\) minutes from randomly selected equatorial longitudes. We note that here the \(n_{beta}\) and \(dt\) grids and total number of released density structures are the same in Scenarios 1-3.
* **Scenario 4, random release longitudes, release times, and speeds:** In scenario 4 \(N_{tot}\) density structures are released at randomly distributed times and from randomly distributed equatorial longitudes. \(N_{tot}\) takes 13 values equally distributed logarithmically from the interval [20,81920] and was chosen in order to encompass the corresponding intervals employed in Scenarios 1-3. As for Scenarios 1-3 a logarithmic grid was used.
The employed release times are consistent with observations and with the modelling of quasi-periodic mass release discussed in the Introduction, and they span the interval from 1 hour to 20 hours (e.g. Viall et al. 2010; Viall and Vourlidas 2015; Sanchez-Diaz et al. 2017; Stansby and Horbury 2018; Reville et al. 2020). Likewise, for the spatial scales of release (i.e. \(n_{beta}\)) synoptic maps of the coronal brightness in the outer corona using rotational tomography constructed by Morgan and Cook (2020) showed that in some cases streamers repeat themselves over several tens of degrees. Streamers, as discussed in the Introduction, represent potential sites of transient slow solar wind release. Moreover, the observations reported by Sanchez-Diaz et al. (2017) suggest a 15-degree separation of transient slow solar wind outflows, as captured by heliospheric images. Finally, for transient mass releases associated with coronal jets from the quiet Sun, the relevant spatial scale here (e.g. Raouafi et al. 2016, 2023) is the chromospheric network size, which corresponds to \(\approx 2.5\) degrees.
As the \(n_{beta}\) and \(dt\) intervals studied in Scenarios 1-3 both span an order of magnitude, and to save on computing time given the large number of simulated structures and generally the large number of iterations inherent to Monte Carlo methods, we populated these intervals with logarithmic grids. The same applies to the choice of a logarithmic grid for \(N_{tot}\) for Scenario 4. For this scenario the range of \(N_{tot}\) was chosen to be consistent with the \(N_{tot}\) range used in Scenarios 1-3. The number of points in each considered interval was chosen in order to cover the corresponding interval with a reasonable number of points.
Using finer grids of the input parameters \(dt\) and \(n_{beta}\) would lead to higher precision in the determination of the ranges of these parameters that are consistent with the observations. However, this is beyond the scope of the present study; we do not wish to fine-tune \(dt\) and \(n_{beta}\) to the observations, but rather to supply some estimates of the ranges of these parameters that are consistent with the observations.
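As an illustration of the release set-up, the following minimal sketch draws one Monte Carlo realisation of Scenario 4; the encounter duration, release distance, and speed range are taken from the text, while the function and variable names are ours.

```python
# Sketch of one Monte Carlo draw for Scenario 4: N_tot structures with random release
# times over the ~11.9-day encounter, random equatorial longitudes, and constant radial
# speeds drawn from 100-400 km/s, starting at r0 = 5 R_sun.
import numpy as np

R_SUN_KM = 6.957e5
T_E4_S = 11.9 * 86400.0            # encounter duration in seconds

def draw_scenario4(n_tot, rng=np.random.default_rng()):
    t_release = rng.uniform(0.0, T_E4_S, n_tot)        # s
    longitude = rng.uniform(0.0, 2 * np.pi, n_tot)     # rad, equatorial plane
    speed = rng.uniform(100.0, 400.0, n_tot)           # km/s
    return t_release, longitude, speed

def radial_distance(t, t_release, speed, r0_rsun=5.0):
    """Heliocentric distance (R_sun) of each structure at time t (s)."""
    dt = np.maximum(t - t_release, 0.0)
    return r0_rsun + speed * dt / R_SUN_KM
```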
For simplicity and computational efficiency (i.e. to avoid LoS integrations over a large number of discrete structures) (e.g. Patsourakos and Vial 1997; Liewer et al. 2019; Nistico et al. 2020), we treated the density structures as zero-mass and point-like entities. Therefore, the implicit assumption in the construction of the synthetic J-maps is that the density structures are sufficiently
Figure 2: WISPR-1 J-map during PSP E4. The purple crosses correspond to the automatically detected tracks. The figure is modified from Paper 1.
massive to be detected by WISPR. We also did not take into account the 3\({}^{\circ}\) offset between the solar equatorial plane and the PSP orbital plane because visible light structures are generally wider than 3\({}^{\circ}\). Moreover, this small offset does not impact the calculation of elongations of structures released close to the PSP orbital plane (Liewer et al. 2020).
Next, we performed 1000 Monte Carlo simulations for each scenario resulting in a large number of simulations and J-maps. Namely, we obtained 48,000 synthetic J-maps for Scenarios 1-3 and 13,000 for Scenario 4. For each Monte Carlo simulation we drew randomly selected parameter(s) as needed for each scenario's specifics.
## 3 Construction of synthetic J-maps
We constructed J-maps for every Monte Carlo simulation as follows. Using a 20 min cadence to emulate the WISPR-I synoptic programme cadence, we recorded a density structure if the following two conditions were met: (1) its elongation lies within the elongation range of the WISPR-I FoV (that is 13.5-50\({}^{\circ}\)); (2) the density structure lies inside the corresponding TS. We note here that the vantage point of each considered timestamp during E4 in our simulations followed the actual motion of the PSP spacecraft with respect to the Sun during that interval; in other words, each vantage point, and hence corresponding FoV, was determined from the corresponding PSP location.
Next, for each timestamp in our simulations, we counted the number of density structures satisfying the two detection criteria over an elongation grid with a bin size of 2 degrees spanning the WISPR-I FoV, resulting essentially in a histogram of detections versus elongation. Time-stacking these histograms resulted in the simulated J-maps, which, unlike the actual J-maps, do not record brightness. However, non-zero J-map values at any given time-elongation pair imply the presence of density structure(s), and therefore record the outflows. Large (small) J-map values correspond to a large (small) number of density structures for a given elongation-time pair. We applied a box-car filter to smooth local variations in the synthetic J-maps. J-map examples for Scenarios 1 and 4 are shown in Figs. 3 and 4, respectively. Similar maps were obtained for the other two scenarios, but are
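A sketch of this counting step for a single timestamp is given below, using simplified 2D geometry in the orbital plane with the Sun at the origin; the function and array names are ours and the bin edges only approximate the 2-degree grid.

```python
# Sketch of one J-map column: a structure is counted if its elongation lies in the
# WISPR-I range (13.5-50 deg) and it lies inside the Thomson surface (the sphere whose
# diameter is the Sun-observer segment). Positions are 2D arrays in R_sun.
import numpy as np

ELON_EDGES = np.linspace(13.5, 50.0, 19)   # ~2-degree elongation bins across the FoV

def jmap_column(psp_xy, struct_xy):
    psp = np.asarray(psp_xy, dtype=float)
    pts = np.atleast_2d(struct_xy).astype(float)

    # Elongation: angle at the observer between the Sun (origin) and the structure.
    to_sun = -psp
    to_pts = pts - psp
    cosang = (to_pts @ to_sun) / (np.linalg.norm(to_pts, axis=1) * np.linalg.norm(to_sun))
    elong = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    # Thomson-surface test: inside the circle centred at the Sun-observer midpoint.
    centre, radius = psp / 2.0, np.linalg.norm(psp) / 2.0
    inside_ts = np.linalg.norm(pts - centre, axis=1) < radius

    keep = inside_ts & (elong >= ELON_EDGES[0]) & (elong <= ELON_EDGES[-1])
    counts, _ = np.histogram(elong[keep], bins=ELON_EDGES)
    return counts                            # one column of the synthetic J-map
```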
Figure 4: Same as Fig. 3, but for Scenario 4.
Figure 3: Example synthetic J-maps for Scenario 1. The colour-coding from black to green to white corresponds to increasing J-map values. The purple crosses indicate the detected tracks.
not presented here for the sake of brevity. The lack of tracks at the beginning of the synthetic J-maps is due to the finite start time and reflects the time it takes for the first density structures to enter the WISPR-I FoV. The synthetic J-maps exhibit quasi-linear tracks similar to those in the observed J-map of Fig. 2. In addition, the track duration in the simulated J-maps (i.e. the time it takes the associated density structure to traverse the WISPR-I FoV) is of the order of one day, similar to what was found for the observed J-map in Paper 1. Moreover, some tracks in the simulated J-maps in Figs. 3 and 4 become progressively more vertical, which indicates outflows approaching PSP (Liewer et al. 2019). As expected, the track density follows the number of simulated structures (cf. the value of \(N_{tot}\) in the captions of Figs. 3 and 4).
Given the large number of our simulations, we employ a simple method to automatically detect tracks in the synthetic J-maps. The method searches for local maxima in the J-map time series created by averaging the J-map over a \(4^{\circ}\) wide elongation bin centred at the middle of the WISPR-I FoV, which ensures the inclusion of transient outflows that traversed a significant part of the relevant FoV. The minimum separation imposed between detected local maxima is consistent with the temporal width (i.e. the duration) of the tracks of the observed J-map, which resulted from the calculation of the autocorrelation function of the J-map time series at the centre of the WISPR-I FoV in Paper 1.
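The peak-counting step can be sketched as follows; `scipy.signal.find_peaks` is used here purely for convenience (the paper does not specify an implementation), and the array layout is an assumption.

```python
# Sketch of the automated track counting: average the J-map over a 4-degree band
# centred in the WISPR-I FoV and count local maxima separated by at least the assumed
# track duration (120 min, i.e. 6 samples at the 20-min synoptic cadence).
import numpy as np
from scipy.signal import find_peaks

def count_tracks(jmap, elong_edges_deg, cadence_min=20.0, track_duration_min=120.0):
    centers = 0.5 * (elong_edges_deg[:-1] + elong_edges_deg[1:])
    mid = 0.5 * (elong_edges_deg[0] + elong_edges_deg[-1])
    band = np.abs(centers - mid) <= 2.0        # 4-degree wide band at the FoV centre
    series = jmap[:, band].mean(axis=1)        # jmap assumed (n_times, n_elong_bins)

    min_sep = max(1, int(round(track_duration_min / cadence_min)))
    peaks, _ = find_peaks(series, distance=min_sep)
    return len(peaks)
```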
We validate the track-detection algorithm on the observed J-map in Paper 1 (see Fig. 2) and the sample simulated J-maps (Figs. 3 and 4). The purple crosses in Figs. 2-4 give the locations of the detected tracks, which are largely consistent with visual inspection.
## 4 Results
To derive constraints on the properties of the density outflows, we first examine the dependence of the number of detected tracks on the spatio-temporal periods of the released density structures. Figure 5 is a colour representation of the daily average (over the 1000 Monte Carlo simulations per scenario) of detected tracks per day (\(N_{tracks}\)) as a function of release longitude (\(n_{theta}\)) and time (\(dt\)) for Scenarios 1-3.
In addition, the higher the number of released density structures, the larger \(N_{tracks}\). A large portion of the plots corresponds to \(N_{tracks}\) equal to or greater than six (shown in yellow). We come back to this later on. The release periods of transient density structures that are consistent with the observations are given in Table 1. From Fig. 5 and Table 1 we note that rather extended ranges of both the spatial (\(\approx 3^{\circ}\) to \(45^{\circ}\)) and temporal (\(\approx 1\)-\(25\) hours) release periods are consistent with the observations. The temporal periods of the density structures discussed above are consistent with observational (e.g. Viall et al. 2010; Viall & Vourlidas 2015; Sanchez-Diaz et al. 2017) or theoretical (e.g. Reville et al. 2020) studies of periodic mass release from streamers.
Next, we examine the dependence of the daily number of detected tracks on the daily number of released structures for all four scenarios (Fig. 6). We note that the x-axis values for Scenario 4 strictly correspond to the averaged daily number of released density structures. In Fig. 6 we display the daily 5, 50, and 95% percentiles of the distributions of the number of detected tracks for the corresponding 1000 Monte Carlo simulations as a function of the daily number of released structures. It should be noted that the daily numbers of detected tracks in the panels of Fig. 6, which result from the synthetic J-maps, are not expressed in integer values; the same applies to Fig. 8. This choice is dictated by our main goal in building these figures: we use the \(N_{tot}\)-\(N_{tracks}\) curves resulting from the simulations to infer the \(N_{tot}\) values that are consistent with the results of Paper 1. This is achieved by essentially finding the intersections between the \(N_{tot}\)-\(N_{tracks}\) curves and horizontal lines at \(N_{tracks}\) equal to 4 and 5, consistent with the observations in Paper 1. Rounding the \(N_{tracks}\) values would obviously decrease the accuracy with which \(N_{tot}\) is calculated. All scenarios show a similar behaviour. The number of detected tracks rises slowly for small numbers (\(<10\)) of released structures; it then increases rapidly with increasing number of structures before reaching a plateau of six detections beyond about 100-200 released structures per day, depending on the scenario.
We can now estimate the range of released structures that are consistent with the observations. The horizontal dashed lines in Fig. 6 give the observed range of four or five tracks per day. The resulting numbers of released structures for each of the four scenarios are compiled in Table 2. The differences among the four scenarios are small. Scenarios 3 and 4, whose use is based on randomly drawn distributions for input parameters of the simulations besides their speed, result in the wider ranges.
Figures 5 and 6 indicate that the most important factor in determining the number of track detections is the number of released density structures. The dependence on the spatio-temporal release periods is much weaker since multiple periods across extended ranges correspond to the same number of detections.
A salient assumption in our analysis is that each track corresponds to a single density structure. This translates to a lower
Figure 5: Average daily number of detected tracks (\(N_{tracks}\)) in synthetic J-maps from 1000 Monte Carlo simulations for Scenarios 1 (top), 2 (middle), and 3 (bottom) as a function of the number of release longitudes (\(n_{theta}\)) and repeat time (\(dt\)) of the corresponding density structures.
limit on the number of released density structures that are consistent with the observations. This conclusion is supported by the reasonable expectation that the properties of some structures (size or intensity) (see e.g. the simulations of Higginson & Lynch 2018) may drive them below WISPR's detection thresholds in terms of spatial resolution, cadence, and/or sensitivity. We note, however, that a close comparison of the WISPR time series with the associated J-map shows that there is a one-to-one correspondence between density structures in the movie and the corresponding J-map tracks.
\begin{table}
\begin{tabular}{l c c} \hline
**scenario** & **spatial period (\({}^{\circ}\))** & **temporal period (hours)** \\
1: periodic release longitudes and release times and random speeds & 3-45 & 1.5-25 \\
2: corotating release longitudes, periodic release times and random speeds & 3-45 & 1.5-25 \\
3: random release longitudes, periodic release times and random speeds & 3-45 & 1.0-25 \\ \hline \end{tabular}
\end{table}
Table 1: Spatio-temporal release periods of density structures for Scenarios 1–3 leading to four or five detected tracks in the synthetic J-maps per day, as resulting from the observed J-map.
\begin{table}
\begin{tabular}{l c} \hline
**scenario** & **range** \\
1: periodic release longitudes and release times and random speeds & 44-113 \\
2: corotating release longitudes, periodic release times and random speeds & 42-96 \\
3: random release longitudes, periodic release times and random speeds & 37-119 \\
4: random release longitudes, release times and speeds & 35-127 \\ \hline \end{tabular}
\end{table}
Table 2: Daily number of released structures in four simulated environments that are consistent with the number of detections in WISPR J-maps in Paper 1.
Figure 6: Quartiles of the daily number of detected tracks for the 1000 Monte Carlo simulations (5% in orange, 50% in red line, and 95% in blue) for Scenarios 1 (top left), 2 (top right), 3 (bottom left), and 4 (bottom right). The two horizontal dashed lines correspond to four and five daily J-map tracks, consistent with the Paper 1 observations.
## 5 Discussion
The weak dependence of the number of detected tracks on both the considered scenario and the spatio-temporal release periods is unsurprising. We use the number of tracks to compare the synthetic and observed J-maps, and this quantity is expected to depend primarily on the number of released density structures rather than on the specifics of their release. This is clearly seen in Fig. 6, which shows a clear dependence of the number of detected tracks on the number of released density structures.
However, at relatively high daily numbers of released density structures (\(>\approx 200\)), the number of detected synthetic tracks becomes insensitive to that parameter. A factor influencing this could be the observing image cadence. In Fig. 7 we display two synthetic J-maps produced by Scenario 1 where we consider the same speed (200 km/s) for all density structures. The only difference between the two J-maps is the cadence: 20 min (upper panel) versus 1 min (lower panel). The total number of released structures is \(\approx 220\) for both simulations and therefore lies in the 'saturated' portion of the number of released structures (\(N_{ds}\))-number of detected tracks (\(N_{tracks}\)) curves in Fig. 6. Clearly, the high-cadence J-map recovers more structures, something to be investigated with higher-cadence observations.
Our automated track detection scheme relies on the track duration inferred from the observed J-map tracks of Paper 1, so we examine how sensitive our analysis is to the choice of this parameter. In Fig. 8 we display the 50% quartiles of the \(N_{tracks}\) distributions from the 1000 Monte Carlo simulations of Scenario 4 as a function of \(N_{ds}\), for four different choices of the J-map track duration: 60, 120, 240, and 480 minutes. We recall that in our analysis we used a J-map track duration of 120 minutes (green line in Fig. 8). As expected, the shorter the considered track duration, the larger the number of detected tracks. For instance, the shortest considered track duration (blue line) gives rise to a maximum \(N_{tracks}\) equal to 7; conversely, the longest considered track duration (brown line) gives rise to maximum \(N_{tracks}\) below 4, which is outside the range of the observations in Paper 1. However, the intersections of the horizontal dashed lines at \(N_{tracks}\) of 4 and 5 with the blue and green lines correspond to rather small differences, of less than a factor of two, in \(N_{ds}\). The same applies when no smoothing is applied to the J-map and the same track duration as in our original analysis is used (purple curve in Fig. 8). Taken together, these tests suggest the robustness of our approach. In retrospect, our framework could also be applied to periods with faster and smaller density structures (i.e. when solar activity is higher than at E4). Finally, it is possible that the plateaus of the \(N_{tot}\)-\(N_{tracks}\) curves at large values of \(N_{tot}\) discussed above could also result from the line-of-sight overlap of multiple density structures.
Figure 8: 50 % quartiles of the daily number of detected tracks for the 1000 Monte Carlo simulations for Scenario 4 for J-map track duration (i.e. the interval between detected tracks; see Sect. 3) of 60, 120, 240, and 480 min (blue, green, red, brown lines, respectively). The purple line corresponds to results when no smoothing is applied to the J-maps and the same temporal width as in our original study is used. The two horizontal dashed lines correspond to four and five daily J-map tracks, consistent with the observed J-map in Paper 1.
Figure 7: Synthetic J-maps resulting from Scenario 1 considering the same speed (200 km/s) for all density structures. The only difference between the two J-maps is the cadence: 20 min (upper panel) and 1 minute (lower panel).
Figure 9: 50 % quartiles of the percentage of the E4 duration consistent with in situ detection of density structures with radial sizes, 2 R\({}_{\odot}\)(green line), 4 R\({}_{\odot}\)(orange line), 6 R\({}_{\odot}\)(red line), and 8 R\({}_{\odot}\)(blue line) as a function of the daily number of released density structures from the equatorial plane for the 1000 Monte Carlo simulations for Scenario 4. The horizontal dashed lines (from bottom to top) correspond to 1, 5, 10, and 20 % of the E4 duration.
The derived number of released density structures from the solar equatorial plane is about 35-130, which is clearly above the estimates of Sanchez-Diaz et al. (2017). These authors used observations taken by HI-1/SECCHI on board the STEREO A spacecraft, corresponding to a period of highly inclined heliospheric neutral lines during solar maximum conditions, and therefore enabling a face-on view of the streamer belt. Using a combination of different maps, Sanchez-Diaz et al. (2017) found that patches of enhanced emission at 30 R\({}_{\odot}\), which they attributed to plasma blobs, occurred every 19.5 hours and were separated by 15 degrees in latitude. These findings suggested the release of 24 density structures from the streamer belt. One reason for the difference in the number of released structures may be linked to the fact that a corotating interaction region (CIR) was observed during the Sanchez-Diaz et al. (2017) observations. However, a possibly much more important factor is that the Sanchez-Diaz et al. (2017) results are based on emission patches, which, as they also discuss, could be the amalgamation of several individual smaller density structures. The size of the density structures of the Sanchez-Diaz et al. (2017) study is \(\approx\) 12 \(\times\) 5 R\({}_{\odot}\), whereas the density structures of our WISPR observations could be significantly smaller; for example, the density structure indicated with an arrow in panel (b) in Fig. 8 of Paper 1 is \(\approx\) 8 degrees in elongation and 4 degrees in latitude (which corresponds to \(\approx\) 5.6 \(\times\) 2.8 R\({}_{\odot}\)), as viewed from a PSP viewpoint of 40 R\({}_{\odot}\).
We now consider the implications from our J-map simulations for PSP in situ measurements of large-scale density structures during E4. We assume that a simulated density structure would be detected in situ by PSP if its distance from PSP is comparable to the size of the density structure. Given that remote sensing observations are generally biased towards large-scale density structures, we considered density structure-PSP distances in the range of 2-8 R\({}_{\odot}\), consistent with inferred spatial scales of density structures from the WISPR-I E4 observations, as discussed above. By counting the number of timestamps leading to detections, we were able to estimate the percentage of the E4 orbit that would contain in situ structure detections. We note here that there could be timestamps populated by multiple density structures.
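A minimal sketch of this bookkeeping, with hypothetical array shapes, is:

```python
# Sketch of the in situ detection estimate: a timestamp counts as a detection if at
# least one simulated structure lies within `size_rsun` of the PSP position; the quoted
# percentage is the fraction of such timestamps over the encounter.
import numpy as np

def in_situ_fraction(psp_positions, structure_positions, size_rsun):
    """psp_positions: (T, 2); structure_positions: (T, N, 2), both in R_sun."""
    d = np.linalg.norm(structure_positions - psp_positions[:, None, :], axis=-1)
    detected = (d <= size_rsun).any(axis=1)
    return 100.0 * detected.mean()
```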
Our estimates are shown in Fig. 9, where the percentage of the E4 orbit with in situ structure detections is plotted against the number of released density structures in the solar equatorial plane for Scenario 4. The coloured lines correspond to the 50 % quartiles of the distributions resulting from the 1000 Monte Carlo simulations for density structure-PSP distances (i.e. density structure sizes) of 2, 4, 6, and 8 R\({}_{\odot}\). The horizontal dashed lines correspond to 1, 5, 10, and 20 % of the E4 orbit. The inferred daily number of released density structures for Scenario 4 from the analysis of the J-maps in the previous section (i.e. 35-127; see Fig. 6 and Table 2) corresponds to in situ detections for 1-20 % of the E4 duration. This prediction requires further investigation via a detailed analysis of the PSP E4 in situ density observations in a manner similar to past studies (e.g. Viall et al. 2008; Stansby & Horbury 2018) for observations at 1 au and 0.3-0.5 au, respectively.
## 6 Summary
Determining the global properties of transient slow solar wind density structures is a key goal in the study of the slow solar wind. To this end, we performed simple Monte Carlo simulations for different scenarios regarding the properties of the released density structures, and compared the resulting synthetic J-maps against an observed J-map during PSP's fourth perihelion. Our findings can be summarized as follows:
1. The synthetic J-maps exhibit quasi-linear tracks similar to the observed tracks.
2. For scenarios employing periodic spatio-temporal density structure release, multiple spatio-temporal period pairs give rise to the same daily number of synthetic tracks as seen in the observations. The periods span extended intervals (i.e. 3\({}^{\circ}\)-45\({}^{\circ}\) and 1-25 hours).
3. The four considered scenarios are consistent with similar ranges (35-45 for the minimum values and 96-127 for the maximum values) of released density structures per day from the solar equatorial plane (see Table 2) and result in the same number of daily detected tracks as the observations in Paper 1. For relatively high release rates (\(>\approx\) 200 per day), the number of detected synthetic tracks becomes insensitive to the release rate, and hence the exact number of released density structures cannot be inferred from the WISPR-I E4 observations.
4. Our results predict that PSP in situ detections of density structures, with sizes in the range 2-8 R\({}_{\odot}\), will occur over 1-20 % of the E4 orbit.
The estimates of the global release rates of density structures in this paper represent the first step in assessing their potential contribution to the slow solar wind mass budget. This will be achieved by calculations of the mass content of the density structures observed by WISPR, which will be confronted with the corresponding in situ observations, and by further J-map calculations for other PSP as well as Solar Orbiter (Muller et al. 2013) perihelia, in the latter case using EUI (Rochus et al. 2020), METIS (Antonucci et al. 2020), and SolO-HI (Howard et al. 2020) observations. Finally, it is important to compare PSP remote-sensing and in situ measurements of transient density structures to investigate whether the predictions of this work are correct.
###### Acknowledgements.
The authors would like to thank the referee for useful comments and suggestions. AV is supported by WISPR Phase-E funding to APL. We acknowledge the work of the PSP operations team. Parker Solar Probe was designed, built, and is now operated by the Johns Hopkins Applied Physics Laboratory as part of NASA's Living with a Star (LWS) program (contract NNN06AA01C). We also acknowledge the efforts of the WISPR team developing and operating the instrument. SP and AN would like to thank AV for the invitation and hospitality during their visit to APL when this work was initiated. SP and AN acknowledge support by the ERC Synergy Grant 'Whole Sun' (GAN: 810218).
|
2304.06004 | Astrocytic gliotransmission as a pathway for stable stimulation of
post-synaptic spiking: Implications for working memory | The brain consists not only of neurons but also of non-neuronal cells,
including astrocytes. Recent discoveries in neuroscience suggest that
astrocytes directly regulate neuronal activity by releasing gliotransmitters
such as glutamate. In this paper, we consider a biologically plausible
mathematical model of a tripartite neuron-astrocyte network. We study the
stability of the nonlinear astrocyte dynamics, as well as its role in
regulating the firing rate of the post-synaptic neuron. We show that astrocytes
enable storing neuronal information temporarily. Motivated by recent findings
on the role of astrocytes in explaining mechanisms of working memory, we
numerically verify the utility of our analysis in showing the possibility of
two competing theories of persistent and sparse neuronal activity of working
memory. | Valentin Würzbauer, Kerstin Lenk, Matin Jafarian | 2023-04-12T17:29:45Z | http://arxiv.org/abs/2304.06004v2 | # Astrocytic gliotransmission as a pathway
###### Abstract
The brain consists not only of neurons but also of non-neuronal cells, including astrocytes. Recent discoveries in neuroscience suggest that astrocytes directly regulate neuronal activity by releasing gliotransmitters such as glutamate. In this paper, we consider a biologically plausible mathematical model of a tripartite neuron-astrocyte network. We study the stability of the nonlinear astrocyte dynamics, as well as its role in regulating the firing rate of the post-synaptic neuron. We show that astrocytes enable storing neuronal information temporarily. Motivated by recent findings on the role of astrocytes in explaining mechanisms of working memory, we numerically verify the utility of our analysis in showing the possibility of two competing theories of persistent and sparse neuronal activity of working memory.
Control in neuroscience; Stability of nonlinear systems; Networked systems

Footnote †: The work of M. Jafarian is supported by the Marie Sklodowska-Curie Fellowship, project ReWoMeN.
## 1 Introduction
While the role of neurons as the key component of the nervous system has been the subject of tremendous research since Cajal drew the first brain cells in the late 19th century, the function of astrocytes as active partners in neural signaling pathways has only become evident in the last two decades (Llinas, 2003). The concept of the tripartite synapse, which includes an astrocyte as well as a pre- and a postsynaptic neuron as an extension of the classical neuron-to-neuron communication, was introduced in Araque et al. (1999). Until this point, glial cells, the class of cells to which astrocytes belong, had only been considered supportive cells that ensure the nutrition and structural support of neurons.
Communication between neurons occurs in the form of electrical/chemical signals (spiking) via the so-called synapses. Chemical synapses are often enwrapped or closely contacted by astrocytes. Astrocytes' primary signal for information transfer is calcium. When the astrocyte's calcium level elevates, it can release transmitter molecules that directly act on neurons. In fact, a sequence of action potentials triggers the release of messenger substrates of the presynaptic neuron. The neurotransmitters such as glutamate in the synaptic cleft activate membrane channels of both the postsynaptic neurons triggering a postsynaptic potential and the adjacent astrocyte. Then, the astrocytes start a cascade reaction that leads to calcium (\(Ca^{2+}\)) elevations. The astrocyte dynamics seemingly differ significantly from the neuronal dynamics with respect to space and time. While an action potential and the corresponding synaptic neurotransmission occur in the range of a few milliseconds, the elevation of intracellular \(Ca^{2+}\) concentration happens within several seconds. Experiments have shown the release of gliotransmitters by astrocytes in response to high \(Ca^{2+}\) levels affecting both the presynaptic and the postsynaptic neuron of the synapse (Savtchouk and Volterra, 2018). It has to be mentioned that gliotransmission in physiological astrocytes is still a matter of debate today (Fiacco and McCarthy, 2018; Savtchouk and Volterra, 2018). Recently, Gordleeva et al. (2021) and De Pitta and Brunel (2022) have shown that neuron-astrocyte network models with varying biological accuracy are able to store neuronal stimulation via an astrocyte mechanism that enhances synaptic efficacy.
The above findings have motivated the investigation of the role of astrocytes in regulating the neural mechanisms underlying brain functioning, e.g. in Parkinson's disease and epilepsy, as well as cognitive functions such as working memory (Villani et al., 2020; De Pitta and Brunel, 2022). Working Memory (WM) is a general-purpose cognitive system responsible for temporarily processing information in service of higher-order cognition such as reasoning and decision-making. The neuronal activity within a limited temporal interval is assumed to form the mechanism of storing information in the WM (Adamsky and Goshen, 2018). In fact, several theories exist about the underlying mechanism that stores information. These theories can be divided into two main classes of persistent and sparse neuronal activities (Barak and Tsodyks, 2014). Both persistent and sparse activity have been observed in WM experiments with primates (Funahashi et al., 1989; Lundqvist et al., 2018). The aforementioned slow reaction of astrocytic gliotransmission is an interesting candidate mechanism for
WM. The recent findings of the regulatory role of astrocytes in WM networks have shed new light on the debates between persistent and sparse activities (Gordleeva et al., 2021; De Pitta and Brunel, 2022).
In this paper, we study the stability of the nonlinear astrocyte dynamics in a tripartite model and show its effects on modulating the spiking rate of the postsynaptic neuron. We will also numerically verify the obtained insights in a large-scale WM network. Our results show that astrocytes can enable both sparse and persistent neural activities for the WM network.
Compared with the literature, our contribution is twofold. First, different from Gordleeva et al. (2021) and De Pitta and Brunel (2022) where the effect of gliotransmission towards the presynaptic neuron has been studied, we consider the integration of the slow inward current to the postsynaptic neuron and the examination of persistent and sparse firing depending on the strength of gliotransmission. Second, we perform stability analysis for a detailed biologically plausible nonlinear model of an astrocyte considered in Gordleeva et al. (2021). We show that the nonlinear model is a positive system and that it is ultimately bounded. We characterize the ultimate bound and show the existence of a locally asymptotically stable equilibrium in the positive orthant. To the best of our knowledge, stability analysis of astrocyte dynamics has only been considered in De Pitta and Brunel (2022), which studied the presynaptic stimulation effects using linearization of a model that is less biologically detailed than the model we consider. Furthermore, we provide numerical results to indicate the implications of our results in supporting the possibility of the co-existence of both sparse and persistent neural activities in WM.
The paper is organized as follows. Section 2 reviews the tripartite model (Gordleeva et al., 2021), and presents our spiking rate model and assumptions. In Section 3, both stability analysis and its corresponding numerical validation are presented. A large-scale simulation shows the performance of a neuron-astrocyte network performing WM tasks in Section 4. Finally, the paper is concluded in Section 5.
**Notations:** Let \(\mathbb{R}_{+}=[0,\infty)\), and \(\mathbb{R}_{+}^{n}\) is the set of \(n\)-tuples for which all components belong to \(\mathbb{R}_{+}\). Denote the boundary of \(\mathbb{R}_{+}^{n}\) by \(bd(\mathbb{R}_{+}^{n})\).
**Definition 1**: (Positive systems). _System \(\dot{x}=f(x(t))\) is positive if and only if \(\mathbb{R}_{+}^{n}\) is forward invariant (De Leenheer and Aeyels, 2001)._
## 2 Modeling
We consider a tripartite synapse including a presynaptic and a postsynaptic neuron as well as an astrocyte as shown in Fig. 1. We study the neuron-astrocyte network model by Gordleeva et al. (2021) which combines the neuron model by Izhikevich (2003) and the astrocyte models by Nadkarni and Jung (2003) and Ullah et al. (2006).
### Neuronal Dynamics
The Izhikevich neuron model (Izhikevich, 2003), in combination with the glutamate release dynamics proposed in Gordleeva et al. (2012), is given by
\[\dot{V} =0.04V^{2}+5V-U+140+I_{\text{app}}+I_{\text{syn}}+I_{\text{astro}}, \tag{1}\] \[\dot{U} =a\left(bV-U\right),\] \[\dot{G} =-\alpha_{\text{glu}}G+k_{\text{glu}}\Theta\left(V-30mV\right),\]
with \(V\), \(U\), and \(G\) denoting the membrane potential, the recovery variable of the membrane potential, and the released glutamate in the synaptic cleft, where \(\Theta(x)\) is the Heaviside step function. Additionally, if \(V\geq 30\) mV, the neuron spikes, and \(V\) and \(U\) are reset to \(c\) and \(U+d\), respectively. The constant parameters \(a\), \(b\), \(c\), and \(d\) are chosen to resemble the fast-spiking behavior that is mostly found in the prefrontal cortex. The currents \(I_{app}\), \(I_{syn}\), and \(I_{astro}\) denote an external current, the synaptic current via neurotransmission, and the astrocytic current via gliotransmission.
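For illustration, a forward-Euler sketch of (1) with the spike-and-reset rule is given below; the fast-spiking parameters and the glutamate constants are placeholder values chosen only to show the structure, not the values used in the paper.

```python
# Sketch of an Euler integration of the neuron model (1) with the spike/reset rule.
# Parameters (a, b, c, d, alpha_glu, k_glu) are illustrative placeholders only.
import numpy as np

def simulate_neuron(I_in, dt=0.1, a=0.1, b=0.2, c=-65.0, d=2.0,
                    alpha_glu=0.5, k_glu=600.0):
    """I_in: total input current per step (I_app + I_syn + I_astro); dt in ms."""
    n = len(I_in)
    V, U, G = np.empty(n), np.empty(n), np.empty(n)
    v, u, g = c, b * c, 0.0
    for k, I in enumerate(I_in):
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        g += dt * (-alpha_glu * g + k_glu * (v >= 30.0))   # Heaviside term of G
        if v >= 30.0:                                      # spike: reset V and U
            v, u = c, u + d
        V[k], U[k], G[k] = v, u, g
    return V, U, G
```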
### Astrocytic Dynamics
We consider a three-state single astrocyte model given by Li and Rinzel (1994) with extensions by Nadkarni and Jung (2003) and Ullah et al. (2006). The model states are \([IP_{3}]\), \([Ca^{2+}]\), and \(h\). The first two describe the intracellular concentrations of \(IP_{3}\), which is a second-messenger substrate, and of \(Ca^{2+}\) ions. The variable \(h\) denotes the fraction of active \(IP_{3}\) receptors connecting the endoplasmic reticulum with the intracellular space. Denoting \([IP_{3}]\), \([Ca^{2+}]\), and \(h\) by \(x_{1}\), \(x_{2}\), and \(x_{3}\), respectively, the astrocyte dynamics (Gordleeva et al., 2021) obey
\[\dot{x}_{1} =\frac{x_{1}^{*}-x_{1}}{\tau_{IP_{3}}}+\frac{v_{4}\left(x_{2}+(1-\alpha)k_{4}\right)}{x_{2}+k_{4}}+u(t), \tag{2}\] \[\dot{x}_{2} =-k_{1}x_{2}+c_{1}v_{1}(x_{1}x_{2}x_{3})^{3}\frac{\frac{c_{0}}{c_{1}}-(1+\frac{1}{c_{1}})x_{2}}{(x_{1}+d_{1})^{3}(x_{2}+d_{5})^{3}}\] \[\quad-\frac{v_{3}x_{2}^{2}}{k_{3}^{2}+x_{2}^{2}}+\frac{v_{6}\,x_{1}^{2}}{k_{2}^{2}+x_{1}^{2}}+c_{1}v_{2}\left(\frac{c_{0}}{c_{1}}-\left(1+\tfrac{1}{c_{1}}\right)x_{2}\right),\] \[\dot{x}_{3} =a_{2}\left(d_{2}\frac{x_{1}+d_{1}}{x_{1}+d_{3}}\left(1-x_{3}\right)-x_{2}x_{3}\right),\]
where all the coefficients are positive numbers provided in Appendix A. The system's input, connecting the presynaptic neuron and the astrocyte, is the \(IP_{3}\) production induced by presynaptic glutamate release, i.e., \(u(t)=J_{\text{glu}}(t)\), given by
\[J_{\text{glu}}=A_{\text{glu}}\Theta(G-G_{thr}), \tag{3}\]
Figure 1: Block diagram representation of a tripartite synapse: The presynaptic neuron (Pre N) signals to the postsynaptic neuron (Post N) and the astrocyte. The astrocyte can release gliotransmitters towards the postsynaptic neuron.
where \(J_{\text{glu}}\) is triggered once the presynaptic glutamate \(G\) exceeds the threshold value \(G_{\rm thr}\)1. Finally, the inward current of the postsynaptic neuron under the effect of gliotransmission, \(I_{\rm astro}\), is modeled as described by Nadkarni and Jung (2003). Define \(y=x_{2}/{\rm nM}-196.69\); then
Footnote 1: The condition for glutamate-induced \(IP_{3}\) production \(J_{glu}\) used in the analysis of the tripartite synapse, as described in Equation (3), is modeled slightly differently from Gordleeva et al. (2021).
\[I_{\rm astro}\ =2.11\Theta(\ln y)\ln y. \tag{4}\]
### Complexity Reduction and Assumptions
We are interested in the stability analysis of the interconnected model in Fig. 1, where the presynaptic neuron provides input to the two other blocks. Both the astrocyte dynamics and the neuron dynamics are nonlinear. To reduce the complexity of the neuronal dynamics, we consider the dynamics of the firing rate of the postsynaptic Izhikevich neuron, denoted by \(x_{4}\), and model them as
\[\dot{x}_{4}=-x_{4}+f(\eta I_{astro}-I_{thr}), \tag{5}\]
where \(f(\cdot)\) is an odd function with \(f(0)=0\), \(I_{astro}\) represents the astrocytic current, and \(I_{thr}\) is a threshold value. The parameter \(\eta\) is introduced to differentiate between weak and strong gliotransmission (De Pitta et al., 2011). The derivation of the function \(f(\cdot)\) from the Izhikevich neuron model is detailed in Appendix B.
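A small sketch of the coupling terms (3)-(5) is given below. The threshold \(G_{\rm thr}\), the scaling \(\eta\), the threshold \(I_{\rm thr}\), and the threshold-linear stand-in for \(f(\cdot)\) are placeholders chosen only for illustration; the actual \(f(\cdot)\) derived from the Izhikevich neuron is given in Appendix B.

```python
import math

def J_glu(G, A_glu=5.0, G_thr=0.5):
    """Glutamate-induced IP3 production (3); A_glu and G_thr are placeholder values."""
    return A_glu if G > G_thr else 0.0

def I_astro(x2_nM):
    """Slow inward current (4); x2_nM is the astrocytic Ca2+ concentration in nM.
    Theta(ln y) * ln y is nonzero only when y > 1 (ln y undefined for y <= 0 is treated as 0)."""
    y = x2_nM - 196.69
    return 2.11 * math.log(y) if y > 1.0 else 0.0

def rate_step(x4, I_ast, dt, eta=1.0, I_thr=6.0, f=lambda z: max(z, 0.0)):
    """Forward-Euler step of the firing-rate model (5) with an illustrative threshold-linear f."""
    return x4 + dt * (-x4 + f(eta * I_ast - I_thr))
```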
Moreover, we impose the following mild assumption on the input signal \(J_{glu}\) of the astrocyte.
**Assumption 1**: _The input signal \(J_{glu}(G)\) is a smooth function, such that \(0\leq J_{glu}\leq A_{glu},\forall G\)._
## 3 Stability Analysis
In this section, the effect of gliotransmission on the postsynaptic neuronal firing rate is studied. First, we focus on the stability of the astrocyte dynamics. Second, we investigate the effect of the astrocyte output, its second state, on the firing rate of the postsynaptic neuron. We then verify the results via numerical simulation of the extended model, composed of the astrocyte and the postsynaptic neuron, as well as of the tripartite model.
### Stability Analysis of the Astrocyte Dynamics
Define \(x=[x_{1}\ x_{2}\ x_{3}]^{\top}\). The astrocyte dynamics given in (2) admits the following general representation
\[\dot{x}=f(x)+b+Bu(t), \tag{6}\]
where \(b=[\frac{x_{1}^{*}}{\tau_{IP_{3}}}\ v_{2}c_{0}\ 0]^{T}\), \(B=[1\ 0\ 0]^{T}\), and \(f(x)\) captures all nonlinear terms in the dynamics of \(x_{1},x_{2},x_{3}\) as presented in (2). Since the input to the system (2) obeys Assumption 1, we consider it as a bounded positive additive term. We use the following Lemma derived from Property (7) in (De Leenheer and Aeyels, 2001) to show the positivity of the astrocyte dynamics.
**Lemma 1**: _System (6) with the input based on Assumption 1 is positive if and only if_
\[\mathbf{P}:\forall x\in bd(\mathbb{R}_{+}^{n}):x_{i}=0\Rightarrow f_{i}(x)\geq 0. \tag{7}\]
**Proposition 1**: _The nonlinear system in (6) is positive. That is, \(\forall x(0)\in\mathbb{R}_{+}^{n}\) it holds that \(x(t)\in\mathbb{R}_{+}^{n}\)._
We verify that system (6) satisfies the property in Lemma 1. Since all coefficients in (2) are positive, substituting \(x_{2}=0\) in (2) gives \(\dot{x}_{2}>0\); thus, \(x_{2}(t)\geq 0\) holds. Substituting \(x_{1}=0\) and \(x_{2}\geq 0\) into the \(x_{1}\) dynamics gives \(\dot{x}_{1}>0\); hence \(x_{1}(t)\geq 0\). Similarly, we conclude that \(x_{3}(t)\geq 0\), which ends the proof.
We now continue by proving uniform ultimate boundedness (Definition 4.6, and Theorem 4.18 (Khalil, 2015)) for system (6). We benefit from the positivity of the system in conducting the proof.
**Proposition 2**: _Consider astrocyte dynamics in (6) with \(x(0)>0\), under Assumption 1. The system is uniformly ultimately bounded, and its solution converges to the set_
\[\Omega=\{x\in\mathbb{R}_{+}^{3}:0\leq x_{1}\leq\mu_{1};0\leq x_{2}\leq\mu_{2};0\leq x_{3}\leq 1\},\]
_with \(\mu_{1}=x_{1}^{*}+\tau_{IP_{3}}(v_{4}+J_{\rm glu})\) and \(\mu_{2}=\frac{v_{6}+c_{0}(v_{1}-v_{2})}{k_{1}+v_{2}(1+c_{1})}\)._
Let us first rewrite the dynamics in (2) as follows
\[\dot{x}=\left[\begin{array}{c}\frac{x_{1}^{*}-x_{1}}{\tau_{IP_{3}}}+N_{1}+J_{ \rm glu}\\ -\beta_{1}x_{2}-\beta_{2}+N_{2}(\frac{c_{0}}{c_{1}}-(1+\frac{1}{c_{1}})x_{2})- N_{3}+N_{4}\\ a_{2}(N_{5}(1-x_{3})-x_{2}x_{3})\end{array}\right] \tag{8}\]
where \(\beta_{1}=k_{1}+v_{2}(1+c_{1})\), \(\beta_{2}=c_{0}v_{2}\), and each \(N_{i}\) denotes a bounded nonlinear term in (2). Now, consider the Lyapunov function \(V(x)=\frac{1}{2}(x_{1}^{2}+x_{2}^{2}+x_{3}^{2})\). First, define \(V_{3}=\frac{1}{2}x_{3}^{2}\), where \(\dot{V}_{3}=x_{3}\dot{x}_{3}=a_{2}N_{5}x_{3}(1-x_{3})-a_{2}x_{2}^{2}x_{3}^{2}\). Since \(0<\frac{d_{1}d_{2}}{d_{3}}\leq N_{5}\leq d_{2}\), and based on Proposition 1, \(x_{2}\geq 0,x_{3}\geq 0\) hold, we conclude that \(\dot{V}_{3}<0\) if \(x_{3}>1\). Thus, \(x_{3}\) converges to the interval \([0,1]\). We continue by computing the bounds of each \(N_{i}\), considering \(x_{1},x_{2}\geq 0,0\leq x_{3}\leq 1\). We obtain \((1-\alpha)v_{4}\leq N_{1}\leq v_{4};0\leq N_{2}\leq c_{1}v_{1};0\leq N_{3}\leq v _{3};0\leq N_{4}\leq v_{6}\). Computing the derivative of \(V_{1,2}(x)=\frac{1}{2}(x_{1}^{2}+x_{2}^{2})\), gives
\[\dot{V}_{1,2}\leq-\frac{x_{1}^{2}}{\tau_{IP_{3}}}+x_{1}(\frac{x_{1}^{*}}{\tau_{IP _{3}}}+v_{4}+J_{\rm glu}) \tag{9}\]
\[-\beta_{1}x_{2}^{2}+(v_{6}+c_{0}(v_{1}-v_{2}))x_{2}.\]
So, if \(x_{1}>x_{1}^{*}+\tau_{IP_{3}}(v_{4}+J_{\rm glu})\) and \(x_{2}>\frac{v_{6}+c_{0}(v_{1}-v_{2})}{k_{1}+v_{2}(1+c_{1})}\), then \(\dot{V}_{1,2}<0\) holds. The latter two conditions together with \(x_{3}>1\) guarantee that \(\dot{V}\leq 0\) holds, since \(V=V_{1,2}+V_{3}\). Thus, the solution of the system converges to the set \(\Omega\) which ends the proof.
As we discussed in the proof of Proposition 2, the nonlinear terms in (2) are bounded. Hence, system (6) can be approximated by the following linear system
\[\dot{x}\approx Cx+d+b+Bu(t), \tag{10}\]
where \(B\) and \(b\) are as defined for (6), and \(d\) represents a constant vector approximating the nonlinear terms. Notice that matrix \(C\) is a negative definite diagonal matrix, which guarantees local asymptotic stability of the equilibrium of the system (10).
**Corollary 1**: _There exists a locally asymptotically stable equilibrium belonging to \(\mathbb{R}_{+}^{n}\) for the system (6)._
Numerical examples corresponding to the above result are given in the Appendix.
### Stability of firing rate of the post-synaptic neuron
We now continue by connecting the astrocyte output to the input of the postsynaptic neuron (as shown in Fig. 2).
Consider the dynamics of \(x_{4}\) in (5). For \(\eta I_{astro}<I_{thr}\), the map \(f(\cdot)\) returns zero, so \(x_{4}=0\) is asymptotically stable. Moreover, \(x_{4}\) is bounded for bounded \(I_{astro}\), which is a function of \(x_{2}\); hence, \(x_{4}\) is input-to-state stable (Khalil, 2015). In particular, for constant \(\eta I_{astro}>I_{thr}\), \(x_{4}\) converges to an asymptotically stable equilibrium in \(\mathbb{R}_{+}\). The following corollary summarizes these statements.
**Corollary 2**: _The firing rate of the postsynaptic neuron in (5) is input-to-state stable. Moreover, there exists an asymptotically stable equilibrium in the positive orthant for the extended system composed of (5) and (6)._
### Numerical Analysis
In this section, we consider three cases to account for various possible scenarios.
* Case 1: No stimulation of the astrocyte, i.e., \(J_{glu}=0\)
* Case 2: Stimulation of the astrocyte, i.e., \(J_{glu}=5\frac{\mu M}{s}\), and strong gliotransmission, i.e., \(\eta=100\%\)
* Case 3: Stimulation of the astrocyte, i.e., \(J_{glu}=5\frac{\mu M}{s}\), and weak gliotransmission, i.e., \(\eta=25\%\)
The two scenarios of a stimulated astrocyte (Cases 2 and 3) and an unstimulated astrocyte (Case 1) are obtained by simulating the glutamate dynamics for different spiking activities of the presynaptic neuron (see the dynamics in Equation (1)). The results of Proposition 2 and Corollary 1 are verified below. The simulation results show the ultimate boundedness property for all bounded stimulation cases. For persistent and constant stimulation (Case 2), the states converge to an equilibrium. The short \(J_{glu}\) pulse of 0.2 seconds - selected in accordance with experiments with primates (Funahashi et al., 1989) - leads to a bounded astrocytic response (\(Ca^{2+}\) and \(I_{astro}\)) within the first 4 seconds. Additionally, a simulation without any input is performed to confirm a positive equilibrium value of the unforced system.
A second simulation is performed to examine and confirm the effect of weak and strong gliotransmission. Fig. 4 shows that Cases 2 and 3 exhibit identical astrocytic behavior - as expected for the same input - but entirely different postsynaptic firing frequencies. Weak gliotransmission results in no visible postsynaptic neuronal activity, although it still produces \(I_{astro}>0\) toward the postsynaptic neuron. The reason is the type 2 dynamics of the Izhikevich neuron model, which describes a class of neurons that exhibits a sudden high firing rate once an input threshold is exceeded and no firing below it. Nevertheless, in a more realistic setting with additional inputs, the weak astrocytic current still facilitates the onset of firing.
Finally, the actual tripartite model as described in Section 2 is simulated. In this simulation, the full nonlinear dynamics of the neurons and the astrocyte are used. Fig. 5 shows the firing frequency of the presynaptic and the postsynaptic neuron as well as the astrocytic \(Ca^{2+}\) dynamics for the scenario with a short and a persistent input signal of \(I_{app}=100\mu A\) (see (1)) under strong gliotransmission. Although the presynaptic neuron spikes during the initial stimulation phase, the synaptic connection is not strong enough to initiate firing of the postsynaptic neuron. The strong astrocytic current \(I_{astro}\), induced by the enhanced \(Ca^{2+}\) concentration within the astrocyte, leads to transient postsynaptic neuronal activity. All simulations are conducted using a Runge-Kutta-4 algorithm with a fixed time step \(\Delta t=0.1\,\)ms.
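A generic fixed-step fourth-order Runge-Kutta stepper of the kind used for these simulations is sketched below; the right-hand side \(f\) bundles the neuron, glutamate, and astrocyte equations into one state vector.

```python
import numpy as np

def rk4_step(f, t, x, dt):
    """One classical Runge-Kutta-4 update x(t) -> x(t + dt) for dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + 0.5 * dt, x + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, x + 0.5 * dt * k2)
    k4 = f(t + dt, x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```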
Figure 4: The input and response of the model in Fig. 2 with: a short impulse signal with weak (red) and strong (blue) gliotransmission.
Figure 3: The extended astrocyte model (Fig. 2) with different glutamate inputs: no input signal (yellow), a persistent input \(J_{glu}=5\frac{\mu M}{s}\) (red), and a short input (blue).
Figure 2: Block diagram of the extended astrocyte model
## 4 Implications of the results for working memory: numerical validation
The neuron-astrocyte network model is tested by expanding the tripartite synapse - as described in Section 2 - into a large-scale network representing the WM. For this purpose, a dual-layer structure is built from neuronal synapses, astrocytic gap junctions connecting astrocytes with each other, and tripartite synapses connecting the neuronal and astrocytic layers (Fig. 6). We connected 1296 neurons, consisting of 80% excitatory and 20% inhibitory cells, with connection probabilities following an exponential distribution in the distance between them. Each group of four neurons is linked to its spatially closest astrocyte. The inter-astrocyte connections are assumed bidirectional, i.e., via electrical synapses (gap junctions) (Gordleeva et al., 2021).
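A sketch of how such a dual-layer network might be wired is shown below. The square grid layout, the length scale of the exponential connection probability, and the \(2\times 2\) neuron-to-astrocyte assignment are assumptions made for illustration; the exact construction follows Gordleeva et al. (2021).

```python
import numpy as np

rng = np.random.default_rng(0)
n_side = 36                                   # assumed 36 x 36 grid: 1296 neurons
n_neurons = n_side * n_side
coords = np.array([(i, j) for i in range(n_side) for j in range(n_side)], dtype=float)

is_excitatory = rng.random(n_neurons) < 0.8   # 80% excitatory, 20% inhibitory

# Distance-dependent synaptic connectivity: P(i -> j) proportional to exp(-dist / lam)
lam = 3.0                                     # placeholder length scale
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
synapse = rng.random((n_neurons, n_neurons)) < np.exp(-dist / lam)
np.fill_diagonal(synapse, False)              # no self-connections

# Each astrocyte is linked to the four neurons of one 2 x 2 patch (18 x 18 astrocyte layer)
astro_of_neuron = ((coords[:, 0] // 2) * (n_side // 2) + coords[:, 1] // 2).astype(int)
```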
As mentioned in the introduction, astrocytic gliotransmission and its effect on the postsynaptic neuron are considered a possible mechanism for WM. The simulation protocol reflects real experiments with a sequence of _stimulation_, _delay period_, and _recall_. To enhance the recall, a form of the original stimulus, called a cue, can be re-introduced to the network. We assume that the selected target neurons receive a stimulating input for the stimulation period \(t_{stim}=0.2\,\)s, which is followed by a delay period \(t_{delay}=2.8\,\)s. Next, the recall period starts, accompanied by a weak and noisy recall cue. Simulation results for strong, weak, and no gliotransmission show the following performance. Strong gliotransmission (Fig. 7) results in persistent enhanced activity of target neurons with an average frequency of approximately 150 Hz, compared to non-target neurons at 10 to 20 Hz. In contrast to the other two scenarios, no recall cue is applied here. Although the difference between firing frequencies is strongly enhanced in this simulation, the general effect corresponds to the persistent activity of tuned neurons observed in Funahashi et al. (1989).
The effect of weak gliotransmission is significant, especially in the second subplot of Fig. 8. While there is essentially no difference in neuronal activity between target and non-target neurons during the delay period, the recall cue reveals a significant difference in the neuronal responses. The generally low neuronal activity during the delay period also supports the hypothesis of sparse neuronal activity.
In the last scenario, we removed the effect of gliotransmission in order to examine whether the WM performance observed above can be attributed purely to astrocytic gliotransmission. The absence of gliotransmission removes any difference in neuronal activity between target and non-target cells, which corresponds to a completely dysfunctional WM model (Fig. 9).
## 5 Conclusion
Using nonlinear stability analysis for a biologically detailed astrocyte model, we showed that the astrocyte response to a bounded input is bounded and we quantified this bound. Our analysis showed that astrocytic gliotransmission provides a stable activation of the postsynaptic neuron depending on the stimulus applied to the presynaptic neurons. This result indicates the possibility of storing
Figure 5: The input and response of the tripartite synapse exposed: the short input, e.g. stimulation, signal (red), and a persistent signal (blue).
Figure 6: Dual layer network: The neuronal layer (left) and the astrocytic layer (right) form spatial interconnections (adapted from Gordleeva et al. (2021)).
Figure 7: Strong gliotransmission (shot period): The activity of target neurons (T) is significantly higher than non-target neurons (NT).
presynaptic neuronal activity. We numerically verified the implications of this result, showing that both persistent and sparse WM activity are consistent with our analysis. Further research, both experimental and computational, is necessary to further examine the possible role of astrocytes in WM tasks.
|
2310.15681 | Fixed-Budget Real-Valued Combinatorial Pure Exploration of Multi-Armed
Bandit | We study the real-valued combinatorial pure exploration of the multi-armed
bandit in the fixed-budget setting. We first introduce the Combinatorial
Successive Assign (CSA) algorithm, which is the first algorithm that can
identify the best action even when the size of the action class is
exponentially large with respect to the number of arms. We show that the upper
bound of the probability of error of the CSA algorithm matches a lower bound up
to a logarithmic factor in the exponent. Then, we introduce another algorithm
named the Minimax Combinatorial Successive Accepts and Rejects
(Minimax-CombSAR) algorithm for the case where the size of the action class is
polynomial, and show that it is optimal, which matches a lower bound. Finally,
we experimentally compare the algorithms with previous methods and show that
our algorithm performs better. | Shintaro Nakamura, Masashi Sugiyama | 2023-10-24T09:47:32Z | http://arxiv.org/abs/2310.15681v2 | # Fixed-Budget Real-Valued Combinatorial Pure Exploration of Multi-Armed Bandit
###### Abstract
We study the real-valued combinatorial pure exploration of the multi-armed bandit in the fixed-budget setting. We first introduce the Combinatorial Successive Assign (CSA) algorithm, which is the first algorithm that can identify the best action even when the size of the action class is exponentially large with respect to the number of arms. We show that the upper bound of the probability of error of the CSA algorithm matches a lower bound up to a logarithmic factor in the exponent. Then, we introduce another algorithm named the Minimax Combinatorial Successive Accepts and Rejects (Minimax-CombSAR) algorithm for the case where the size of the action class is polynomial, and show that it is optimal, which matches a lower bound. Finally, we experimentally compare the algorithms with previous methods and show that our algorithm performs better.
## 1 Introduction
The multi-armed bandit (MAB) model is an important framework in online learning since it is useful to investigate the trade-off between exploration and exploitation in decision-making problems (Auer et al., 2002; Audibert et al., 2009). Although investigating this trade-off is intrinsic in many applications, some application domains only focus on obtaining the optimal object, e.g., an arm or a set of arms, among a set of candidates, and do not care about the loss or rewards that occur during the exploration procedure. This learning problem called the pure exploration (PE) task has received much attention (Bubeck et al., 2009; Audibert et al., 2010).
One of the important sub-fields among PE of MAB is the _combinatorial pure exploration_ of the MAB (CPE-MAB)(Chen et al., 2014; Gabillon et al., 2016; Chen et al., 2017). In the CPE-MAB, we have a set of \(d\) stochastic arms, where the reward of each arm \(s\in\{1,\ldots,d\}\) follows an unknown distribution with mean \(\mu_{s}\), and an _action class_\(\mathcal{A}\), which is a collection of subsets of arms with certain combinatorial structures. Then, the goal is to identify the best action from the action class \(\mathcal{A}\) by pulling a single arm each round. There are mainly two settings in the CPE-MAB. One is the _fixed confidence_ setting, where the player tries to identify the optimal action with high probability with as few rounds as possible, and the other is the _fixed-budget_ setting, where the player tries to identify the optimal action with a fixed number of rounds (Chen et al., 2014; Katz-Samuels et al., 2020; Wang and Zhu, 2022). Abstractly, the goal is to identify \(\boldsymbol{\pi}^{*}\), which is the optimal solution for the following constraint optimization problem:
\[\begin{array}{ll}\text{maximize}_{\boldsymbol{\pi}}&\boldsymbol{\mu}^{ \top}\boldsymbol{\pi}\\ \text{subject to}&\boldsymbol{\pi}\in\mathcal{A},\end{array} \tag{1}\]
where \(\boldsymbol{\mu}\) is a vector whose \(s\)-th element is the mean reward of arm \(s\) and \(\top\) denotes the transpose.
Although CPE-MAB can be applied to many models which can be formulated as (1), most of the existing works in CPE-MAB (Chen et al., 2014; Wang and Zhu, 2022; Gabillon et al., 2016; Chen et al., 2017; Du et al., 2021; Chen et al., 2016) assume \(\mathcal{A}\subseteq\{0,1\}^{d}\). This means that although we can apply the CPE-MAB framework to the shortest path problem (Sniedovich, 2006), top-\(K\) arms identification (Kalyanakrishnan and Stone, 2010), matching (Gibbons, 1985), and spanning trees (Pettie and Ramachandran, 2002), we cannot apply it to problems where \(\mathcal{A}\subset\mathbb{R}^{d}\), such as the optimal transport problem (Villani, 2008), the knapsack problem (Dantzig and Mazur, 2007), and the production planning problem
(Pochet and Wolsey, 2010). For instance, in the knapsack problem shown in Figure 1, actions are not binary vectors since, for each item, we can put more than one in the bag, e.g., one blue one and two orange ones.
To overcome this limitation, Nakamura and Sugiyama (2023) introduced the real-valued CPE-MAB (R-CPE-MAB), where the action class is a set of real vectors, i.e., \(\mathcal{A}\subset\mathbb{R}^{d}\). Though they investigated the R-CPE-MAB in the fixed confidence setting, there is still room for investigation in the fixed-budget setting. To the best of our knowledge, the only existing work that can be applied to the fixed-budget R-CPE-MAB is the Peace algorithm introduced by Katz-Samuels et al. (2020). However, there are some issues with the Peace algorithm. Firstly, since it requires enumerating all the actions in advance, it is impractical when \(\mathcal{A}\) is exponentially large in \(d\). Secondly, it has a hyper-parameter that one needs to choose carefully for the algorithm to be feasible (Yang and Tan, 2022). Finally, it requires the assumption that the rewards follow a standard normal distribution, which may not be satisfied in real-world applications.
In this work, we first introduce a parameter-free algorithm named the Combinatorial Successive Assign (CSA) algorithm, which is a generalized version of the Combinatorial Successive Accepts and Rejects (CSAR) algorithm proposed by Chen et al. (2014) for the ordinary CPE-MAB. The CSA algorithm is the first algorithm that can be applied to the fixed-budget R-CPE-MAB even when the size of the action class \(\mathcal{A}\) is exponentially large in \(d\). We show that the upper bound of the probability of error of the CSA algorithm matches a lower bound up to a logarithmic factor in the exponent.
Since the CSA algorithm does not match a lower bound, we introduce another algorithm named the Minimax Combinatorial Successive Accepts and Rejects (Minimax-CombSAR) algorithm, inspired by Yang and Tan (2022), for the case where the size of the action class is polynomial. In Section 5, we show that the Minimax-CombSAR algorithm is optimal, which means that the upper bound of its probability of error matches a lower bound. We also show that the Minimax-CombSAR algorithm has only one hyper-parameter and is easily interpreted.
Finally, we report the results of numerical experiments. First, we show that the CSA algorithm can identify the best action in a knapsack problem, where the size of the action class can be exponentially large in \(d\). Then, when the size of the action class is polynomial in \(d\), we show that the Minimax-CombSAR algorithm performs better than the CSA algorithm and the Peace algorithm in identifying the best action in a knapsack problem.
## 2 Problem Formulation
In this section, we formally define our R-CPE-MAB model. Suppose we have \(d\) arms, numbered \(1,\ldots,d\). Assume that each arm \(s\in[d]\) is associated with a reward distribution \(\phi_{s}\), where \([d]=\{1,\ldots,d\}\). We assume all reward distributions have \(R\)-sub-Gaussian tails for some known constant \(R>0\). Formally, if \(X\) is a random variable drawn from \(\phi_{s}\) for some \(s\in[d]\), then, for all \(\lambda\in\mathbb{R}\), we have \(\mathbb{E}[\exp(\lambda X-\lambda\mathbb{E}[X])]\leq\exp(R^{2}\lambda^{2}/2)\). It is known that the family of \(R\)-sub-Gaussian tail distributions includes all distributions that are supported on \([a,b]\), where \((b-a)/2=R\), and also many unbounded distributions such as Gaussian distributions with variance \(R^{2}\)(Rivasplata, 2012; Rinaldo and Bong, 2018). Let \(\boldsymbol{\mu}=(\mu_{1},\ldots,\mu_{d})^{\top}\) denote the vector of expected rewards, where each element \(\mu_{s}=\mathbb{E}_{X\sim\phi_{s}}[X]\) denotes the expected reward of arm \(s\) and \(\top\) denotes the transpose. With a given \(\boldsymbol{\nu}\), let us consider the following linear optimization problem:
\[\begin{array}{ll}\text{maximize}_{\boldsymbol{\pi}}&\boldsymbol{\nu}^{ \top}\boldsymbol{\pi}\\ \text{subject to}&\boldsymbol{\pi}\in\mathcal{C}\subset\mathbb{R}_{\geq 0}^{d}. \end{array} \tag{2}\]
Here, \(\mathcal{C}\) is a problem-dependent feasible region of \(\boldsymbol{\pi}\), which satisfies some combinatorial structures. Then, for any \(\boldsymbol{\nu}\in\mathbb{R}^{d}\), we denote by \(\boldsymbol{\pi}^{\boldsymbol{\nu},\mathcal{C}}\) the solution of (2). We define the action class \(\mathcal{A}\) as the set of vectors that contains optimal solutions of (2) for any \(\boldsymbol{\nu}\), i.e.,
\[\mathcal{A}=\left\{\boldsymbol{\pi}^{\boldsymbol{\nu},\boldsymbol{\mathcal{C}} }\in\mathbb{R}_{\geq 0}^{d}\ |\ \forall\boldsymbol{\nu}\in\mathbb{R}^{d}\right\}. \tag{3}\]
We assume that the size of \(\mathcal{A}\) is finite and denote it by \(K\). This assumption is relatively mild since, for instance, in linear programming, the optimal solution
Figure 1: A simple sketch of the knapsack problem. We want to know how many of each item should be included in the bag to maximize the total value. Here, the total weight of every item cannot exceed 10kg, which is the capacity of the bag.
can always be found at one of the vertices of the feasible region (Ahmadi, 2016).
Also, let \(\texttt{POSSIBLE-PI}(s)=\{x\in\mathbb{R}\mid\exists\boldsymbol{\pi}\in\mathcal{A}, \pi_{s}=x\}\), where \(\pi_{s}\) denote the \(s\)-th element of \(\boldsymbol{\pi}\). We can see \(\texttt{POSSIBLE-PI}(s)\) as the set of all possible values that an action in the action set can take as the \(s\)-th element. Let us denote the size of \(\texttt{POSSIBLE-PI}(s)\) by \(B_{s}\). Note that \(K\), the size of \(\mathcal{A}\), can be exponentially large in \(d\), i.e., \(|\mathcal{A}|=\mathcal{O}(\prod_{s=1}^{d}B_{s})\).
We assume we have _offline oracles_ that efficiently solve the linear optimization problem (2) once \(\boldsymbol{\nu}\) is given, i.e., algorithms that output \(\boldsymbol{\pi}^{*}(\boldsymbol{\nu})=\underset{\boldsymbol{\pi}\in\mathcal{A}}{\arg\max}\,\boldsymbol{\nu}^{\top}\boldsymbol{\pi}\) in polynomial or pseudo-polynomial1 time in \(d\).
Footnote 1: A pseudo-polynomial time algorithm is a numeric algorithm whose running time is polynomial in the numeric value of the input, but not necessarily in the length of the input (Garey and Johnson, 1990)
The player's objective is to identify the best action \(\boldsymbol{\pi}^{*}=\underset{\boldsymbol{\pi}\in\mathcal{A}}{\arg\max} \boldsymbol{\mu}^{\top}\boldsymbol{\pi}\) by pulling a single arm each round. The player is given a _budget_\(T\), and cannot pull arms more than \(T\) times. The player outputs an action \(\boldsymbol{\pi}^{\text{out}}\) at the end, and she is evaluated by the _probability of error_, which is formally \(\Pr\left[\boldsymbol{\pi}^{\text{out}}\neq\boldsymbol{\pi}^{*}\right]\).
## 3 Lower Bound of the Fixed-Budget R-CPE-MAB
In this section, we show a lower bound of the probability of error in R-CPE-MAB. As preliminaries, let us introduce some notions. First, we introduce \(\boldsymbol{\pi}^{(s)}\) as follows (Nakamura and Sugiyama, 2023b):
\[\boldsymbol{\pi}^{(s)}=\underset{\boldsymbol{\pi}\in\mathcal{A}\setminus\{ \boldsymbol{\pi}^{*}\}}{\arg\min}\ \frac{\boldsymbol{\mu}^{\top}\left(\boldsymbol{\pi}^{*}-\boldsymbol{\pi} \right)}{|\pi_{s}^{*}-\pi_{s}|}. \tag{4}\]
Intuitively, among the actions whose \(s\)-th element is different from \(\boldsymbol{\pi}^{*}\), \(\boldsymbol{\pi}^{(s)}\) is the action which is the most difficult to determine whether it is the best action or not. We also introduce the notion _G-gap_(Nakamura and Sugiyama, 2023b) as follows:
\[\Delta_{s} = \min_{\boldsymbol{\pi}\in\mathcal{A}\setminus\{\boldsymbol{\pi} ^{*}\}}\frac{\boldsymbol{\mu}^{\top}\left(\boldsymbol{\pi}^{*}-\boldsymbol{ \pi}\right)}{|\pi_{s}^{*}-\pi_{s}|} \tag{5}\] \[= \frac{\boldsymbol{\mu}^{\top}\left(\boldsymbol{\pi}^{*}- \boldsymbol{\pi}^{(s)}\right)}{\left|\pi_{s}^{*}-\pi_{s}^{(s)}\right|}.\]
The _G-gap_ was first introduced in Nakamura and Sugiyama (2023b) as a natural generalization of the notion of _gap_ in the CPE-MAB literature (Chen et al., 2014, 2016, 2017), and it serves as a key quantity characterizing the difficulty of a problem instance.
In Theorem 1, we show that the sum of the inverse of squared _G-Gaps_,
\[\mathbf{H}=\sum_{s=1}^{d}\left(\frac{1}{\Delta_{s}}\right)^{2}, \tag{6}\]
appears in the lower bound of the probability of error of R-CPE-MAB, which implies that it characterizes the difficulty of the problem instance.
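For a finite action class stored explicitly, the G-gaps (5) and the quantity \(\mathbf{H}\) in (6) can be computed directly, as in the following sketch; arm indices for which no action differs from \(\boldsymbol{\pi}^{*}\) are skipped here, since the corresponding minimum would run over an empty set.

```python
import numpy as np

def g_gaps_and_H(mu, actions):
    """G-gaps (5) and hardness H (6) for an explicit action class.

    mu      : (d,) vector of mean rewards.
    actions : (K, d) array whose rows are the actions in A (best action assumed unique).
    """
    values = actions @ mu
    best = int(np.argmax(values))
    pi_star, v_star = actions[best], values[best]
    gaps = np.full(mu.shape[0], np.inf)
    for k in range(actions.shape[0]):
        if k == best:
            continue
        diff = np.abs(pi_star - actions[k])
        for s in np.flatnonzero(diff > 0):          # only arms where the two actions differ
            gaps[s] = min(gaps[s], (v_star - values[k]) / diff[s])
    finite = np.isfinite(gaps)
    H = float(np.sum(1.0 / gaps[finite] ** 2))
    return gaps, H
```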
**Theorem 1**.: _For any action class and any algorithm that returns an action \(\boldsymbol{\pi}^{\text{out}}\) after \(T\) times of arm pulls, the probability of error is at least_
\[\mathcal{O}\left(\exp\left(-\frac{T}{\mathbf{H}}\right)\right). \tag{7}\]
We show the proof in Appendix A. If \(\mathcal{A}\) is the set of \(d\)-dimensional standard basis vectors, R-CPE-MAB reduces to the standard best arm identification problem, whose objective is to identify the arm with the largest expected reward among \(d\) arms (Bubeck et al., 2009; Audibert et al., 2010; Carpentier and Locatelli, 2016). From Carpentier and Locatelli (2016), a lower bound for the standard best arm identification problem is \(\mathcal{O}\left(\exp\left(-\frac{T}{\log(d)\mathbf{H}}\right)\right)\). Whether the lower bound is \(\mathcal{O}\left(\exp\left(-\frac{T}{\log(d)\mathbf{H}}\right)\right)\) for general action classes is left for future work.
## 4 The Combinatorial Successive Assign (CSA) Algorithm
In this section, we first introduce the CSA algorithm, which can be seen as a generalization of the CSAR algorithm (Chen et al., 2014). This algorithm can be applied to fixed-budget R-CPE-MAB even when the size of the action class \(\mathcal{A}\) is exponentially large in \(d\). Then, we show an upper bound of the probability of error of the best action of the CSA algorithm. We also discuss the number of times offline oracles have to be called.
### CSA Algorithm
In this subsection, we introduce the CSA algorithm, a fully parameter-free algorithm for fixed-budget R-CPE-MAB that works even when the action set \(\mathcal{A}\) can be exponentially large in \(d\).
We first define the _constrained offline oracle_ which is used in the CSA algorithm.
**Definition 1** (Constrained offline oracle).: _Let \(\boldsymbol{S}=\{(e,x)|(e,x)\in\mathbb{Z}\times\mathbb{R}\}\) be a set of tuples. A constrained offline oracle is denoted by \(\mathrm{COracle}\): \(\mathbb{R}^{d}\times\boldsymbol{S}\to\mathcal{A}\cup\bot\)
_and satisfies_
\[\mathrm{COracle}(\mathbf{\mu},\mathbf{S})=\left\{\begin{array}{ll}\underset{\mathbf{\pi} \in\mathcal{A}_{\mathbf{S}}}{\text{arg}\max}\mathbf{\mu}^{\top}\mathbf{\pi}&(\text{if }\mathcal{A}_{\mathbf{S}}\neq\emptyset),\\ \perp&(\text{if }\mathcal{A}_{\mathbf{S}}=\emptyset),\end{array}\right.\]
_where we define \(\mathcal{A}_{\mathbf{S}}=\{\mathbf{\pi}\in\mathcal{A}\ |\ \forall(e,x)\in\mathbf{S},\pi_{e}=x\}\) as the collection of feasible actions and \(\perp\) is a null symbol._
Here, we can see that a COracle is a modification of an offline oracle specified by \(\mathbf{S}\). In other words, for all \((e,x)\) in \(\mathbf{S}\), the COracle outputs an action whose \(e\)-th element is \(x\); otherwise, it outputs the null symbol. In Appendix B, we discuss how to construct such COracles for some combinatorial problems, such as the optimal transport problem and the knapsack problem.
We introduce the CSA algorithm in Algorithm 1. The CSA algorithm divides the budget into \(d\) rounds. In each round, we pull each of the remaining arms the same number of times (line 5). At each round \(t\), the CSA algorithm computes the empirically best action \(\hat{\mathbf{\pi}}(t)\) (line 6), chooses a single arm \(p(t)\) (line 14), and assigns \(\hat{\pi}_{p(t)}(t)\) to the \(p(t)\)-th element of \(\mathbf{\pi}^{\mathrm{out}}\). Indices that have already been assigned are maintained in \(F(t)\), and arms \(s\in F(t)\) are no longer pulled in subsequent rounds. The set of index-value pairs assigned to \(\mathbf{\pi}^{\mathrm{out}}\), \(\mathbf{S}(t)\), is updated at every round (line 16).
### Theoretical Analysis of CSA Algorithm
Here, we first discuss an upper bound of the probability of error. Then, we discuss the number of times we call the offline oracle.
#### 4.2.1 An Upper Bound of the Probability of Error
Here, we show an upper bound of the probability of error of the CSA algorithm. Let \(\Delta_{(1)},\ldots,\Delta_{(d)}\) be a permutation of \(\Delta_{1},\ldots,\Delta_{d}\) such that \(\Delta_{(1)}\leq\cdots\leq\Delta_{(d)}\). Also, let us define
\[\mathbf{H}_{2}=\max_{s\in[d]}\frac{s}{\Delta_{(s)}^{2}}.\]
One can verify that \(\mathbf{H}_{2}\) is equivalent to \(\mathbf{H}\) up to a logarithmic factor: \(\mathbf{H}_{2}\leq\mathbf{H}\leq\log(2d)\mathbf{H}_{2}\)(Audibert et al., 2010).
**Theorem 2**.: _Given any \(T>d\), action class \(\mathcal{A}\subset\mathbb{R}^{d}\), and \(\mathbf{\mu}\in\mathbb{R}^{d}\), the CSA algorithm uses at most \(T\) samples and outputs a solution \(\mathbf{\pi}^{\mathrm{out}}\in\mathcal{A}\cup\{\perp\}\) such that_
\[\Pr\left[\mathbf{\pi}^{\mathrm{out}}\neq\mathbf{\pi}^{*}\right] \tag{8}\] \[\leq d^{2}\exp\left(-\frac{T-d}{2(2+L^{2})^{2}R^{2}\tilde{\log}(d)U_{\mathcal{A}}^{2}\mathbf{H}_{2}}\right),\]
_where \(L=\max_{s\in[d],\,\pi^{1},\pi^{2},\pi^{3}\in\mathcal{A},\,\pi^{1}_{s}\neq\pi^{2}_{s}}\frac{\left|\pi^{1}_{s}-\pi^{3}_{s}\right|}{\left|\pi^{1}_{s}-\pi^{2}_{s}\right|}\), \(\tilde{\log}(d)\triangleq\sum_{s=1}^{d}\frac{1}{s}\), and \(U_{\mathcal{A}}=\max_{\mathbf{\pi},\mathbf{\pi}^{\prime}\in\mathcal{A},\,e\in\{s\in[d]\ |\ \pi_{s}\neq\pi^{\prime}_{s}\}}\frac{\sum_{s=1}^{d}\left|\pi_{s}-\pi^{\prime}_{s}\right|}{\left|\pi_{e}-\pi^{\prime}_{e}\right|}\)._
We can see that the CSA algorithm is optimal up to a logarithmic factor in the exponent. Since \(L=1\) and \(U_{\mathcal{A}}=\mathrm{width}(\mathcal{A})\) in the ordinary CPE-MAB, we can confirm that Theorem 2 can be seen as a natural generalization of Theorem 3 in Chen et al. (2014), which shows an upper bound of the probability of error of the CSAR algorithm.
#### 4.2.2 The Oracle Complexity
Next, we discuss the _oracle complexity_, the number of times we call the offline oracle (Ito et al., 2019; Xu and Li, 2021). Note that, in the CSA algorithm, we call the COracle \(\mathcal{O}(d\sum_{s=1}^{d}B_{s})\) times (line 12). Therefore, if each COracle call invokes the offline oracle \(N\) times, the _oracle complexity_ is \(\mathcal{O}\left(Nd\sum_{s=1}^{d}B_{s}\right)\). Finally, if the time complexity of each oracle call is \(Z\), then the total time complexity of the CSA algorithm is \(\mathcal{O}\left(ZNd\sum_{s=1}^{d}B_{s}\right)\).
Below, we discuss the oracle complexity of the CSA algorithm for some specific combinatorial problems, such as the knapsack problem and the optimal transport problem. Note that the Minimax-CombSAR algorithm and the Peace algorithm (Katz-Samuels et al., 2020) have to enumerate all the actions in advance, whose number may be exponentially large in \(d\). We show that the CSA algorithm mitigates this curse of dimensionality.
**The Knapsack Problem (Dantzig and Mazur, 2007).** In the knapsack problem, we have \(d\) items. Each item \(s\in[d]\) has a weight \(w_{s}\) and value \(v_{s}\). Also, there is a knapsack whose capacity is \(W\) in which we put items. Our goal is to maximize the total value of the knapsack, not letting the total weight of the items exceed the capacity of the knapsack. Formally, the optimization problem is given as follows:
\[\begin{array}{ll}\text{maximize}_{\mathbf{\pi}\in\mathcal{A}}&\sum_{s=1}^{d}v_ {s}\pi_{s}\\ \text{subject to}&\sum_{s=1}^{d}\pi_{s}w_{s}\leq W,\end{array}\]
where \(\pi_{s}\in\mathbb{Z}_{\geq 0}\) denotes the number of copies of item \(s\) in the knapsack. Here, if we assume that the weight of each item is known but the value is unknown, we can apply the R-CPE-MAB framework to the knapsack problem, where we estimate the values of the items. The knapsack problem is NP-complete (Garey and Johnson, 1979), so it is unlikely that it can be solved in polynomial time. However, it is well known that the knapsack problem can be solved in pseudo-polynomial time \(\mathcal{O}(dW)\) with dynamic programming (Kellerer et al., 2004; Fujimoto, 2016). It finds the optimal solution by constructing a table of size \(dW\) whose \((s,w)\)-th element represents the maximum total value that can be achieved if the sum of the weights does not exceed \(w\) using only the first \(s\) items. In many cases, an \(\mathcal{O}(dW)\) time complexity is acceptable, and therefore we use this dynamic programming method as the offline oracle. We can construct the _COracle_ for the knapsack problem by calling this offline oracle once (see Appendix B.1 for details). Therefore, the CSA algorithm calls the offline oracle \(\mathcal{O}(1\times d\sum_{s=1}^{d}B_{s})=\mathcal{O}(d\sum_{s=1}^{d}B_{s})\) times, and the total time complexity of the CSA algorithm is \(\mathcal{O}(dW\times d\sum_{s=1}^{d}B_{s})=\mathcal{O}(d^{2}\sum_{s=1}^{d}B_{s}W)\). This is much more computationally friendly than the Peace algorithm with time complexity \(\mathcal{O}\left(\prod_{s=1}^{d}B_{s}\right)\).
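The sketch below shows the \(\mathcal{O}(dW)\) dynamic program for the unbounded knapsack used as the offline oracle, together with one plausible way a constrained call could be answered by a single oracle invocation: items with fixed counts are removed and the remaining capacity and value are adjusted. The actual COracle construction is the one in Appendix B.1 of the paper; this wrapper is only an illustration.

```python
def knapsack_oracle(values, weights, W):
    """O(dW) dynamic program for the unbounded knapsack (integer weights assumed)."""
    best = [0.0] * (W + 1)            # best[w]: maximum total value with capacity w
    take = [-1] * (W + 1)             # last item added to reach best[w]
    for w in range(1, W + 1):
        best[w], take[w] = best[w - 1], -1
        for s, (v, wt) in enumerate(zip(values, weights)):
            if wt <= w and best[w - wt] + v > best[w]:
                best[w], take[w] = best[w - wt] + v, s
    pi, w = [0] * len(values), W      # recover the item counts of the optimal action
    while w > 0:
        s = take[w]
        if s == -1:
            w -= 1                    # one unit of capacity left unused
        else:
            pi[s] += 1
            w -= weights[s]
    return pi, best[W]

def constrained_knapsack_oracle(values, weights, W, fixed):
    """Illustrative COracle: 'fixed' maps item index e to a required count x; returns None for the null symbol."""
    cap = W - sum(weights[e] * x for e, x in fixed.items())
    if cap < 0:
        return None
    free = [s for s in range(len(values)) if s not in fixed]
    pi_free, val = knapsack_oracle([values[s] for s in free], [weights[s] for s in free], cap)
    pi = [fixed.get(s, 0) for s in range(len(values))]
    for s, c in zip(free, pi_free):
        pi[s] = c
    return pi, val + sum(values[e] * x for e, x in fixed.items())
```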
**The Optimal Transport (OT) Problem (Peyre and Cuturi, 2019).** OT can be regarded as the cheapest plan to deliver resources from \(m\) suppliers to \(n\) consumers, where each supplier \(i\) and consumer \(j\) have supply \(s_{i}\) and demand \(d_{j}\), respectively. Let \(\mathbf{\gamma}\in\mathbb{R}_{\geq 0}^{m\times n}\) be the cost matrix, where \(\gamma_{ij}\) denotes the cost between supplier \(i\) and demander \(j\). Our objective is to find the optimal transportation matrix
\[\mathbf{\pi}^{*}=\underset{\mathbf{\pi}\in\mathcal{O}(\mathbf{s},\mathbf{d})}{\arg\min}\sum_{ i,j}\pi_{ij}\gamma_{ij}, \tag{9}\]
where
\[\mathcal{G}(\mathbf{s},\mathbf{d})\triangleq\left\{\mathbf{\Pi}\in\mathbb{R}_{\geq 0}^{m \times n}\,\Big{|}\,\mathbf{\Pi}\mathbf{1}_{n}=\mathbf{s},\mathbf{\Pi}^{\top}\mathbf{1}_{m}=\mathbf{d} \right\}. \tag{10}\]
Here, \(\mathbf{s}=(s_{1},\ldots,s_{m})\) and \(\mathbf{d}=(d_{1},\ldots,d_{n})\). \(\pi_{ij}\) represents how much resources one sends from supplier \(i\) to demander \(j\). Here, if we assume that the cost is unknown and changes stochastically, e.g., due to some traffic congestions, we can apply the R-CPE-MAB framework to the optimal transport problem, where we estimate the cost of each edge \((i,j)\) between supplier \(i\) and consumer \(j\).
Once \(\mathbf{\gamma}\) is given, we can compute \(\mathbf{\pi}^{*}\) in \(\mathcal{O}(l^{3}\log l)\) time, where \(l=\max\left(m,n\right)\), by using network simplex or interior point methods (Cuturi, 2013), and we can use them as the offline oracle. It is known that the solution of a linear program can always be found at one of the vertices of the feasible region (Ahmadi, 2016), and therefore the size of the action space \(\mathcal{A}=\left\{\mathbf{\pi}^{\mathbf{\nu},\mathbf{\mathcal{G}(\mathbf{s},\mathbf{d})}}\in\mathbb{R}_{\geq 0}^{m\times n}\mid\forall\mathbf{\nu}\in\mathbb{R}^{m\times n}\right\}\) is finite. However, it is difficult in general to construct \(\left\{\text{POSSIBLE-PI}\left((i,j)\right)\right\}_{i\in[m],j\in[n]}\) to run the CSA algorithm. On the other hand, if \(\mathbf{s}\) and \(\mathbf{d}\) are both integer vectors, we can construct \(\left\{\text{POSSIBLE-PI}\left((i,j)\right)\right\}_{i\in[m],j\in[n]}\) thanks to the fact that \(\mathcal{A}=\left\{\mathbf{\pi}^{\mathbf{\nu},\mathbf{\mathcal{G}(\mathbf{s},\mathbf{d})}}\in\mathbb{Z}_{\geq 0}^{m\times n}\mid\forall\mathbf{\nu}\in\mathbb{R}^{m\times n}\right\}\) is a set of non-negative integer matrices (Goodman and O'Rourke, 1997). In this case, all actions are restricted to non-negative integers, and \(\text{POSSIBLE-PI}(i,j)=\left\{0,1,\ldots,\min\left(s_{i},d_{j}\right)\right\}\). In Appendix B.2, we show that the COracle can be constructed by calling the offline oracle once. Therefore, we call the offline oracle \(\mathcal{O}\left(1\times mn\sum_{s=1}^{mn}B_{s}\right)=\mathcal{O}(mn\sum_{s=1}^{mn}B_{s})\) times, and the total time complexity is \(\mathcal{O}\left(l^{3}\log l\cdot mn\sum_{s=1}^{mn}B_{s}\right)\), where \(l=\max\left(m,n\right)\). Again, this is much more computationally friendly than the Peace algorithm with time complexity \(\mathcal{O}\left(\prod_{s=1}^{d}B_{s}\right)\).
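As an illustration, the offline oracle for (9) can also be implemented with an off-the-shelf linear-programming solver; the sketch below uses `scipy.optimize.linprog`, encoding the row- and column-sum constraints of \(\mathcal{G}(\boldsymbol{s},\boldsymbol{d})\) explicitly. A dedicated network-simplex solver would be faster, as noted above.

```python
import numpy as np
from scipy.optimize import linprog

def ot_oracle(gamma, s, d):
    """Offline oracle for (9): minimize <gamma, pi> over the transport polytope G(s, d)."""
    m, n = gamma.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):                      # row sums: Pi 1_n = s
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):                      # column sums: Pi^T 1_m = d
        A_eq[m + j, j::n] = 1.0
    b_eq = np.concatenate([np.asarray(s, float), np.asarray(d, float)])
    res = linprog(gamma.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.x.reshape(m, n)
```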
**A General Case When \(K(=|\mathcal{A}|)=\text{poly}(d).\)** In some cases, we can enumerate all the possible actions in \(\mathcal{A}\). For instance, one may use some prior knowledge of each arm, which is sometimes obtainable in the real world, to narrow down the list of actions, and make the size of the action class \(\mathcal{A}\) polynomial in \(d\). In Appendix B.3, we show that the time complexity of the COracle is \(\mathcal{O}\left(dK+K\log K\right)\), and therefore the total time complexity of the CSA algorithm is \(\mathcal{O}\left(\left(dK+K\log K\right)\cdot d\sum_{s=1}^{d}B_{s}\right)\).
## 5 The Minimax-CombSAR Algorithm for R-CPE-MAB where \(|\mathcal{A}|=\mathrm{poly}(d)\)
In this section, we present an algorithm for fixed-budget R-CPE-MAB named the Minimax Combinatorial Successive Accepts and Rejects (Minimax-CombSAR) algorithm, for the case where the size of \(\mathcal{A}\) can be assumed to be polynomial in \(d\). Let \(\mathcal{A}=\{\boldsymbol{\pi}^{1}=\boldsymbol{\pi}^{*},\boldsymbol{\pi}^{2},\ldots,\boldsymbol{\pi}^{K}\}\), where \(\boldsymbol{\mu}^{\top}\boldsymbol{\pi}^{i}\geq\boldsymbol{\mu}^{\top}\boldsymbol{\pi}^{i+1}\) for all \(i\in[K-1]\). The Minimax-CombSAR algorithm is inspired by the Optimal Design-based Linear Best Arm Identification (OD-LinBAI) algorithm of Yang and Tan (2022), which successively eliminates actions considered suboptimal and finally outputs the remaining action as the optimal action. For the Minimax-CombSAR algorithm, we show an upper bound of the probability of error.
### Minimax-CombSAR Algorithm
We show the Minimax-CombSAR in Algorithm 2. Here, we explain it at a higher level.
The Minimax-CombSAR algorithm has \(\lceil\log_{2}d\rceil\) phases and maintains an _active_ action set \(\mathbb{A}(r)\) in each phase \(r\). In each phase \(r\in[\lceil\log_{2}d\rceil]\), it pulls arms \(m(r)=\frac{T^{\prime}-d\lceil\log_{2}d\rceil}{B/2^{r-1}}\) times in total, where \(B=2^{\lceil\log_{2}d\rceil}-1\), \(\beta\in[0,1]\) is a hyperparameter, and \(T^{\prime}=T-\left\lfloor\frac{T}{d}\beta\right\rfloor\times d\). In each phase, we compute an _allocation vector_ \(\boldsymbol{p}(r)\in\left\{\boldsymbol{v}\in\mathbb{R}^{d}~{}|~{}\sum_{s=1}^{d}v_{s}=1\right\}\triangleq\Pi_{d}\) and pull each arm \(s\) \(\left\lceil p_{s}(r)\cdot m(r)\right\rceil\) times. Then, at the end of each phase \(r\), it eliminates a subset of possibly suboptimal actions. Eventually, only one action \(\boldsymbol{\pi}^{\mathrm{out}}\) remains in the active action set, which is the output of the algorithm.
The key to identifying the best action with high confidence is the choice of the allocation vector \(\boldsymbol{p}(r)\), which determines how many times we pull each arm in phase \(r\).
```
0: Budget: \(T\geq 0\), initialization parameter: \(\beta\), action set: \(\mathbb{A}(1)=\mathcal{A}\)
1: // Initialization
2: for \(s\in[d]\) do
3:   Pull arm \(s\) \(\left\lfloor\frac{T}{d}\beta\right\rfloor\) times and update \(\hat{\mu}_{s}(1)\)
4:   \(T_{s}(1)\leftarrow\left\lfloor\frac{T}{d}\beta\right\rfloor\)
5: end for
6: \(T^{\prime}\gets T-\left\lfloor\frac{T}{d}\beta\right\rfloor\times d\)
7: for \(r=1\) to \(\lceil\log_{2}d\rceil\) do
8:   \(m(r)=\frac{T^{\prime}-d\lceil\log_{2}d\rceil}{B/2^{r-1}}\)
9:   Compute \(\boldsymbol{p}(r)\) according to (13) or (15)
10:  for \(s\in[d]\) do
11:    Pull arm \(s\) \(\left\lceil p_{s}(r)\cdot m(r)\right\rceil\) times
12:    Update \(\hat{\mu}_{s}(r+1)\) with the observed samples
13:  end for
14:  For each action \(\boldsymbol{\pi}\in\mathbb{A}(r)\), estimate the expected reward \(\langle\hat{\boldsymbol{\mu}}(r+1),\boldsymbol{\pi}\rangle\)
15:  Let \(\mathbb{A}(r+1)\) be the set of \(\left\lceil\frac{d}{2^{r}}\right\rceil\) actions in \(\mathbb{A}(r)\) with the largest estimates of the expected rewards
16: end for
17: Output the only action \(\boldsymbol{\pi}^{\mathrm{out}}\) in \(\mathbb{A}(\lceil\log_{2}d\rceil+1)\)
```
**Algorithm 2** Minimax Combinatorial Successive Accepts and Rejects (Minimax-CombSAR) Algorithm
**Choice of \(\boldsymbol{p}(r)\)**
We discuss how to choose an allocation vector that is beneficial to identify the best action. Let us denote the number of times arm \(s\) is pulled before phase \(r\) starts by \(T_{s}(r)\). Also, we denote by \(\hat{\mu}_{s}(r)\) the empirical mean of the reward from arm \(s\) before phase \(r\) starts. Then, at the end of phase \(r\), from Hoeffding's inequality (Hoeffding, 1963), we have
\[\Pr\left[\left|(\hat{\boldsymbol{\mu}}(r+1)-\boldsymbol{\mu})^{ \top}(\boldsymbol{\pi}^{a}-\boldsymbol{\pi}^{b})\right|\geq\epsilon\right] \tag{11}\] \[\leq \exp\left(-\frac{\epsilon^{2}}{\kappa^{a,b,\boldsymbol{p}}(r)R^{2}}\right)\]
for any \(\boldsymbol{\pi}^{a},\boldsymbol{\pi}^{b}\in\mathcal{A}\), and \(\epsilon\in\mathbb{R}\). Here, \(\hat{\boldsymbol{\mu}}(r+1)=(\hat{\mu}_{1}(r+1),\ldots,\hat{\mu}_{d}(r+1))^{\top}\) and
\[\kappa^{a,b,\boldsymbol{p}}(r)=\sum_{s=1}^{d}\frac{\left(\pi_{s}^{a}-\pi_{s}^{b}\right)^{2}}{T_{s}(r)+\left\lceil p_{s}(r)\cdot m(r)\right\rceil}. \tag{12}\]
(11) shows that the empirical difference \(\hat{\boldsymbol{\mu}}^{\top}(\boldsymbol{\pi}^{a}-\boldsymbol{\pi}^{b})\) between \(\boldsymbol{\pi}^{a}\) and \(\boldsymbol{\pi}^{b}\) is closer to the true difference \(\boldsymbol{\mu}^{\top}(\boldsymbol{\pi}^{a}-\boldsymbol{\pi}^{b})\) with high probability if we make \(\kappa^{a,b,\boldsymbol{p}}(r)\) small. In that case, we have a higher chance to distinguish whether \(\boldsymbol{\pi}^{a}\) is better than \(\boldsymbol{\pi}^{b}\) or not. However, when we have more than two actions in \(\mathbb{A}(r)\),
\[\boldsymbol{p}^{a,b}(r)=\underset{\boldsymbol{p}\in\Pi_{d}}{\arg\min}\,\kappa^ {a,b,\boldsymbol{p}}(r)\]
is not necessarily a good allocation vector for investigating the true difference between other pairs of actions in \(\mathbb{A}(r)\). Therefore, we propose the following allocation vector as an alternative:
\[\boldsymbol{p}^{\min}(r)\triangleq\underset{\boldsymbol{p}\in\Pi_{d}}{\arg \min}\,\underset{\boldsymbol{\pi}^{a},\boldsymbol{\pi}^{b}\in\mathcal{A}}{ \max}\,\kappa^{a,b,\boldsymbol{p}}(r). \tag{13}\]
(13) takes a minimax approach, which computes an allocation vector that minimizes the maximum value of the right-hand side of (11) among all of the pairs of actions in \(\mathbb{A}(r)\). Since (13) is a \(d\)-dimensional nonlinear optimization problem, it becomes computationally costly as \(d\) grows. Thus, another possible choice of the allocation vector is as follows:
\[\boldsymbol{q}^{\min}(r) \triangleq \underset{\boldsymbol{p}\in\Pi_{d}}{\arg\min}\,\underset{ \boldsymbol{\pi}^{a},\boldsymbol{\pi}^{b}\in\mathcal{A}}{\max}\,\lambda^{a,b, \boldsymbol{p}}(r), \tag{14}\]
where \(\lambda^{a,b,\mathbf{p}}(r)=\sum_{s=1}^{d}\frac{\left(\pi_{s}^{a}-\pi_{s}^{b}\right)^{2}}{p_{s}(r)\cdot m(r)}\). For specific actions \(\mathbf{\pi}^{k},\mathbf{\pi}^{l}\in\mathcal{A}\), from the method of Lagrange multipliers (Hoffmann and Bradley, 2009), we have
\[\mathbf{q}^{k,l}(r) \triangleq \arg\min_{\mathbf{p}\in\Pi_{d}}\lambda^{k,l,\mathbf{p}}(r)\] \[= \left(\frac{\left|\pi_{1}^{k}-\pi_{1}^{l}\right|}{\sum_{s=1}^{d} \left|\pi_{s}^{k}-\pi_{s}^{l}\right|},\ldots,\frac{\left|\pi_{d}^{k}-\pi_{d}^{ l}\right|}{\sum_{s=1}^{d}\left|\pi_{s}^{k}-\pi_{s}^{l}\right|}\right)^{\top},\]
and therefore, \(\mathbf{q}^{\min}(r)\) in (14) can be written explicitly as follows:
\[\mathbf{q}^{\min}(r) = \mathbf{q}^{i,j}(r)\] \[= \left(\frac{\left|\pi_{1}^{i}-\pi_{1}^{l}\right|}{\sum_{s=1}^{d} \left|\pi_{s}^{i}-\pi_{s}^{j}\right|},\ldots,\frac{\left|\pi_{d}^{i}-\pi_{d}^{ j}\right|}{\sum_{s=1}^{d}\left|\pi_{s}^{i}-\pi_{s}^{j}\right|}\right)^{\top},\]
where \((\mathbf{\pi}^{i},\mathbf{\pi}^{j})=\underset{\mathbf{\pi}^{k},\mathbf{\pi}^{l}\in\mathcal{A}}{\arg\max}\,\lambda^{k,l,\mathbf{q}^{k,l}(r)}(r)\). If we use (15) for the allocation vector instead of computing (13), we do not have to solve a \(d\)-dimensional nonlinear optimization problem, which makes it computationally more friendly.
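A direct implementation of the closed-form allocation (15) is sketched below, where the pair \((\boldsymbol{\pi}^{i},\boldsymbol{\pi}^{j})\) is taken to be the one whose optimally allocated value \(\lambda^{k,l,\boldsymbol{q}^{k,l}(r)}(r)=\left(\sum_{s}|\pi_{s}^{k}-\pi_{s}^{l}|\right)^{2}/m(r)\) is largest; this reading of the arg max is our interpretation, and the actions are assumed to be distinct.

```python
import numpy as np

def qmin_allocation(active_actions):
    """Closed-form allocation (15) over the active actions (rows of a (K, d) array, K >= 2)."""
    K = active_actions.shape[0]
    best_val, best_pair = -np.inf, (0, 1)
    for k in range(K):
        for l in range(k + 1, K):
            diff = np.abs(active_actions[k] - active_actions[l])
            val = diff.sum() ** 2        # optimally allocated lambda, up to the common factor 1/m(r)
            if val > best_val:
                best_val, best_pair = val, (k, l)
    i, j = best_pair
    diff = np.abs(active_actions[i] - active_actions[j])
    return diff / diff.sum()
```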
In some cases, actions in \(\mathcal{A}\) can be sparse, and \(\mathbf{p}(1)\) can also be sparse. If \(\mathbf{p}(1)\) is sparse, we may only have a few samples for some arms, and therefore accidentally eliminate the best action in the first phase in line 15 of Algorithm 2. To cope with this problem, we have the _initialization_ phase (lines 2-5) to pull each arm \(\left|\frac{T}{d}\beta\right|\) times, where \(\beta\in[0,1]\) is a hyperparameter. Intuitively, \(\beta\) represents how much of the total budget will be spent on the initialization phase. If \(\beta\) is too small, we may accidentally eliminate the best action in the early phase, and if it is too large, we may not have enough budget to distinguish between the best action and the next best action.
### Theoretical Analysis of Minimax-CombSAR
Here, in Theorem 3, we show an upper bound of the probability of error of the Minimax-CombSAR algorithm.
**Theorem 3**.: _For any problem instance in fixed-budget R-CPE-MAB, the Minimax-CombSAR algorithm outputs an action \(\mathbf{\pi}^{\mathrm{out}}\) satisfying_
\[\Pr\left[\mathbf{\pi}^{\mathrm{out}}\neq\mathbf{\pi}^{*}\right]\] \[\leq\left(\frac{4K}{d}+3\log_{2}d\right)\exp\left(-\frac{T^{ \prime}-\left\lceil\log_{2}d\right\rceil}{R^{2}V^{2}}\cdot\frac{1}{\mathbf{H} _{2}}\right), \tag{16}\]
_where \(V=\max_{\mathbf{\pi}^{i}\in\mathcal{A}\setminus\{\mathbf{\pi}^{*}\}}\left(\sum_{s=1}^{d}\frac{\left|\pi_{s}^{1}-\pi_{s}^{i}\right|}{\left|\pi_{s(i)}^{1}-\pi_{s(i)}^{i}\right|}\right)^{2}\) and \(s(i)=\underset{s\in[d]}{\arg\max}\left|\pi_{s}^{1}-\pi_{s}^{i}\right|\)._
We can see that the Minimax-CombSAR algorithm is an optimal algorithm whose upper bound of the probability of error matches the lower bound shown in (7) since we have \(\mathbf{H}_{2}\leq\mathbf{H}\leq\log(2d)\mathbf{H}_{2}\).
### Comparison with Katz-Samuels et al. (2020)
Here, we compare the Minimax-CombSAR algorithm with the Peace algorithm introduced in Katz-Samuels et al. (2020). The upper bound of the Peace algorithm can be written as follows:
\[\Pr\left[\mathbf{\pi}^{\mathrm{out}}\neq\mathbf{\pi}^{*}\right] \tag{17}\] \[\leq 2\left\lceil\log\left(d\right)\right\rceil\exp\left(-\frac{T}{c^{ \prime}\log(d)\left(\gamma_{s}+\rho_{*}\right)}\right),\]
where
\[\gamma_{*}=\underset{\mathbf{\lambda}\in\Pi_{d}}{\inf}\mathbb{E}_{\mathbf{\eta} \mathbf{\sim}\mathcal{N}(0,I)}\left[\underset{\mathbf{\pi}\in\mathcal{A}\setminus\{\mathbf{ \pi}^{*}\}}{\sup}\,\frac{\left(\mathbf{\pi}^{*}-\mathbf{\pi}\right)^{\top}\mathbf{A}\left( \mathbf{\lambda}\right)^{\frac{1}{2}}\mathbf{\eta}}{\mathbf{\mu}^{\top}\left(\mathbf{\pi}^{*}- \mathbf{\pi}\right)}\right],\]
\[\rho_{*}=\underset{\mathbf{\lambda}\in\Pi_{d}}{\min}\underset{\mathbf{\pi}\in \mathcal{A}\setminus\{\mathbf{\pi}^{*}\}}{\max}\,\frac{\sum_{s=1}^{d}\frac{\left| \pi_{s}^{*}-\pi_{s}\right|^{2}}{\lambda_{s}}}{\Delta_{\mathbf{\pi}^{*},\mathbf{\pi}}^{2 }},\]
and \(c^{\prime}\) is a problem-dependent constant. Here, \(\mathbf{A}\left(\mathbf{\lambda}\right)\) is a diagonal matrix whose \((i,i)\) element is \(\lambda_{i}\). In general, it is not clear whether the upper bound of the Minimax-CombSAR algorithm in (16) is tighter than that of the Peace algorithm shown in (17). In the experiment section, we compare the two algorithms numerically and show that our algorithm outperforms the Peace algorithm.
## 6 Experiment
In this section, we numerically compare the CSA, Minimax-CombSAR, and Peace algorithms. Here, the goal is to identify the best action for the knapsack problem. In the knapsack problem, we have \(d\) items. Each item \(s\in[d]\) has a weight \(w_{s}\) and value \(v_{s}\). Also, there is a knapsack whose capacity is \(W\) in which we put items. Our goal is to maximize the total value of the knapsack, not letting the total weight of the items exceed the capacity of the knapsack. Formally, the optimization problem is given as follows:
\[\mathrm{maximize}_{\mathbf{\pi}\in\mathcal{A}} \sum_{s=1}^{d}v_{s}\pi_{s}\] \[\mathrm{subject\ to} \sum_{s=1}^{d}\pi_{s}w_{s}\leq W,\]
where \(\pi_{s}\) denotes the number of item \(s\) in the knapsack. Here, the weight of each item is known, but the value is unknown, and therefore has to be estimated. In each time step, the player chooses an item \(s\) and gets
an observation of value \(v_{s}\), which can be regarded as a random variable from an unknown distribution with mean \(v_{s}\).
First, we run an experiment where we assume \(|\mathcal{A}|\) is exponentially large in \(d\). We see the performance of the CSA algorithm with different budgets. Next, we compare the CSA, Minimax-CombSAR, and Peace algorithms, where we can assume that \(|\mathcal{A}|\) is polynomial in \(d\).
### When \(|\mathcal{A}|\) is Exponentially Large in \(d\)
Here, we assume \(|\mathcal{A}|\) is exponentially large in \(d\). Therefore, we cannot use the Minimax-CombSAR and Peace algorithms, since they have to enumerate all the possible actions. We report how many times out of \(50\) experiments the CSA algorithm identified the best action.
For our experiment, we generated the weight of each item \(s\), \(w_{s}\), uniformly from \(\{1,2,\ldots,200\}\). For each item \(s\), we generated \(v_{s}\) as \(v_{s}=w_{s}\times x\), where \(x\) is a uniform sample from \([1.0,1.1]\). The capacity of the knapsack was \(W=200\). Each time we chose an item \(s\), we observed a value \(v_{s}+\epsilon\) where \(\epsilon\) is a noise from \(\mathcal{N}(0,1)\). We set \(R=1\). We show the result in Figure 2. The larger the dimension \(d\) is, and the smaller the budget \(T\) is, the fewer times the optimal actions are correctly identified, which is consistent with the theoretical result of Theorem 2.
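This setup can be reproduced in a few lines; the sketch below generates an instance exactly as described (weights uniform on \(\{1,\ldots,200\}\), values \(v_{s}=w_{s}\times x\) with \(x\sim U[1.0,1.1]\), capacity \(W=200\)) and models one arm pull with additive \(\mathcal{N}(0,1)\) noise. The random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_instance(d, W=200):
    """Knapsack instance of Section 6: w_s ~ U{1,...,200}, v_s = w_s * U[1.0, 1.1]."""
    w = rng.integers(1, 201, size=d)
    v = w * rng.uniform(1.0, 1.1, size=d)
    return v, w, W

def pull(v, s):
    """One arm pull: observe v_s corrupted by N(0, 1) noise (so R = 1)."""
    return v[s] + rng.normal()
```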
### When \(|\mathcal{A}|\) is Polynomial in \(d\).
Here, we assume that we have prior knowledge of the rewards of arms, i.e., knowledge of the values of the items. We assumed that we know each \(v_{s}\) is in \([w_{s},1.1\times w_{s}]\), and used this prior knowledge to generate the action class \(\mathcal{A}\) in the following procedure. We first generated a vector \(\mathbf{v}^{\prime}\) whose \(s\)-th element \(v^{\prime}_{s}\) was uniformly sampled from \([w_{s},1.1\times w_{s}]\), and then solved the knapsack problem with \(v^{\prime}_{s}\) and added the obtained solution \(\mathbf{\pi}\) to \(\mathcal{A}\). We repeated this 2000 times; therefore \(|\mathcal{A}|\leq 2000\). In this experiment, the budget was \(T=50000\). For the Minimax-CombSAR algorithm, we set \(\beta=0.2,0.4\). We ran forty experiments for each \(d\in\{10,15,\ldots,100\}\). We showed the result in Figure 3. We can see that the Minimax-CombSAR algorithm outperforms the other two algorithms for almost every \(d\). Also, the CSA algorithm outperforms the Peace algorithm for almost every \(d\).
## 7 Conclusion
In this paper, we studied the fixed-budget R-CPE-MAB. We first introduced the CSA algorithm, which is the first algorithm that can identify the best action even when the size of the action class is exponentially large with respect to the number of arms. However, it still has an extra logarithmic term in the exponent. Then, we proposed an optimal algorithm named the Minimax-CombSAR algorithm, which, although it is applicable only when the action class is polynomial, matches a lower bound. We showed that both of the algorithms outperform the existing methods.
Figure 3: Comparison of the percentage of the best actions correctly identified by the CSA, Minimax-CombSAR, and Peace algorithms. The horizontal axis represents the number of items \(d\). We ran the experiments fifty times for each \(d\).
Figure 2: The percentage of the best actions correctly identified by the CSA algorithm in the knapsack problem with different budgets. The horizontal axis represents the number of items \(d\). We ran the experiments fifty times for each \(d\).
## Acknowledgement
We thank Dr. Kevin Jamieson for his very helpful advice and comments on existing studies. SN was supported by JST SPRING, Grant Number JPMJSP2108.
|
2310.04954 | A framework to generate sparsity-inducing regularizers for enhanced
low-rank matrix completion | Applying half-quadratic optimization to loss functions can yield the
corresponding regularizers, while these regularizers are usually not
sparsity-inducing regularizers (SIRs). To solve this problem, we devise a
framework to generate an SIR with closed-form proximity operator. Besides, we
specify our framework using several commonly-used loss functions, and produce
the corresponding SIRs, which are then adopted as nonconvex rank surrogates for
low-rank matrix completion. Furthermore, algorithms based on the alternating
direction method of multipliers are developed. Extensive numerical results show
the effectiveness of our methods in terms of recovery performance and runtime. | Zhi-Yong Wang, Hing Cheung So | 2023-10-08T00:35:54Z | http://arxiv.org/abs/2310.04954v1 | # A framework to generate sparsity-inducing regularizers for enhanced low-rank matrix completion
###### Abstract
Applying half-quadratic optimization to loss functions can yield the corresponding regularizers, while these regularizers are usually not sparsity-inducing regularizers (SIRs). To solve this problem, we devise a framework to generate an SIR with closed-form proximity operator. Besides, we specify our framework using several commonly-used loss functions, and produce the corresponding SIRs, which are then adopted as nonconvex rank surrogates for low-rank matrix completion. Furthermore, algorithms based on the alternating direction method of multipliers are developed. Extensive numerical results show the effectiveness of our methods in terms of recovery performance and runtime.
Matrix completion, rank minimization, rank surrogate, sparsity-inducing regularizer.
## I Introduction
Low-rank matrix completion (LRMC) aims to fill the unobserved entries of an incomplete matrix with the use of the low-rank property [1]. It has numerous real-world applications such as image inpainting [2], hyperspectral image restoration [3] and collaborative filtering [4]. This is because, although such data have a high-dimensional structure, their main information lies in a subspace with a much lower dimensionality. Roughly speaking, two strategies are widely used for LRMC, namely, matrix factorization [5, 6] and rank minimization [7, 8]. The former formulates the estimated matrix as a product of two much smaller matrices, but it requires knowing the prior rank information, which may not be easy to determine in real-world scenarios.
Different from matrix factorization, rank minimization can estimate the rank of the observed matrix by employing an SIR to make the singular values sparse [9, 10, 11]. One popular and feasible method is nuclear norm minimization (NNM) [7]. However, since NNM utilizes the \(\ell_{1}\)-norm to shrink all nonzero singular values by the same constant, the resultant solution is biased. To solve this issue, nonconvex sparsity-inducing regularizers (SIRs) are suggested because they have less estimation bias than the \(\ell_{1}\)-norm [13]. Many nonconvex SIRs are adopted to replace the \(\ell_{1}\)-norm for LRMC [9], but they cannot ensure that the resultant subproblems associated with these SIRs are convex, and closed-form solutions to these subproblems are not available, implying high computational costs. To handle this problem, a parameterized nonconvex SIR is proposed in [12]. However, to maintain the convexity of the subproblem associated with this SIR, one parameter is required to be properly set. Moreover, Gu _et al._ [14, 15] employ a weighted NNM (WNNM) for LRMC, yielding better low-rank recovery than the NNM.
On the other hand, it has been analyzed that applying half-quadratic (HQ) optimization [16] or the Legendre-Fenchel (LF) transform [17] to loss functions, including the Welsch, Geman-McClure (GMC) and Cauchy functions [18], can yield regularizers with closed-form proximity operators. However, these regularizers are usually not SIRs [19]. In this work, we attempt to answer an interesting and important question: _under what conditions are the resultant regularizers SIRs?_
Motivated by the results in [3, 19, 20, 21], we devise a framework to generate SIRs with closed-form proximity operators. Note that the loss functions considered in [3, 19] are a special case of our framework. Besides, it is analyzed that the subproblems associated with our SIRs are convex with closed-form solutions, and our SIRs can yield a less biased solution than the \(\ell_{1}\)-norm. We then employ the SIRs for LRMC, and algorithms based on the alternating direction method of multipliers (ADMM) are developed.
## II Preliminaries
### _Proximity Operator_
The Moreau envelope of a regularizer \(\varphi(\cdot)\) is defined as [22]:
\[\min_{y}\ \frac{1}{2}(x-y)^{2}+\lambda\varphi(y) \tag{1}\]
whose solution is given by the proximity operator:
\[P_{\varphi}(x):=\arg\min_{y}\ \frac{1}{2}(x-y)^{2}+\lambda\varphi(y) \tag{2}\]
In particular, when \(\varphi(\cdot)=|\cdot|_{1}\), the solution to (1) is:
\[P_{\ell_{1},\lambda}(x)=\max\{0,|x|-\lambda\}{\rm sign}(x) \tag{3}\]
which is called the proximity operator of \(|\cdot|_{1}\), also known as the soft-thresholding operator. From (3), it is clear that the \(\ell_{1}\)-norm is an SIR. Here, \(\varphi(\cdot)\) is called an SIR if the solution to (1) is zero whenever \(|x|\) is no larger than a threshold. On the other hand, although applying the HQ optimization to the Welsch, Cauchy or GMC function can give the corresponding regularizer with a closed-form proximity operator [18], the generated regularizers are not SIRs.
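For concreteness, the following is a minimal sketch of the soft-thresholding operator (3); it zeroes every entry with \(|x|\leq\lambda\), which is exactly the sparsity-inducing behaviour discussed above.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximity operator of the l1-norm, Eq. (3): shrink magnitudes by lam
    and set to zero every entry whose magnitude does not exceed lam."""
    x = np.asarray(x, dtype=float)
    return np.maximum(0.0, np.abs(x) - lam) * np.sign(x)
```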
### _Related Works_
Given an incomplete matrix \(\mathbf{X}_{\Omega}\in\mathbb{R}^{m\times n}\) with \(\Omega\subset\{1,\cdots,m\}\times\{1,\cdots,n\}\) being the index set of the observed elements, defined as:
\[\left[\mathbf{X}_{\Omega}\right]_{ij}=\begin{cases}X_{ij},&\text{if }(i,j)\in\Omega\\ 0,&\text{if }(i,j)\in\Omega^{c}\end{cases}\]
where \(\Omega^{c}\) is the complement of \(\Omega\), LRMC can be solved by NNM [7]:
\[\min_{\mathbf{M}}\ \|\mathbf{M}\|_{*},\ \text{s.t.}\ \mathbf{M}_{\Omega}=\mathbf{X}_{\Omega} \tag{4}\]
where \(\|\mathbf{M}\|_{*}=\sum_{i=1}^{r}\mathbf{\sigma}_{i}\) denotes the nuclear norm of the estimated matrix \(\mathbf{M}\) and \(\mathbf{\sigma}_{i}\) is the \(i\)th singular value of \(\mathbf{M}\). Nevertheless, the nuclear norm is equivalent to applying the \(\ell_{1}\)-norm to the singular values of a matrix, which shrinks all singular values by the same constant. However, as large singular values may dominate the main structure of real-world data, it is better to shrink them less [14]. Hence nonconvex SIRs have been suggested to replace the \(\ell_{1}\)-norm as nonconvex rank surrogates for LRMC [9]:
\[\min_{\mathbf{M}}\ \|\mathbf{M}\|_{\varphi},\ \text{s.t.}\ \mathbf{M}_{\Omega}=\mathbf{X}_{\Omega} \tag{5}\]
where \(\|\mathbf{M}\|_{\varphi}=\sum_{i=1}^{r}\varphi(\mathbf{\sigma}_{i})\) and \(\varphi(\cdot)\) is a nonconvex SIR. As shown in [10, 11], solving (5) involves the proximity operator of \(\varphi(\cdot)\). However, existing SIRs, like the \(\ell_{p}\)-norm with \(0<p<1\) except for \(p=\{1/2,2/3\}\)[23], may not have the closed-form proximity operator, implying that iterations are needed for its computation. Furthermore, it is worth noting that when \(\varphi(\mathbf{\sigma}_{i})=\mathbf{w}_{i}\mathbf{\sigma}_{i}\) with \(\mathbf{w}_{i}\) being a weight parameter, \(\|\mathbf{M}\|_{\varphi}\) is the weighted nuclear norm and (5) becomes the WNNM problem [14].
## III Framework to Generate SIR and its Application to LRMC
As the Huber [20], truncated-quadratic [3] and hybrid ordinary-Welsch (HOW) [19] functions can yield SIRs using the LF transform, we generalize this class of functions and devise a framework to produce an SIR with a closed-form proximity operator. Then, the generated SIR is considered as a nonconvex rank surrogate for LRMC.
### _Framework to Generate SIRs_
We first provide our framework in the following theorem.
**Theorem 1**.: _Consider a continuously differentiable loss function \(\phi_{h,\lambda}(x)\), defined as:_
\[\phi_{h,\lambda}(x)=\begin{cases}x^{2}/2,&|x|\leq\lambda\\ a\cdot h(|x|)+b,&|x|>\lambda\end{cases} \tag{6}\]
_where \(a\) and \(b\) are constants to make \(\phi_{h,\lambda}(x)\) continuously differentiable, such that \(g(x)=x^{2}/2-\phi_{h,\lambda}(x)\) is a proper, lower semi-continuous and convex function. Then it can be used to generate an SIR \(\varphi_{h,\lambda}(\cdot)\) via LF transform, namely,_
\[\phi_{h,\lambda}(x)=\min_{y}\ \frac{1}{2}(x-y)^{2}+\lambda\varphi_{h,\lambda}(y) \tag{7}\]
_where \(\varphi_{h,\lambda}(y)=\max\limits_{x}\ \phi_{h,\lambda}(x)/\lambda-\frac{1}{2 \lambda}(x-y)^{2}\). The solution to \(y\) in (7) is given by the proximity operator:_
\[P_{\varphi_{h,\lambda}}(x)=\nabla g=\max\left\{0,|x|-a\cdot h^{\prime}(|x|) \right\}\cdot\mathrm{sign}(x) \tag{8}\]
Proof.: The process of obtaining (8) and (7) from (6) is the same as that of obtaining (16) and (17) from (9) in our previous work [19], and we omit it due to the page limit.
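Since Table I is not reproduced above, the following sketch only implements the generic proximity operator (8) for a user-supplied tail derivative \(h^{\prime}\); the explicit zeroing of \(|x|\leq\lambda\) (where \(\nabla g=0\) by (6)) and the function names are our own conventions, not part of the framework itself.

```python
import numpy as np

def prox_generated_sir(x, lam, a, h_prime):
    """Proximity operator of the generated SIR, Eq. (8).
    h_prime: vectorized derivative of the concave tail h in Eq. (6).
    Continuity at |x| = lam follows from a * h'(lam) = lam, which is implied by
    the continuous differentiability of phi_{h,lam}."""
    x = np.asarray(x, dtype=float)
    mag = np.maximum(0.0, np.abs(x) - a * h_prime(np.abs(x)))
    return np.where(np.abs(x) <= lam, 0.0, mag * np.sign(x))

# Sanity check: with h(x) = x and a = lam (the Huber tail of [20]), the operator
# reduces to the soft-thresholding operator (3), i.e. the generated SIR is the l1-norm:
#   prox_generated_sir(x, lam, a=lam, h_prime=lambda t: np.ones_like(t))
```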
In addition, the expression of \(\varphi_{h,\lambda}(\cdot)\) is generally unknown [16, 20], and its properties are provided in the following proposition.
**Proposition 1**.: \(\varphi_{h,\lambda}(\cdot)\) _has the following properties:_
1. _Problem (_7_) is a convex problem although_ \(\varphi_{h,\lambda}(y)\) _is nonconvex._
2. \(P_{\varphi_{h,\lambda}}(x)\) _is monotonically non-decreasing, namely, for any_ \(x_{1}<x_{2}\)_,_ \(P_{\varphi_{h,\lambda}}(x_{1})\leq P_{\varphi_{h,\lambda}}(x_{2})\)_._
3. _If_ \(\phi_{h,\lambda}(x)\) _is strictly concave for_ \(x>\lambda\)_, the resultant proximity operator makes the solution have less bias than the proximity operator of the_ \(\ell_{1}\)_-norm._
Proof.: (i) follows from conjugacy theory, similarly to the proof of Proposition 1 in [19]. As \(\phi_{h,\lambda}\) is an even function and the function \(g\) is convex, we obtain (ii). For (iii), the bias \(\Delta b\) can be estimated by the gap between the identity function and the proximity operator for \(|x|\geq\lambda\); since the proximity operator is odd, we only discuss \(\Delta b=x-P_{\varphi_{h,\lambda}}(x)=a\cdot h^{\prime}(x)\) for \(x\geq\lambda\). From (3), the bias generated by the \(\ell_{1}\)-norm is \(\lambda\). At \(x=\lambda\) the bias generated by our SIRs is also \(\lambda\), so we need \(\Delta b\) to decrease for \(x>\lambda\) to ensure that the bias produced by our SIRs is less than that of the \(\ell_{1}\)-norm. Therefore we require \((\Delta b)^{\prime}=a\cdot h^{\prime\prime}(x)<0\), i.e., \(\phi_{h,\lambda}(x)\) must be strictly concave for \(x>\lambda\).
Next, we specify the function \(g\) as the Welsch, GMC and Cauchy functions to develop the corresponding hybrid ordinary-Welsch (HOW), hybrid ordinary-GMC (HOG) and hybrid ordinary-Cauchy (HOC) functions, where 'ordinary' refers to the quadratic function, and the corresponding SIRs, namely, \(\varphi_{\sigma,\lambda}(\cdot)\), \(\varphi_{\gamma,\lambda}(\cdot)\) and \(\varphi_{\tau,\lambda}(\cdot)\), which are shown in Table I. Moreover, by Proposition 1, to make the bias generated by our SIRs less than that by the \(\ell_{1}\)-norm, we have \(\sigma\leq\sqrt{2}\lambda\), \(\gamma\leq\lambda\) and \(\tau\leq\sqrt{3}\lambda/2\) for \(\varphi_{\sigma,\lambda}(\cdot)\), \(\varphi_{\gamma,\lambda}(\cdot)\) and \(\varphi_{\tau,\lambda}(\cdot)\), respectively. Fig. 1 plots the curves of \(\varphi_{\sigma,\lambda}(\cdot)\), \(\varphi_{\gamma,\lambda}(\cdot)\) and \(\varphi_{\tau,\lambda}(\cdot)\) and their proximity operators.
### _LRMC via Generated SIRs_
In this section, we apply the generated SIRs to LRMC, resulting in:
\[\min_{\mathbf{M}}\ \|\mathbf{M}\|_{\varphi_{\cdot,\lambda}},\ \text{s.t.}\ \mathbf{M}_{ \Omega}=\mathbf{X}_{\Omega} \tag{9}\]
where \(\|\mathbf{M}\|_{\varphi_{\cdot,\lambda}}=\sum_{i=1}^{r}\varphi_{\cdot,\lambda}(\mathbf{\sigma}_{i})\), and \(\varphi_{\cdot,\lambda}(\cdot)\) can be the generated regularizer \(\varphi_{\sigma,\lambda}(\cdot)\), \(\varphi_{\gamma,\lambda}(\cdot)\) or \(\varphi_{\tau,\lambda}(\cdot)\). Problem (9) can be converted into the following equivalent form:
\[\min_{\mathbf{M},\mathbf{E}}\ \|\mathbf{M}\|_{\varphi_{\cdot,\lambda}}\ \text{s.t.}\ \mathbf{X}=\mathbf{M}+\mathbf{E},\ \mathbf{X}_{\Omega^{c}}=0,\ \mathbf{E}_{\Omega}=0 \tag{10}\]
which can be solved by the ADMM, and its augmented Lagrangian function is:
\[\begin{split}\mathcal{L}_{\rho}^{\prime}(\mathbf{M},\mathbf{E},\mathbf{ \Lambda}):=&\|\mathbf{M}\|_{\varphi_{\cdot,\lambda}}+\langle\mathbf{ \Lambda},\mathbf{X}-\mathbf{M}-\mathbf{E}\rangle\\ &+\frac{\rho}{2}\left\|\mathbf{X}-\mathbf{M}-\mathbf{E}\right\|_{F}^{2}\end{split} \tag{11}\]
which is equal to:
\[\begin{split}\mathcal{L}_{\rho}(\mathbf{M},\mathbf{E},\mathbf{\Lambda}):=& 1/\rho\|\mathbf{M}\|_{\varphi_{\cdot,\lambda}}+\frac{1}{2}\left\|\mathbf{X}-\mathbf{M}- \mathbf{E}\right\|_{F}^{2}\\ &+1/\rho\left\langle\mathbf{\Lambda},\mathbf{X}-\mathbf{M}-\mathbf{E}\right\rangle \end{split} \tag{12}\]
where \(\mathbf{\Lambda}\) is the Lagrange multiplier matrix and \(\rho>0\) is the penalty parameter. The variables \(\mathbf{M},\mathbf{E}\) and \(\mathbf{\Lambda}\) are updated at the \((k+1)\)th iteration as follows:
\(Update\ of\ \mathbf{M}\): Given \(\mathbf{E}^{k}\), \(\mathbf{\Lambda}^{k}\) and \(\rho^{k}\), the estimated matrix \(\mathbf{M}\) is obtained by:
\[\arg\min_{\mathbf{M}}1/\rho^{k}\|\mathbf{M}\|_{\varphi_{\cdot,1/\rho^{k}}}+\frac{1}{2 }\left\|\mathbf{D}^{k}-\mathbf{M}\right\|_{F}^{2},\ \mathbf{D}^{k}=\mathbf{X}-\mathbf{E}^{k}+\frac{\mathbf{\Lambda}^{k}}{\rho^{k}} \tag{13}\]
If \(\mathbf{D}^{k}=\mathbf{U}^{k}\Sigma^{k}\mathbf{V}^{k}\) is the singular value decomposition (SVD) of \(\mathbf{D}^{k}\), then the optimal solution to (13) according to Theorem 1 in [11] is:
\[\mathbf{M}^{k+1}=\mathbf{U}^{k}P_{\varphi_{\cdot,1/\rho^{k}}}(\mathbf{\Sigma}^{k})\mathbf{V}^ {k} \tag{14}\]
\(Update\ of\ \mathbf{E}\): Given \(\mathbf{M}^{k+1}\), \(\mathbf{\Lambda}^{k}\) and \(\rho^{k}\), \(\mathbf{E}^{k+1}\) is updated by solving:
\[\arg\min_{\mathbf{E}_{\Omega^{c}}}\frac{1}{2}\left\|\mathbf{X}_{\Omega^{c}}-\mathbf{M}^{k +1}_{\Omega^{c}}+\frac{\mathbf{\Lambda}^{k}_{\Omega^{c}}}{\rho^{k}}-\mathbf{E}_{\Omega ^{c}}\right\|_{F}^{2} \tag{15}\]
with the optimal solution:
\[\mathbf{E}^{k+1}_{\Omega^{c}}=\frac{\mathbf{\Lambda}^{k}_{\Omega^{c}}}{\rho^{k}}-\mathbf{ M}^{k+1}_{\Omega^{c}} \tag{16}\]
The steps of the proposed algorithm are summarized in Algorithm 1. It is worth noting that the updates of \(\mathbf{\Lambda}\) and \(\rho\) are provided only in Algorithm 1 due to the page limit. When the generated regularizers \(\varphi_{\sigma,\lambda}(\cdot)\), \(\varphi_{\gamma,\lambda}(\cdot)\) and \(\varphi_{\tau,\lambda}(\cdot)\) are adopted in (9), we refer to the corresponding algorithms as MC-HOW, MC-HOG and MC-HOC, respectively. We terminate Algorithm 1 when the relative error \(rel_{E}^{k}=\|\mathbf{X}-\mathbf{M}^{k}-\mathbf{E}^{k}\|_{F}/\|\mathbf{X}\|_{F}\leq\xi\) or the iteration number reaches the maximum allowable number \(I_{m}\). Besides, an SVD with complexity \(\mathcal{O}(\min(m,n)mn)\) is required per iteration. Furthermore, the convergence analysis of Algorithm 1 is provided in Proposition 2; its proof is similar to that of Theorem 3 in [14], and we omit it due to the page limit.
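To make the order of the updates concrete, here is a minimal sketch of Algorithm 1. Since the exact \(\mathbf{\Lambda}\) and \(\rho\) schedules appear only in Algorithm 1, the multiplier-ascent and geometric penalty-growth steps below are the standard ADMM choices and should be read as assumptions; `prox` stands for the proximity operator of the chosen SIR, applied entrywise to the singular values.

```python
import numpy as np

def lrmc_admm(X, mask, prox, rho=1e-3, mu=1.1, rho_max=1e10, max_iter=500, xi=1e-7):
    """ADMM sketch for problem (10). `mask` is a boolean array of observed entries;
    `prox(sigma, lam)` applies the SIR proximity operator to the singular values."""
    X = X * mask                               # X_{Omega^c} = 0 by construction
    M = np.zeros_like(X)
    E = np.zeros_like(X)                       # E is supported on the unobserved entries
    Lam = np.zeros_like(X)
    for _ in range(max_iter):
        # M-update, Eqs. (13)-(14): SVD of D^k, then prox on the singular values
        D = X - E + Lam / rho
        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        M = U @ np.diag(prox(s, 1.0 / rho)) @ Vt
        # E-update, Eqs. (15)-(16): only the unobserved entries are free
        E = (~mask) * (Lam / rho - M)
        # multiplier and penalty updates (standard ADMM choices, assumed)
        Lam = Lam + rho * (X - M - E)
        rho = min(mu * rho, rho_max)
        if np.linalg.norm(X - M - E) <= xi * np.linalg.norm(X):
            break
    return M
```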
**Proposition 2**.: _The sequence \(\{\mathbf{M}^{k},\mathbf{E}^{k},\mathbf{\Lambda}^{k}\}\) generated in Algorithm 1 satisfies:_
1. _The generated sequences_ \(\{\mathbf{M}^{k},\mathbf{E}^{k},\mathbf{\Lambda}^{k}\}\) _are bounded._
2. \(\lim_{k\rightarrow\infty}\left\|\mathbf{M}^{k+1}-\mathbf{M}^{k}\right\|_{F}^{2}=0\)__ \(\lim_{k\rightarrow\infty}\left\|\mathbf{X}-\mathbf{M}^{k+1}-\mathbf{E}^{k+1}\right\|_{F}^ {2}=0\)_._
## IV Experimental Results
We compare our algorithms with the competing approaches, including NNM [24], IRNN-\(\ell_{p}\) (\(p=0.5\)) [9] and IRNN-SCAD [10]. All numerical simulations are conducted using a computer with 3.0 GHz CPU and 16 GB memory. A rank-\(r\) matrix \(\mathbf{X}=\mathbf{U}\mathbf{V}\) is first generated, where the entries of \(\mathbf{U}\in\mathbb{R}^{m\times r}\) and \(\mathbf{V}\in\mathbb{R}^{r\times n}\) are sampled from the standard Gaussian distribution. In this study, \(r=f_{r}\times n\) where \(f_{r}\) is the fraction of full rank. Besides, we randomly remove \(f_{m}\times m\times n\) entries from \(\mathbf{X}\), where \(f_{m}\) is the fraction of missing entries, to yield the incomplete matrix \(\mathbf{X}_{\Omega}\). Furthermore, root mean square error (RMSE) defined as \(\mathrm{RMSE}=\|\mathbf{X}-\mathbf{M}\|_{F}/\sqrt{mn}\) is adopted to measure the recovery performance. In our experiments, \(m=300\), \(n=200\), \(f_{m}\) and \(f_{r}\) range from \(0.01\) to \(0.05\) with a step size of \(0.02\). All methods are evaluated by the RMSE based on \(100\) independent runs. Fig. 2 shows the log-scale RMSE with different \(\{p_{r},p_{s}\}\) values. It is seen that compared with the convex NNM, the nonconvex rank surrogates can recover more cases, and MC-HOW yields the
biggest success area. Here, if \(\text{RMSE}<10^{-3}\), we regard the recovery as successful.
On the other hand, the runtime (in seconds) for all techniques is investigated under different matrix ranks. Fig. 3 plots runtime versus matrix rank with \(f_{m}=0.1\). We see that the runtime of our approaches is less than that of IRNN-\(\ell_{p}\) because our regularizers have closed-form proximity operators. Nevertheless, NNM is the most computationally efficient since the proximity operator of the \(\ell_{1}\)-norm has a simpler expression than those of our SIRs.
## V Conclusion
In this paper, we devise a framework to generate SIRs with closed-form proximity operators. We analyze that the Moreau envelope of our regularizers is a convex problem although the regularizers may be nonconvex, and provide the corresponding closed-form solution. Besides, it is proved that the bias generated by our SIRs is less than that by the \(\ell_{1}\)-norm under certain conditions. Then, we employ our SIRs as nonconvex rank surrogates for LRMC, and algorithms based on the ADMM are developed. Finally, extensive numerical experiments are conducted to demonstrate that the developed algorithms can achieve better recovery under different matrix ranks and missing ratios.
|
2306.08650 | Learning to Rank when Grades Matter | Graded labels are ubiquitous in real-world learning-to-rank applications,
especially in human rated relevance data. Traditional learning-to-rank
techniques aim to optimize the ranked order of documents. They typically,
however, ignore predicting actual grades. This prevents them from being adopted
in applications where grades matter, such as filtering out ``poor'' documents.
Achieving both good ranking performance and good grade prediction performance
is still an under-explored problem. Existing research either focuses only on
ranking performance by not calibrating model outputs, or treats grades as
numerical values, assuming labels are on a linear scale and failing to leverage
the ordinal grade information. In this paper, we conduct a rigorous study of
learning to rank with grades, where both ranking performance and grade
prediction performance are important. We provide a formal discussion on how to
perform ranking with non-scalar predictions for grades, and propose a
multiobjective formulation to jointly optimize both ranking and grade
predictions. In experiments, we verify on several public datasets that our
methods are able to push the Pareto frontier of the tradeoff between ranking
and grade prediction performance, showing the benefit of leveraging ordinal
grade information. | Le Yan, Zhen Qin, Gil Shamir, Dong Lin, Xuanhui Wang, Mike Bendersky | 2023-06-14T17:30:02Z | http://arxiv.org/abs/2306.08650v2 | # Learning to Rank when Grades Matter
###### Abstract.
Graded labels are ubiquitous in real-world learning-to-rank applications, especially in human rated relevance data. Traditional learning-to-rank techniques aim to optimize the ranked order of documents. They typically, however, ignore predicting actual grades. This prevents them from being adopted in applications where grades matter, such as filtering out "poor" documents. Achieving both good ranking performance and good grade prediction performance is still an under-explored problem. Existing research either focuses only on ranking performance by not calibrating model outputs, or treats grades as numerical values, assuming labels are on a linear scale and failing to leverage the ordinal grade information. In this paper, we conduct a rigorous study of learning to rank with grades, where both ranking performance and grade prediction performance are important. We provide a formal discussion on how to perform ranking with non-scalar predictions for grades, and propose a multiobjective formulation to jointly optimize both ranking and grade predictions. In experiments, we verify on several public datasets that our methods are able to push the Pareto frontier of the tradeoff between ranking and grade prediction performance, showing the benefit of leveraging ordinal grade information.
Learning to Rank; Ordinal Regression; Multiobjective Optimization +
Footnote †: journal: Information systems Information retrieval
+
Footnote †: journal: Information systems Information retrieval
+
Footnote †: journal: Information systems Information retrieval
+
Footnote †: journal: Information systems Information retrieval
+
Footnote †: journal: Information systems Information retrieval
+
Footnote †: journal: Information systems Information retrieval
## 1. Introduction
Learning to rank (LTR) with graded labels is ubiquitous in real-world applications. For example, in traditional LTR datasets such as Web30K, human raters rate each query-document pair from "irrelevant" (graded as 0) to "perfectly relevant" (graded as 4). Grades are _ordinal_, i.e., represented by discrete numbers with a natural order, but not necessarily preserving numerical relations. For example, grade 4 is not necessarily twice as relevant as grade 2. Traditional LTR work focuses on ranking performance or treats grades as numerical values (Zhou et al., 2019), ignoring potential non-linearity of the grading scale. Predicting actual grades is traditionally treated as a classification problem, which has not been given much attention in the LTR literature (Liang et al., 2019) and which usually ignores the order of the grades. Unlike classical LTR work, we consider the problem in which ranking performance and grade prediction performance, measured by ranking metrics and classification accuracy, respectively, are both important. We argue that achieving good performance on both fronts delivers a better user-facing experience via optimal ranking _and_ capabilities such as filtering out "poor" documents with certain grades. For example, a user could choose to show just perfectly relevant results or any relevant results when grade predictions are available.
In the sequel, we present a rigorous study of LTR with graded labels. We formally demonstrate ranking with non-scalar predictions for grades. Based on ordinal prediction aggregation, we propose a multiobjective formulation that directly trades off ranking and grade prediction. We conduct an extensive experimental study on 3 public LTR datasets, comparing with state-of-the-art ranking methods and ranking-agnostic classification methods. Experimental results show interesting trade-off behaviors of different methods. Our proposed methods are able to push the Pareto frontier of ranking and grade prediction performances.
## 2. Related Works
LTR has been widely studied with focus on designing losses and optimization methods to improve ranking performance. Several notable losses include Pairwise Logistic (Beng et al., 2017) (also called RankNet) and ListNet (Zhou et al., 2019). Subsequent work included multiple perspectives to optimize ranking metrics. These include LambdaRank (Beng et al., 2017), SoftNDCG (Krizhevsky et al., 2014), SmoothNDCG (Beng et al., 2017), ApproxNDCG (Beng et al., 2017; Liu et al., 2018) and GumbelApproxNDCG (Beng et al., 2017), among many others. A recent work (LambdaLoss (Li et al., 2019)) used ideas from LambdaRank to develop a theoretically sound framework for neural optimization of ranking metrics.
Ranking methods studied in the LTR literature focus on improving ordering, but not on prediction accuracy of the actual labels (or grades). Previous work (Liang et al., 2019) studied whether accurate label predictions could lead to good ranking, but did not directly optimize both objectives. The multi-objective setting has been well studied in Gradient Boosting Decision Trees (Beng et al., 2017; Liu et al., 2018; Liu et al., 2018), but little attention has been paid to the two objectives we are considering. Calibrated LTR, where model predictions are anchored to concrete meanings, has also drawn some attention due to its practical value (Liang et al., 2019; Liu et al., 2018; Zhou et al., 2019). However, existing work treats grades as real values, assuming that grade values are on a linear scale. This is inaccurate for many applications where the grades are ordinal and discrete, but not linear. To the best of our knowledge, our work is the first to formally study, and demonstrate the benefit on various tasks of, learning to rank with graded labels when prediction of the labels matters.
## 3. Problem Formulation
We consider a ranking dataset with graded documents,
\[\mathcal{D}=\{(q,\{(\mathbf{x}_{i},y_{i})\mid i\in\mathcal{D}_{q}\})\mid q\in\mathcal{Q}\}.\]
Dataset \(\mathcal{D}\) consists of queries \(q\in\mathcal{Q}\), each associated with a set of candidate documents \(\mathcal{D}_{q}\). Document \(i\) is represented by a feature vector \(\mathbf{x}_{i}\) and a graded label \(y_{i}\). Without loss of generality, we assume \(y_{i}\in\{0,1,...,L-1\}\) for \(L\) possible ordinal classes. The ordinal relevance relation aligns with the integer order. The graded labels in this setting play two aligned roles: (1) they define the ordinal category of each document in a query; and (2) presenting the list of documents in descending order of the grades optimizes ranking performance.
Conventionally, optimization focuses on one of two objectives: (1) to predict the correct category of each query-document pair; or (2) to produce the correct ranking regardless of the category predictions. Ideally, since a perfect ranking can be achieved by sorting the grades (i.e., perfect category predictions imply a perfect ranking), optimizing (1) is sufficient to achieve (2). In practice, however, directly optimizing (2) usually leads to much better ranking performance. In this work, we consider a formulation to jointly optimize the two objectives.
### Ordinal grade prediction
We aim to predict correct graded labels for query-document pairs.
_Mean squared error._ A naive and straight-forward way is to cast ordinal classes as real values and then apply linear regression. We consider a parametric model that predicts a real value for each query-document pair minimizing mean squared error between the model prediction \(f_{\theta}(\mathbf{x}_{i})\) and the graded label \(y_{i}\),
\[\mathcal{L}^{\text{MSE}}(\mathcal{D})=\sum_{i}\left(f_{\theta}(\mathbf{x}_{i} )-y_{i}\right)^{2}. \tag{1}\]
The model converges to the expected \(y_{i}\), and we can pick the grade that minimizes the distance to the model's prediction,
\[\hat{y}_{i}=\text{argmin}_{l=0,...,L-1}|l-f_{\theta}(\mathbf{x}_{i})|. \tag{2}\]
An implicit assumption is that the grade scale is well calibrated. Thus, differences in relevance are equal if differences in labels are equal. However, this may not be the case for every graded dataset.
_Multi-class cross entropy._ Making predictions of graded categories can be seen as a multi-class classification problem, and the presumption above is no longer needed. The model predictions, \(\mathbf{f}_{\theta}(\mathbf{x}_{i})\), with \(L\) logits for \(L\) grades, can be transformed to normalized probabilities with a softmax function,
\[p(y_{i}=l|\mathbf{x}_{i})=\frac{\exp(f_{\theta}^{l}(\mathbf{x}_{i}))}{\sum_{j}\exp(f_{\theta}^{j}(\mathbf{x}_{i}))}. \tag{3}\]
The superscript \(l\) labels the \(l\)-th component of the predictions. The model is trained to minimize cross-entropy loss,
\[\mathcal{L}^{\text{CE}}(\mathcal{D})=-\sum_{i}\sum_{l=0}^{L-1}\mathbb{I}(y_{i }=l)\ln(p(y_{i}=l|\mathbf{x}_{i})), \tag{4}\]
where \(\mathbb{I}(y_{i}=l)\) is the indicator function of item \(i\) taking label \(l\). Given the model predicted probabilities of each ordinal category, we naturally use the label maximizing the corresponding probability as the predicted ordinal grade,
\[\hat{y}_{i}=\text{argmax}_{l=0,...,L-1}p(y_{i}=l|\mathbf{x}_{i}). \tag{5}\]
The multi-class cross entropy approach ignores the ordinal relation of grades, which could possibly be leveraged in training. For example, if a document is not likely in grade \(l\) or higher, then it is less likely in grade \(l+1\) or higher. Ordinal regression methods have been applied to leverage this relation.
_Univariate ordinal regression._ Univariate ordinal regression leverages ordinal relations by mapping ordinal grades into consecutive regions on the real axis. \(L-1\) variables \(\phi_{1}\), \(\phi_{2}\),..., \(\phi_{L-1}\), constrained to \(\phi_{l}\leq\phi_{m}\) whenever \(l<m\), are trained as class boundaries for the full dataset (or slices of it). Together with \(\phi_{0}=-\infty\) and \(\phi_{L}=\infty\), the \(L+1\) boundaries partition the real axis into \(L\) consecutive regions. The model learns a per-item shift \(f_{\theta}(\mathbf{x}_{i})\) of the grid of boundaries. Fitting the shifted boundaries to a _probability density function (PDF)_ with infinite support renders the integral over each region as the class probability (integrating from \(-\infty\) to a shifted boundary gives the _cumulative distribution function (CDF)_ of an item up to some label class). Fitting a _logistic PDF_ gives the probability
\[p(y_{i}\geq l|\mathbf{x}_{i})=\frac{1}{1+\exp(-[f_{\theta}(\mathbf{x}_{i})- \phi_{l}])} \tag{6}\]
for item \(i\) belonging to class \(l\) or greater. Thus,
\[p(y_{i}=l|\mathbf{x}_{i})=p(y_{i}\geq l|\mathbf{x}_{i})-p(y_{i} \geq l+1|\mathbf{x}_{i})\\ =\frac{1}{1+\exp(-[f_{\theta}(\mathbf{x}_{i})-\phi_{l}])}-\frac{1 }{1+\exp(-[f_{\theta}(\mathbf{x}_{i})-\phi_{l+1}])} \tag{7}\]
is the probability of \(i\) taking label \(l\). With the probability in Eq. (7), the model \(f_{\theta}\) and boundaries \(\{\phi_{l}\}\) are trained to minimize the cross entropy loss in Eq. (4).
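A minimal sketch of Eqs. (6)-(7), with the boundaries sorted to respect the ordering constraint; the function and variable names are illustrative rather than part of any library.

```python
import numpy as np

def uniord_probs(shift, boundaries):
    """Univariate ordinal class probabilities, Eqs. (6)-(7).
    shift: scalar model output f_theta(x); boundaries: the L-1 trained phi_l."""
    phi = np.concatenate(([-np.inf], np.sort(boundaries), [np.inf]))  # phi_0, ..., phi_L
    cdf_ge = 1.0 / (1.0 + np.exp(-(shift - phi)))   # p(y >= l | x) for l = 0, ..., L
    return cdf_ge[:-1] - cdf_ge[1:]                 # p(y = l  | x) for l = 0, ..., L-1
```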
_Multivariate ordinal regression._ Multivariate ordinal regression (see also Ref. (Han et al., 2017)) leverages the ordinal relations by dividing the \(L\)-level ordinal prediction into \(L-1\) successive binary classifications, which learn \(L-1\) values \(f_{\theta}^{l}(\mathbf{x}_{i})\), each with a logistic regression, giving
\[p(y_{i}\geq l|\mathbf{x}_{i})=\frac{1}{1+\exp(-f_{\theta}^{l}(\mathbf{x}_{i}))},\quad\text{for }l=1,2,...,L-1, \tag{8}\]
with \(p(y_{i}\geq 0|\mathbf{x}_{i})=1\) and \(p(y_{i}\geq L|\mathbf{x}_{i})=0\). Then,
\[p(y_{i}=l|\mathbf{x}_{i})=\frac{1}{1+\exp(-f_{\theta}^{l}(\mathbf{x}_{i}))}- \frac{1}{1+\exp(-f_{\theta}^{l+1}(\mathbf{x}_{i}))}. \tag{9}\]
The multivariate ordinal regression trains the model to minimize the sum of the \(L-1\) consecutive logistic losses,
\[\mathcal{L}^{\text{Ord}}(\mathcal{D})=-\sum_{i}\sum_{l=1}^{L-1} \left[\mathbb{I}(y_{i}\geq l)\ln(p(y_{i}\geq l|\mathbf{x}_{i}))\right.\\ \left.+\mathbb{I}(y_{i}<l)\ln(1-p(y_{i}\geq l|\mathbf{x}_{i})) \right]. \tag{10}\]
Both univariate and multivariate ordinal methods could predict the grade using max probability, as in Eq. (5).
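For clarity, a small numpy sketch of the loss in Eq. (10), using the numerically stable form of the per-threshold binary logistic loss; the shapes and names are illustrative (the paper's experiments use a TensorFlow implementation).

```python
import numpy as np

def multivariate_ordinal_loss(logits, y, L):
    """Eq. (10): sum of L-1 binary logistic losses for the events {y_i >= l}.
    logits: array of shape [n, L-1] with logits[i, l-1] = f_theta^l(x_i);
    y: integer grades in {0, ..., L-1}."""
    levels = np.arange(1, L)                                   # thresholds l = 1, ..., L-1
    targets = (y[:, None] >= levels[None, :]).astype(float)    # indicator I(y_i >= l)
    # stable form of -[t*log(sigmoid(z)) + (1-t)*log(1 - sigmoid(z))]
    per_level = (np.maximum(logits, 0.0) - logits * targets
                 + np.log1p(np.exp(-np.abs(logits))))
    return per_level.sum()
```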
### Ranking prediction
LTR methods usually care only about the ranking of documents in the same list, and can be insensitive to the absolute values of predictions. In the most popular state-of-the-art ranking methods, as introduced below, the model predicts a ranking score, \(s_{i}=f_{\theta}(\mathbf{x}_{i})\in\mathbb{R}\), for each query-document pair, and the documents in the same query list are then ranked by sorting their scores.
_Lambda loss._ As ranking performance is usually measured by ranking metrics, some methods directly optimize these metrics or corresponding surrogates. The Lambda loss (Lambda, 2017; Lambda, 2017) is an example, where we reweight the gradient of each pair in a pairwise logistic loss (Lambda, 2017) by the difference between the ranking metric and its value
when flipping the pair. To optimize the _Normalized Discounted Cumulative Gain (NDCG)_ metric (Krishnan et al., 2017), we apply
\[\mathcal{L}^{\mathrm{Lambda}}(\mathcal{D})=-\sum_{q\in\mathcal{Q}}\sum_{i,j\in \mathcal{D}_{q}:y_{i}>y_{j}}\Delta_{i,j}\ln\frac{1}{1+\exp(-(s_{i}-s_{j}))}, \tag{11}\]
where \(\Delta_{ij}\) is the LambdaWeight as defined in Eq. (11) of (Krishnan et al., 2017).
## 4. Methods
The main challenge to balance the two roles of the graded labels is to align grade prediction methods and ranking methods.
_Ranking score of grade prediction methods._ Compared with the _mean squared error_ and _univariate ordinal_ methods, where we can directly use the scalar predictions as the ranking scores, \(s_{i}=f_{\theta}(\mathbf{x}_{i})\), it is less straightforward to determine ranking scores for the multivariate _multi-class cross entropy_ and _ordinal_ methods. Each component of the multivariate output corresponds to a well-defined probability, as in Eqs. (3) and (8), but contains only part of the information needed for ranking, so no single output scalar is sufficient on its own. We thus propose to use the expected grade predictions of these methods as the ranking scores, which aligns with sorting by grades. Following Eq. (3), for the _multi-class cross entropy_ method, we have
\[s_{i}=\mathbb{E}[y_{i}]=\sum_{l=0}^{L-1}lp(y_{i}=l|\mathbf{x}_{i})=\sum_{l=0} ^{L-1}l\frac{\exp(f_{\theta}^{l}(\mathbf{x}_{i}))}{\sum_{j}\exp(f_{\theta}^{j} (\mathbf{x}_{i}))}. \tag{12}\]
Following Eq. (8), assuming equally spaced consecutive label values, for the _multivariate ordinal_ method, we have
\[s_{i}=\mathbb{E}[y_{i}]=\sum_{l=1}^{L-1}[l-(l-1)]p(y_{i}\geq l| \mathbf{x}_{i})=\sum_{l=1}^{L-1}\frac{1}{1+\exp(-f_{\theta}^{l}(\mathbf{x}_{i }))}. \tag{13}\]
_Multiobjective methods._ Given the ranking score from the ordinal predictions in Eqs. (12) and (13), we can also extend the multiobjective setting to _multi-class cross entropy_ and _multivariate ordinal_ methods, with a total loss,
\[\mathcal{L}^{\mathrm{MultiObj}}(\mathcal{D})=(1-\alpha)\mathcal{L}^{\mathrm{ Ord}}(\mathcal{D};f_{\theta})+\alpha\mathcal{L}^{\mathrm{Rank}}( \mathcal{D};s), \tag{14}\]
where the ranking score function \(s\) is defined by the grade prediction function \(f_{\theta}\), and \(\alpha\) gives the relative weight on the ranking method.
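A minimal sketch of the ranking scores (12)-(13) and the combination (14). In the experiments the ranking loss itself comes from a learning-to-rank library, so `rank_loss` is treated here as a precomputed scalar; all names are illustrative.

```python
import numpy as np

def mcce_scores(logits):
    """Eq. (12): expected grade under the softmax distribution over L classes."""
    z = logits - logits.max(axis=-1, keepdims=True)            # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return (p * np.arange(logits.shape[-1])).sum(axis=-1)

def ordinal_scores(logits):
    """Eq. (13): sum of the L-1 sigmoid probabilities p(y_i >= l | x_i)."""
    return (1.0 / (1.0 + np.exp(-logits))).sum(axis=-1)

def multiobjective_loss(grade_loss, rank_loss, alpha):
    """Eq. (14): convex combination of the grade-prediction and ranking losses."""
    return (1.0 - alpha) * grade_loss + alpha * rank_loss
```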
## 5. Experiments
### Experimental Setup
We study the problem with three large public learning-to-rank datasets, Web30K (Krishnan et al., 2017), Yahoo (Yahoo, 2018), and Istella (Krishnan et al., 2017). The statistics of the datasets used are summarized in Table 1.
_Comparing Methods._ The focus of this paper is on the loss function; thus all compared methods on each dataset share the same model architecture, containing three layers with 1024, 512, 256 hidden units, implemented with a public learning-to-rank library: TensorFlow Ranking 1. In addition, we apply the log1p input transformation, batch normalization, and dropout (Krizhevsky et al., 2014). Hyperparameters, including the learning rate, batch normalization momentum, dropout rate, and rank loss weight \(\alpha\), are tuned for each method, when applicable, on the validation set.
Footnote 1: [https://github.com/tensorflow/ranking](https://github.com/tensorflow/ranking)
As summarized in Table 2, we study the naive methods (MSL, MCCE, UniOrd, and Ordinal) that train models to directly predict relevance grades, compared with the SOTA ranking methods (Lambda, Softmax, USoft, Gumbel), as well as the multiobjective methods allowing us to optimize both grade prediction accuracy and ranking simultaneously.
_Metrics._ To quantify the methods on both grade prediction accuracy and ranking, we consider metrics in both categories. For ranking performance, we measure NDCG metrics (Krishnan et al., 2017), which we try to maximize. Specifically, we use NDCG@10, which scores the top 10 positions. For grade prediction performance, we want to minimize the cross entropy (CE) in Eq. (4) and the mean square error (MSE) in Eq. (1), and to maximize the classification accuracy (ACC). The grade prediction metrics CE and ACC depend on predictions of grade probabilities. These are not defined by ranking methods that predict a single score. To evaluate such metrics for ranking methods, we convert ranking scores to grade probabilities by introducing ordinal boundaries \(\phi_{l}\), as those used for univariate ordinal regression in Eqs. (6) and (7). The boundaries \(\phi_{l}\) are trained to optimize the cross entropy in Eq. (4) with fixed model parameters \(\theta\).
### Results and Discussion
The main results are summarized in Table 3. We can make the following observations: (_i_) In terms of grade prediction performance, MCCE and Ordinal are strong baselines: they show the
\begin{table}
\begin{tabular}{l|c|c c c|c c c c c c} \hline \hline & \#features & \multicolumn{3}{c|}{\#queries} & \multicolumn{3}{c|}{avg.} & \multicolumn{3}{c}{grade ratio (\%)} \\ & & training & validation & test & \#docs & 0 & 1 & 2 & 3 & 4 \\ \hline Web30K & 136 & 18,919 & 6,306 & 6,306 & 119 & 51.4 & 32.5 & 13.4 & 1.9 & 0.8 \\ Yahoo & 700 & 19,944 & 2,994 & 6,983 & 24 & 26.1 & 35.9 & 28.5 & 7.6 & 1.9 \\ Istella & 220 & 20,091 & 2,318 & 9,799 & 316 & 96.3 & 0.8 & 1.3 & 0.9 & 0.7 \\ \hline \hline \end{tabular}
\end{table}
Table 1. The statistics of the three largest public benchmark datasets for LTR models.
\begin{table}
\begin{tabular}{l|l} \hline \hline Method & Description \\ \hline MSL & Mean squared error loss method in Eq. (1). \\ MCCE & Multi-class classification in Eq. (4). \\ UniOrd & Univariate Ordinal regression in Eq. (7). \\ Ordinal & Vanilla multivariate Ordinal regression in Eq. (8). \\ Lambda (Krishnan et al., 2017) & LambdaLoss@1 method optimizing NDCG metric in Eq. (11). \\ MSL (Lambda) (Krishnan et al., 2017) & Multiobjective method combining MSL and Lambda in Eq. (14). \\ MCCE (Lambda) & Multiobjective method combining MCCE and Lambda in Eqs. (12) and (14). \\ UniOrd (Lambda) & Multiobjective method combining UniOrd and Lambda in Eq. (14). \\ Ordinal (Lambda) & Multiobjective method combining Ordinal and Lambda in Eqs. (13) and (14). \\ \hline \hline \end{tabular}
\end{table}
Table 2. Compared methods.
most competitive CE and ACC performance, for which they are directly optimized. In addition, by predicting the expected grade value using Eqs. (12) and (13), they also give competitive MSE. (_ii_) More interestingly, on Web30K, the multiobjective setting combining the MCCE objective and the Lambda objective shows the best CE, MSE, and ACC. This demonstrates that the ranking objective is synergistic with grade prediction via MCCE on this dataset. (_iii_) Similarly, the best ranking NDCG is achieved by one of the proposed multiobjective methods on each dataset: MCCE (Lambda) on Web30K, Ordinal (Lambda) on Yahoo, and UniOrd (Lambda) on Istella. These best values are statistically significantly better than the state-of-the-art ranking baselines, which also indicates a synergistic interaction of the two objectives on the ranking task. (_iv_) On the contrary, the traditional multiobjective method combining MSL and Lambda shows inferior grade prediction performance to MSL alone and inferior ranking performance to Lambda (except on Yahoo). This implies no synergy between MSL and ranking losses.
We further analyze the behaviors of the methods in terms of their trade-offs between the ranking performance (measured by NDCG@10) and the grade prediction performance (measured by ACC). The results are shown in Figure 1. For each of the multiobjective methods, we can probe multiple points by varying \(\alpha\), and we connect the Pareto frontiers for each combination of a grade prediction method and the Lambda method. From the tradeoff plot, we observe: (_i_) The grade prediction objective and the ranking objective are not simply trading off with each other, but can work collaboratively in a certain range of the rank weight \(\alpha\); (_ii_) The proposed combinations of a grade prediction objective (MCCE, Ordinal, and UniOrd) and the ranking objective probe different Pareto frontiers on different datasets, and are consistently better than a simple combination of MSL and Lambda. These behaviors provide guidance to practitioners: depending on the dataset, practitioners can bias towards one of the multiobjective methods and tune the ranking objective weight \(\alpha\) for the best balance of grade prediction and ranking.
As this work focuses on neural network models, whether these observations extend to GBDT models needs further study, but we foresee no constraints that would limit the generalization.
## 6. Conclusion
We provided a rigorous study of learning to rank with graded labels when grades matter, which has practical value but is less explored in the literature. We studied several existing classification and state-of-the-art ranking methods, and proposed several methods that address the challenges of performing learning to rank with the goal of also accurately predicting ordinal grades. Experiments show that grade prediction and ranking can have a synergistic interaction, allowing us to push the Pareto frontier in the ranking and grade prediction trade-off.
Figure 1. Tradeoffs of methods on classification accuracy (ACC) versus NDCG@10. Lines correspond to the Pareto fronts of different grade prediction objectives in the multiobjective setting, labeled in the legend. The results best balancing ACC and NDCG@10, marked in bold, are chosen to represent the MultiObj method in Table 3.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Web30K} & \multicolumn{3}{c}{Yahoo} & \multicolumn{3}{c}{Istella} \\ \hline Method & CE & MSE & ACC & NDCG@10 & CE & MSE & ACC & NDCG@10 & CE & MSE & ACC & NDCG@10 \\ \hline \hline MSL & 12.348 & 0.5414 & 0.5531\({}^{\dagger}\) & 0.5002\({}^{\ddagger}\) & 13.570 & 0.5781 & 0.5089 & 0.7720 & 1.8310 & 0.1166 & 0.9337\({}^{\ddagger}\) & 0.7120\({}^{\ddagger}\) \\ \hline MCCE & 0.9035 & 0.5384 & 0.6018\({}^{\dagger}\) & 0.5028\({}^{\ddagger}\) & **1.0531** & **0.5736** & **0.5260\({}^{\dagger}\)** & 0.7722 & **0.1236** & **0.1085** & 0.9611\({}^{\dagger}\) & 0.7111\({}^{\ddagger}\) \\ UniOrd & 0.9202 & 1.5899 & 0.5953\({}^{\dagger}\) & 0.4953\({}^{\ddagger}\) & 1.0916 & 1.6029 & 0.5155\({}^{\dagger}\) & 0.7692\({}^{\ddagger}\) & 0.1276 & 112.39 & 0.9612\({}^{\dagger}\) & 0.7151\({}^{\ddagger}\) \\ Ordinal & 0.9066 & 0.5405 & 0.6013\({}^{\dagger}\) & 0.5053 & 1.0628 & 0.5763 & 0.5235\({}^{\dagger}\) & 0.7698\({}^{\ddagger}\) & 0.1252 & 0.1093 & **0.9616\({}^{\dagger}\)** & 0.7123\({}^{\ddagger}\) \\ \hline Lambda [(11)] & 0.9444 & 1.8466 & 0.5709\({}^{\dagger}\) & 0.5057 & 1.4078 & 3.8040 & 0.2993\({}^{\ddagger}\) & 0.7716 & 0.1577 & 544.92 & 0.9578\({}^{\ddagger}\) & 0.7310\({}^{\dagger}\) \\ \hline MSL (Lambda) [(22)] & 12.543 & 0.5566 & 0.5460 & 0.5054 & 13.591 & 0.5781 & 0.5081 & 0.7726 & 1.6886 & 0.1215 & 0.9389 & 0.7251 \\ \hline MCCE (Lambda) & **0.9027** & **0.5377** & **0.6030\({}^{\dagger}\)** & **0.5107\({}^{\dagger}\)** & 1.0604 & **0.5736** & 0.5232\({}^{\dagger}\) & 0.7734 & 0.1328 & 0.1206 & 0.9605\({}^{\dagger}\) & 0.7288\({}^{\dagger}\) \\ UniOrd (Lambda) & 0.9280 & 1.5973 & 0.5877\({}^{\dagger}\) & 0.5073\({}^{\dagger}\) & 1.1109 & 1.5923 & 0.5040 & 0.7721 & 0.1422 & 319.43 & 0.9581\({}^{\dagger}\) & **0.7320\({}^{\dagger}\)** \\ Ordinal (Lambda) & 0.9056 & 0.5394 & 0.6006\({}^{\dagger}\) & 0.5100\({}^{\dagger}\) & 1.0650 & 0.5758 & 0.5225\({}^{\dagger}\) & **0.7743\({}^{\ddagger}\)** & 0.1365 & 0.1242 & 0.9593\({}^{\dagger}\) & 0.7298\({}^{\dagger}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3. Comparisons on classification and ranking for three LTR datasets. Bold numbers are the best in each column. Up arrow \({}^{\ddagger}\)* and down arrow \({}^{\ddagger}\)* indicate statistical significance with p-value=0.01 of better and worse ACC/NDCG performance than the multiobjective baseline “MSL (Lambda)”, respectively. The results of multiobjective methods in the table correspond to the ones of optimal balance of ACC and NDCG@10, as the bold markers in Figure 1. |
2302.05683 | Quantum probing of singularities at event horizons of black holes | It is proved that coordinate transformations of the Schwarzschild metric to
new static and stationary metrics do not eliminate the mode of a particle
''fall'' to the event horizon of a black hole. This mode is unacceptable for
the quantum mechanics of stationary states. | V. P. Neznamov | 2023-02-11T12:39:31Z | http://arxiv.org/abs/2302.05683v1 | ###### Abstract
It is proved that coordinate transformations of the Schwarzschild metric to new static and stationary metrics do not eliminate the mode of a particle "fall" to the event horizon of a black hole. This mode is unacceptable for the quantum mechanics of stationary states.
**QUANTUM PROBING OF SINGULARITIES AT EVENT HORIZONS**
**OF BLACK HOLES**
V. P. Neznamov*
Footnote *: [email protected], [email protected]
_Russian Federal Nuclear Center-All-Russian Research Institute of Experimental Physics, Mira pr., 37, Sarov, 607188, Russia_
_National Research Nuclear University MEPhI, Moscow, 115409, Russia_
PACS numbers: 03.65.-w, 04.62.+v, 04.70.Dy
## 1 Introduction
In Refs. [1], [2], we considered the interaction of scalar particles (\(S=0\)), photons (\(S=1\)) and fermions (\(S=1/2\)) with Schwarzschild, Reissner-Nordstrom, Kerr and Kerr-Newman black holes with zero and non-zero cosmological constants. For the above metrics and particles with different spins, the existence of the mode of a particle "fall" to the event horizons was found. The mode of a quantum-mechanical "fall" of particles to the singular center is examined in detail, for instance, in Refs. [3]-[5]. This mode is unacceptable for quantum mechanics.
Within the framework of general relativity (GR), the coordinate singularity of the Schwarzschild metric (S) can be eliminated by using appropriate coordinate transformations of the initial metric. Unlike in the Schwarzschild metric, in the space-time of the transformed metrics classical particles can cross the event horizon without the emergence of singularities. Many researchers apply this conclusion to the quantum theory as well; however, this is not correct. We prove that, in the quantum mechanics of stationary states, coordinate transformations of the Schwarzschild metric do not eliminate the mode of a particle "fall" to the event horizon.
As an example, it is proved by using Eddington-Finkelstein (EF) [6, 7], and Painleve-Gullstrand (PG) [8, 9] stationary metrics. This conclusion is also valid for any transformed static metrics, including the transition in the Schwarzschild solution to the "tortoise" coordinate [10].
The quantum mechanics in the space-time of non-stationary Lemaitre-Finkelstein [7, 11], and Kruskal- Szekeres [12, 13] metrics leads to time-dependent Hamiltonians [14]. Study of stationary states of particles, with representation of wave functions in coordinates of these metrics in the form of \(\sim\Psi\left({\bf R}\right)e^{-iET}\) for the Lemaitre-Finkelstein
metric and in the form of \(\sim\Psi\left(\mathbf{u}\right)e^{-iE\nu}\) for the Kruskal-Szekeres metric, is impossible in this case.
In this paper, the analysis is carried out for fermions. The results obtained can be extended to the equations and wave functions of photons and spinless particles.
The paper is arranged as follows. In Sec. 2, for coherence of the description, we present basic features of the quantum-mechanical mode of a particle "fall" to the event horizon. In Secs. 3 and 4, we introduce the covariant Dirac equation and provide the bases of the theory of coordinate transformations of Hamiltonians for the Dirac equation in the space-time of (GR) metrics. These results are presented in many textbooks, monographs and papers (see, for instance, Refs. [15], [16] as well as our papers Refs. [14], [17]). In Sec. 5, we analyze the solutions of the Dirac equation for a Schwarzschild metric. In Secs. 6 - 8, we prove the presence of the quantum-mechanical mode of particle "fall" by using stationary EF and PG metrics as well as by using the "tortoise" coordinate in the Schwarzschild static metric. In Sec. 10, we formulate basic results of the paper.
## 2 The mode of a particle "fall" to the event horizon
For all black holes, the behavior of effective potentials of the radial equation for Schrodinger-type fermions in the neighborhood of event horizons has the form of an infinitely deep potential well [1, 2]
\[U_{eff}\left(r\right)\arrowvert_{r\to r_{\pm}}=-\frac{K_{1}}{\left(r-r_{\pm} \right)^{2}}, \tag{1}\]
where \(r_{\pm}\) are radii of external and internal event horizons and a coefficient \(K_{1}>1/8\). In this case, the mode of a particle "fall" to the event horizon is implemented (see Refs. [3] - [5]).
The behavior of the real radial function of the Schrodinger-type equation is given by†
Footnote †: In expression (2) and in what follows, the asymptotic behavior of transformation operators and wave functions is examined for neighborhoods of event horizons (\(r>r_{+}\)), (\(r<r_{-}\)).
\[R\left(r\right)\arrowvert_{r\to r_{\pm}}\sim\left|r-r_{\pm}\right|^{1/2}\sin \left(\sqrt{K_{2}}\ln\left|\frac{r}{r_{\pm}}-1\right|+\delta\right), \tag{2}\]
where \(K_{2}=2\left(K_{1}-1/8\right)\). At \(r\to r_{\pm}\), the radial functions of stationary states of discrete and continuous spectra have the infinite number of zeros, the discrete energy levels emerge and "dive" beyond the allowed domains of functions \(R\left(r\right)\). At \(r=r_{\pm}\), the functions of \(R\left(r\right)\) do not have definite values.
In the Hamiltonian formulation, the mode of a particle "fall" to the horizon means that a Hamiltonian \(H\) has non-zero deficiency indexes [18] - [20].
To remove this singular mode, unacceptable for quantum mechanics, it is necessary to choose additional boundary conditions on event horizons. The self-conjugate extension of a Hermitian operator \(H\) is determined by this choice.
The Dirac equation
In the system of units of \(\hbar=c=1\) and in the signature \(\left(+\ -\ -\ -\right)\), the Dirac equation equals
\[i\gamma^{\alpha}\left(\frac{\partial\psi}{\partial x^{\alpha}}+\Phi_{\alpha} \psi\right)-m\psi=0, \tag{3}\]
where \(m\) is a fermion mass, \(\Phi_{\alpha}\) are bispinor connections, \(\psi\) is a four-component bispinor, \(\gamma^{\alpha}\) are 4x4 Dirac matrixes with world indexes satisfying the relation of
\[\gamma^{\alpha}\gamma^{\beta}+\gamma^{\beta}\gamma^{\alpha}=2g^{\alpha\beta}E, \tag{4}\]
where \(g^{\alpha\beta}\) is an inverse metric tensor, \(E\) is a 4x4 unity matrix.
In expressions (3), (4) and in what follows, the values designated by letters of the Greek alphabet assume the values of 0, 1, 2, 3; those designated by the letters of the Latin alphabet take the values of 1, 2, 3. When upper and lower indices are the same, the summation of appropriate summands is implied.
Then, along with the Dirac matrixes of \(\gamma^{\alpha}\) with world indices, we use Dirac matrixes \(\gamma^{\underline{\alpha}}\) with local indices satisfying the relationship of
\[\gamma^{\underline{\alpha}}\gamma^{\underline{\beta}}+\gamma^{\underline{ \beta}}\gamma^{\underline{\alpha}}=2\eta^{\underline{\alpha}\underline{\beta} }E, \tag{5}\]
where \(\eta^{\underline{\alpha}\underline{\beta}}\) corresponds to the inverse metric tensor of the plane Minkowski space with the signature of \(\eta_{\underline{\alpha}\underline{\beta}}=\mathrm{diag}\left[1,-1,-1,-1 \right].\)
In Eq. (3), for determination of bispinor connections \(\Phi_{\alpha}\), it is necessary to choose a definite system of tetrad vectors \(H_{\underline{\alpha}}^{\mu}\), satisfying the relation of \(H_{\underline{\alpha}}^{\mu}H_{\underline{\beta}}^{\nu}g_{\mu\nu}=\eta_{ \underline{\alpha}\underline{\beta}}\). The bispinor connections are determined by using Christoffel derivatives of tetrad vectors.
\[\Phi_{\alpha}=-\frac{1}{4}H_{\underline{\mu}}^{\underline{\varepsilon}}H_{ \underline{\nu}\underline{\varepsilon}\alpha}S^{\mu\nu}=\frac{1}{4}H_{ \underline{\mu}}^{\varepsilon}H_{\underline{\nu}\underline{\varepsilon}; \alpha}S^{\underline{\mu}\underline{\nu}}, \tag{6}\]
where
\[\begin{array}{l}S^{\mu\nu}=\frac{1}{2}\left(\gamma^{\mu}\gamma^{\nu}-\gamma^ {\nu}\gamma^{\mu}\right),\\ S^{\underline{\mu}\underline{\nu}}=\frac{1}{2}\left(\gamma^{\underline{\mu}} \gamma^{\underline{\nu}}-\gamma^{\underline{\nu}}\gamma^{\underline{\mu}} \right).\end{array} \tag{7}\]
The relation between \(\gamma^{\alpha}\) and \(\gamma^{\underline{\alpha}}\) is specified by the equality of
\[\gamma^{\alpha}=H_{\underline{\beta}}^{\alpha}\gamma^{\underline{\beta}}. \tag{8}\]
For our analysis, it is convenient to use the Dirac equation in the Hamiltonian form of
\[i\frac{\partial\psi}{\partial t}=H\psi. \tag{9}\]
Here \(t=x^{0}\), \(H\) is the Hamilton operator.
Taking into account the equation of \(\gamma^{0}\gamma^{0}=g^{00}\), we can derive the following expression for the Hamiltonian from Eq. (3)
\[H=\frac{m}{g^{00}}\gamma^{0}-\frac{1}{g^{00}}i\gamma^{0}\gamma^{k}\frac{ \partial}{\partial x^{k}}-i\Phi_{0}+\frac{1}{g^{00}}i\gamma^{0}\gamma^{k}\Phi _{k}. \tag{10}\]
### The formalism of pseudo-Hermitian quantum mechanics [21] - [23]
Hamiltonians (10) describing the motion of Dirac particles in arbitrary gravitational fields are pseudo-Hermitian [17].
The condition of pseudo-Hermiticity for a Hamiltonian assumes the existence of an invertible operator \(\rho_{{}_{P}}\), satisfying the relationship of
\[\rho_{{}_{P}}H\rho_{{}_{P}}^{-1}=H^{+}. \tag{11}\]
If in this case, there exists an operator \(\eta\) satisfying the condition of
\[\rho_{{}_{P}}=\eta^{+}\eta, \tag{12}\]
then for stationary case, we obtain the following Hamiltonian in \(\eta\)-representation:
\[H_{\eta}=\eta H\eta^{-1}=H_{\eta}^{+}. \tag{13}\]
Hamiltonian \(H_{\eta}\) is self-conjugate with the spectrum of eigenvalues coinciding with the spectrum of an initial Hamiltonian \(H\).
The wave-function in \(\eta\)-representation equals
\[\psi_{\eta}=\eta\psi, \tag{14}\]
where \(\psi\) is a wave function in the Dirac equation (9).
The scalar product of wave functions for pseudo-Hermitian Hamiltonians is written with a Parker weight operator \(\rho_{{}_{P}}\)[14], [17], [25]. For a wave function in \(\eta\)-representation, the scalar product has a form, typical for the Hermitian quantum mechanics (a plane scalar product with \(\rho_{{}_{P}}\) = 1 )
\[(\varphi_{\eta}\text{,}\psi_{\eta})=\int d\mathbf{x}\,\big{(}\varphi_{\eta}^ {+}\psi_{\eta}\big{)}. \tag{15}\]
Below, for the analysis, we will use Hamiltonians and wave functions in \(\eta\)-representation.
### The system of tetrad vectors in the Schwinger gauge
Many of the researchers use the convenient system of tetrad vectors of \(\left\{\tilde{H}_{\underline{\alpha}}^{\mu}\right\}\) in the Schwinger gauge [25]. For this system,
\[\tilde{H}_{\underline{0}}^{0}=\sqrt{g^{00}},\quad\tilde{H}_{\underline{0}}^{k} =-g^{k0}/g^{00},\quad\tilde{H}_{\underline{k}}^{0}=0. \tag{16}\]
Any spatial tetrads, satisfying the following relations:
\[\tilde{H}_{\underline{k}}^{m}\tilde{H}_{\underline{k}}^{n}=f^{mn},\quad f^{mn }=g^{mn}+\frac{g^{0m}g^{0n}}{g^{00}},\quad f^{mn}g_{nk}=\delta_{k}^{m} \tag{17}\]
can be used as tetrad vectors of \(\tilde{H}_{m}^{n}\).
Taking into account some freedom of choice for spatial tetrads, we can obtain expressions for Hamiltonians non-coincident with each other. These Hamiltonians are physically equivalent since they are connected by unitary matrixes of spatial rotations [17].
### Coordinate transformations
Under coordinate transformations (transition to another space-time)
\[\left\{x^{\alpha}\right\}\rightarrow\left\{x^{\prime\alpha}\right\} \tag{18}\]
the following relationships are fulfilled:
\[H^{\prime\alpha}_{\ \underline{\beta}}=\frac{\partial x^{\prime\alpha}}{\partial x^{\mu}}H^{\mu}_{\underline{\beta}},\ \ \gamma^{\prime\alpha}=\frac{\partial x^{\prime\alpha}}{\partial x^{\beta}}\gamma^{\beta},\ \ \ \Phi^{\prime}_{\alpha}=\frac{\partial x^{\beta}}{\partial x^{\prime\alpha}}\Phi_{\beta}. \tag{19}\]
Under the transformations (19), the form of the wave functions of the Dirac equation remains unchanged except for an appropriate replacement of the variables.
In one and the same space-time, we can pass from any system of tetrad vectors \(\left\{H^{\prime\mu}_{\ \underline{\alpha}}\left(x\right)\right\}\) to another system of tetrad vectors \(\left\{H^{\mu}_{\ \underline{\alpha}}\left(x\right)\right\}\) by using a Lorentz transformation \(L\left(x\right)\). In this case,
\[H^{\mu}_{\underline{\alpha}}\left(x\right)=\Lambda^{\underline{\beta}}_{ \underline{\alpha}}(x)\,H^{\prime\mu}_{\ \underline{\beta}}\left(x\right). \tag{20}\]
The values of \(\Lambda^{\underline{\beta}}_{\underline{\alpha}}\) satisfy the relations
\[\Lambda^{\underline{\mu}}_{\underline{\alpha}}(x)\,\Lambda^{\underline{\nu}}_ {\underline{\beta}}(x)\,\eta^{\underline{\alpha}\underline{\beta}}=\eta^{ \underline{\mu}\underline{\nu}}, \tag{21}\]
\[\Lambda^{\underline{\mu}}_{\underline{\alpha}}(x)\,\Lambda^{\underline{\nu}}_ {\underline{\beta}}(x)\,\eta_{\underline{\mu}\underline{\nu}}=\eta_{ \underline{\alpha}\underline{\beta}}. \tag{22}\]
The matrices of the Lorentz transformation \(L\), \(L^{-1}\) are determined from the invariance of the Dirac matrices with local indices \(\gamma^{\underline{\alpha}}\) under the transformations (20):
\[L\left(x\right)\gamma^{\underline{\alpha}}L^{-1}\left(x\right)=\gamma^{ \underline{\beta}}\Lambda^{\underline{\alpha}}_{\underline{\beta}}(x)\,. \tag{23}\]
Under Lorentz transformations, the Dirac currents of particles are preserved.
## 5 The Schwarzschild metric
The Schwarzschild metric in the coordinates of (\(t\),\(r\),\(\theta\),\(\varphi\)) is given by
\[ds^{2}=f_{S}dt^{2}-\frac{dr^{2}}{f_{S}}-r^{2}\left(d\theta^{2}+\sin^{2}\theta d \varphi^{2}\right), \tag{24}\]
where \(f_{S}=1-r_{0}/r\), \(r_{0}=2GM/c^{2}\) is the event horizon (gravitational radius).
### The Dirac equation
The nonzero tetrads in the Schwinger gauge of \(\tilde{H}^{\mu}_{\underline{\alpha}}\) equal
\[\tilde{H}^{0}_{\underline{0}}=1/\sqrt{f_{S}};\ \ \tilde{H}^{1}_{\underline{1}}= \sqrt{f_{S}};\ \ \tilde{H}^{2}_{\underline{2}}=1/r;\ \ \tilde{H}^{3}_{\underline{3}}=1/(r\sin\theta). \tag{25}\]
In compliance with (8), the matrixes of \(\tilde{\gamma}^{\alpha}\) are as follows:
\[\tilde{\gamma}^{0}=\gamma^{\underline{0}}/\sqrt{f_{S}},\ \ \tilde{\gamma}^{1}=\sqrt{f_{S}}\gamma^{ \underline{1}},\ \ \tilde{\gamma}^{2}=\gamma^{\underline{2}}/r,\ \ \tilde{\gamma}^{3}=\gamma^{ \underline{3}}/(r\sin\theta). \tag{26}\]
The bispinor connections of \(\tilde{\Phi}_{\alpha}\) are as follows:
\[\begin{array}{l}\tilde{\Phi}_{0}=\frac{r_{0}}{4r^{2}}\gamma^{0}\gamma^{1},\ \tilde{\Phi}_{1}=0,\ \tilde{\Phi}_{2}=-\frac{1}{2}\sqrt{f_{ S}}\gamma^{1}\gamma^{2},\\ \tilde{\Phi}_{3}=-\frac{1}{2}\cos\theta\gamma^{2}\gamma^{3}+\frac{1}{2}\sqrt{f _{S}}\sin\theta\gamma^{2}\gamma^{1}.\end{array} \tag{27}\]
Taking into account (25) - (27), the Dirac Hamiltonian of (10) is given by
\[\begin{array}{l}\tilde{H}_{S}=\sqrt{f_{S}}\,m\gamma^{\underline{0}}-i\sqrt{f_{S}}\gamma^{\underline{0}}\left\{\gamma^{\underline{1}}\sqrt{f_{S}}\left(\frac{\partial}{\partial r}+\frac{1}{r}\right)+\gamma^{\underline{2}}\frac{1}{r}\left(\frac{\partial}{\partial\theta}+\frac{1}{2}\mbox{ctg}\theta\right)\right.+\\ \left.+\gamma^{\underline{3}}\frac{1}{r\sin\theta}\frac{\partial}{\partial\varphi}\right\}-i\frac{r_{0}}{4r^{2}}\gamma^{\underline{0}}\gamma^{\underline{1}}.\end{array} \tag{28}\]
Hamiltonian (28) is pseudo-Hermitian with the weight operator \(\rho_{{}_{P}}=f_{S}^{-1/2}\)[14], [17], [25].
The self-conjugate Hamiltonian with a plane scalar product of wave functions has the following form (see Refs. [14] and [17]):
\[\begin{array}{l}H_{\eta_{{}_{S}}}=H_{\eta_{{}_{S}}}^{+}=\eta_{{}_{S}}\tilde{H}_{S}\eta_{{}_{S}}^{-1}=\sqrt{f_{S}}\,m\gamma^{\underline{0}}-i\sqrt{f_{S}}\gamma^{\underline{0}}\left\{\gamma^{\underline{1}}\sqrt{f_{S}}\left(\frac{\partial}{\partial r}+\frac{1}{r}\right)\right.+\\ \left.+\gamma^{\underline{2}}\frac{1}{r}\left(\frac{\partial}{\partial\theta}+\frac{1}{2}\mbox{ctg}\theta\right)+\gamma^{\underline{3}}\frac{1}{r\sin\theta}\frac{\partial}{\partial\varphi}\right\}-i\frac{r_{0}}{2r^{2}}\gamma^{\underline{0}}\gamma^{\underline{1}},\end{array} \tag{29}\]
where \(\eta_{{}_{S}}=f_{S}^{-1/4}\).
For separation of variables, let us present a bispinor \(\psi_{\eta_{{}_{S}}}\) (\(t\),\(r\),\(\theta\),\(\varphi\)) as
\[\psi_{\eta_{{}_{S}}}\left(\mbox{$t$,$r$,$\theta$,$\varphi$}\right)=\left( \begin{array}{c}F\left(r\right)\xi\left(\theta\right)\\ -iG\left(r\right)\sigma^{3}\xi\left(\theta\right)\end{array}\right)e^{-iEt}e^{ im_{\varphi}\varphi}. \tag{30}\]
Hereinafter, \(\sigma^{3}\) is a two-dimensional Pauli matrix.
Spinor \(\xi\left(\theta\right)\) represents spherical harmonics of spin one-half; \(E\), \(m\) are the energy and mass of the Dirac particle; \(m_{\varphi}=-j,-j+1,\ldots,j-1,j\) is the azimuthal component of the total momentum \(j\); \(\kappa\) is the quantum number of the Dirac equation:
\[\kappa=\mp 1,\mp 2...=\left\{\begin{array}{c}-\left(l+1\right),\ j=l+ \frac{1}{2}\\ l,\ \ \ \ j=l-\frac{1}{2},\end{array}\right.\]
where \(j\),\(l\) are quantum numbers of the total and orbital momenta of a Dirac particle.
For separation of variables, it is convenient to perform the following equivalent substitution of matrixes:
\[\gamma^{\underline{1}}\rightarrow\gamma^{\underline{3}},\ \ \gamma^{\underline{3}}\rightarrow\gamma^{\underline{2}},\ \ \gamma^{\underline{2}}\rightarrow\gamma^{\underline{1}}. \tag{31}\]
After separation of the variables, the equations for radial functions have the following form:
\[\begin{array}{l}f_{S}\frac{dF}{dr}+\left(\frac{1+\kappa\sqrt{f_{S}}}{r}-\frac{r_{0}}{2r^{2}}\right)F-\left(E+m\sqrt{f_{S}}\right)G=0,\\ f_{S}\frac{dG}{dr}+\left(\frac{1-\kappa\sqrt{f_{S}}}{r}-\frac{r_{0}}{2r^{2}}\right)G+\left(E-m\sqrt{f_{S}}\right)F=0.\end{array} \tag{32}\]
If, at \(r\to r_{0}\), we present the functions \(F\left(r\right),G\left(r\right)\) as follows:
\[F\big{|}_{r\to r_{0}}=\left(r-r_{0}\right)^{p}\sum_{k=0}^{\infty}f_{k}\left(r- r_{0}\right)^{k}\!\!\!,\,\,\,\,G\big{|}_{r\to r_{0}}=\left(r-r_{0}\right)^{p} \sum_{k=0}^{\infty}g_{k}\left(r-r_{0}\right)^{k}\!\!, \tag{33}\]
then the indicial equation for system (32) leads to the two solutions
\[\left(p\right)_{1,2}=-\frac{1}{2}\pm ir_{0}E. \tag{34}\]
\[F_{1}\big{|}_{r\to r_{0}}\!\!\!,\,\,\,G_{1}\big{|}_{r\to r_{0}}\sim\frac{1}{ \left(r-r_{0}\right)^{1/2}}\exp\left\{ir_{0}E\ln\left(\frac{r}{r_{0}}-1 \right)\right\}, \tag{35}\]
\[F_{2}\big{|}_{r\to r_{0}}\!\!\!,\,\,\,G_{2}\big{|}_{r\to r_{0}}\sim\frac{1}{ \left(r-r_{0}\right)^{1/2}}\exp\left\{-ir_{0}E\ln\left(\frac{r}{r_{0}}-1 \right)\right\}. \tag{36}\]
In this case, solutions (32) can be represented by real functions [3, 5]. Taking into account (34), we can write down the asymptotics of the solutions at \(r\to r_{0}\) as
\[\begin{split} F\big{|}_{r\to r_{0}}&=\frac{L}{ \sqrt{r-r_{0}}}\sin\left(r_{0}E\ln\left(\frac{r}{r_{0}}-1\right)+\delta\right),\\ G\big{|}_{r\to r_{0}}&=\frac{L}{\sqrt{r-r_{0}}}\cos \left(r_{0}E\ln\left(\frac{r}{r_{0}}-1\right)+\delta\right),\end{split} \tag{37}\]
where \(L\), \(\delta\) are integration constants. In some scattering problems, one can use a complex phase \(\delta\) [5].
In expressions (35) - (37), the functions \(F\left(r\right),G\left(r\right)\) are square-nonintegrable at \(r\to r_{0}\). The oscillating form of the functions \(F\left(r\right),G\left(r\right)\) for \(E\neq 0\) testifies to the realization of the mode of a particle "fall" to the event horizon (see Eq. (2) in Sec. 2).
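The indicial exponents (34) underlying these asymptotics can be cross-checked symbolically. The following short sketch is illustrative only (not from the paper): it substitutes \(F,G\sim(r-r_{0})^{p}\) into system (32) and solves the resulting leading-order determinant at \(r\to r_{0}\).

```python
# Leading order of system (32) at r -> r0: f_S F' -> (p/r0)(r-r0)^p f0, the
# potential-like coefficients tend to 1/(2 r0), and the mass terms drop out.
import sympy as sp

p, r0, E = sp.symbols('p r_0 E')
a = (p + sp.Rational(1, 2)) / r0
M = sp.Matrix([[a, -E],
               [E,  a]])
print(sp.solve(sp.det(M), p))   # the two roots -1/2 ± I*E*r_0, i.e. Eq. (34)
```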
The solutions of the indicial equation for the system of equations for radial functions of Hamiltonian (28) are as follows:
\[\left(p_{S}\right)_{1,2}=-\frac{1}{4}\pm ir_{0}E. \tag{38}\]
The asymptotics of the radial functions at \(r\to r_{0}\) are given by
\[\left(F_{S}\right)_{1}\big{|}_{r\to r_{0}}\!\!\!,\,\,\,\left(G_{S}\right)_{1} \big{|}_{r\to r_{0}}\sim\frac{1}{\left(r-r_{0}\right)^{1/4}}\exp\left\{ir_{0} E\ln\left(\frac{r}{r_{0}}-1\right)\right\}, \tag{39}\]
\[\left(F_{S}\right)_{2}\big{|}_{r\to r_{0}}\!\!\!,\,\,\,\left(G_{S}\right)_{2} \big{|}_{r\to r_{0}}\sim\frac{1}{\left(r-r_{0}\right)^{1/4}}\exp\left\{-ir_{0 }E\ln\left(\frac{r}{r_{0}}-1\right)\right\}. \tag{40}\]
Functions \(F_{S}\left(r\right),G_{S}\left(r\right)\) are square-nonintegrable due to the presence of Parker weight operator \(\rho_{{}_{P}}=f_{S}^{-1/2}\) in the scalar product [14], [17], [25]. In this case, the mode of a particle "fall" to the event horizon of \(r_{0}\) is also present.
Below, we analyze whether the mode of a particle "fall" can be eliminated by means of coordinate transformations of the Schwarzschild metric.
## 6 The stationary Eddington-Finkelstein metric
The coordinate transformation of the Schwarzschild metric
\[\left(t,\!r,\!\theta,\!\varphi\right)\rightarrow\left(T,\!r,\!\theta,\!\varphi\right) \tag{41}\]
has the form of
\[dT=dt+\frac{r_{0}}{r}\frac{dr}{f_{S}}. \tag{42}\]
Then, the EF metric is written as
\[ds^{2}=f_{S}dT^{2}-2\frac{r_{0}}{r}dTdr-\left(1+\frac{r_{0}}{r}\right)dr^{2}-r ^{2}\left(d\theta^{2}+\sin^{2}\theta d\varphi^{2}\right). \tag{43}\]
From Hamiltonian \(\tilde{H}_{S}\) in the Schwarzschild space-time (28), we will obtain Hamiltonian \(H_{EF}\) in the EF space-time, by using relation (19) and Eqs. (25) - (27).
As the result,
\[\begin{array}{l}\left(H_{EF}\right)_{\underline{0}}^{0}=\frac{1}{\sqrt{f_{S} }},\ \ \left(H_{EF}\right)_{\underline{1}}^{0}=\frac{r_{0}/r}{\sqrt{f_{S}}},\ \ \left(H_{EF}\right)_{ \underline{1}}^{1}=\sqrt{f_{S}},\\ \left(H_{EF}\right)_{\underline{2}}^{2}=\frac{1}{r},\ \ \ \left(H_{EF}\right)_{ \underline{3}}^{3}=\frac{1}{r\sin\theta},\end{array} \tag{44}\]
\[\begin{array}{l}\gamma_{EF}^{0}=\frac{1}{\sqrt{f_{S}}}\gamma^{\underline{0} }+\frac{r_{0}/r}{\sqrt{f_{S}}}\gamma^{\underline{1}},\ \ \gamma_{EF}^{1}=\sqrt{f_{S}}\gamma^{\underline{1}},\ \gamma_{EF}^{2}= \frac{1}{r}\gamma^{\underline{2}},\ \ \ \gamma_{EF}^{3}=\frac{1}{r\sin\theta} \gamma^{\underline{3}},\end{array} \tag{45}\]
\[\begin{array}{l}\left(\Phi_{EF}\right)_{0}=\frac{r_{0}}{4r^{2}}\gamma^{ \underline{0}}\gamma^{\underline{1}},\ \ \ \left(\Phi_{EF}\right)_{1}=\frac{r_{0}}{r}\frac{1}{f_{S}}\frac{r_{0}}{4r^{2}} \gamma^{\underline{0}}\gamma^{\underline{1}},\\ \left(\Phi_{EF}\right)_{2}=-\frac{1}{2}\sqrt{f_{S}}\gamma^{\underline{1}} \gamma^{\underline{2}},\ \ \ \left(\Phi_{EF}\right)_{3}=-\frac{1}{2}\cos\theta\,\gamma^{ \underline{2}}\gamma^{\underline{3}}+\frac{1}{2}\sin\theta\gamma^{\underline{ 3}}\gamma^{\underline{1}}.\end{array} \tag{46}\]
For the EF metric (see Eq. (43)), the inverse metric tensor is
\[g^{00}=1+\frac{r_{0}}{r}. \tag{47}\]
By substituting (44) - (47) to Eq. (10), we obtain the Hamiltonian in the EF space-time
\[H_{EF}=\frac{1-\left(r_{0}/r\right)\gamma^{\underline{0}}\gamma^{\underline{ 1}}}{1-r_{0}^{2}/r^{2}}\tilde{H}_{S}. \tag{48}\]
Inverse equality is
\[\tilde{H}_{S}=\left(1+\frac{r_{0}}{r}\gamma^{\underline{0}}\gamma^{\underline {1}}\right)H_{EF}. \tag{49}\]
Let us demonstrate that the Hamiltonians \(H_{EF}\) and \(\tilde{H}_{S}\) are connected with each other through a unitary transformation.
At transformations (19), the form of wave functions does not vary except for substitution of the variables. Let us denote
\[\varphi_{{}_{EF}}\left(r\right)=\int\frac{r_{0}}{r}\frac{dr}{f_{S}}. \tag{50}\]
It follows from (42), that \(t=T-\varphi_{{}_{EF}}\left(r\right)=T-\int\frac{r_{0}}{r}\frac{dr}{f_{S}}=T- \left(r_{0}\text{ln}\left(\frac{r}{r_{0}}-1\right)+\text{const}\right)\). Since
\[\tilde{\psi}_{S}\left(\mathbf{r,}t\right)=\psi_{{}_{EF}}\left(\mathbf{r,}T \right), \tag{51}\]
then
\[\tilde{\psi}_{S}\left(\mathbf{r}\right)e^{-iEt}=\tilde{\psi}_{S}\left(\mathbf{ r}\right)e^{iE\varphi_{{}_{EF}}\left(r\right)}e^{-iET}=\psi_{{}_{EF}}\left( \mathbf{r}\right)e^{-iET}. \tag{52}\]
The equation for stationary states in the Schwarzschild space-time equals
\[\tilde{H}_{S}\tilde{\psi}_{S}\left(\mathbf{r}\right)=E\tilde{\psi}_{S}\left( \mathbf{r}\right). \tag{53}\]
It follows from (52), that \(\tilde{\psi}_{S}\left(\mathbf{r}\right)=e^{-iE\varphi_{{}_{EF}}\left(r\right) }\psi_{{}_{EF}}\left(\mathbf{r}\right)\), and we obtain that
\[e^{iE\varphi_{{}_{EF}}\left(r\right)}\tilde{H}_{S}e^{-iE\varphi_{{}_{EF}}\left(r\right)}\psi_{{}_{EF}}\left(\mathbf{r}\right)=E\psi_{{}_{EF}}\left(\mathbf{r}\right). \tag{54}\]
Using the explicit form of \(\tilde{H}_{S}\) in (28) and equality (49), we obtain that
\[\left(1+\frac{r_{0}}{r}\gamma^{0}\gamma^{1}\right)\left[H_{EF}\psi_{EF}\left( \mathbf{r}\right)-E\psi_{EF}\left(\mathbf{r}\right)\right]=0. \tag{55}\]
By multiplying (55) on the left by matrix \(\left(1-\frac{r_{0}}{r}\gamma^{0}\gamma^{1}\right)\)/\(\left(1-r_{0}^{2}/r^{2}\right)\), we will obtain an equation for stationary states in the EF space-time:
\[H_{EF}\psi_{EF}\left(\mathbf{r}\right)=E\psi_{EF}\left(\mathbf{r}\right). \tag{56}\]
In this case, in compliance with (54) and (52), the following relations are valid:
\[H_{EF}=e^{iE\varphi_{{}_{EF}}}\tilde{H}_{S}e^{-iE\varphi_{{}_{EF}}}, \tag{57}\]
\[\psi_{EF}\left(\mathbf{r}\right)=\tilde{\psi}_{S}\left(\mathbf{r}\right)e^{iE \varphi_{{}_{EF}}}. \tag{58}\]
Thus, it is demonstrated that Hamiltonians \(H_{EF}\) and \(\tilde{H}_{S}\) are connected with each other through the unitary transformation:
\[U_{EF}=e^{iE\varphi_{{}_{EF}}\left(r\right)}. \tag{59}\]
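The phase \(\varphi_{{}_{EF}}(r)\) entering the unitary factor (59) can be checked symbolically. The following sketch is illustrative only (not part of the original text); it confirms that the integral (50) evaluates to \(r_{0}\ln\left(r/r_{0}-1\right)\) up to an additive constant.

```python
# Symbolic check of phi_EF(r) = ∫ (r0/r) dr / f_S used in Eqs. (50) and (59).
import sympy as sp

r, r0 = sp.symbols('r r_0', positive=True)
f_S = 1 - r0 / r
phi_EF = sp.integrate((r0 / r) / f_S, r)
print(sp.simplify(phi_EF))                                           # r_0*log(r - r_0)
print(sp.simplify(sp.diff(r0 * sp.log(r / r0 - 1), r) - (r0 / r) / f_S))  # 0
```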
It follows from (58) that the asymptotics of radial functions in the EF space-time can be obtained by multiplication of asymptotics (39) and (40) by unitary operator (59). Then,
\[\left(F_{EF}\right)_{1}\big{|}_{r\to r_{0}},\ \left(G_{EF}\right)_{1}\big{|}_{r \to r_{0}}\sim\frac{1}{\left(r-r_{0}\right)^{1/4}}\exp\left\{i2r_{0}E\ln \left(\frac{r}{r_{0}}-1\right)\right\}, \tag{60}\]
\[\left(F_{EF}\right)_{2}\big{|}_{r\to r_{0}},\ \left(G_{EF}\right)_{2}\big{|}_{r \to r_{0}}\sim\frac{1}{\left(r-r_{0}\right)^{1/4}}. \tag{61}\]
The asymptotics (60) and (61) can also be obtained in another way.
We can separate variables in Eq. (56) by representing the wave function in the form of (30). The equations for radial functions of \(F_{EF}\left(r\right),G_{EF}\left(r\right)\) are given by
\[\begin{array}{l}f_{S}\frac{dF_{EF}}{dr}+\left(\frac{1+\kappa \sqrt{f_{S}}}{r}-\frac{3}{4}\frac{r_{0}}{r^{2}}\right)F_{EF}-i\frac{r_{0}}{r} EF_{EF}-\left(E+\sqrt{f_{S}}m\right)G_{EF}=0,\\ f_{S}\frac{dG_{EF}}{dr}+\left(\frac{1-\kappa\sqrt{f_{S}}}{r}- \frac{3}{4}\frac{r_{0}}{r^{2}}\right)G_{EF}-i\frac{r_{0}}{r}EG_{EF}+\left(E- \sqrt{f_{S}}m\right)F_{EF}=0.\end{array} \tag{62}\]
To determine the asymptotics of radial functions at \(r\to r_{0}\), let us present functions \(\left.\left(F_{EF}\right)\right|_{r\to r_{0}},\)\(\left.\left(G_{EF}\right)\right|_{r\to r_{0}}\) in the form of (33). Let us write the indicial equation for system (62) as an equation for the determinant:
\[\begin{vmatrix}\left(p+\frac{1}{4}\right)\frac{1}{r_{0}}-iE&-E\\ E&\left(p+\frac{1}{4}\right)\frac{1}{r_{0}}-iE\end{vmatrix}=0. \tag{63}\]
The solutions for Eq. (63) are given by
\[\begin{split} p_{1}&=-\frac{1}{4}+2ir_{0}E,\\ p_{2}&=-\frac{1}{4}.\end{split} \tag{64}\]
Equalities (64) show that the required asymptotics coincide with asymptotics (60), (61).
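For completeness, the roots (64) of the indicial equation (63) can be reproduced with a short symbolic computation (an illustrative sketch, not from the paper):

```python
# Roots of the indicial equation (63).
import sympy as sp

p, r0, E = sp.symbols('p r_0 E')
M = sp.Matrix([[(p + sp.Rational(1, 4)) / r0 - sp.I * E, -E],
               [E, (p + sp.Rational(1, 4)) / r0 - sp.I * E]])
print(sp.solve(sp.det(M), p))   # the two roots -1/4 and -1/4 + 2*I*E*r_0, i.e. Eq. (64)
```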
The Parker operator in the (EF) space-time with the set of tetrads (44) is
\[\left(\rho_{{}_{P}}\right)_{EF}=\gamma^{0}\gamma^{0}_{EF}=\frac{1}{\sqrt{f_{ S}}}\left(1+\frac{r_{0}}{r}\gamma^{0}\gamma^{1}\right). \tag{65}\]
For both asymptotics (60) and (61), the scalar product in the neighborhood of the event horizon diverges logarithmically.
We can formally determine the form of the operator of transition to a self-conjugate Hamiltonian in the EF space-time with a plane scalar product (see Eq. (15))
\[\begin{split}&\left(\rho_{{}_{P}}\right)_{EF}=\eta^{+}_{{}_{EF} }\eta_{{}_{EF}},\\ &\eta_{{}_{EF}}=\frac{1}{f_{S}^{1/4}}\left(1+\frac{r_{0}}{r} \gamma^{0}\gamma^{1}\right)^{1/2},\end{split} \tag{66}\]
where \(\eta^{+}_{{}_{EF}}=\eta_{{}_{EF}}\). However, the application of a matrix operator \(\eta_{{}_{EF}}\) is difficult in practice.
### The Lorentz transformation
To obtain the operator \(\eta_{{}_{EF}}\) in an acceptable matrix-free form, let us pass from the system of tetrad vectors \(\left\{\left(H_{EF}\right)_{\underline{\alpha}}^{\mu}\right\}\) to tetrad vectors in the Schwinger gauge \(\left\{\left(\tilde{H}_{EF}\right)_{\underline{\alpha}}^{\mu}\right\}\) by using the Lorentz transformation (20) - (23). In this system, the tetrads \(\tilde{H}_{\underline{k}}^{0}\) are equal to zero and the operator \(\eta_{{}_{EF}}\) is free of the matrices \(\gamma^{\underline{0}}\), \(\gamma^{\underline{1}}\). The nonzero tetrads \(\left(\tilde{H}_{EF}\right)_{\underline{\alpha}}^{\mu}\) in the Schwinger gauge are determined by the expressions
\[\begin{split}&\left(\tilde{H}_{EF}\right)_{\underline{0}}^{0}= \sqrt{1+\frac{r_{0}}{r}},\quad\left(\tilde{H}_{EF}\right)_{\underline{0}}^{1 }=-\frac{r_{0}/r}{\sqrt{1+r_{0}/r}},\quad\left(\tilde{H}_{EF}\right)_{ \underline{1}}^{1}=\frac{1}{\sqrt{1+r_{0}/r}},\\ &\left(\tilde{H}_{EF}\right)_{\underline{2}}^{2}=\frac{1}{r}, \quad\left(H_{EF}\right)_{\underline{3}}^{3}=\frac{1}{r\sin\theta}.\end{split} \tag{67}\]
In the considered case, the nonzero values of \(\Lambda_{\underline{\alpha}}^{\underline{\beta}}(r)\) in (20) are equal to
\[\Lambda_{\underline{0}}^{\underline{0}}=\Lambda_{\underline{1}}^{\underline{1}}= \frac{1}{\sqrt{f_{S}}\sqrt{1+r_{0}/r}},\ \ \Lambda_{\underline{0}}^{\underline{1}}=\Lambda_{ \underline{1}}^{\underline{0}}=-\frac{r_{0}/r}{\sqrt{f_{S}}\sqrt{1+r_{0}/r}}. \tag{68}\]
Let us determine the form of the Lorentz transformation matrixes from Eqs. (23) with the values of \(\Lambda_{\underline{\alpha}}^{\underline{\beta}}(x)\) from (68)
\[L\gamma^{\underline{0}}L^{-1}=\frac{1}{\sqrt{1-r_{0}/r}\sqrt{1+r_{0}/r}}\left( \gamma^{\underline{0}}-\frac{r_{0}}{r}\gamma^{\underline{1}}\right), \tag{69}\]
\[L\gamma^{\underline{1}}L^{-1}=\frac{1}{\sqrt{1-r_{0}/r}\sqrt{1+r_{0}/r}}\left( -\frac{r_{0}}{r}\gamma^{\underline{0}}+\gamma^{\underline{1}}\right). \tag{70}\]
It is well known that a Lorentz transformation can be unambiguously presented either as a product of a boost (a Hermitian factor) and a spatial rotation matrix \(R\) (a unitary factor) or, vice versa, as a product of a matrix \(R\) and a boost. For our case, it is sufficient to use the unit matrix \(R=1\).
The following forms of \(L\) and \(L^{-1}\) are reasonable:
\[\begin{array}{l}L=\exp\left(\frac{\theta}{2}\gamma^{\underline{0}}\gamma^{ \underline{1}}\right)=\mbox{ch}\frac{\theta}{2}+\mbox{sh}\frac{\theta}{2} \gamma^{\underline{0}}\gamma^{\underline{1}}=\frac{1+\left(B/A\right)\gamma^{ \underline{0}}\gamma^{\underline{1}}}{\sqrt{1-B^{2}/A^{2}}},\\ L^{-1}=\exp\left(-\frac{\theta}{2}\gamma^{\underline{0}}\gamma^{ \underline{1}}\right)=\mbox{ch}\frac{\theta}{2}-\mbox{sh}\frac{\theta}{2} \gamma^{\underline{0}}\gamma^{\underline{1}}=\frac{1-\left(B/A\right)\gamma^{ \underline{0}}\gamma^{\underline{1}}}{\sqrt{1-B^{2}/A^{2}}}.\end{array} \tag{71}\]
In (71), the matrix \(L\) represents a transformation of hyperbolic rotation (i.e. boost) by an angle \(\theta\).
Substituting (71) in Eqs. (69) and (70), we obtain that
\[\frac{B}{A}=\frac{\sqrt{1+r_{0}/r}-\sqrt{1-r_{0}/r}}{\sqrt{1+r_{0}/r}+\sqrt{1 -r_{0}/r}}. \tag{72}\]
Then,
\[L\left(r\right)=\frac{\sqrt{1+r_{0}/r}+\sqrt{1-r_{0}/r}+\left(\sqrt{1+r_{0}/r }-\sqrt{1-r_{0}/r}\right)\gamma^{\underline{0}}\gamma^{\underline{1}}}{2\sqrt {\sqrt{1+r_{0}/r}\sqrt{1-r_{0}/r}}}, \tag{73}\]
\[L^{-1}\left(r\right)=\frac{\sqrt{1+r_{0}/r}+\sqrt{1-r_{0}/r}+\left(\sqrt{1+r_ {0}/r}-\sqrt{1-r_{0}/r}\right)\gamma^{\underline{1}}\gamma^{\underline{0}}}{2 \sqrt{\sqrt{1+r_{0}/r}\sqrt{1-r_{0}/r}}}. \tag{74}\]
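A direct numerical check (an illustrative sketch, not part of the original text; the standard Dirac representation of \(\gamma^{\underline{0}},\gamma^{\underline{1}}\) and an arbitrary sample point \(r>r_{0}\) are assumed) confirms that the matrices (73)-(74) indeed realize the transformations (69)-(70):

```python
# Verify that the boost (73) reproduces the Lorentz transformations (69)-(70).
import numpy as np

I2 = np.eye(2); Z2 = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g1 = np.block([[Z2, sx], [-sx, Z2]])

r0, r = 1.0, 2.5
x = r0 / r
sp_, sm = np.sqrt(1 + x), np.sqrt(1 - x)
L = ((sp_ + sm) * np.eye(4) + (sp_ - sm) * (g0 @ g1)) / (2 * np.sqrt(sp_ * sm))  # Eq. (73)
L_inv = np.linalg.inv(L)                                                         # equals Eq. (74)

lhs0 = L @ g0 @ L_inv
rhs0 = (g0 - x * g1) / (sm * sp_)     # Eq. (69)
lhs1 = L @ g1 @ L_inv
rhs1 = (-x * g0 + g1) / (sm * sp_)    # Eq. (70)
assert np.allclose(lhs0, rhs0) and np.allclose(lhs1, rhs1)
print("Boost (73)-(74) reproduces the transformations (69)-(70)")
```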
After the Lorentz transformation (73) and (74), Hamiltonian (57) is transformed to the Hamiltonian with tetrads in the Schwinger gauge [14]:
\[\begin{array}{l}\tilde{H}_{EF}=L\left(r\right)H_{EF}L^{-1}\left(r\right)= \\ =\frac{m}{\sqrt{1+r_{0}/r}}\gamma^{\underline{0}}-i\gamma^{\underline{0}} \gamma^{\underline{1}}\frac{1}{1+r_{0}/r}\left(\frac{\partial}{\partial r}+ \frac{1}{r}+\frac{r_{0}}{4r^{2}}\frac{1}{1+r_{0}/r}\right)-\\ -i\gamma^{\underline{0}}\gamma^{\underline{2}}\frac{1}{\sqrt{1+r_{0}/r}}\frac{ 1}{r}\left(\frac{\partial}{\partial\theta}+\frac{1}{2}\mbox{ctg}\theta \right)-i\gamma^{\underline{0}}\gamma^{\underline{3}}\frac{1}{\sqrt{1+r_{0}/r }}\frac{1}{r\,\mbox{sin}\theta}\frac{\partial}{\partial\varphi}+\\ +i\frac{r_{0}}{r}\frac{1}{1+r_{0}/r}\left(\frac{\partial}{\partial r}+\frac{1} {r}-\frac{1}{4r\left(1+r_{0}/r\right)}-\frac{1}{4r}\right).\end{array} \tag{75}\]
In this case,
\[\tilde{\psi}_{EF}=L\left(r\right)\psi_{EF}. \tag{76}\]
According to (73), the asymptotics of \(L\left(r\right)\) at \(r\to r_{0}\) are
\[L\big{|}_{r\to r_{0}}=\frac{r_{0}^{1/4}\left(1+\gamma^{\underline{0}}\gamma^{ \underline{1}}\right)}{2^{3/4}\left(r-r_{0}\right)^{1/4}}. \tag{77}\]
In compliance with (76) and (77), after the similarity transformation \(L\left(r\right)\), the asymptotics of radial functions (60) and (61) are transformed to the following form:
\[\left(\tilde{F}_{EF}\right)_{1}\Big{|}_{r\to r_{0}},\ \left(\tilde{G}_{EF} \right)_{1}\Big{|}_{r\to r_{0}}\sim\frac{1}{\left(r-r_{0}\right)^{1/2}}\exp \left\{i2r_{0}E\ln\left(\frac{r}{r_{0}}-1\right)\right\}, \tag{78}\]
\[\left(\tilde{F}_{EF}\right)_{2}\Big{|}_{r\to r_{0}},\ \left(\tilde{G}_{EF} \right)_{2}\Big{|}_{r\to r_{0}}\sim\frac{1}{\left(r-r_{0}\right)^{1/2}}. \tag{79}\]
Now, let us obtain the asymptotics of the radial functions by using another technique. If we present the wave function \(\tilde{\psi}_{EF}\left(\mathbf{r},T\right)\) in a form similar to Eq. (30), then in the equation
\[\tilde{H}_{EF}\tilde{\psi}_{EF}=E\tilde{\psi}_{EF} \tag{80}\]
we can perform separation of the variables. So,
\[\tilde{\psi}_{EF}\left(r,\theta,\varphi,T\right)=\left(\begin{array}{c}\tilde {F}_{EF}\left(r\right)\xi\left(\theta\right)\\ -i\tilde{G}_{EF}\left(r\right)\sigma^{3}\xi\left(\theta\right)\end{array} \right)e^{-iET+im_{\varphi}\varphi}. \tag{81}\]
For convenience, while separating the variables, the equivalent substitution of matrixes \(\gamma^{\underline{i}}\) is carried out (see Eq. (31)). The system of equations for radial wave functions of \(\tilde{F}_{EF}\left(r\right),\)\(\tilde{G}_{EF}\left(r\right)\) is given by
\[\begin{array}{l}\left(1-\frac{r_{0}}{r}\right)\frac{d\tilde{F}_{EF}}{dr}+ \left(\frac{1-r_{0}/r+\kappa/\sqrt{1+r_{0}/r}}{r}+\frac{r_{0}}{4r^{2}}\right) \tilde{F}_{EF}-\\ -i\frac{r_{0}}{r}\left(E-\frac{m}{\sqrt{1+r_{0}/r}}\right)\tilde{F}_{EF}+i \left(-\frac{1}{1+r_{0}/r}\frac{r_{0}}{2r^{2}}+\frac{r_{0}}{r^{2}}\frac{ \kappa}{\sqrt{1+r_{0}/r}}\right)\tilde{G}_{EF}-\\ -\left(E+\frac{m}{\sqrt{1+r_{0}/r}}\right)\tilde{G}_{EF}=0,\\ \left(1-\frac{r_{0}}{r}\right)\frac{d\tilde{G}_{EF}}{dr}+\left( \frac{1-r_{0}/r-\kappa/\sqrt{1+r_{0}/r}}{r}+\frac{r_{0}}{4r^{2}}\right)\tilde {G}_{EF}-\\ -i\frac{r_{0}}{r}\left(E+\frac{m}{\sqrt{1+r_{0}/r}}\right)\tilde{G}_{EF}+i \left(\frac{1}{1+r_{0}/r}\frac{r_{0}}{2r^{2}}+\frac{r_{0}}{r^{2}}\frac{\kappa }{\sqrt{1+r_{0}/r}}\right)\tilde{F}_{EF}+\\ +\left(E-\frac{m}{\sqrt{1+r_{0}/r}}\right)\tilde{F}_{EF}=0.\end{array} \tag{82}\]
If, in the neighborhood of the event horizon \(\left(r\to r_{0}\right)\), we set
\[\tilde{F}_{EF}\Big{|}_{r\to r_{0}}=\left(r-r_{0}\right)^{s}\sum_{k=0}^{\infty}\tilde{f}_{k}\left(r-r_{0}\right)^{k},\ \ \tilde{G}_{EF}\Big{|}_{r\to r_{0}}=\left(r-r_{0}\right)^{s}\sum_{k=0}^{\infty}\tilde{g}_{k}\left(r-r_{0}\right)^{k},\]
then the indicial equation for system (82) is
\[s\left(s+\frac{1}{2}-i2r_{0}E\right)=0. \tag{83}\]
The solutions of (83) are
\[s_{1}=-\frac{1}{2}+i2r_{0}E,\ s_{2}=0. \tag{84}\]
The solution \(s_{1}=-1/2+i2r_{0}E\) corresponds to the asymptotics of square-nonintegrable wave functions with the mode of a particle "fall" to the event horizon. The asymptotics with the solution \(s_{1}\) coincide with asymptotics (78). These asymptotics agree with the transformation of the wave function asymptotics \(F_{1}\), \(G_{1}\) (see Eq. (35)) at the transition to the EF metric and to the Hamiltonian with tetrads in the Schwinger gauge.
In this case, the transition to the stationary EF metric does not eliminate the mode of a particle "fall" to the event horizon. The wave functions are square-nonintegrable in the neighborhood of the event horizon.
Since for the (EF) metric, \(\eta_{{}_{EF}}=\left(1+r_{0}/r\right)^{1/4}\), these conclusions are also valid for the wave functions of the self-conjugate Hamiltonian of
\[H_{\eta_{{}_{EF}}}=\eta_{{}_{EF}}\tilde{H}_{{}_{EF}}\eta_{{}_{EF}}^{-1}, \tag{85}\]
\[\psi_{\eta_{{}_{EF}}}=\eta_{{}_{EF}}\tilde{\psi}_{{}_{EF}}. \tag{86}\]
The smooth square-integrable wave functions \(\left(\tilde{F}_{EF}\right)_{2}\Big{|}_{r\to r_{0}}=\text{const}\ 1,\ \left(\tilde{G}_{EF}\right)_{2}\Big{|}_{r\to r_{0}}=\text{const}\ 2\), without a mode of a particle "fall" to the event horizon, correspond to the asymptotics with the solution \(s_{2}=0\) (see Eq. (84)). Some of the researchers use this solution to prove the possibility of intersection of the event horizon by quantum mechanical particles and their "sink" to the singularity at \(r\to 0\) (see, for instance, Ref. [26]). However, the solution \(s_{2}=0\) does not correspond to the history of wave function transformations (see (58), (76) and (86)) at the transition from the self-conjugate Hamiltonian with the Schwarzschild metric (29) to the self-conjugate Hamiltonian with the EF metric (85).
Of course, the wave functions with \(s_{2}=0\) are accurate solutions of GR equations in the EF space-time. However, these solutions are not associated with the Schwarzschild metric. Such an unambiguous link exists only for wave functions with the index of \(s_{1}=-\frac{1}{2}+i2r_{0}E\) (see Eq. (84)).
In accordance with Birkhoff's theorem, the geometry of any spherically symmetric vacuum region of spacetime with \(r>r_{0}\) is a piece of the Schwarzschild geometry. Hence, at coordinate transformations of a static Schwarzschild metric to a stationary EF metric, only wave functions with an index \(s_{1}\) should be used. This leads to conservation of the mode of a particle "fall" to the event horizon.
## 7 The Painleve-Gullstrand stationary metric
For the PG metric, as well as in Sec. 6, we prove the impossibility of eliminating the singular mode of a particle "fall" by using coordinate transformations of the Schwarzschild metric. Because of the logical closeness of Secs. 6 and 7, we present the proofs in a shortened form.
The coordinate transformation is given by
\[dT=dt+\sqrt{\frac{r_{0}}{r}}\frac{dr}{f_{S}}. \tag{87}\]
The PG metric is written as
\[ds^{2}=f_{S}dT^{2}-2\sqrt{\frac{r_{0}}{r}}dTdr-dr^{2}-r^{2}\left(d\theta^{2}+ \sin^{2}\theta d\varphi^{2}\right). \tag{88}\]
From a Hamiltonian \(\tilde{H}_{S}\) in the Schwarzschild space-time (28), the Hamiltonian \(H_{PG}\) is obtained in the PG space-time by using relationship (19) and equalities (25) - (27).
As the result, we obtain that
\[\begin{array}{c}(H_{PG})_{\underline{0}}^{0}=\frac{1}{\sqrt{f_{S}}},\ \ \left(H_{PG}\right)_{\underline{1}}^{0}=\frac{\sqrt{r_{0}/r}}{\sqrt{f_{S}}}, \ \ \left(H_{PG}\right)_{\underline{1}}^{1}=\sqrt{f_{S}},\\ (H_{PG})_{\underline{2}}^{2}=\frac{1}{r},\ \ \ \left(H_{PG}\right)_{ \underline{3}}^{3}=\frac{1}{r\sin\theta},\end{array} \tag{89}\]
\[\begin{array}{c}\gamma_{PG}^{0}=\frac{1}{\sqrt{f_{S}}}\gamma^{\underline{0} }+\frac{\sqrt{r_{0}/r}}{\sqrt{f_{S}}}\gamma^{\underline{1}},\ \ \gamma_{PG}^{1}=\sqrt{f_{S}}\gamma^{\underline{1}},\ \gamma_{PG}^{2}=\frac{1}{r}\gamma^{2},\ \ \ \gamma_{PG}^{3}= \frac{1}{r\sin\theta}\gamma^{\underline{3}},\end{array} \tag{90}\]
\[\begin{array}{c}\left(\Phi_{PG}\right)_{0}=\frac{r_{0}}{4r^{2}}\gamma^{\underline{0}}\gamma^{\underline{1}},\ \ \ \left(\Phi_{PG}\right)_{1}=\frac{\sqrt{r_{0}/r}}{f_{S}}\frac{r_{0}}{4r^{2}}\gamma^{\underline{0}}\gamma^{\underline{1}},\\ \left(\Phi_{PG}\right)_{2}=-\frac{1}{2}\sqrt{f_{S}}\gamma^{\underline{1}}\gamma^{\underline{2}},\ \ \ \left(\Phi_{PG}\right)_{3}=-\frac{1}{2}\cos\theta\,\gamma^{\underline{2}}\gamma^{\underline{3}}+\frac{1}{2}\sqrt{f_{S}}\sin\theta\gamma^{\underline{3}}\gamma^{\underline{1}}.\end{array} \tag{91}\]
For the (PG) metric
\[g^{00}=1. \tag{92}\]
By substituting (89) - (92) to Eq. (10), we obtain the Hamiltonian in the PG space-time
\[H_{PG}=\frac{1-\sqrt{r_{0}/r}\gamma^{\underline{0}}\gamma^{\underline{1}}}{f _{S}}\tilde{H}_{S}. \tag{93}\]
By analogy with the EF metric (see Eqs. (50) - (58)), it is possible to demonstrate that Hamiltonians \(H_{PG}\) and \(\tilde{H}_{S}\) are connected with each other through the unitary transformation:
\[H_{{}_{PG}}=e^{iE\varphi_{PG}}\tilde{H}_{S}e^{-iE\varphi_{PG}}, \tag{94}\]
\[\psi_{{}_{PG}}\left(\mathbf{r}\right)=\tilde{\psi}_{{}_{S}}\left(\mathbf{r} \right)e^{iE\varphi_{PG}}, \tag{95}\]
where
\[\varphi_{{}_{PG}}\left(r\right)=\int\sqrt{\frac{r_{0}}{r}}\frac{dr}{f_{S}}. \tag{96}\]
Multiplication of asymptotics (39) and (40) by the unitary factor of \(U_{PG}=e^{iE\varphi_{PG}}\) leads to a form similar to the formulas (60) and (61) for the EF metric:
\[\left(F_{PG}\right)_{1}\big{|}_{r\to r_{0}},\ \left(G_{PG}\right)_{1} \big{|}_{r\to r_{0}}\sim\frac{1}{\left(r-r_{0}\right)^{1/4}}\exp\left\{i2r_{0} E\ln\left(\frac{r}{r_{0}}-1\right)\right\}, \tag{97}\]
\[\left(F_{PG}\right)_{2}\big{|}_{r\to r_{0}},\ \left(G_{PG}\right)_{2}\big{|}_{r \to r_{0}}\sim\frac{1}{\left(r-r_{0}\right)^{1/4}}. \tag{98}\]
The asymptotics (97) and (98) can be obtained in a different way. For this, in the equation:
\[H_{PG}\psi_{PG}=E\psi_{PG}, \tag{99}\]
it is necessary to separate the variables and obtain the system of equations for the radial wave functions \(F_{PG}\),\(G_{PG}\). Then, in the neighborhood of the event horizon for this system of equations, we should obtain the solutions of the indicial equation. As the result, by analogy with (62) - (64), we obtain the asymptotics of the radial functions coinciding with (97) and (98).
The Parker operator in the PG space-time with the set of tetrads (89) is
\[\left(\rho_{{}_{P}}\right)_{PG}=\gamma^{0}\gamma^{0}_{PG}=\frac{1}{\sqrt{f_{S} }}\left(1+\sqrt{\frac{r_{0}}{r}}\gamma^{0}\gamma^{1}\right). \tag{100}\]
The operator of transition to the self-conjugate Hamiltonian with a plane scalar product is given by
\[\eta_{{}_{PG}}=\frac{1}{f_{S}^{1/4}}\left(1+\sqrt{\frac{r_{0}}{r}}\gamma^{0} \gamma^{1}\right)^{1/2}. \tag{101}\]
The practical use of a matrix operator \(\eta_{{}_{PG}}\) seems to be difficult.
### The Lorentz transformation
In order to obtain \(\eta_{{}_{PG}}\) in an acceptable matrix-free form, let us transfer, as well as in Sec. 6, from the system of tetrad vectors of \(\left\{\left(H_{PG}\right)_{\underline{\alpha}}^{\mu}\right\}\) to tetrad vectors of \(\left\{\left(\tilde{H}_{PG}\right)_{\underline{\alpha}}^{\mu}\right\}\) in the Schwinger gauge using the Lorentz transformation of (20) - (23). Non-zero tetrads \(\left(\tilde{H}_{PG}\right)_{\underline{\alpha}}^{\mu}\) in the Schwinger gauge are determined by the following expressions:
\[\begin{array}{l}(\tilde{H}_{PG})_{\underline{0}}^{0}=1,\ \ \ (\tilde{H}_{PG})_{ \underline{0}}^{1}=-\sqrt{\frac{r_{0}}{r}},\ \ (\tilde{H}_{PG})_{ \underline{1}}^{1}=1,\\ (\tilde{H}_{PG})_{\underline{2}}^{2}=\frac{1}{r},\ \ \ (H_{PG})_{ \underline{3}}^{3}=\frac{1}{r\sin\theta}.\end{array} \tag{102}\]
For the PG metric, the non-zero values of \(\Lambda_{\underline{\alpha}}^{\underline{\beta}}(r)\) in (20) are
\[\Lambda_{\underline{0}}^{\underline{0}}=\Lambda_{\underline{1}}^{\underline{ 1}}=\frac{1}{\sqrt{f_{S}}},\ \ \Lambda_{\underline{0}}^{\underline{1}}=\Lambda_{ \underline{1}}^{\underline{0}}=-\frac{\sqrt{r_{0}/r}}{\sqrt{f_{S}}}. \tag{103}\]
Similarly to Eqs. (69) - (74), we obtain the following form of a matrix \(L\),\(L^{-1}\) for the PG metric:
\[L\left(r\right)=\frac{1+\sqrt{r_{0}/r}+\sqrt{f_{S}}+\left(1+\sqrt{r_{0}/r}- \sqrt{f_{S}}\right)\gamma^{0}\gamma^{1}}{2\sqrt{\sqrt{f_{S}}\left(1+\sqrt{r_{0 }/r}\right)}}, \tag{104}\]
\[L^{-1}\left(r\right)=\frac{1+\sqrt{r_{0}/r}+\sqrt{f_{S}}+\left(1+\sqrt{r_{0}/r}- \sqrt{f_{S}}\right)\gamma^{\underline{1}}\gamma^{\underline{0}}}{2\sqrt{\sqrt{f _{S}}\left(1+\sqrt{r_{0}/r}\right)}}. \tag{105}\]
After Lorentz transformation (104) and (105), Hamiltonian (94) is transformed to a self-conjugate Hamiltonian with tetrads in the Schwinger gauge [14], [26].
For PG metrics, \(\eta_{{}_{PG}}=1\) and \(\tilde{H}_{PG}=H_{\eta_{{}_{PG}}}\), where
\[\begin{array}{l}H_{\eta_{{}_{PG}}}=L\left(r\right)H_{PG}L^{-1}\left(r\right)=\\ =m\gamma^{\underline{0}}-i\gamma^{\underline{0}}\left\{\gamma^{\underline{1}}\left(\frac{\partial}{\partial r}+\frac{1}{r}\right)+\gamma^{\underline{2}}\frac{1}{r}\left(\frac{\partial}{\partial\theta}+\frac{1}{2}\mathrm{ctg}\theta\right)+\gamma^{\underline{3}}\frac{1}{r\sin\theta}\frac{\partial}{\partial\varphi}\right\}+\\ +i\sqrt{\frac{r_{0}}{r}}\left(\frac{\partial}{\partial r}+\frac{3}{4}\frac{1}{r}\right),\\ \psi_{\eta_{{}_{PG}}}=L\left(r\right)\psi_{PG}.\end{array} \tag{106}\]
According to (104), the asymptotics \(L\left(r\right)\) at \(r\to r_{0}\) equals
\[L|_{r\to r_{0}}=\frac{r_{0}^{1/4}\left(1+\gamma^{\underline{0}}\gamma^{ \underline{1}}\right)}{2^{1/2}\left(r-r_{0}\right)^{1/4}}. \tag{107}\]
In compliance with (106) and (107), after the similarity transformation \(L\left(r\right)\), the asymptotics of radial functions (97), (98) become equal to
\[\left(\tilde{F}_{PG}\right)_{1}\Big{|}_{r\to r_{0}},\ \ \left(\tilde{G}_{PG} \right)_{1}\Big{|}_{r\to r_{0}}\sim\frac{1}{\left(r-r_{0}\right)^{1/2}}\exp \left\{i2r_{0}E\ln\left(\frac{r}{r_{0}}-1\right)\right\}, \tag{108}\]
\[\left(\tilde{F}_{PG}\right)_{2}\Big{|}_{r\to r_{0}},\ \ \left(\tilde{G}_{PG} \right)_{2}\Big{|}_{r\to r_{0}}\sim\frac{1}{\left(r-r_{0}\right)^{1/2}}. \tag{109}\]
Next, we obtain the asymptotics of the radial functions in a different way.
Presenting the wave function \(\psi_{\eta_{{}_{PG}}}\left(\mathbf{r},T\right)\) in the form (30), in the equation
\[H_{\eta_{{}_{PG}}}\psi_{\eta_{{}_{PG}}}=E\psi_{\eta_{{}_{PG}}} \tag{110}\]
we can separate the variables. While separating the variables, as well as in Secs. 5 and 6, let us perform an equivalent substitution of matrixes \(\gamma^{\underline{i}}\) (see Eq. (31)).
The system of equations for radial wave functions \(\tilde{F}_{PG}\left(r\right),\tilde{G}_{PG}\left(r\right)\) is given by
\[\begin{array}{l}f_{S}\frac{d\tilde{F}_{PG}}{dr}+\left(\frac{1+\kappa}{r}- \frac{3}{4}\frac{r_{0}}{r^{2}}-i\left(E-m\right)\sqrt{\frac{r_{0}}{r}}\right) \tilde{F}_{PG}-\\ -\left(E+m+i\sqrt{\frac{r_{0}}{r}}\frac{1/4-\kappa}{r}\right) \tilde{G}_{PG}=0,\\ f_{S}\frac{d\tilde{G}_{PG}}{dr}+\left(\frac{1-\kappa}{r}- \frac{3}{4}\frac{r_{0}}{r^{2}}-i\left(E+m\right)\sqrt{\frac{r_{0}}{r}}\right) \tilde{G}_{PG}+\\ +\left(E-m+i\sqrt{\frac{r_{0}}{r}}\frac{1/4+\kappa}{r}\right) \tilde{F}_{PG}=0\end{array} \tag{111}\]
The indicial equation for system (111) coincides with equation (83) for the (EF) metric.
The solution of \(s_{1}=-1/2+i2r_{0}E\) corresponds to square-nonintegrable wave functions with the mode of a particle "fall" to the event horizon. This solution corresponds to the wave function transformations at transition to the PG metric.
The solution of \(s_{2}=0\) does not correspond to the transformation history of wave functions at transition from the Schwarzschild metric to the PG metric. As well as for the EF metric, the solution of \(s_{2}=0\) for the PG metric, being an accurate solution of the GR equations, is not connected with the Schwarzschild solution.
The connection with the original Schwarzschild metric is implemented only by the solution with the index of \(s_{1}=-1/2+i2r_{0}E\).
It follows therefrom that the transition to the (PG) stationary metric does not eliminate the mode of a particle "fall" to the event horizon. The wave functions are square-nonintegrable in the neighborhood of the event horizon.
## 8 Static metrics and the Schwarzschild metric with "tortoise" coordinate
At spatial coordinate transformations of the Schwarzschild metric, the radius of the event horizon is changed. For instance, for the metric in isotropic coordinates [27, 28], the gravitational radius is \(\left(R_{is}\right)_{0}=r_{0}/4\), for the metric in spherical harmonic coordinates, \(\left(R_{g}\right)_{0}=r_{0}/2\). In this case, the mode of a particle "fall" on the event horizons is preserved (see, in Ref. [14] solutions (35) and (41), Hamiltonian (61) and the system of equations for radial functions (62)).
A similar situation occurs when using the transformation of Ref. [10]
\[dr*=dr\left(1-\frac{r_{0}}{r}\right)^{-1}, \tag{112}\]
where
\[r*=r+r_{0}\mathrm{ln}\left(\frac{r}{r_{0}}-1\right)+\mathrm{const.} \tag{113}\]
The value \(r\) is a function of \(r*\).
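A one-line symbolic check (not from the paper, purely illustrative) confirms that the "tortoise" coordinate (113) satisfies the defining relation (112):

```python
# Check dr*/dr = (1 - r0/r)^{-1} for r* = r + r0*ln(r/r0 - 1).
import sympy as sp

r, r0 = sp.symbols('r r_0', positive=True)
r_star = r + r0 * sp.log(r / r0 - 1)
print(sp.simplify(sp.diff(r_star, r) - 1 / (1 - r0 / r)))   # 0
```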
The coordinate \(r*\rightarrow-\infty\) as \(r\left(r*\right)\to r_{0}\). In the system of equations (32) for the Schwarzschild metric, under the transformation (112) the first summands become \(dF/dr*\), \(dG/dr*\). In the remaining summands, the function \(r\left(r*\right)\) is used. Asymptotically, at \(r*\rightarrow-\infty\) and \(r\left(r*\right)\to r_{0}\), the solutions of system (32) are given by
\[\begin{array}{l}F\big{|}_{r*\rightarrow-\infty}=L*\left(r\left(r*\right)-r_{0}\right)^{-1/2}\sin\left(Er*+\delta*\right),\\ G\big{|}_{r*\rightarrow-\infty}=L*\left(r\left(r*\right)-r_{0}\right)^{-1/2}\cos\left(Er*+\delta*\right),\end{array} \tag{114}\]
where \(L*,\delta*\) are integration constants.
The wave functions in (114) preserve the mode of a particle "fall" and are square-nonintegrable (see also Eq. (37)).
## 9 Nonstationary metrics
After transformations from the Schwarzschild metric to the space-time of Lemaitre-Finkelstein and Kruskal-Szekeres nonstationary metrics, Dirac Hamiltonians explicitly
depend on the time coordinate [14]. In these cases, examination of stationary states of particles is impossible [4].
## 10 Conclusions
The coordinate transformations of the Schwarzschild metric do not eliminate the singular mode of a particle "fall" to the event horizon of a black hole. This mode is unacceptable for quantum mechanics of stationary states.
This conclusion has been rigorously proved for EF and PG stationary metrics and is also valid for any transformed static metric, including Schwarzschild metric with a "tortoise" coordinate \(r*\).
## Acknowledgments
The author thanks M. A. Vronskiy and V.E. Shemarulin for the useful remarks and discussions. The author also thanks L. P. Babich and A. L. Novoselova for the essential technical assistance in preparation of the paper.
|
2305.01442 | A Direct Construction of Optimal Symmetrical Z-Complementary Code Sets
of Prime Power Lengths | This paper presents a direct construction of an optimal symmetrical
Z-complementary code set (SZCCS) of prime power lengths using a multi-variable
function (MVF). SZCCS is a natural extension of the Z-complementary code set
(ZCCS), which has only front-end zero correlation zone (ZCZ) width. SZCCS has
both front-end and tail-end ZCZ width. SZCCSs are used in developing optimal
training sequences for broadband generalized spatial modulation systems over
frequency-selective channels because they have ZCZ width on both the front and
tail ends. The construction of optimal SZCCS with large set sizes and prime
power lengths is presented for the first time in this paper. Furthermore, it is
worth noting that several existing works on ZCCS and SZCCS can be viewed as
special cases of the proposed construction. | Praveen Kumar, Sudhan Majhi, Subhabrata Paul | 2023-05-02T14:16:32Z | http://arxiv.org/abs/2305.01442v1 | # A Direct Construction of Optimal Symmetrical Z-Complementary Code Sets of Prime Power Lengths
###### Abstract
This paper presents a direct construction of an optimal symmetrical Z-complementary code set (SZCCS) of prime power lengths using a multi-variable function (MVF). SZCCS is a natural extension of the Z-complementary code set (ZCCS), which has only front-end zero correlation zone (ZCZ) width. SZCCS has both front-end and tail-end ZCZ width. SZCCSs are used in developing optimal training sequences for broadband generalized spatial modulation systems over frequency-selective channels because they have ZCZ width on both the front and tail ends. The construction of optimal SZCCS with large set sizes and prime power lengths is presented for the first time in this paper. Furthermore, it is worth noting that several existing works on ZCCS and SZCCS can be viewed as special cases of the proposed construction.
## I Introduction
The idea of Z-complementary pairs (ZCPs) was introduced by Fan _et al._[1]. The sum of the aperiodic auto-correlation function (AACF) of the two sequences in a ZCP is zero within a particular zone, which is referred to as the zero-correlation zone (ZCZ). When this ZCZ width \(Z\) is equal to the sequence length \(N\), ZCP becomes a Golay complementary pair (GCP). Unlike GCPs, ZCPs are available in arbitrary lengths with various ZCZ widths [2, 3, 4, 5, 6].
The idea of ZCPs introduced in [1] was generalized to a Z-complementary code set (ZCCS) by Feng _et al._ in [7]. ZCCS only considers the front-end ZCZ of the AACFs and aperiodic cross-correlation functions (ACCFs). Several generalized Boolean functions (GBFs) based constructions of ZCCSs of non-power-of-two lengths are proposed in the literature [8, 9, 10, 11, 12, 13]. A ZCCS with \(K\) codes, with each code having \(M\) sequences each of length \(N\) and ZCZ width \(Z\) is denoted by \((K,M,N,Z)\)-ZCCS. For the special case, \(K=M\) and \(N=Z\), it is known as a complete complementary code (CCC) and is denoted by \((K,K,N)\)-CCC. Recently, Li _et al._ proposed a direct construction of multiple CCC of prime power length and with inter set ZCZ width using a multi-variable function (MVF) [14]. After combining these multiple CCC, optimal \((p^{n+v},p^{n},p^{m},p^{m-v})\)-ZCCS are obtained [14].
Recently, the idea of ZCCS has been extended to symmetrical-ZCCS (SZCCS), which exhibits ZCZ properties for the front-end and the tail-end [15]. The authors in [15] have presented a GBF based construction of \((8,2,2^{m},2^{m-2}-1)\)-SZCCS. In practice, a front-end ZCZ and a tail-end ZCZ have a particular role in mitigating interference with small and large delays, respectively. SZCCSs with larger set sizes are used in designing training sequences for broadband generalized spatial modulation (GSM) systems over frequency-selective channels [15]. The fact that SZCCS is being widely applied in the GSM systems and also the unavailability of constructions with flexible parameters in terms of set size, flock size, and length have inspired the authors to propose a direct construction of SZCCSs of prime power lengths in this paper.
The proposed MVF-based construction of SZCCS has a set size \(p^{k+\delta}\), which is much larger than the flock size of \(p^{k}\), where \(m\geq 3,\ 0\leq\delta<m,\ 1\leq k\leq m-\delta\). Also, the proposed SZCCS has large ZCZ width of \(p^{m-\delta}-1\), and it achieves the optimality condition. Many of the existing constructions of ZCCS and SZCCS appear as special cases of the proposed construction.
The rest of the paper is structured as follows: preliminary work is covered in Section II, and the proposed SZCCS construction based on MVF is presented in Section III. Section IV of the paper presents a comprehensive comparison between the proposed constructions and the existing works, providing detailed insights. Following that, Section V concludes the paper.
## II Preliminaries
The essential concepts, notations, and previously established findings necessary for the proposed construction are explained in this section.
**Definition 1**: _Let \(\mathbf{u}=(u_{0},u_{1},\ldots,u_{N-1})\) and \(\mathbf{v}=(v_{0},v_{1},\ldots,v_{N-1})\) be two sequences of length \(N\) over \(\mathbb{Z}_{q}\). At a shift \(\tau\), the ACCF is defined as_
\[\mathcal{C}(\mathbf{u},\mathbf{v})(\tau)=\begin{cases}\sum_{i=0}^{N-1-\tau}\omega_{q}^{u_{i}-v_{i+\tau}},&0\leq\tau\leq N-1,\\ \sum_{i=0}^{N-1+\tau}\omega_{q}^{u_{i-\tau}-v_{i}},&-N+1\leq\tau\leq-1,\\ 0,&|\tau|\geq N,\end{cases} \tag{1}\]
_where \(q\) is a positive integer greater than \(2\), and \(\omega_{q}=\exp(2\pi\sqrt{-1}/q)\). For the special case, \(\mathbf{u}=\mathbf{v},\ \mathcal{C}(\mathbf{u},\mathbf{v})(\tau)\) is referred to as the AACF of \(\mathbf{u}\) and is denoted by \(\mathcal{A}(\mathbf{u})(\tau)\)._
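For concreteness, a reference implementation of the ACCF of Definition 1 is sketched below; this code is illustrative and not part of the original paper.

```python
# ACCF / AACF of Definition 1 for Z_q-valued sequences u, v of equal length N.
import numpy as np

def accf(u, v, tau, q):
    """Aperiodic cross-correlation C(u,v)(tau) of Eq. (1)."""
    u, v = np.asarray(u), np.asarray(v)
    N = len(u)
    w = np.exp(2j * np.pi / q)
    if tau >= N or tau <= -N:
        return 0.0
    if tau >= 0:
        return np.sum(w ** (u[:N - tau] - v[tau:]))
    return np.sum(w ** (u[-tau:] - v[:N + tau]))

def aacf(u, tau, q):
    """Aperiodic auto-correlation A(u)(tau)."""
    return accf(u, u, tau, q)
```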
_Consider a set \(\mathrm{C}=\left\{C^{0},C^{1},\ldots,C^{K-1}\right\}\), where each set \(C^{e}\) consists of \(M\) sequences, i.e., \(C^{e}=\left\{\mathbf{c}_{0}^{e},\mathbf{c}_{1}^{e},\ldots,\mathbf{c}_{M-1}^{e}\right\}\), and length of each sequence \(\mathbf{c}_{l}^{e}\) is \(N\), where \(0\leq e\leq K-1\) and \(0\leq l\leq M-1\)._
**Definition 2**: _The set \(\mathrm{C}\) defined above is called a ZCCS, denoted by \((K,M,N,Z)\)-ZCCS, if the ACCF of \(C^{e}\) and \(C^{e^{\prime}}\) satisfies_
\[\mathcal{C}\Big{(}C^{e},C^{e^{\prime}}\Big{)}(\tau) =\sum_{l=0}^{M-1}\mathcal{C}\Big{(}\mathbf{c}_{l}^{e},\mathbf{c}_ {l}^{e^{\prime}}\Big{)}(\tau) \tag{2}\] \[=\begin{cases}MN,&\tau=0,e=e^{\prime},\\ 0,&0<|\tau|<Z,e=e^{\prime},\\ 0,&|\tau|<Z,e\neq e^{\prime},\end{cases}\]
_where \(0\leq e,e^{\prime}\leq K-1\). For a general \((K,M,N,Z)\)-ZCCS, \(K\leq M\lfloor N/Z\rfloor\), whereas for the special case, \(K=M\lfloor N/Z\rfloor\), it becomes an optimal ZCCS [16]. Again, for the special case, \(K=M\) and \(Z=N\), the ZCCS is called a CCC of order \(K\) and length \(N\), and is denoted by \((K,K,N)\)-CCC._
**Definition 3**: _The set \(\mathrm{C}\) defined above is known as a SZCCS, denoted by \((K,M,N,Z)\)-SZCCS, if for \(\mathcal{T}_{1}=\{1,2,\cdots,Z\}\) and \(\mathcal{T}_{2}=\{N-Z,N-Z+1,\cdots,N-1\}\) with \(Z\leq N\), it satisfies the following properties_
\[P1: \sum_{i=0}^{M-1}\mathcal{A}(\mathbf{c}_{i}^{e})(\tau)=0,\quad \text{ for all }|\tau|\in(\mathcal{T}_{1}\cup\mathcal{T}_{2})\cap\mathcal{T}; \tag{3}\] \[P2: \sum_{i=0}^{M-1}\mathcal{C}(\mathbf{c}_{i}^{e},\mathbf{c}_{i}^{e^ {\prime}})(\tau)=0,\quad\text{ for all }|\tau|\in\{0\}\cup\mathcal{T}_{1}\cup\mathcal{T}_{2};\]
_where \(\mathcal{T}=\{1,2,\ldots,N-1\}\) and \(e\neq e^{\prime}\). For a \((K,M,N,Z)\)-SZCCS, \(K,M,N\) and \(Z\) are known as the set size, flock size, sequence length, and ZCZ width, respectively._
For a MVF \(f:\{0,1,\ldots,p-1\}^{m}\rightarrow\mathbb{Z}_{q}\) in \(m\) \(p\)-ary variables \(x_{1},x_{2},\ldots,x_{m}\), the corresponding \(\mathbb{Z}_{q}\)-valued sequence is denoted by \(\mathbf{f}\) and calculated as \(\mathbf{f}=(f_{0},f_{1},\ldots,f_{p^{m}-1})\), where \(f_{r}=f(r_{1},r_{2},\ldots,r_{m})\) and \((r_{1},r_{2},\ldots,r_{m})\) is the \(p\)-ary representation of the integer \(r\), i.e., \(r=\sum_{i=1}^{m}r_{i}p^{i-1}\). The complex-valued sequence associated with \(\mathbf{f}\) is produced as \(\Psi(f)=\left(\omega_{q}^{f_{0}},\omega_{q}^{f_{1}},\ldots,\omega_{q}^{f_{p^{m}-1}}\right)\), where \(\omega_{q}\) is the \(q\)-th \((q\geq 2)\) root of unity [14].
**Lemma 1** ([16, 15]): _Any unimodular \((K,M,N,Z)\)-SZCCS satisfies \(K\leq M\left\lfloor\frac{N}{Z+1}\right\rfloor\), when \(K=M\left\lfloor\frac{N}{Z+1}\right\rfloor\), it is known as an optimal \((K,M,N,Z)\)-SZCCS._
## III Proposed Construction
This section explains a MVF based construction of optimal \((p^{k+\delta},p^{k},p^{m},p^{m-\delta}-1)\)-SZCCS.
**Theorem 1**: _For any positive integer \(m\geq 3\) and \(0\leq\delta<m\), we let the set \(\{1,2,\ldots,m-\delta\}\) be partitioned into \(k\) sets, \(S_{1},S_{2},\ldots,S_{k}\), where \(1\leq k\leq m-\delta\). Let the cardinality of the set \(S_{\beta}\) be \(m_{\beta}\), and \(\pi_{\beta}\) be a one-one and onto mapping from \(\{1,2,\ldots,m_{\beta}\}\) to \(S_{\beta}\) for \(\beta=1,2,\ldots,k\). Let us define a MVF \(g:\{0,1,\ldots,p-1\}^{m}\rightarrow\mathbb{Z}_{q}\) by_
\[g=\frac{q}{p}\sum_{\beta=1}^{k}\sum_{\gamma=1}^{m_{\beta}-1}x_{\pi_{\beta}( \gamma)}x_{\pi_{\beta}(\gamma+1)}+\sum_{l=1}^{m}\lambda_{l}x_{l}+\lambda_{0}, \tag{4}\]
_where \(p\) is a prime and \(p\mid q\), \(\lambda_{l}\in\mathbb{Z}_{q}\), \(0\leq l\leq m\). Then the set \(A=\left\{A^{0},A^{1},\ldots,A^{p^{k+\delta}-1}\right\}\) is a \((p^{k+\delta},p^{k},p^{m},p^{m-\delta}-1)\)-SZCCS, where \(A^{t}=\{\mathbf{a}_{0}^{t},\mathbf{a}_{1}^{t},\ldots,\mathbf{a}_{p^{k}-1}^{t}\}\) and_
\[\mathbf{a}_{\sigma}^{t}=\mathbf{g}+\frac{q}{p}\Bigg{(}\sum_{\beta=1}^{k}\sigma_{\beta}\mathbf{x}_{\pi_{\beta}(1)}+\sum_{\beta=1}^{k}t_{\beta}\mathbf{x}_{\pi_{\beta}(m_{\beta})}+\sum_{\beta=1}^{\delta}t_{k+\beta}\mathbf{x}_{m-\delta+\beta}\Bigg{)}, \tag{5}\]
_for \(\sigma=0,1,\ldots,p^{k}-1\) and \(t=0,1,\ldots,p^{k+\delta}-1\) with \(p\)-ary vector representation \((\sigma_{1},\sigma_{2},\ldots,\sigma_{k})\) and \((t_{1},t_{2},\ldots,t_{k},\ldots,t_{k+\delta})\), respectively._
We will prove that the set \(A=\left\{A^{0},A^{1},\ldots,A^{p^{k+\delta}-1}\right\}\) obtained from _Theorem 1_ satisfies the two conditions \(P1\) and \(P2\) given in _Definition 3_, with \(K=p^{k+\delta},M=p^{k},N=p^{m}\), and \(Z=p^{m-\delta}-1\), and hence is a \((p^{k+\delta},p^{k},p^{m},p^{m-\delta}-1)\)-SZCCS. Let the \(p\)-ary representations of \(0\leq r,s<p^{m}\) be \((r_{1},r_{2},\ldots,r_{m})\) and \((s_{1},s_{2},\ldots,s_{m})\), respectively. Let \(\mathbf{a}_{\sigma}^{t}=(a_{\sigma,0}^{t},a_{\sigma,1}^{t},\ldots,a_{\sigma,p^{m}-1}^{t})\).
First, we prove the property \(P1\) of _Definition 3_, i.e., for \(\mathcal{T}_{1}=\{1,2,\ldots,p^{m-\delta}-1\}\) and \(\mathcal{T}_{2}=\{p^{m}-p^{m-\delta}+1,p^{m}-p^{m-\delta}+2,\ldots,p^{m}-1\}\), the AACF of every \(A^{t}\) is zero, i.e.,
\[\sum_{\sigma=0}^{p^{k}-1}\mathcal{A}(\mathbf{a}_{\sigma}^{t})(\tau)=\sum_{r=0}^{p^ {m}-\tau-1}\sum_{\sigma=0}^{p^{k}-1}\omega_{q}^{(a_{\sigma,r}^{t}-a_{\sigma,r+ \tau}^{t})}=0, \tag{6}\]
for all \(|\tau|\in(\mathcal{T}_{1}\cup\mathcal{T}_{2})\cap\mathcal{T}\), where \(\mathcal{T}=\{1,2,\ldots,p^{m}-1\}\). Let \(s=r+\tau\) for any integer \(r\). Then we consider the two cases listed below.
_Case I:_\(r_{\pi_{\beta}(1)}\neq s_{\pi_{\beta}(1)}\) for some \(\beta\in\{1,2,\ldots,k\}\). Then there exist \(\mathbf{a}_{\sigma^{j}}^{t}=(a_{\sigma^{j},0}^{t},a_{\sigma^{j},1}^{t},\ldots,a_{\sigma^{j},p^{m}-1}^{t})=\mathbf{a}_{\sigma}^{t}+(jq/p)\mathbf{x}_{\pi_{\beta}(1)}\in A^{t}\), where \(1\leq j\leq p-1\), such that
\[a_{\sigma^{j},r}^{t}-a_{\sigma,r}^{t}=\frac{jq}{p}r_{\pi_{\beta}(1)}, \tag{7}\]
and
\[a_{\sigma^{j},s}^{t}-a_{\sigma,s}^{t}=\frac{jq}{p}s_{\pi_{\beta}(1)}. \tag{8}\]
So from the above two equations, we get
\[\left(a_{\sigma^{j},r}^{t}-a_{\sigma^{j},s}^{t}\right)-\left(a_{\sigma,r}^{t}-a_{\sigma,s}^{t}\right)=\frac{jq}{p}\big{(}r_{\pi_{\beta}(1)}-s_{\pi_{\beta}(1)}\big{)}. \tag{9}\]
Raising \(\omega_{q}\) to these powers and summing over \(1\leq j\leq p-1\), we get
\[\sum_{j=1}^{p-1}\omega_{q}^{\big{(}a_{\sigma^{j},r}^{t}-a_{\sigma^{j},s}^{t}\big{)}-\big{(}a_{\sigma,r}^{t}-a_{\sigma,s}^{t}\big{)}}=\sum_{j=1}^{p-1}\omega_{p}^{j\big{(}r_{\pi_{\beta}(1)}-s_{\pi_{\beta}(1)}\big{)}}=-1. \tag{10}\]
Hence,
\[\omega_{q}^{\big{(}a_{\sigma,r}^{t}-a_{\sigma,s}^{t}\big{)}}+\sum_{j=1}^{p-1}\omega_{q}^{\big{(}a_{\sigma^{j},r}^{t}-a_{\sigma^{j},s}^{t}\big{)}}=0. \tag{11}\]
So, the sum of the AACFs is zero, i.e.,
\[\sum_{\sigma=0}^{p^{k}-1}\mathcal{A}(\mathbf{a}_{\sigma}^{t})(\tau)=0. \tag{12}\]
_Case II: \(r_{\pi_{\beta}(1)}=s_{\pi_{\beta}(1)}\) for all \(\beta\in\{1,2,\ldots,k\}\)._ Then, there exist integers \(\hat{\beta}\) and \(\hat{\gamma}\) such that \(\hat{\beta}\) is the largest integer satisfying \(r_{\pi_{\beta}(\gamma)}=s_{\pi_{\beta}(\gamma)}\) for all \(\beta=1,2,\ldots,\hat{\beta}-1\) and \(\gamma=1,2,\ldots,m_{\beta}\), and \(\hat{\gamma}\) is the least integer satisfying \(r_{\pi_{\hat{\beta}}(\hat{\gamma})}\neq s_{\pi_{\hat{\beta}}(\hat{\gamma})}\). If the above statement does not hold, then we have \(r_{i}=s_{i}\) for \(i=1,2,\ldots,m-\delta\), since \(\bigcup_{\beta=1}^{k}S_{\beta}=\{1,2,\ldots,m-\delta\}\). Hence the lower bound of \(\tau\) is estimated as
\[\tau=s-r=\sum_{i=m-\delta+1}^{m}(s_{i}-r_{i})p^{i-1}\geq p^{m-\delta}, \tag{13}\]
also the upper bound of \(\tau\) is estimated as,
\[\begin{split}\tau=&s-r=\sum_{i=m-\delta+1}^{m}(s_{i}-r_{i})p^{i-1}\\ &\leq(p-1)\left(p^{m-\delta}+p^{m-\delta+1}+\ldots+p^{m-1}\right)=p^{m}-p^{m-\delta}.\end{split} \tag{14}\]
The above two equations (13) and (14) contradict the assumption that \(\tau\in(\mathcal{T}_{1}\cup\mathcal{T}_{2})\cap\mathcal{T}\). So this guarantees the existence of integers \(\hat{\beta}\) and \(\hat{\gamma}\) with the above-mentioned conditions. Let \(r^{j}\) and \(s^{j}\) be two integers which differ from \(r\) and \(s\), respectively, in only one position \(\pi_{\hat{\beta}}(\hat{\gamma}-1)\) of their \(p\)-ary representation, i.e., \(r^{j}_{\pi_{\hat{\beta}}(\hat{\gamma}-1)}=r_{\pi_{\hat{\beta}}(\hat{\gamma}-1)}-j\) and \(s^{j}_{\pi_{\hat{\beta}}(\hat{\gamma}-1)}=s_{\pi_{\hat{\beta}}(\hat{\gamma}-1)}-j\), where \(1\leq j\leq p-1\). Since \(s=r+\tau\), we get \(s^{j}=r^{j}+\tau\). The difference between the terms \(a^{t}_{\sigma,r}-a^{t}_{\sigma,r^{j}}\) is calculated below as
\[\begin{split} a^{t}_{\sigma,r}-a^{t}_{\sigma,r^{j}}& =g_{r}-g_{r^{j}}\\ &=j\bigg{(}\frac{q}{p}r_{\pi_{\hat{\beta}}(\hat{\gamma}-2)}+ \frac{q}{p}r_{\pi_{\hat{\beta}}(\hat{\gamma})}+\lambda_{\pi_{\hat{\beta}}( \hat{\gamma}-1)}\bigg{)}.\end{split} \tag{15}\]
Similarly, it can be calculated that
\[a^{t}_{\sigma,s}-a^{t}_{\sigma,s^{j}}=j\bigg{(}\frac{q}{p}s_{\pi_{\hat{\beta}} (\hat{\gamma}-2)}+\frac{q}{p}s_{\pi_{\hat{\beta}}(\hat{\gamma})}+\lambda_{ \pi_{\hat{\beta}}(\hat{\gamma}-1)}\bigg{)}. \tag{16}\]
From the above two equations and using \(r_{\pi_{\hat{\beta}}(\hat{\gamma}-2)}=s_{\pi_{\hat{\beta}}(\hat{\gamma}-2)}\), we get the following equality
\[a^{t}_{\sigma,r^{j}}-a^{t}_{\sigma,s^{j}}-\big{(}a^{t}_{\sigma,r}-a^{t}_{ \sigma,s}\big{)}=j\frac{q}{p}\Big{(}s_{\pi_{\hat{\beta}}(\hat{\gamma})}-r_{\pi _{\hat{\beta}}(\hat{\gamma})}\Big{)}. \tag{17}\]
Raising \(\omega_{q}\) to these powers and then taking the sum over \(1\leq j\leq p-1\), we get the following expression
\[\sum_{j=1}^{p-1}\omega_{q}^{\big{(}a^{t}_{\sigma,r^{j}}-a^{t}_{ \sigma,s^{j}}\big{)}-\big{(}a^{t}_{\sigma,r}-a^{t}_{\sigma,s}\big{)}}=\sum_{j=1 }^{p-1}\omega_{p}^{j\big{(}s_{\pi_{\hat{\beta}}(\hat{\gamma})}-r_{\pi_{\hat{ \beta}}(\hat{\gamma})}\big{)}}=-1. \tag{18}\]
Hence,
\[\omega_{q}^{\big{(}a^{t}_{\sigma,r}-a^{t}_{\sigma,s}\big{)}}+\sum_{j=1}^{p-1} \omega_{q}^{\big{(}a^{t}_{\sigma,r^{j}}-a^{t}_{\sigma,s^{j}}\big{)}}=0. \tag{19}\]
So, the AACF sum is zero. Combining _Cases_ I and II, we get that the AACF of \(A^{t}\) is zero for \(\tau\in(\mathcal{T}_{1}\cup\mathcal{T}_{2})\cap\mathcal{T}\).
Next, in the following part, we will demonstrate that any two distinct sets \(A^{t_{1}}\) and \(A^{t_{2}}\) have zero ACCF for all \(|\tau|\in\{0\}\cup\mathcal{T}_{1}\cup\mathcal{T}_{2}\), i.e.,
\[\sum_{\sigma=0}^{p^{k}-1}\mathcal{C}(\mathbf{a}_{\sigma}^{t_{1}},\mathbf{a}_{ \sigma}^{t_{2}})(\tau)=\sum_{r=0}^{p^{m}-\tau-1}\sum_{\sigma=0}^{p^{k}-1} \omega_{q}^{\big{(}a^{t_{1}}_{\sigma,r}-a^{t_{2}}_{\sigma,r+\tau}\big{)}}=0. \tag{20}\]
Similar to the first part, we let \(s=r+\tau\) for any integer \(r\) and consider two cases.
_Case I_: Suppose \(r_{\pi_{\beta}(1)}\neq s_{\pi_{\beta}(1)}\) for some \(\beta\in\{1,2,\ldots,k\}\). In the same manner as _Case I_ in the first part there exist \(\mathbf{a}_{\sigma^{j}}^{t}=(a^{t}_{\sigma^{j},0},a^{t}_{\sigma^{j},1},\ldots,a^{t }_{\sigma^{j},p^{m}-1})=\mathbf{a}_{\sigma}^{t}+(jq/p)\mathbf{x}_{\pi_{\beta}(1)}\in A ^{t}\), for \(t=t_{1},t_{2}\), where \(1\leq j\leq p-1\), such that
\[\sum_{j=1}^{p-1}\omega_{q}^{\big{(}a^{t_{1}}_{\sigma^{j},r}-a^{t_{2}}_{\sigma^{j},s}\big{)}-\big{(}a^{t_{1}}_{\sigma,r}-a^{t_{2}}_{\sigma,s}\big{)}}=\sum_{j=1}^{p-1}\omega_{p}^{j\big{(}r_{\pi_{\beta}(1)}-s_{\pi_{\beta}(1)}\big{)}}=-1. \tag{21}\]
Hence, similar to the first part, the ACCF becomes zero, i.e.,
\[\sum_{\sigma=0}^{p^{k}-1}\omega_{q}^{\big{(}a^{t_{1}}_{\sigma,r}-a^{t_{2}}_{ \sigma,s}\big{)}}=0. \tag{22}\]
_Case II_: Suppose we have \(r_{\pi_{\beta}(1)}=s_{\pi_{\beta}(1)}\) for all \(\beta\in\{1,2,\ldots,k\}\). As argued in _Case II_ in the first part, a similar result can be obtained, i.e.,
\[\omega_{q}^{\big{(}a^{t_{1}}_{\sigma,r}-a^{t_{2}}_{\sigma,s}\big{)}}+\sum_{j=1}^{p-1 }\omega_{q}^{\big{(}a^{t_{1}}_{\sigma,r^{j}}-a^{t_{2}}_{\sigma,s^{j}}\big{)}}=0. \tag{23}\]
So, from _Case I_ and _Case II_, we get \(\sum_{\sigma=0}^{p^{k}-1}\mathcal{C}(\mathbf{a}_{\sigma}^{t_{1}},\mathbf{a}_{\sigma}^{t_{2}})(\tau)=0\) for \(|\tau|\in\mathcal{T}_{1}\cup\mathcal{T}_{2}\). It only remains to show that
\[\sum_{\sigma=0}^{p^{k}-1}\mathcal{C}(\mathbf{a}_{\sigma}^{t_{1}},\mathbf{a}_{ \sigma}^{t_{2}})(0)=\sum_{\sigma=0}^{p^{k}-1}\sum_{r=0}^{p^{m}-1}\omega_{q}^{ \big{(}a^{t_{1}}_{\sigma,r}-a^{t_{2}}_{\sigma,r}\big{)}}=0. \tag{24}\]
Let \(\oplus\) denote modulo-\(p\) addition; let \((t_{11},t_{12},\ldots,t_{1k+\delta})\) and \((t_{21},t_{22},\ldots,t_{2k+\delta})\) denote the \(p\)-ary representations of \(t_{1}\) and \(t_{2}\), respectively. From (5), for every \(r\),
\[a_{\sigma,r}^{t_{1}}-a_{\sigma,r}^{t_{2}}=\frac{q}{p}\Bigg{(}\sum_{\beta=1}^{k}\left(t_{1\beta}-t_{2\beta}\right)r_{\pi_{\beta}(m_{\beta})}+\sum_{\beta=1}^{\delta}\left(t_{1k+\beta}-t_{2k+\beta}\right)r_{m-\delta+\beta}\Bigg{)}.\]
Since \(t_{1}\neq t_{2}\), there is an index \(\beta_{0}\) with \(t_{1\beta_{0}}\neq t_{2\beta_{0}}\). Summing \(\omega_{q}^{\big{(}a_{\sigma,r}^{t_{1}}-a_{\sigma,r}^{t_{2}}\big{)}}\) over \(r=0,1,\ldots,p^{m}-1\), the sum factors over the \(p\)-ary digits of \(r\), and the factor corresponding to the digit with coefficient \(t_{1\beta_{0}}-t_{2\beta_{0}}\) equals \(\sum_{x=0}^{p-1}\omega_{p}^{(t_{1\beta_{0}}-t_{2\beta_{0}})x}=0\). Hence (24) holds, and \(A\) is a \((p^{k+\delta},p^{k},p^{m},p^{m-\delta}-1)\)-SZCCS.
The proposed SZCCS in _Theorem_ 1 is optimal since \(K=p^{k+\delta}=p^{k}(p^{m}/p^{m-\delta})=M(N/(Z+1))\).
**Remark 1**: _For \(\delta=0\), the proposed construction generates \((p^{k},p^{k},p^{m})\)-CCC. So, the constructions of \((p^{k},p^{k},p^{m})\)-CCC in [17, 18] become particular cases of the proposed construction._
**Remark 2**: _For \(p=2\), the proposed construction generates \((2^{k+\delta},2^{k},2^{m},2^{m-\delta})\)-ZCCS, so the available construction of ZCCS provided in [19] is a special case of the proposed construction._
**Remark 3**: _In [20], a direct construction of \((p^{\delta+1},p,p^{m},p^{m-\delta})\)-ZCCS is provided; the proposed construction with \(k=1\) generates a ZCCS with the same parameters._
**Remark 4**: _Since every SZCCS is a ZCCS, the construction of \((p^{n+v},p^{n},p^{m},p^{m-v})\)-ZCCS in [14] occurs as a special case of the proposed construction in Theorem 1._
**Remark 5**: _The construction of \((8,2,2^{m},2^{m-2}-1)\)-SZCCS is given in [15], which appears as a special case of the proposed construction when \(p=2,\delta=2,k=1\)._
We provide the following example to illustrate how optimal SZCCS is obtained from _Theorem_ 1.
**Example 1**: _For \(m=3,\delta=1\), let \(\pi_{1}=\pi\) be the identity permutation of \(\{1,2\}\), i.e. \(\pi(1)=1\) and \(\pi(2)=2\). Further, let us take \(p=3\) and \(q=3\) and define the MVF \(g:\{0,1,2\}^{3}\rightarrow\mathbb{Z}_{3}\) as \(g(x_{1},x_{2},x_{3})=x_{1}x_{2}\). Also, let us define the sets \(A^{t}=\{\mathbf{f}+\sigma_{1}x_{1}+t_{1}x_{2}+t_{2}x_{3}:\sigma_{1}\in\{0,1,2\}\}\), for \(t=0,1,\ldots,8\) with \(p\)-ary representation \((t_{1},t_{2})\). So, from Theorem 1, \(\mathrm{A}=\{A^{0},A^{1},\ldots,A^{8}\}\) is a \((9,3,27,8)\)-SZCCS. The codes \(A^{0},A^{1},\ldots,A^{8}\) are listed in Table I explicitly, where an integer \(i\) stands for \(\omega^{i}\) with \(\omega=\exp(2\pi\sqrt{-1}/3)\); the AACF graph of \(A^{0}\) is plotted in Fig. 1, and the ACCF graph of \(A^{2}\) and \(A^{8}\) in Fig. 2._
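To make the correlation sums used above concrete, the following short Python sketch evaluates the flock-level aperiodic correlation of \(q\)-ary phase sequences in the form of (20) and (24). It does not reproduce the generating MVF \(\mathbf{f}\) of _Theorem_ 1; instead, it checks a classical binary Golay complementary pair, so the sequences and parameters are purely illustrative.

```python
import numpy as np

# Aperiodic correlation of q-ary phase sequences a, b (entries in Z_q), following
# the sums in Eqs. (20) and (24):
#   C(a, b)(tau) = sum_{r=0}^{N-tau-1} omega_q^(a_r - b_{r+tau}),  omega_q = exp(2*pi*i/q)
def corr(a, b, tau, q):
    omega = np.exp(2j * np.pi / q)
    return sum(omega ** (int(a[r]) - int(b[r + tau])) for r in range(len(a) - tau))

# Flock-level sum: a complementary pair/set must give zero in the required zone.
def flock_sum(flock_a, flock_b, tau, q):
    return sum(corr(a, b, tau, q) for a, b in zip(flock_a, flock_b))

# Illustration with a classical binary (q = 2) Golay complementary pair written in
# phase form; this is NOT the SZCCS of Theorem 1, only a check of the sums above.
golay = [np.array([0, 0, 0, 1]), np.array([0, 0, 1, 0])]   # (+,+,+,-), (+,+,-,+)
for tau in range(1, 4):
    s = flock_sum(golay, golay, tau, q=2)
    print(f"tau = {tau}: flock AACF sum = {s.real:+.1f}")   # zero for every tau != 0
```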
## IV Comparison With Existing Works
In the literature, research on SZCCS is relatively recent, with only constructions available for \((8,2,2^{m},2^{m-2}-1)\)-SZCCS [15] and \((2,2,2^{m-1}+2^{v},2^{v}-1)\)-SZCCS [21]. Optimal SZCCS are utilized in the design of optimal training sequences for GSM systems, which can achieve superior channel estimation performance compared to other sequence classes [15, 21]. The proposed construction in _Theorem_ 1 produces optimal SZCCS with variable set size \((p^{k+\delta})\), flock size \((p^{k})\), sequence length \((p^{m})\), and ZCZ width \((p^{m-\delta}-1)\). Thus, due to the flexibility of parameters, the proposed SZCCS offers adaptability in generating optimal training matrices for GSM systems with multiple active transmit antennas [21].
Based on Definitions 2 and 3, it is clear that a ZCCS can be regarded as a particular case of an SZCCS, where \(\mathcal{T}_{2}=\phi\) and \(\mathcal{T}_{1}=\{1,2,\ldots,Z-1\}\). Moreover, according to _Lemma_ 1, optimality of an SZCCS implies optimality of the corresponding ZCCS. Consequently, as detailed in _Remarks_ 1-4, the existing constructions of CCC in [17, 18] and ZCCS in [14, 19, 20] can be viewed as special cases of the proposed construction.
## V Conclusion
An optimal SZCCS of prime power length has been constructed directly using MVF in this paper. Since SZCCS have both front-end and tail-end ZCZ widths, they are used in designing optimal training sequences for broadband GSM systems over frequency-selective channels. The proposed MVF-based construction generates optimal \((p^{k+\delta},p^{k},p^{m},p^{m-\delta}-1)\)-SZCCS, which generalizes many of the existing works on ZCCS and SZCCS.

TABLE I: Different codes of the SZCCS obtained from _Example_ 1.

Fig. 1: AACF of \(A^{0}\).
|
2304.00475 | Dirac and Majorana neutrino scattering by cosmic torsion in spatial-flat
FRW spacetime background | The possibility of distinguishing Dirac and Majorana fermions by cosmic
torsion in the spatial-flat FRW spacetime is discussed. The scattering
amplitudes of two types of fermions deviate from each other by the vector part
of torsion in non-minimal coupling case. The scattering of massive fermions by
cosmic torsion leads to a shift of final state energy distribution. The
difference between shift values of two types of fermions can be used to
distinguish fermion types of neutrinos. | Wei Lin, Xun Xue | 2023-04-02T07:20:54Z | http://arxiv.org/abs/2304.00475v1 | # Dirac and Majorana neutrino scattering by cosmic torsion in spatial-flat FRW spacetime background
###### Abstract
The possibility of distinguishing Dirac and Majorana fermions by cosmic torsion in the spatial-flat FRW spacetime is discussed. The scattering amplitudes of two types of fermions deviate from each other by the vector part of torsion in non-minimal coupling case. The scattering of massive fermions by cosmic torsion leads to a shift of final state energy distribution. The difference between shift values of two types of fermions can be used to distinguish fermion types of neutrinos.
## 1 Introduction
Neutrino oscillation experiments reveal that the neutrino has a tiny mass of about \(0.1\)eV[1]. However, the origin of the neutrino mass remains unclear. In the Standard Model, fermions acquire Dirac-type masses via the Yukawa coupling through the Higgs mechanism. If the neutrino had only a Dirac-type mass, its Yukawa coupling constants would have to be smaller than those of the charged leptons by about five orders of magnitude, matching the difference in their mass scales, which is regarded as unnatural[2]. The see-saw mechanism naturally explains the tiny mass of the left-handed neutrino by introducing a large Majorana mass for the right-handed neutrino, instead of assuming a huge difference between the neutrino and charged-lepton Yukawa couplings. However, the see-saw mechanism gives a Majorana-type mass to the left-handed neutrino, which is the only observable part of the neutrino at the Standard Model energy scale[3]. Whether the neutrino mass is of Dirac or Majorana type thus points to different mass generation mechanisms. A Majorana nature of the neutrino mass would provide a direct indication of new physics beyond the Standard Model. The neutrinoless double beta decay process is a direct way to distinguish a Majorana neutrino from a Dirac one. However, experiments have not yet given a conclusive result on the existence of neutrinoless double beta decay[4]. There are other proposals to determine the fermion type of the neutrino, e.g., the idea of distinguishing a Dirac fermion from a Majorana one by the scattering of the fermion in a gravitational field[5, 6, 7]. Lai and Xue considered the possibility of determining the fermion type of the neutrino using its scattering by a torsion field[8].
Torsion can be decomposed into a vector part, an axial vector part and a pure tensor part according to the irreducible representations of the global Lorentz group[9]. In the minimal coupling scheme, only the axial vector torsion can couple with the spinor field. The vector torsion can have a non-minimal coupling with the spinor field, while the pure tensor torsion cannot couple with the spinor field even in the non-minimal coupling scheme[10, 11]. The non-minimal coupling between torsion and the spinor field is universal due to the renormalization effects of the spinor field[12].
The idea of distinguishing fermion types by a torsion field was first proposed by Lai and Xue[8]: the scattering by a vector torsion field can distinguish the Dirac from the Majorana neutrino in the non-minimal coupling case in an asymptotic Minkowski spacetime background. General Relativity is a torsion-free gravitation theory and has been verified by observations from solar system to galactic scale phenomena[13, 14, 15]. However, the \(H_{0}\) tension problem, namely that the Hubble constant measured by cosmological model-independent standard-candle calibration in the late-time universe and that calculated from CMB data based on the \(\Lambda CDM\) model show a discrepancy of more than \(3\sigma\) in recent results[16], indicates that General Relativity, the gravitation theory that \(\Lambda CDM\) is based on, may need modifications from quantum gravity at cosmic
scale[17, 18]. Some modified cosmological models predict a non-zero torsion distribution[19, 20, 21, 22, 23, 24, 25, 26], which affects processes happening at cosmic scales, e.g., the propagation of cosmic neutrinos. The effect of the expanding universe must be taken into account for such cosmic scale processes, so the scattering of neutrinos should be treated in a FRW background rather than an asymptotic Minkowski spacetime.
## 2 Scattering amplitudes of fermions by torsion fields in spatial-flat FRW spacetime background
### Interaction Hamiltonian density
The scattering amplitude or S-matrix can be expressed as
\[{\bf S}={\bf T}\{e^{-i\int d^{4}x{\cal H}_{I}(x)}\} \tag{1}\]
in the QFT framework[27], where \({\cal H}_{I}\) is the interaction Hamiltonian. In the case of a fermion coupling with background gravity, the interaction Hamiltonian can be read from the Dirac action in curved spacetime,
\[S_{D}=\int d^{4}x\sqrt{-g}\bar{\psi}\left[\frac{i}{2}\gamma^{\mu}\overleftrightarrow{ D_{\mu}}\psi-m\psi\right]\;, \tag{2}\]
where \({\cal D}_{\mu}=\partial_{\mu}-iA_{\mu}\) is the Fock-Ivanenko covariant derivative in minimally coupling scheme, \(A_{\mu}=\frac{i}{2}A^{ab}_{\;\;\;\mu}S_{ab}\) is a Lorentz algebra valued 1-form known as the Lorentz connection or the spin connection and \(S_{ab}\) are the Lorentz generators in a given representation[9]. By convention, spacetime indices is denoted by the lowercase Greek letters, e.g. \(\mu,\nu,\rho,\cdots\) etc. while the tangent space indices by lowercase Latin letters, e.g. \(a,b,c,\cdots\) etc. but Latin letters \(i,j,k,\cdots\) are left for space indices of spacetime. Due to the equivalence principle, the tangent space can be equipped with local Lorentzian frame represented by tetrad fields \(h_{a}={h_{a}}^{\mu}\partial_{\mu}\) and the corresponding coframe \(h^{a}={h^{a}}_{\mu}dx^{\mu}\) satisfying \(g_{\mu\nu}=\eta_{ab}{h^{a}}_{\mu}{h^{b}}_{\nu}\) and \(\eta_{ab}=g_{\mu\nu}{h_{a}}^{\mu}{h_{b}}^{\nu}\) where \(\eta_{ab}=diag\left(1,-1,-1,-1\right)\) is the Minkowskian metric. For spinor field \(\psi\), \(S_{ab}\) are given by \(S_{ab}=\frac{i}{4}\left[\gamma_{a},\gamma_{b}\right]\) with \(\gamma_{a}\) the Dirac matrices. The Lorentz connection can be decomposed into \({A^{ab}_{\;\;\;\mu}=\tilde{A}^{ab}_{\;\;\mu}+K^{ab}_{\;\;\mu}}\) where the contortion tensor \({K^{ab}_{\;\;\;\mu}}\) is defined by \({K_{abc}=\frac{1}{2}\left(T_{bac}+T_{cab}-T_{abc}\right)}\) and \({\tilde{A}^{ab}_{\;\;\mu}}\) is the torsionless Levi-Civita spin connection which is determined completely by the choice of local tetrad fields expressed as \(\tilde{A}_{abc}=\frac{1}{2}\left(f_{bac}+f_{cab}-f_{abc}\right)\) and determines the curvature of spacetime completely in General Relativity, where the torsion fields \({T^{\rho}}_{\nu\mu}\) are defined by \({T^{\rho}}_{\nu\mu}={\Gamma^{\rho}}_{\mu\nu}-{\Gamma^{\rho}}_{\nu\mu}\) and \({f^{c}}_{ab}={h_{a}}^{\mu}{h_{b}}^{\nu}\) (\(\partial_{\nu}{h^{c}}_{\mu}-\partial_{\mu}{h^{c}}_{\nu}\)) are the structure coefficients of tetrad basis satisfying \(\left[h_{a},h_{b}\right]={f^{c}}_{ab}h_{c}\).
The Fock-Ivanenko covariant derivative contains both torsion field part and pure gravity part. To explore the effect of gravity and torsion separately, it would be better to decompose the Fock-Ivanenko covariant derivative into \({\cal D}_{\mu}=\tilde{\cal D}_{\mu}-iK_{\mu}\) where \(\tilde{\cal D}_{\mu}=\partial_{\mu}-\frac{i}{2}\tilde{A^{ab}}_{\mu}S_{ab}\) is the covariant derivative for the torsionless Levi-Civita connection and the effects of torsion are fully contained in the contortion part \(K_{\mu}=\frac{1}{2}{K^{ab}_{\;\;\mu}}S_{ab}\). In the minimally coupling case, the Dirac action (2) can be reduced to
\[S_{D}=\int d^{4}x\sqrt{-g}\bar{\psi}\left[i\left(\gamma^{\mu}\tilde{\cal D}_{ \mu}\psi-\frac{3}{4}i\gamma^{a}{\cal A}_{a}\gamma_{5}\psi\right)-m\psi\right] \tag{3}\]
where \(\gamma_{5}=i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\) and \({\cal A}^{a}=\frac{1}{6}\varepsilon^{abcd}T_{bcd}\) is the axial vector part of the torsion tensor[10]. The vector part \({\cal V}_{a}={T^{b}}_{ba}\) does not couple with the spinor field in the minimal coupling case. However, the non-minimal coupling is universal, because the renormalization counterterms of the coupling between the matter field and torsion can generate the non-minimal coupling even if the original coupling is minimal at tree level[10, 11, 12]. Under the constraints of covariance, locality, dimensionality and parity preservation,
the action of a fermion in a curved spacetime background with torsion is
\[S_{D}=\int d^{4}x\sqrt{-g}\bar{\psi}\left[i\left(\gamma^{\mu}\tilde{\cal D}_{\mu} \psi+i\eta_{1}\gamma^{a}{\cal V}_{a}\psi-\frac{3}{4}i\eta_{2}\gamma^{a}{\cal A}_ {a}\gamma_{5}\psi\right)-m\psi\right]\;, \tag{4}\]
where the possible allowed non-minimal terms reduce to the minimal coupling case when \(\eta_{1}=0\) and \(\eta_{2}=1\)[11].
The equation of motion (EoM) of the spinor field corresponding to the action (4) can be easily read as
\[i\left(\gamma^{\mu}\tilde{\cal D}_{\mu}\psi+i\eta_{1}\gamma^{a}{\cal V}_{a}\psi -\frac{3}{4}i\eta_{2}\gamma^{a}{\cal A}_{a}\gamma_{5}\psi\right)-m\psi=0. \tag{5}\]
Equation (5) is hard to solve for a general torsion evolution. Fortunately, the solutions of the spinor field for some special spacetimes without torsion have been found[28]. Moreover, since General Relativity has passed almost all observational tests[13, 14, 15], the effects of the torsion field should be relatively small compared to those of the spacetime metric. Thus, we may treat the torsion terms as a perturbation to a torsion-free theory. The torsion-free Dirac action is
\[S_{0}=\int d^{4}x\sqrt{-g}\bar{\psi}\left[i\gamma^{\mu}\tilde{\cal D}_{\mu} \psi-m\psi\right] \tag{6}\]
with the corresponding equation of motion is
\[i\gamma^{\mu}\tilde{\cal D}_{\mu}\psi-m\psi=0. \tag{7}\]
The perturbation terms in the action can thus be read as
\[S_{I}=\int d^{4}x\sqrt{-g}\bar{\psi}\left[-\eta_{1}\gamma^{a}{\cal V}_{a}\psi+ \frac{3}{4}\eta_{2}\gamma^{a}{\cal A}_{a}\gamma_{5}\psi\right]=\int d^{4}x{ \cal L}_{I} \tag{8}\]
Since there are no time derivative terms of the spinor field in the interaction Lagrangian given by (8), the interaction Hamiltonian density has the property
\[\int d^{4}x{\cal H}_{I}\left(x\right)=-\int d^{4}x{\cal L}_{I}\left(x\right)=- S_{I}. \tag{9}\]
The S-matrix can then be calculated by
\[{\bf S}={\bf T}\{e^{iS_{I}}\}\;. \tag{10}\]
### Spinor fields in spatial-flat FRW spacetime
Since we take torsion as a perturbation to the torsion-free theory in a fixed spacetime background, the spatial-flat FRW spacetime in our case, we need to start from the solution of Eq. (7) in the spatial-flat FRW background. The spatial-flat FRW metric is
\[ds^{2}=dt^{2}-a^{2}\left(t\right)\left(dx^{2}+dy^{2}+dz^{2}\right) \tag{11}\]
Substituting the metric (11) into equation (7), we can get the equation of motion for spinor field in FRW spacetime,
\[i\left(\gamma^{0}\partial_{t}\psi+a^{-1}\vec{\gamma}\cdot\vec{\nabla}\psi+ \frac{3}{2}\frac{\dot{a}}{a}\gamma^{0}\psi\right)-m\psi=0\;, \tag{12}\]
where \(\vec{\gamma}\cdot\vec{\nabla}\) is just \(\gamma^{i}\partial_{i}\). The general solution of Eq.(12) has the form[29]
\[\psi_{D}=\frac{1}{\left(2\pi\right)^{3}}\sum_{s}\int d^{3}\vec{p}_{C}\frac{1 }{\sqrt{2E_{\vec{p}_{C}a}a^{3}}}\left[A_{s}(\vec{p}_{C},t)U_{s}(\vec{p}_{C},t) e^{-i\Omega(\vec{p}_{C})}+B^{\dagger}{}_{s}(\vec{p}_{C},t)V_{s}(\vec{p}_{C},t)e^{i \Omega(\vec{p}_{C})}\right] \tag{13}\]
where phase factor of the exponential is \(\Omega\left(\vec{p}_{C}\right)=\int_{-\infty}^{t}dt^{\prime}E_{\vec{p}_{C}a} \left(t^{\prime}\right)-\vec{p}_{C}\cdot\vec{x}\) and \(\vec{p}_{C}\) is the comoving coordinate 3-momentum rather than a physical one. The relation between the comoving 3-momentum
and the physical one is \(\vec{p}_{P}=\frac{\vec{p}_{C}}{a\left(t\right)}\) so that the energy \(E_{\vec{p}_{C}a}\) is given by \(E_{\vec{p}_{C}a}=\left(\frac{\left|\vec{p}_{C}\right|^{2}}{a\left(t\right)^{2} }+m^{2}\right)^{1/2}\). The time-dependent four-component spinors \(U_{s}(\vec{p}_{C},t)\) and \(V_{s}(\vec{p}_{C},t)\) are \(U_{s}(\vec{p}_{C},t)=u_{s}\left(\vec{p}_{C}/a\right)\) and \(V_{s}(\vec{p}_{C},t)=v_{s}\left(\vec{p}_{C}/a\right)\) which satisfy \(\left(\gamma^{a}p_{a}-m\right)u\left(\vec{p}\right)=0\) and \(\left(\gamma^{a}p_{a}+m\right)v\left(\vec{p}\right)=0\) with normalization \(\bar{u}_{r}\left(p\right)u_{s}\left(p\right)=2m\delta_{rs}\) and \(\bar{v}_{r}\left(p\right)v_{s}\left(p\right)=-2m\delta_{rs}\), while the time-dependent annihilation operators \(A_{s}(\vec{p},t)\) and \(B_{s}(\vec{p},t)\) for particles and anti-particles with spin \(s\) and momentum \(\vec{p}\) are the time-dependent (as well as \(\left|\vec{p}\right|\) dependent) complex linear combination of \(A_{s}(\vec{p})\) and \(B_{-s}{}^{\dagger}(-\vec{p})\) and \(B_{s}(\vec{p})\) and \(A_{-s}{}^{\dagger}(-\vec{p})\), i.e.
\[A_{s}(\vec{p},t)=D_{1,1}\left(\left|\vec{p}\right|,t\right)A_{s}(\vec{p})+D_{1,-1}\left(\left|\vec{p}\right|,t\right)B_{-s}{}^{\dagger}(-\vec{p}) \tag{14}\]
and
\[B_{s}(\vec{p},t)=D_{-1,-1}{}^{*}\left(\left|\vec{p}\right|,t\right)B_{s}(\vec {p})+D_{-1,1}{}^{*}\left(\left|\vec{p}\right|,t\right)A_{-s}{}^{\dagger}(-\vec {p})\;, \tag{15}\]
with the non-vanishing anti-commutators of the operators \(A,B\),
\[\left\{A_{r}\left(\vec{p}\right),A^{\dagger}{}_{s}\left(\vec{q}\right)\right\} =\left\{B_{r}\left(\vec{p}\right),B^{\dagger}{}_{s}\left(\vec{q}\right)\right\} =\left(2\pi\right)^{3}\delta^{3}\left(\vec{p}-\vec{q}\right)\delta_{rs}\;. \tag{16}\]
The time evolution of complex linear combination factor \(D_{a,b}\left(a,b=1,-1\right)\) is related to a factor \(S\left(\left|\vec{p}_{C}\right|,t\right)=\dot{a}a^{-2}\left[2E_{\vec{p}_{C}a} {}^{2}\right]^{-1}m\left|\vec{p}_{C}\right|\). The evolution of \(D_{a,b}\) leads to the evolution of average number of particles \(\left\langle N_{\vec{p}}\left(t\right)\right\rangle\). The expanding rate of the universe \(\dot{a}\) as well as the masses of neutrinos are small so that the factor \(S\left(\left|\vec{p}_{C}\right|,t\right)\) can be neglected. In this case, \(D_{a,b}\left(\left|\vec{p}\right|,t\right)\) will remain to be a constant, i.e. \(D_{a,b}\left(\left|\vec{p}\right|,t\right)=\delta_{ab}\)[29]. Hence, the Dirac spinor field in spatial-flat FRW spacetime can be written as
\[\psi_{D}=\frac{1}{\left(2\pi\right)^{3}}\sum_{s}\int d^{3}\vec{p}_{C}\frac{1} {\sqrt{2E_{\vec{p}_{C}a}a^{3}}}\left[A_{s}(\vec{p}_{C})u_{s}\left(\vec{p}_{C}/ a\right)e^{-i\Omega\left(\vec{p}_{C}\right)}+B^{\dagger}{}_{s}(\vec{p}_{C})v_{s} \left(\vec{p}_{C}/a\right)e^{i\Omega\left(\vec{p}_{C}\right)}\right]. \tag{17}\]
To have well-defined one-particle states, we may assume the scale factor \(a\left(t\right)\) varies sufficiently smoothly and approaches constant values \(a_{i}\) and \(a_{f}\) sufficiently fast as \(t\rightarrow-\infty\) and \(t\rightarrow+\infty\), respectively[30]. The one-particle states with momentum \(\vec{p}_{P}=\frac{\vec{p}_{C}}{a\left(t\right)}\) are created via the creation operator with the comoving momentum, \(A_{s}{}^{\dagger}(\vec{p}_{C})\)[30]. Moreover, a convenient Lorentz-invariant normalization in a finite box, \(\left\langle\mathbf{p}_{1}\mid\mathbf{p}_{2}\right\rangle^{\left(R\right)}=2E_{\mathbf{p}_{1}}V\delta_{\mathbf{p}_{1},\mathbf{p}_{2}}\)[31], can be employed. In the infinite-volume limit, we take \(V\rightarrow\left(2\pi\right)^{3}\delta^{3}(0)\), hence at any fixed time \(t\) the normalization is given via
\[\left\langle\mathbf{p}_{1}\mid\mathbf{p}_{2}\right\rangle=2E_{\mathbf{p}_{1}} \left(2\pi\right)^{3}\delta^{3}\left(\mathbf{p}_{1}-\mathbf{p}_{2}\right)=2E_{ \mathbf{p}_{1}}a\left(t\right)^{3}\left(2\pi\right)^{3}\delta^{3}\left( \mathbf{p}_{1}{}_{C}-\mathbf{p}_{2}{}_{C}\right) \tag{18}\]
Thus, the one-particle state at the fixed time \(t\) should be given by \(\left|\vec{p}_{P},s\right\rangle=\left|\frac{\vec{p}_{C}}{a},s\right\rangle= \sqrt{2E_{\vec{p}a}a^{3}}A_{s}{}^{\dagger}\left(\vec{p}_{C}\right)\left|0\right\rangle\). If we set the initial state as one particle with momentum \(\vec{k}_{P}\) and spin \(s\) and the final state is a particle with momentum \(\vec{k}_{P}^{\prime}\) and spin \(r\), the initial and final state can be written as
\[\left|i\right\rangle=\left|\vec{k}_{P},s\right\rangle=\left|\frac{\vec{k}_{C}}{ a_{i}},s\right\rangle=\sqrt{2E_{\vec{k}a_{i}}a_{i}{}^{3}}A_{s}{}^{\dagger}\left( \vec{k}_{C}\right)\left|0\right\rangle \tag{19}\]
and
\[\left|f\right\rangle=\left|\vec{k}_{P}^{\prime},r\right\rangle=\left|\frac{\vec{k}_{C}^{\prime}}{a_{f}},r\right\rangle=\sqrt{2E_{\vec{k}^{\prime}a_{f}}a_{f}{}^{3}}A_{r}{}^{\dagger}\left(\vec{k}_{C}^{\prime}\right)\left|0\right\rangle. \tag{20}\]
### Scattering amplitude calculation
Now we are ready to calculate the scattering amplitude. Since the torsion field in (8) is relatively small, it is convenient to expand the S-matrix (10) to the 1st order i.e.
\[\mathbf{S}\simeq 1+i\mathbf{T}\left\{S_{I}\right\} \tag{21}\]
and the S-matrix element can thus be calculate via
\[\left[\mathbf{S}\right]_{fi}=\left\langle f\left.\left|1\right|\right.i\right\rangle +i\left\langle f\left.\left|\mathbf{T}\left[S_{I}\right]\right|\right.i\right\rangle \tag{22}\]
Substituting the initial and final states (19) and (20), the interaction action (8), and the Dirac spinor field (17) into (22), the first term of (22) is
\[\left\langle f\left.\left|1\right|\right.i\right\rangle=\alpha\delta^{3} \left(\vec{k}_{C}-\vec{k}_{C}^{\prime}\right)\delta_{rs} \tag{23}\]
where \(\alpha=2(2\pi)^{3}\sqrt{E_{\vec{k}a_{i}}a_{i}^{3}E_{\vec{k}^{\prime}a_{f}}a_{f}^{3}}\) is a factor related to the normalization, and the second term of (22) is
\[i\left\langle f\left.\left|\mathbf{T}\left[S_{I_{D}}\right]\right|\right.i \right\rangle=\frac{\alpha}{2(2\pi)^{3}}\int d^{4}x\frac{1}{\sqrt{E_{\vec{k}_{C }}^{\prime}a}E_{\vec{k}_{C}a}}\left[e^{i\left(\int_{-\infty}^{t}dt^{\prime} \left(E_{\vec{k}_{C}a}-E_{\vec{k}_{C}a}\right)\left(t^{\prime}\right)-\left( \vec{k}_{C}-\vec{k}_{C}\right)\cdot\vec{x}\right)}\bar{u}_{r}\left(\vec{k}_{C} ^{\prime}/a\right)Xu_{s}\left(\vec{k}_{C}/a\right)\right] \tag{24}\]
where \(X=-i\eta_{1}\gamma^{a}\mathcal{V}_{a}+\frac{3}{4}i\eta_{2}\gamma^{a} \mathcal{A}_{a}\gamma_{5}\) is the interaction vertex. The cosmic torsion fields satisfying cosmological principle can have only two independent non-zero components[32],
\[T_{ijk}=-F(t)\epsilon_{ijk} \tag{25}\]
and
\[T^{i}{}_{j0}=\mathcal{K}(t)\delta_{j}^{i}\;. \tag{26}\]
The vector and axial vector parts of torsion thus have the form
\[\mathcal{V}_{0}=3\mathcal{K}\left(t\right)=\mathcal{V}_{0}\left(t\right), \mathcal{V}_{i}=0 \tag{27}\]
and
\[\mathcal{A}^{0}=-F\left(t\right),\mathcal{A}^{i}=0. \tag{28}\]
Because the 0th components of the vector and axial vector parts of torsion, which are the only non-zero ones, are time dependent and space independent, the S-matrix element (22) for the Dirac spinor field can be simplified as
\[\left[\mathbf{S}_{D}\right]_{fi}=\alpha\delta^{3}\left(\vec{k}_{C}-\vec{k}_{C }^{\prime}\right)\left[\delta_{rs}-i\eta_{1}\int dt\mathcal{V}_{0}\left(t \right)\delta_{rs}+\int dt\bar{u}_{r}\left(\vec{k}_{C}/a\right)\left[i\eta_{2} \frac{3\mathcal{A}_{0}\gamma^{0}\gamma_{5}}{8E_{\vec{k}_{C}a}}\right]u_{s} \left(\vec{k}_{C}/a\right)\right]\;. \tag{29}\]
The scattering amplitude is proportional to \(\delta^{3}\left(\vec{k}_{C}-\vec{k}_{C}^{\prime}\right)\), which means that the particle keeps its comoving momentum after scattering, so the scattering amounts to a redshift of the particle. The \(\vec{k}_{C}\) dependence in the last term of the scattering amplitude implies that the interaction rate differs for different initial momenta, which may change the temperature spectrum after scattering via torsion, as discussed in the next section.
We now turn to the Majorana case. For a Majorana spinor field, \(\psi_{M}\) and \(\bar{\psi}_{M}\) are not independent, i.e. \(\bar{\psi}_{M}=\psi_{M}{}^{T}\mathcal{C}\). The action of the Majorana spinor is[33]
\[S_{M}=\frac{1}{2}\int d^{4}x\sqrt{-g}\bar{\psi}_{M}\left[i\left(\gamma^{\mu} \tilde{\mathcal{D}}_{\mu}\psi_{M}+i\eta_{1}\gamma^{a}\mathcal{V}_{a}\psi_{M}- \frac{3}{4}i\eta_{2}\gamma^{a}\mathcal{A}_{a}\gamma_{5}\psi_{M}\right)-m\psi_{ M}\right] \tag{30}\]
which is one-half of the Dirac one formally where \(\psi_{M}\) is expanded by
\[\psi_{M}=\frac{1}{\left(2\pi\right)^{3}}\sum_{s}\int d^{3}\vec{p}_{C}\frac{1}{ \sqrt{2E_{\vec{p}_{C}}a^{3}}}\left[A_{s}(\vec{p}_{C})u_{s}\left(\vec{p}_{C}/a \right)e^{-i\Omega\left(\vec{p}_{C}\right)}+A^{\dagger}{}_{s}(\vec{p}_{C})v_{s }\left(\vec{p}_{C}/a\right)e^{i\Omega\left(\vec{p}_{C}\right)}\right] \tag{31}\]
rather than \(\psi_{D}\) given in (17). Using the action \(S_{M}\) and the Majorana spinor \(\psi_{M}\), we can calculate the scattering amplitude as we did for the Dirac one. The first-order term is
\[i\left\langle f\left.\left|\mathbf{T}\left[S_{I_{M}}\right]\right|\right.i \right\rangle=\frac{1}{4}\alpha\delta^{3}\left(\vec{k}_{C}-\vec{k}_{C}^{ \prime}\right)\int dt\left[\frac{1}{E_{\vec{k}_{C}a}}\left(\bar{u}_{r}\left( \vec{k}_{C}^{\prime}/a\right)Xu_{s}\left(\vec{k}_{C}/a\right)-\bar{v}_{s} \left(\vec{k}_{C}/a\right)Xv_{r}\left(\vec{k}_{C}^{\prime}/a\right)\right)\right] \tag{32}\]
Here we use the fact that the vertex depends only on time, \(X=X\left(t\right)\). Using the relations \(v=C\bar{u}^{T}\) and \(C=-i\gamma^{2}\gamma^{0}\), so that \(C^{\dagger}=i\gamma^{2}\gamma^{0}=-C\), we have
\[\bar{v}_{s}(k)\gamma^{a}v_{r}(k^{\prime})=u_{s}(k)^{T}C\gamma^{a}C\bar{u}_{r}^{ T}(k^{\prime})=u_{s}(k)^{T}\gamma^{a}{}^{T}\bar{u}_{r}^{T}(k^{\prime})=\bar{u}_{r}(k^ {\prime})\gamma^{a}u_{s}(k) \tag{33}\]
and
\[\bar{v}_{s}(k)\gamma^{a}\gamma_{5}v_{r}(k^{\prime})=u_{s}(k)^{T}C\gamma^{a}\gamma_{5}C\bar{u}_{r}^{T}(k^{\prime})=-u_{s}(k)^{T}C\gamma^{a}CC\gamma_{5}C\bar{u}_{r}^{T}(k^{\prime})=u_{s}(k)^{T}\gamma^{a}{}^{T}\gamma_{5}{}^{T}\bar{u}_{r}^{T}(k^{\prime})=-u_{s}(k)^{T}\gamma_{5}{}^{T}\gamma^{a}{}^{T}\bar{u}_{r}^{T}(k^{\prime})=-\bar{u}_{r}(k^{\prime})\gamma^{a}\gamma_{5}u_{s}(k)\;. \tag{34}\]
Then we have
\[i\left\langle f\left|{\bf T}\left[S_{I_{M}}\right]\right|i\right\rangle= \frac{1}{2}\alpha\delta^{3}\left(\vec{k}_{C}-\vec{k}_{C}^{\prime}\right)\int dt \left[\frac{1}{2E_{\vec{k}_{C}a}}\left(\bar{u}_{r}\left(\vec{k}_{C}^{\prime}/a \right)X^{\prime}u_{s}\left(\vec{k}_{C}/a\right)\right)\right] \tag{35}\]
where \(X^{\prime}=\frac{3}{4}i\eta_{2}\gamma^{a}{\cal A}_{a}\gamma_{5}\). In the Majorana case, the axial vector part of torsion contributes to the scattering amplitude in the same way as in the Dirac case, while the vector torsion has no effect on the scattering amplitude, which differs from the Dirac case. In fact, for an interaction Hamiltonian density \(\bar{\psi}\Gamma\psi\), the vertex for Dirac field scattering is \(\Gamma\), while the effective vertex for Majorana field scattering is \(\Gamma^{\prime}=\frac{1}{2}\left(\Gamma+C\Gamma^{T}C^{-1}\right)\), since the effective vertex for Majorana field scattering must be invariant under the charge-conjugation transformation, \(\Gamma^{\prime}=C\Gamma^{\prime}C^{-1}\)[34]. The factor \(\gamma^{a}\gamma_{5}\) is invariant under charge conjugation, i.e. \(C\left(\gamma^{a}\gamma_{5}\right)^{T}C^{-1}=\gamma^{a}\gamma_{5}\), while the factor \(\gamma^{a}\) is not, i.e. \(C\left(\gamma^{a}\right)^{T}C^{-1}=-\gamma^{a}\), so the field coupled to \(\gamma^{a}\), which is the vector part of torsion in our case, does not contribute to the scattering amplitude. The total Majorana scattering amplitude is
\[\left[{\bf S}_{M}\right]_{fi}=\alpha\delta^{3}\left(\vec{k}_{C}-\vec{k}_{C}^{ \prime}\right)\left[\delta_{rs}+\frac{3i\eta_{2}}{8}\int dt\bar{u}_{r}\left( \vec{k}_{C}/a\right)\left[\frac{{\cal A}_{0}\gamma^{0}\gamma_{5}}{E_{\vec{k}_ {C}a}}\right]u_{s}\left(\vec{k}_{C}/a\right)\right] \tag{36}\]
which differs from (29) by the vector part of torsion.
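The different behaviour of the vector and axial vector vertices under charge conjugation, which is the origin of the difference between (29) and (36), can be checked numerically. The following Python sketch builds the gamma matrices in the Dirac representation and verifies \(C(\gamma^{a})^{T}C^{-1}=-\gamma^{a}\) and \(C(\gamma^{a}\gamma_{5})^{T}C^{-1}=\gamma^{a}\gamma_{5}\); the convention \(C=-i\gamma^{2}\gamma^{0}\) used here is an assumption of the sketch, not necessarily the one adopted in the references.

```python
import numpy as np

# Dirac-representation gamma matrices (metric signature (+,-,-,-)).
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
g1 = np.block([[0 * I2, sx], [-sx, 0 * I2]])
g2 = np.block([[0 * I2, sy], [-sy, 0 * I2]])
g3 = np.block([[0 * I2, sz], [-sz, 0 * I2]])
g5 = 1j * g0 @ g1 @ g2 @ g3

# Charge-conjugation matrix; the sign convention C = -i g2 g0 is an assumption.
C = -1j * g2 @ g0
Cinv = np.linalg.inv(C)

for a, g in enumerate([g0, g1, g2, g3]):
    # Vector vertex flips sign: C (gamma^a)^T C^{-1} = -gamma^a
    assert np.allclose(C @ g.T @ Cinv, -g)
    # Axial vertex is invariant: C (gamma^a gamma_5)^T C^{-1} = +gamma^a gamma_5
    assert np.allclose(C @ (g @ g5).T @ Cinv, g @ g5)
print("gamma^a flips sign, gamma^a gamma_5 is invariant under charge conjugation")
```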
## 3 Shift of energy distribution
As mentioned above, the scattering amplitude is \(\vec{k}_{C}\)-dependent, which causes a shift of the energy distribution after scattering. Notice that the scattering amplitudes (29) and (36) can be written generally as
\[\left[{\bf S}\right]_{fi}=\alpha\delta^{3}\left(\vec{k}_{C}-\vec{k}_{C}^{ \prime}\right)M_{fi} \tag{37}\]
The factor \(\alpha\) is related to the normalization. In torsion-free spacetime, the scattering amplitude is \(\left[{\bf S}\right]_{fi}=\alpha\delta^{3}\left(\vec{k}_{C}-\vec{k}_{C}^{\prime}\right)\). The effect of torsion on the rate of redshift is contained in the factor \(M_{fi}\). The scattering rate \(W_{fi}\) from the initial state \(\left|i\right\rangle\) to the final state \(\left|f\right\rangle\) is proportional to \(\left|\left[{\bf S}\right]_{fi}\right|^{2}\)
\[W_{fi}\propto\alpha^{2}\left(\delta^{3}\left(\vec{k}_{C}^{\prime}-\vec{k}_{C} \right)\right)^{2}\left|M_{fi}\right|^{2} \tag{38}\]
Experimentally, the initial spin is usually unknown and the detector receives all final spin configurations. Thus, to compare with experiment, we should sum \(W_{fi}\) over all final spin configurations and average over the initial ones. We define
\[\overline{W}_{fi}=\frac{1}{2}\sum_{\rm initial\ spins}\sum_{\rm final\ spins}W_{fi} \propto\alpha^{2}\left(\delta^{3}\left(\vec{k}_{C}^{\prime}-\vec{k}_{C} \right)\right)^{2}\overline{\left|M_{fi}\right|^{2}} \tag{39}\]
where
\[\overline{\left|M_{fi}\right|^{2}}=\frac{1}{2}\sum_{\rm initial\ spins}\sum_{ \rm final\ spins}\left|M_{fi}\right|^{2} \tag{40}\]
If the initial energy distribution is \(I_{i}\left(E_{i},T\right)\), the final energy distribution will be \(I_{f}\left(E,T\right)=\bar{W}_{fi}I_{i}\left(E_{f}\left(E\right),T\right)\), where \(E_{f}\left(E_{i}\right)\) gives the dependence of the final energy on the initial energy. Thus, if the final energy distribution in the torsion-free background is \(I_{0}\left(E,T\right)\), the final energy distribution in the spacetime with cosmic torsion becomes \(I_{T}\left(E,T\right)=\overline{\left|M_{fi}\right|^{2}}I_{0}\left(E,T\right)\). For the Dirac field, from the scattering amplitude (29) we obtain
\[\overline{\left|M_{Dfi}\right|^{2}}=1+\mathcal{V}\text{ terms}+\mathcal{A}\text{ terms} \tag{41}\]
where
\[\mathcal{V}\text{ terms}=\eta_{1}{}^{2}\bigg{(}\int dt\mathcal{V}_{0}\bigg{)}^ {2} \tag{42}\]
and the \(\mathcal{A}\) terms are
\[-\frac{9}{32}\eta_{2}{}^{2}\int dt^{\prime}\int dt\left(\frac{1}{E_{\bar{E}_{C a^{\prime}}}\left[t^{\prime}\right]}\mathcal{A}_{0}\left[t^{\prime}\right] \frac{1}{E_{\bar{E}_{C}a}\left[t\right]}\mathcal{A}^{0}\left[t\right]k_{P;a^{ \prime}{}_{c}}\left[t^{\prime}\right]k_{P;a^{\prime}{}_{c}}\left[t^{\prime} \right]k_{P;a}{}^{c}\left[t\right]+\frac{1}{E_{\bar{E}_{C}a^{\prime}}\left[t^{ \prime}\right]}\mathcal{A}_{0}\left[t^{\prime}\right]\frac{1}{E_{\bar{E}_{C}a }\left[t\right]}\mathcal{A}^{0}\left[t\right]m^{2}\right)\] \[+\frac{9}{16}\eta_{2}{}^{2}\int dt^{\prime}\int dt\left(\mathcal{ A}_{0}\left[t^{\prime}\right]\mathcal{A}_{0}\left[t\right]\right). \tag{43}\]
For Majorana case, \(\overline{\left|M_{fi}\right|^{2}}\) differs from the Dirac one by the term related to the vector part of torsion, i.e.
\[\overline{\left|M_{Mfi}\right|^{2}}=1+\mathcal{A}\text{ terms} \tag{44}\]
Although the neutrino is massless in the Standard Model, the discovery of neutrino oscillations proved that the neutrino is not exactly massless, even though its mass is very small. The upper limit on the absolute mass scale of neutrinos is 1.1 eV (90% confidence level) according to a recent experiment[2]. Typically, the energy of a neutrino is much larger than its mass, i.e. \(E\gg m\). In this case, \(E_{\vec{k}_{C}a}\) can be expanded as
\[E_{\vec{k}_{C}a}=\left(\frac{\left|\vec{k}_{C}\right|^{2}}{a(t)^{2}}+m^{2}\right)^{1/2}\simeq\frac{\left|\vec{k}_{C}\right|}{a\left(t\right)}+\frac{m^{2}a\left(t\right)}{2\left|\vec{k}_{C}\right|} \tag{45}\]
and therefore the \(\mathcal{A}\) terms can be simplified as
\[\mathcal{A}\text{ terms}\simeq\frac{9}{32}\eta_{2}{}^{2}\left(2\mathcal{A}_{0} \left[t^{\prime}\right]\mathcal{A}_{0}\left[t\right]-\frac{m^{2}}{2E_{f}{}^{2 }a_{f}{}^{2}}(a\left(t\right)+a\left(t^{\prime}\right))^{2}\mathcal{A}_{0} \left[t^{\prime}\right]\mathcal{A}^{0}\left[t\right]\right)\text{.} \tag{46}\]
The dependence of \(\overline{\left|M_{fi}\right|^{2}}\) on \(E_{f}\) also depends on the mass \(m\) and the axial vector part of torsion \(\mathcal{A}\). This means that if the particle mass is zero or there is no axial vector torsion, the interaction rates for particles with different energies are the same, so the final energy distribution after scattering by torsion is the same as in the torsion-free case. In the non-minimal coupling case with \(\eta_{1}\neq 0\), the vector part of torsion leaves the \(\left|\vec{k}_{C}\right|^{2}\)-dependent term in \(\overline{\left|M_{fi}\right|^{2}}\) the same for both the Dirac and Majorana cases. That is, even though the \(\left|\vec{k}_{C}\right|^{2}\)-dependent term that affects the final energy distribution is the same for both types of spinor field, the final energy distributions of the two cases differ for the same torsion-free final energy distribution.
If the final energy distribution in the torsion-free case is the one given by the Planck formula for black-body radiation,
\[I_{0}\left(E,T\right)=\left(\frac{2E^{3}}{\left(2\pi\right)^{3}}\right)\frac{1 }{e^{\frac{E}{k_{B}T}}-1}. \tag{47}\]
The distribution reaches its peak at \(E_{0Max}\) given by
\[\frac{dI_{0}\left(E,T\right)}{dE}|_{E=E_{0Max}}=0 \tag{48}\]
which can be written as
\[\left(3-x\right)e^{x}=3, \tag{49}\]
where \(x\) is defined as \(x=\dfrac{E}{k_{B}T}\). The equation (49) has a positive solution \(x_{0}\). Therefore, the \(E_{0\,Max}\) can be given as
\[E_{0Max}=x_{0}k_{B}T \tag{50}\]
which is known as the Wien's displacement law.
Then we turn on the torsion field. In the Majorana case the distribution shifts to \(I_{M}\left(E,T\right)=\overline{\left|M_{Mfi}\right|^{2}}\,I_{0}\left(E,T\right)\), whose peak is reached at \(E_{M\,Max}\), given by \(\dfrac{dI_{M}}{dE}=0\), which can be written as
\[\left(3-x\right)e^{x}=3+\zeta\left(x,T\right)\left(e^{x}-1-xe^{x}\right) \tag{51}\]
where the shift factor
\[\zeta\left(x,T\right)=\dfrac{\xi}{\left(1+\chi\right)\left(k_{B}T\right)^{2} x^{2}} \tag{52}\]
and the factor
\[\chi=\dfrac{9}{16}{\eta_{2}}^{2}\int dt^{\prime}\int dt\left(\mathcal{A}_{0} \left[t^{\prime}\right]\mathcal{A}_{0}\left[t\right]\right) \tag{53}\]
and
\[\xi=\dfrac{9m^{2}}{64{a_{f}}^{2}}{\eta_{2}}^{2}\int dt^{\prime}\int dt\left( \left(a\left(t\right)+a\left(t^{\prime}\right)\right)^{2}\mathcal{A}_{0}\left[ t^{\prime}\right]\mathcal{A}^{0}\left[t\right]\right). \tag{54}\]
Equation (51) is hard to solve analytically, and its solution \(x_{M}\) depends on the temperature rather than being a constant like \(x_{0}\). However, it can be noticed that the left-hand side of equation (51), together with the first term 3 on the right-hand side, is just equation (49), whose solution is the constant \(x_{0}\). In theories where the cosmic torsion plays the role of part of the dark energy, the cosmic torsion is of the order of the Hubble parameter \(H\)[24, 25]. If the scale factor \(a\propto t^{n}\), the integral \(\int_{t_{i}}^{t_{f}}H\left(t\right)dt\propto\ln\left(\dfrac{t_{f}}{t_{i}}\right)H_{0}t_{0}\), where \(H_{0}\) is the Hubble constant if the final time is today, \(t_{0}\). Since \(H_{0}t_{0}\) is of order 1, the integral \(\int_{t_{i}}^{t_{f}}H\left(t\right)dt\) is of order 1. Hence, the factor \(\chi\) is of order 1 and the factor \(\xi\) is of order \(m^{2}\), so the shift factor \(\zeta\left(x,T\right)\) is of order \(\dfrac{m^{2}}{E^{2}}\), which is much less than 1 in most cases for neutrinos. Therefore, the second term on the right-hand side of equation (51) is small. Thus, the difference between the solution \(x_{M}\) of Eq. (51) and the solution \(x_{0}\) of (49) is expected to be small. We may define the difference as \(\Delta_{M}=x_{M}-x_{0}\) and rewrite equation (51) in terms of \(x_{0}\) and \(\Delta_{M}\) to first order in \(\Delta_{M}\) as
\[\Delta_{M}e^{x_{0}}+\zeta\left(x_{0},T\right)\left(e^{x_{0}}-1-x_{0}e^{x_{0}} \right)\,=0 \tag{55}\]
and the \(\Delta_{M}\) can be easily solved as
\[\Delta_{M}=\dfrac{2}{3}\zeta\left(x_{0},T\right)=\dfrac{2\xi}{3\left(1+\chi \right)\left(k_{B}T\right)^{2}x_{0}}. \tag{56}\]
Thus the final Majorana energy distribution reaches its peak at
\[E_{M\,Max}=x_{M}k_{B}T\simeq x_{0}k_{B}T+\dfrac{2\xi}{3\left(1+\chi\right) \left(k_{B}T\right)x_{0}} \tag{57}\]
and similarly the result of the Dirac one is
\[E_{D\,Max}\simeq x_{0}k_{B}T+\dfrac{2\xi}{3\left(1+\chi+V\right)\left(k_{B}T \right)x_{0}} \tag{58}\]
where
\[V={{\eta_{1}}^{2}}{\left({\int{dt{\cal{V}}_{0}}}\right)^{2}}. \tag{59}\]
It can be seen that the shifts of the Dirac and Majorana energy distributions are different in the non-minimal coupling case when \(\eta_{1}\) and \({\cal{V}}_{0}\) are non-zero. The energy shift is of order \(\frac{m^{2}}{E^{2}}\). The energy distribution does not shift when \(m=0\), because in that case the interaction rate is the same for all energies, which gives the same distribution after normalization. If we stick to Wien's displacement law to define the temperature, the effective temperature we detect will be
\[T_{D\rm eff}=T\left({1+\frac{{2\xi}}{{3\left({1+\chi+V}\right)\left({k_{B}T} \right)^{2}}{x_{0}}^{2}}}\right) \tag{60}\]
for Dirac scattering and
\[T_{M\rm eff}=T\left({1+\frac{{2\xi}}{{3\left({1+\chi}\right)\left({k_{B}T} \right)^{2}}{x_{0}}^{2}}}\right) \tag{61}\]
for Majorana scattering. The temperature shift is also of order \(\frac{m^{2}}{E^{2}}\).
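As a numerical illustration of Eqs. (49) and (56)-(61), the following Python sketch solves Wien's equation for \(x_{0}\) by bisection and evaluates the shifted peak energies and effective temperatures. The values chosen for \(\chi\), \(\xi\), \(V\) and \(k_{B}T\) are arbitrary placeholders, not values derived from a specific torsion model.

```python
import numpy as np

# Solve (3 - x) e^x = 3 for the positive root x0 (Eq. (49), Wien's displacement law).
def f(x):
    return (3.0 - x) * np.exp(x) - 3.0

lo, hi = 2.0, 3.0                  # f(lo) > 0 > f(hi): the root is bracketed
for _ in range(60):                # plain bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
x0 = 0.5 * (lo + hi)               # x0 ~ 2.821

# Placeholder torsion parameters (NOT values from the paper): chi and V are the
# dimensionless torsion integrals, xi carries the m^2 suppression.
kB_T = 1.0e-4                      # eV, order of the CNB temperature quoted below
chi, V, xi = 1.0, 1.0, 1.0e-9      # xi in eV^2

# Peak energies with torsion, Eqs. (57)-(58), and effective temperatures
# (in units of T), Eqs. (60)-(61).
E_M = x0 * kB_T + 2 * xi / (3 * (1 + chi) * kB_T * x0)
E_D = x0 * kB_T + 2 * xi / (3 * (1 + chi + V) * kB_T * x0)
T_M = 1 + 2 * xi / (3 * (1 + chi) * kB_T**2 * x0**2)
T_D = 1 + 2 * xi / (3 * (1 + chi + V) * kB_T**2 * x0**2)
print(f"x0 = {x0:.4f}, Majorana peak shift = {E_M - x0*kB_T:.3e} eV, "
      f"Dirac peak shift = {E_D - x0*kB_T:.3e} eV")
```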
## 4 Conclusion and Discussion
We find that the scattering amplitudes of Dirac and Majorana fermions by torsion in a spatial-flat FRW spacetime background differ by the vector part of the torsion field in the non-minimal coupling case. The axial vector part of the cosmic torsion, coupled with the mass in the interaction rate, causes a small shift of the energy distribution and of the effective temperature derived from Wien's displacement law, of order \(\frac{m^{2}}{E^{2}}\) if \(m\ll E\) but \(m\neq 0\). The shifts of the energy distribution and effective temperature for Dirac and Majorana spinors are different due to the difference in the scattering amplitude produced by the vector part of torsion.
The temperature of the cosmic neutrino background (CNB) is given by \(T_{\nu}={\left({\frac{4}{{11}}}\right)^{1/3}}T_{\gamma}\approx 1.95K\), with the corresponding energy \(k_{B}T\sim 10^{-4}eV\)[35]. However, the upper limit of the neutrino mass is about 1.1eV[2], which means the effect of the rest mass cannot be neglected, and the condition \(m\ll E\) no longer holds. Thus, the energy distribution shift is not the value given in (57) and (58) but must be calculated from the \({\cal{A}}\) terms given by (43) combined with specific cosmological models with cosmic torsion. However, the basic qualitative conclusions still hold: the term involving both the axial vector part of the cosmic torsion and the neutrino mass causes the energy distribution shift, and the shift is different for Dirac and Majorana scattering in the non-minimal coupling case where the vector part of the cosmic torsion couples to the spinor field.
The cosmic torsion of the spatial-flat FRW spacetime simply affects the rate of redshift in energy; it does not change the angular distribution of the final state, nor does it lead to a change of the final-state energy other than redshift. This is because the cosmic torsion in spatial-flat FRW spacetime is homogeneous and isotropic, and the scattering theory assumes that the interaction is turned off in the initial and final states. The cases of open and closed universes still need to be discussed, to determine whether the interaction is spatially correlated such that it would affect the angular distribution of the final state.
## Acknowledgment
This work is supported by the National Natural Science Foundation of China ( Grant Nos. 11775080, 11865016 ) and the Natural Science Foundation of Chongqing, China ( Grant No. CSTB2022NSCQ-MSX0351 ). |
2305.17220 | VoxDet: Voxel Learning for Novel Instance Detection | Detecting unseen instances based on multi-view templates is a challenging
problem due to its open-world nature. Traditional methodologies, which
primarily rely on 2D representations and matching techniques, are often
inadequate in handling pose variations and occlusions. To solve this, we
introduce VoxDet, a pioneer 3D geometry-aware framework that fully utilizes the
strong 3D voxel representation and reliable voxel matching mechanism. VoxDet
first ingeniously proposes template voxel aggregation (TVA) module, effectively
transforming multi-view 2D images into 3D voxel features. By leveraging
associated camera poses, these features are aggregated into a compact 3D
template voxel. In novel instance detection, this voxel representation
demonstrates heightened resilience to occlusion and pose variations. We also
discover that a 3D reconstruction objective helps to pre-train the 2D-3D
mapping in TVA. Second, to quickly align with the template voxel, VoxDet
incorporates a Query Voxel Matching (QVM) module. The 2D queries are first
converted into their voxel representation with the learned 2D-3D mapping. We
find that since the 3D voxel representations encode the geometry, we can first
estimate the relative rotation and then compare the aligned voxels, leading to
improved accuracy and efficiency. In addition to method, we also introduce the
first instance detection benchmark, RoboTools, where 20 unique instances are
video-recorded with camera extrinsic. RoboTools also provides 24 challenging
cluttered scenarios with more than 9k box annotations. Exhaustive experiments
are conducted on the demanding LineMod-Occlusion, YCB-video, and RoboTools
benchmarks, where VoxDet outperforms various 2D baselines remarkably with
faster speed. To the best of our knowledge, VoxDet is the first to incorporate
implicit 3D knowledge for 2D novel instance detection tasks. | Bowen Li, Jiashun Wang, Yaoyu Hu, Chen Wang, Sebastian Scherer | 2023-05-26T19:25:13Z | http://arxiv.org/abs/2305.17220v4 | # VoxDet: Voxel Learning for Novel Instance Detection
###### Abstract
Detecting unseen instances based on multi-view templates is a challenging problem due to its open-world nature. Traditional methodologies, which primarily rely on 2D representations and matching techniques, are often inadequate in handling pose variations and occlusions. To solve this, we introduce VoxDet, a pioneer 3D geometry-aware framework that fully utilizes the strong 3D voxel representation and reliable voxel matching mechanism. VoxDet first ingeniously proposes template voxel aggregation (TVA) module, effectively transforming multi-view 2D images into 3D voxel features. By leveraging associated camera poses, these features are aggregated into a compact 3D template voxel. In novel instance detection, this voxel representation demonstrates heightened resilience to occlusion and pose variations. We also discover that a 3D reconstruction objective helps to pre-train the 2D-3D mapping in TVA. Second, to quickly align with the template voxel, VoxDet incorporates a Query Voxel Matching (QVM) module. The 2D queries are first converted into their voxel representation with the learned 2D-3D mapping. We find that since the 3D voxel representations encode the geometry, we can first estimate the relative rotation and then compare the aligned voxels, leading to improved accuracy and efficiency. Exhaustive experiments are conducted on the demanding LineMod-Occlusion, YCB-video, and the newly built RoboTools benchmarks, where VoxDet outperforms various 2D baselines remarkably with \(\mathbf{20\%}\) higher recall and faster speed. To the best of our knowledge, VoxDet is the first to incorporate implicit 3D knowledge for 2D detection tasks.
## 1 Introduction
Consider the common scenarios of locating the second sock of a pair in a pile of laundry or identifying luggage amid hundreds of similar suitcases at an airport. These activities illustrate the remarkable capability of human cognition to swiftly and accurately identify a specific _instance_ among other similar objects. Humans can rapidly create a mental picture of a novel _instance_ with a few glances even if they see such an _instance_ for the first time or have never seen _instances_ of the same type. Searching for instances using mental pictures is a fundamental ability for humans, however, even the latest object detectors [1; 2; 3; 4; 5; 6; 7] still cannot achieve this task.
We formulate the above tasks as novel instance detection, that is identification of an unseen instance in a cluttered query image, utilizing its multi-view support references. Previous attempts mainly work in 2D space, such as correlation [8; 5], attention mechanisms [6], or similarity matching [9], thereby localizing and categorizing the desired instance, as depicted in Fig. 1 gray part. However, these techniques struggle to maintain their robustness when faced with significant disparities between the query and templates. In comparison to novel instance detection, there is a vast amount of work centered around few-shot category-level object detection [7; 1; 2]. Yet, these class-level matching techniques prove insufficient when it comes to discerning specific instance-level features.
Humans exhibit the remarkable capability to swiftly formulate a mental model of an unfamiliar instance, facilitated by a rapid comprehension of its 3D geometric structure [10; 11; 12]. Leveraging such a mental representation, once presented with a single query image, a human can probably search and identify the same instance despite alterations in distance, occlusion, and even approximate the
instance's orientation. Motivated by this, we propose VoxDet, a pioneer 3D geometry-aware instance detection framework as shown in Fig. 1 bottom. In contrast to state-of-the-art methods [7; 5; 6; 9], VoxDet adopts two novel designs: (1) a compact 3D voxel representation that is robust to occlusion and pose variations and (2) an effective voxel matching algorithm for identifying instances.
VoxDet consists of three main modules: a template voxel aggregation (TVA) module, an open-world detection module, and a query voxel matching (QVM) module. Initially, the TVA module transforms multi-view 2D features of an instance into individual 3D template voxels [10]. These template voxels are then accumulated using relative rotations, thus incorporating both geometry and appearance into a condensed template voxel. As VoxDet learns this 2D-3D mapping via a reconstruction objective, TVA effectively encapsulates both the geometry and appearance of any instance into a compact template voxel. When presented with a query image, VoxDet employs an open-world detector [13] that universally identifies potential objects within the image as 2D proposals. These proposals are then converted to query voxels via the learned 2D-3D mapping and compared with the template voxel by the QVM module. QVM initiates this comparison process by first estimating the relative rotation between a query voxel and the template, which is then used to align the two voxels. Finally, the comparison between aligned voxels is delivered by a carefully designed voxel relation module.
Besides methodology, we also construct a large-scale synthetic training dataset, Open-World Instance Detection (OWID). OWID comprises 10k instances sourced from the ShapeNet [14] and Amazon Berkeley Objects [15] datasets, culminating in 55k scenes and 180k query bounding boxes. Trained on OWID, VoxDet demonstrates strong generalization ability on novel instances, which we attribute to the meticulously designed voxel-based framework and the large-scale OWID training set.
To validate VoxDet, we further build RoboTools, a new instance detection benchmark compiled from a diverse range of real-world cluttered environments. RoboTools consists of 20 unique instances, 24 test scenes, and over 9,000 annotated bounding boxes. As shown in Fig. 1 right, in the demanding RoboTools benchmark, VoxDet can robustly detect the novel instances under severe occlusion or varied orientation. Evaluations are also performed on the authoritative Linemond-Occlusion [16] and YCB-video [17] for more compelling results. The exhaustive experiments on these three benchmarks demonstrate that our 3D geometry-aware VoxDet not only outperforms various previous works [5; 6; 7] and different 2D baselines [18; 9] but also achieves faster inference speed.
## 2 Related Works
**Typical object detection**[19; 20; 21; 22; 23; 24; 25] thrives in category-level tasks, where all the instances belonging to a pre-defined class are detected. Typical object detection can be divided into two-stage and one-stage approaches. For the former, RCNN [19] and its variants [20; 21] serve as the foundation, where the regions of interest (ROI) are first obtained by a region proposal network. Then the detection heads classify each ROI and regress the box coordinates. On the other hand, the YOLO series [22; 23; 24] and recent transformer-based methods [4; 3] are developing promisingly as the latter stream, where the detection task is tackled as an end-to-end regression problem.
Figure 1: Architecture comparison between previous 2D methods (_gray_) and proposed VoxDet (_black_). Previous methods resorts to pure 2D correlation/attention/matching for novel instance detection. In contrast, VoxDet is 3D-inspired, leveraging reconstruction objective to learn the geometry-aware voxel representation, which enables more effective and accurate voxel-based instance detection. In the challenging newly built RoboTools benchmark shown on the right, VoxDet exhibits surprising robustness to severe occlusion and orientation variation.
**Few-shot/One-shot object detection**[1; 26; 27; 2; 28; 7] can work for unseen classes with only a few labeled support samples, which is closer to our task. One stream focuses on transfer-learning techniques [27; 26], where the fine-tuning stage is carefully designed to make the model quickly generalize to unseen classes, while the other resorts to meta-learning strategies [1; 7; 2; 28], where various kinds of relations between supports and queries are discovered and leveraged. Since the above methods are category-level, they assume that more than one desired instance exists in an image, so the classification/matching designs are usually tailored for Top-100 precision, which is not a very strict metric. However, they can easily fail in our problem, where _Top-1_ accuracy is more important.
**Open-world/Zero-shot object detection**[29; 30; 31; 13] finds any objects on an image, which is class-agnostic and universal. Some of them learn objectiveness [29; 13] and others [31] rely on large-scale high-quality training sets. These methods can serve as the first module in our pipeline, which generates object proposals for comparison with the templates. Among them, we adopt [13] with its simple structure and promising performance.
**Instance detection** requires the algorithm to find an unseen instance in the test image with some corresponding templates. Previous methods [6; 5; 8] usually utilize pure 2D representations and 2D matching/relation techniques. For example, DTOID [6] proposed global object attention and a local pose-specific branch to predict the template-guided heatmap for detection. However, they easily fall short when the 2D appearance varies due to occlusion or pose variation. Differently, VoxDet leverages the explicit 3D knowledge in the multi-view templates to represent and match instances, which is geometry-invariant.
**Multi-view 3D representations** Representing 3D scenes/instances from multi-view images is a long-standing problem in computer vision. Traditional methods resort to multi-view geometry, where the structure-from-motion (SfM) [32] pipeline enables joint optimization of the camera poses and the 3D structure. Modern methods usually adopt neural 3D representations [33; 11; 34; 35; 36; 12; 10], including deep voxels [34; 12; 10; 37] and implicit functions [35; 36], which have yielded great success in 3D reconstruction and novel view synthesis. Our framework is mainly inspired by the Video Autoencoder [10], which encodes a video by separately learning a deep implicit 3D structure and the camera trajectory. One major advantage of [10] is that the learned autoencoder can encode and synthesize test scenes without further tuning or optimization, which greatly satisfies the efficiency requirement of our instance detection task.
## 3 Methodology
### Problem Formulation
Given a training instance set \(\mathcal{O}_{\mathrm{base}}\) and an unseen test instance set \(\mathcal{O}_{\mathrm{novel}}\), where \(\mathcal{O}_{\mathrm{base}}\cap\mathcal{O}_{\mathrm{novel}}=\phi\), the task of novel instance detection (open-world detection) is to find an instance detector trained on \(\mathcal{O}_{\mathrm{base}}\) and then detect new instances in \(\mathcal{O}_{\mathrm{novel}}\) with no further training or finetuning. Specifically, for each instance, the input to the detector is a query image \(\mathcal{I}^{\mathrm{Q}}\in\mathbb{R}^{3\times W\times H}\) and a group of \(M\) support templates \(\mathcal{I}^{\mathrm{S}}\in\mathbb{R}^{M\times 3\times W\times H}\) of the target instance. The detector is expected to output the bounding box \(\mathbf{b}\in\mathbb{R}^{4}\) of an instance on the query image. We assume there exists exactly one such instance in the query image and the instance is located near the center of the support images.
### Architecture
The architecture of VoxDet is shown in Fig. 2, which consists of an open-world detector, a template voxel aggregation (TVA) module, and a query voxel matching (QVM) module. Given the query image, the open-world detector aims to generate universal proposals covering all possible objects. TVA aggregates multi-view supports into a compact template voxel via the relative camera pose between frames. QVM lifts 2D proposal features onto 3D voxel space, which is then aligned and matched with the template voxel. In order to empower the voxel representation with 3D geometry, we first resort to a reconstruction objective in the first stage. The pre-trained models serve as the initial weights for the second instance detection training stage.
#### 3.2.1 Open-World Detection
Since the desired instance is unseen during training, directly regressing its location and scale is non-trivial. To solve this, we first use an open-world detector [13] to generate the most possible candidates. Different from standard detection that only finds out pre-defined classes, an open-world detector locates _all_ possible objects in an image, which is class-agnostic.
As shown in Fig. 2, given a query image \(\mathcal{I}^{\mathrm{Q}}\), a 2D feature map \(\mathbf{f}^{\mathrm{Q}}\) is extracted by a backbone network \(\psi(\cdot)\). To classify each pre-defined anchor as foreground (objects) or background, the
region proposal network (RPN) [21] is adopted. Concurrently, the boundaries of each anchor are also roughly regressed. The resulting anchors with high classification scores are termed region proposals \(\mathbf{P}=[\mathbf{p}_{1},\mathbf{p}_{2},\cdots,\mathbf{p}_{N}]\in\mathbb{R}^{N\times 4}\), where \(N\) is the number of proposals. Next, to obtain the features \(\mathbf{F}^{\mathrm{Q}}\) for these candidates, we use region of interest pooling (ROIAlign) [21], \(\mathbf{F}^{\mathrm{Q}}=\mathrm{ROIAlign}(\mathbf{P},\mathbf{f}^{\mathrm{Q}})\in\mathbb{R}^{N\times C\times w\times w}\), where \(C\) denotes the channel dimension and \(w\) is the spatial size of the proposal features. Finally, we obtain the final classification result and bounding box from two parallel multi-layer perceptrons (MLP), known as the detection head, which take the proposal features \(\mathbf{F}^{\mathrm{Q}}\) as input and output the binary classification scores and the box regression targets. The training loss comprises the RPN classification loss \(\mathcal{L}^{\mathrm{RPN}}_{\mathrm{cls}}\), the RPN regression loss \(\mathcal{L}^{\mathrm{RPN}}_{\mathrm{reg}}\), the head classification loss \(\mathcal{L}^{\mathrm{Head}}_{\mathrm{cls}}\), and the head regression loss \(\mathcal{L}^{\mathrm{Head}}_{\mathrm{reg}}\).
To make the detector work for open-world objects, the classification branches (in the RPN and the head) are guided by _objectness regression_ [13]. Specifically, the classification score is defined (supervised) by the Intersection over Union (IoU), which yields a high recall rate over the objects in test images, even those unseen during training. Since the branches have learned class-agnostic "objectness", we assume the open-world proposals are likely to cover the desired novel instance. Therefore, we take the top-ranking candidates and their features as the input of the subsequent matching module.
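To make this proposal-generation flow concrete, below is a minimal sketch (not the released VoxDet code): the backbone, the anchor set, and the objectness head are simplified placeholders whose names and sizes are our own assumptions; only the overall pipeline — backbone features, scored anchors, top-ranked proposals, ROIAlign features — mirrors the description above.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class TinyOpenWorldProposer(nn.Module):
    """Class-agnostic proposal generation in the spirit of Sec. 3.2.1 (heavily simplified)."""
    def __init__(self, channels=64, num_anchors=3):
        super().__init__()
        self.backbone = nn.Sequential(                        # stand-in for the backbone psi(.)
            nn.Conv2d(3, channels, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=4, padding=1), nn.ReLU())
        self.obj_head = nn.Conv2d(channels, num_anchors, 1)   # IoU-style objectness score per anchor

    def forward(self, image, anchors, top_n=500, roi_size=7):
        # image: (1, 3, H, W); anchors: (A, 4) boxes (x1, y1, x2, y2) in image coordinates
        feat = self.backbone(image)                           # (1, C, h, w)
        scores = self.obj_head(feat).flatten()                # one score per anchor location
        scores = scores[: anchors.shape[0]]                   # align scores with the provided anchors
        keep = scores.topk(min(top_n, scores.numel())).indices
        proposals = anchors[keep]                             # P: (N, 4) top-ranking candidates
        rois = torch.cat([torch.zeros(len(proposals), 1), proposals], dim=1)
        roi_feats = roi_align(feat, rois, (roi_size, roi_size),
                              spatial_scale=feat.shape[-1] / image.shape[-1])
        return proposals, roi_feats                           # P and F^Q in the paper's notation

img = torch.rand(1, 3, 256, 256)
xy = torch.rand(1000, 2) * 200
anchors = torch.cat([xy, xy + 56], dim=1)                     # crude fixed-size anchors
props, feats = TinyOpenWorldProposer()(img, anchors)
print(props.shape, feats.shape)                               # (500, 4), (500, 64, 7, 7)
```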
#### 3.2.2 Template Voxel Aggregation
To learn geometry-invariant representations, the Template Voxel Aggregation (TVA) module compresses multi-view 2D templates into a compact deep voxel. Inspired by a previous technique [10] developed for unsupervised video encoding, we propose to encode our instance templates via their relative orientation in the physical 3D world. To this end, we first generate the 2D feature maps \(\mathbf{F}^{\mathrm{S}}=\psi(\mathcal{I}^{\mathrm{S}})\in\mathbb{R}^{M\times C\times w\times w}\) using the backbone network \(\psi(\cdot)\) shared with the query branch, and then map the 2D features to 3D voxels for multi-view aggregation.
**2D-3D mapping:** To map these 2D features onto a shared 3D space for subsequent orientation-based aggregation, we utilize an implicit mapping function \(\mathcal{M}(\cdot)\). This function translates the 2D features to 3D voxel features, denoted by \(\mathbf{V}=\mathcal{M}(\mathbf{F}^{\mathrm{S}})\in\mathbb{R}^{M\times C_{\mathrm{v}}\times D\times L\times L}\), where \(\mathbf{V}\) is the 3D voxel feature from the 2D feature, \(C_{\mathrm{v}}\) is the feature dimension, and \(D,L\) indicate voxel spatial size. Specifically, we first reshape the feature maps to \(\mathbf{F}^{\prime\mathrm{S}}\in\mathbb{R}^{M\times(C/d)\times d\times w\times w}\), where \(d\) is the pre-defined implicit depth, then we apply 3D inverse convolution to obtain the feature voxel.
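A minimal sketch of one way to realize the mapping \(\mathcal{M}(\cdot)\) described above; the layer sizes and the use of a single transposed convolution are our assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class VoxelMapping(nn.Module):
    """Reshape a 2D feature map into a thin voxel of implicit depth d, then upsample in 3D."""
    def __init__(self, c2d=256, depth=4, c_vox=32):
        super().__init__()
        assert c2d % depth == 0
        self.depth = depth
        self.deconv = nn.ConvTranspose3d(c2d // depth, c_vox, kernel_size=4, stride=2, padding=1)

    def forward(self, f2d):                                    # f2d: (B, C, w, w)
        b, c, w, _ = f2d.shape
        thin = f2d.view(b, c // self.depth, self.depth, w, w)  # (B, C/d, d, w, w)
        return self.deconv(thin)                               # (B, C_v, 2d, 2w, 2w) feature voxel

vox = VoxelMapping()(torch.rand(2, 256, 7, 7))
print(vox.shape)                                               # torch.Size([2, 32, 8, 14, 14])
```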
Note that with multi-view images, we can calculate the relative camera rotation easily via Structure from Motion (SfM) [32] or visual odometry [38]. Given that the images are object-centered and the object stays static in the scene, these relative rotations in fact represent the relative rotations between the object orientations defined in the same camera coordinate system. Different from previous
Figure 2: Architecture of VoxDet. VoxDet mainly consists of three modules, namely, open-world detection, template voxel aggregation (TVA), and query voxel matching (QVM). We first train TVA via the reconstruction stage, where the 2D-3D mapping learns to encode instance geometry. Then the pre-trained mapping serves as initial weights in the TVA and QVM modules for detection training.
work [10] that implicitly learns the camera extrinsic for unsupervised encoding, we aim to explicitly embed such geometric information. Specifically, our goal is to first transform every template into the same coordinate system using their relative rotation, which is then aggregated:
\[\mathbf{v}^{\mathrm{S}}=\frac{1}{M}\sum_{i=1}^{M}\mathrm{Conv3D}(\mathrm{Rot}( \mathbf{V}_{i},\mathbf{R}_{i}^{\top}))\;, \tag{1}\]
where \(\mathbf{V}_{i}\in\mathbb{R}^{C_{\mathrm{v}}\times D\times L\times L}\) is the previously mapped \(i\)-th independent voxel feature, and \(\mathbf{R}_{i}^{\top}\) denotes the relative camera rotation between the \(i\)-th support frame and the first frame. \(\mathrm{Rot}(\cdot,\cdot)\) is the 3D transform used in [10], which first warps a unit voxel grid to the new coordinate system using \(\mathbf{R}_{i}^{\top}\) and then samples from the feature voxel \(\mathbf{V}_{i}\) with the transformed unit voxel grid. Therefore, all the \(M\) voxels are transformed into the same coordinate system defined in the first camera frame. These are then aggregated through average pooling to produce the compact template voxel \(\mathbf{v}^{\mathrm{S}}\).
By explicitly embedding the 3D rotations into individual reference features, TVA achieves a geometry-aware compact representation, which is more robust to occlusion and pose variation.
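A minimal sketch of Eq. (1) under assumed tensor shapes: each per-view voxel is resampled into the first camera's coordinate system with its relative rotation (implemented here with an affine sampling grid), passed through a shared 3D convolution, and averaged into the template voxel \(\mathbf{v}^{\mathrm{S}}\).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_voxel(vox, rot):
    # vox: (1, C, D, L, L); rot: (3, 3) rotation applied to the sampling grid (no translation)
    theta = torch.cat([rot, torch.zeros(3, 1)], dim=1).unsqueeze(0)   # (1, 3, 4)
    grid = F.affine_grid(theta, vox.shape, align_corners=False)
    return F.grid_sample(vox, grid, align_corners=False)

def aggregate_templates(voxels, rels, conv3d):
    # voxels: (M, C, D, L, L); rels[i]: relative rotation of view i w.r.t. the first view
    aligned = [conv3d(rotate_voxel(v[None], r.T)) for v, r in zip(voxels, rels)]
    return torch.stack(aligned).mean(dim=0)                    # (1, C, D, L, L) template voxel v^S

M, C, D, L = 6, 32, 8, 14
voxels = torch.rand(M, C, D, L, L)
rels = [torch.eye(3) for _ in range(M)]                        # placeholder relative rotations
v_s = aggregate_templates(voxels, rels, nn.Conv3d(C, C, kernel_size=3, padding=1))
print(v_s.shape)                                               # torch.Size([1, 32, 8, 14, 14])
```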
#### 3.2.3 Query Voxel Matching
Given the proposal features \(\mathbf{F}^{\mathrm{Q}}\) from query image \(\mathcal{I}^{\mathrm{Q}}\) and the template voxel \(\mathbf{v}^{\mathrm{S}}\) from supports \(\mathcal{I}^{\mathrm{S}}\), the task of the query voxel matching (QVM) module is to classify each proposal as foreground (the reference instance) or background. As shown in Fig. 2, in order to empower the 2D features with 3D geometry, we first use the same mapping to get query voxels, \(\mathbf{V}^{\mathrm{Q}}=\mathcal{M}(\mathbf{F}^{\mathrm{Q}})\in\mathbb{R}^{N\times C_{\mathrm{v}}\times D\times L\times L}\). VoxDet next accomplishes matching \(\mathbf{v}^{\mathrm{S}}\) and \(\mathbf{V}^{\mathrm{Q}}\) through two steps. First, we need to estimate the relative rotation between query and support, so that \(\mathbf{V}^{\mathrm{Q}}\) can be aligned in the same coordinate system as \(\mathbf{v}^{\mathrm{S}}\). Second, we need to learn a function that measures the distance between the two aligned voxels. To achieve this, we define a voxel relation operator \(\mathcal{R}_{\mathrm{v}}(\cdot,\cdot)\):
**Voxel Relation** Given two voxels \(\mathbf{v}_{1},\mathbf{v}_{2}\in\mathbb{R}^{c\times a\times a\times a}\), where \(c\) is the channel and \(a\) is the spatial dimension, this function seeks to discover their relations in every semantic channel. To achieve this, we first interleave the voxels along channels as \(\mathrm{In}(\mathbf{v}_{1},\mathbf{v}_{2})=[\mathbf{v}_{1}^{1},\mathbf{v}_{2 }^{1},\mathbf{v}_{1}^{2},\mathbf{v}_{2}^{2},\cdots,\mathbf{v}_{1}^{c}, \mathbf{v}_{2}^{c}]\in\mathbb{R}^{2c\times a\times a\times a}\), where \(\mathbf{v}_{1}^{k},\mathbf{v}_{2}^{k}\) is the voxel feature in the \(k\)-th channel. Then, we apply grouped convolution as \(\mathcal{R}_{\mathrm{v}}(\mathbf{v}_{1},\mathbf{v}_{2})=\mathrm{Conv3D}( \mathrm{In}(\mathbf{v}_{1},\mathbf{v}_{2}),\mathrm{group}=c)\). In the experiments, we found that such a design makes relation learning easier since each convolution kernel is forced to learn the two feature voxels from the same channel. With this voxel relation, we can then roughly estimate the rotation matrix \(\hat{\mathbf{R}}^{Q}\in\mathbb{R}^{N\times 3\times 3}\) of each query voxel relative to the template as:
\[\hat{\mathbf{R}}^{Q}=\mathrm{MLP}(\mathcal{R}_{\mathrm{v}}(\mathbf{V}^{ \mathrm{S}},\mathbf{V}^{\mathrm{Q}}))\;, \tag{2}\]
where \(\mathbf{v}^{\mathrm{S}}\) is copied \(N\) times to get \(\mathbf{V}^{\mathrm{S}}\). In practice, we first predict a continuous 6D rotation representation [39] as the network output and then convert it to a rotation matrix. Next, we can define the classification head with the Voxel Relation as:
\[\hat{s}=\mathrm{MLP}\left(\mathcal{R}_{\mathrm{v}}(\mathbf{V}^{\mathrm{S}}, \mathrm{Rot}(\mathbf{V}^{\mathrm{Q}},\hat{\mathbf{R}}^{Q}))\right), \tag{3}\]
where \(\mathrm{Rot}(\mathbf{V}^{\mathrm{Q}},\hat{\mathbf{R}}^{Q})\) rotates the queries to the support coordinate system to allow for reasonable matching. In practice, we additionally introduce a global relation branch for the final score, so that the semantic information lost in the implicit mapping can be retrieved. More details are available in the supplementary material. During inference, we rank the proposals \(\mathbf{P}\) according to their matching score and take the Top-k candidates as the predicted box \(\hat{\mathbf{b}}\).
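A minimal sketch of the QVM step (Eqs. 2-3) with simplified layers: the grouped 3D convolutions play the role of the Voxel Relation \(\mathcal{R}_{\mathrm{v}}\) (channel interleaving followed by a convolution with one group per channel pair), the 6D-to-rotation conversion follows the continuity trick of [39], and the global relation branch is omitted; all layer sizes here are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rot6d_to_matrix(x):                           # (N, 6) -> (N, 3, 3), Gram-Schmidt on two 3-vectors
    a1, a2 = x[:, :3], x[:, 3:]
    b1 = F.normalize(a1, dim=1)
    b2 = F.normalize(a2 - (b1 * a2).sum(1, keepdim=True) * b1, dim=1)
    b3 = torch.cross(b1, b2, dim=1)
    return torch.stack([b1, b2, b3], dim=-1)

def rotate(vox, rot):                             # vox: (N, C, D, L, L); rot: (N, 3, 3)
    theta = torch.cat([rot, torch.zeros(rot.shape[0], 3, 1)], dim=2)
    grid = F.affine_grid(theta, vox.shape, align_corners=False)
    return F.grid_sample(vox, grid, align_corners=False)

class QVM(nn.Module):
    def __init__(self, c=32):
        super().__init__()
        self.rel1 = nn.Conv3d(2 * c, c, 3, padding=1, groups=c)   # voxel relation for rotation estimation
        self.rel2 = nn.Conv3d(2 * c, c, 3, padding=1, groups=c)   # voxel relation for classification
        self.rot_mlp = nn.Linear(c, 6)
        self.cls_mlp = nn.Linear(c, 2)

    @staticmethod
    def interleave(a, b):                         # [a_1, b_1, a_2, b_2, ...] along channels
        n, c, *sp = a.shape
        return torch.stack([a, b], dim=2).reshape(n, 2 * c, *sp)

    def forward(self, v_q, v_s):                  # v_q: (N, C, D, L, L); v_s: (1, C, D, L, L)
        v_s = v_s.expand_as(v_q)
        r6 = self.rot_mlp(self.rel1(self.interleave(v_s, v_q)).mean(dim=(2, 3, 4)))
        r_hat = rot6d_to_matrix(r6)               # estimated relative rotation, Eq. (2)
        v_q_aligned = rotate(v_q, r_hat)          # align queries with the support frame
        score = self.cls_mlp(self.rel2(self.interleave(v_s, v_q_aligned)).mean(dim=(2, 3, 4)))
        return score, r_hat                       # matching scores, Eq. (3)

score, r_hat = QVM()(torch.rand(5, 32, 8, 8, 8), torch.rand(1, 32, 8, 8, 8))
print(score.shape, r_hat.shape)                   # (5, 2), (5, 3, 3)
```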
### Training Objectives
As illustrated in Fig. 2, VoxDet contains two training stages: reconstruction and instance detection.
**Reconstruction** To learn the 3D geometry relationships, specifically 3D rotation between instance templates, we pre-train the implicit mapping function \(\mathcal{M}(\cdot)\) using a reconstruction objective. We divide \(M\) multi-view templates \(\mathcal{I}^{\mathrm{S}}\) into input images \(\mathcal{I}^{\mathrm{S}}_{i}\in\mathbb{R}^{(M-K)\times 3\times W\times H}\) and outputs \(\mathcal{I}^{\mathrm{S}}_{\mathrm{o}}\in\mathbb{R}^{K\times 3\times W\times H}\). Next, we construct the voxel representation \(\mathbf{V}^{\mathrm{S}}\) using \(\mathcal{I}^{\mathrm{S}}_{i}\) via the TVA module and adopt a decoder network \(\mathrm{Dec}\) to reconstruct the output images through the relative rotations:
\[\hat{\mathcal{I}}^{\mathrm{S}}_{\mathrm{o},j}=\mathrm{Dec}(\mathrm{Rot}( \mathbf{V}^{\mathrm{S}},\mathbf{R}_{j}^{\top}))\;,j\in\{1,2,\cdots,K\}\;, \tag{4}\]
where \(\hat{\mathcal{I}}^{\mathrm{S}}_{\mathrm{o},j}\) denotes the \(j\)-th reconstructed (fake) output image and \(\mathbf{R}_{j}\) is the relative rotation matrix between the \(1\)-st and the \(j\)-th camera frames. We finally define the reconstruction loss as:
\[\mathcal{L}_{\mathrm{r}}=w_{\mathrm{recon}}\mathcal{L}_{\mathrm{recon}}+w_{ \mathrm{gan}}\mathcal{L}_{\mathrm{gan}}+w_{\mathrm{percep}}\mathcal{L}_{\mathrm{ percep}}\;, \tag{5}\]
where \(\mathcal{L}_{\mathrm{recon}}\) denotes the reconstruction loss, _i.e._, the L1 distance between \(\mathcal{I}_{\mathrm{o}}^{\mathrm{S}}\) and \(\hat{\mathcal{I}}_{\mathrm{o}}^{\mathrm{S}}\). \(\mathcal{L}_{\mathrm{gan}}\) is the generative adversarial network (GAN) loss, where we additionally train a discriminator to classify \(\mathcal{I}_{\mathrm{o}}^{\mathrm{S}}\) and \(\hat{\mathcal{I}}_{\mathrm{o}}^{\mathrm{S}}\). \(\mathcal{L}_{\mathrm{percep}}\) means the perceptual loss, which is the L1 distance between the feature maps of \(\mathcal{I}_{\mathrm{o}}^{\mathrm{S}}\) and \(\hat{\mathcal{I}}_{\mathrm{o}}^{\mathrm{S}}\) in each level of VGGNet [40]. Even though the reconstruction is only supervised on training instances, we observe that it can roughly reconstruct novel views for unseen instances. We thus reason that the pre-trained voxel mapping can roughly encode the geometry of an instance.
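A minimal sketch of this reconstruction stage (Eqs. 4-5) with placeholder networks: a toy decoder and only the L1 term are shown, with the GAN and perceptual terms kept as stand-in weights; the shapes, layers, and the way the decoder collapses depth are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

decoder = nn.Conv3d(32, 3, kernel_size=3, padding=1)          # toy stand-in for Dec(.)

def decode_view(template_vox, rel_rot):
    # rotate the template voxel by the relative rotation of the held-out view, then decode it
    theta = torch.cat([rel_rot, torch.zeros(3, 1)], dim=1).unsqueeze(0)
    grid = F.affine_grid(theta, template_vox.shape, align_corners=False)
    rotated = F.grid_sample(template_vox, grid, align_corners=False)
    return decoder(rotated).mean(dim=2)                        # collapse depth into a (1, 3, L, L) image

def reconstruction_loss(template_vox, rel_rots, target_views,
                        w_recon=1.0, w_gan=0.0, w_percep=0.0):
    recon = sum(F.l1_loss(decode_view(template_vox, r), t)
                for r, t in zip(rel_rots, target_views)) / len(rel_rots)
    return w_recon * recon            # + w_gan * L_gan + w_percep * L_percep in the full objective

v_s = torch.rand(1, 32, 8, 14, 14)
loss = reconstruction_loss(v_s, [torch.eye(3)] * 2, [torch.rand(1, 3, 14, 14)] * 2)
print(loss)
```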
**Detection base training:** In order to empower \(\mathcal{M}(\cdot)\) with geometry encoding capability, we initialize it with the reconstruction pre-trained weights and conduct the instance detection training stage. In addition to the open-world detection loss [13], we introduce the instance classification loss \(\mathcal{L}_{\mathrm{cls}}^{\mathrm{Ins}}\) and the rotation estimation loss \(\mathcal{L}_{\mathrm{rot}}^{\mathrm{Ins}}\) to supervise our VoxDet.
We define \(\mathcal{L}_{\mathrm{cls}}^{\mathrm{Ins}}\) as the binary cross entropy loss between the true labels \(\mathbf{s}\in\{0,1\}^{N}\) and the predicted scores \(\hat{\mathbf{s}}\in\mathbb{R}^{N\times 2}\) from the QVM module. The rotation estimation loss is defined as:
\[\mathcal{L}_{\mathrm{rot}}^{\mathrm{Ins}}=\|\hat{\mathbf{R}}^{Q}\mathbf{R}^{ Q\top}-\mathbf{I}\|\;, \tag{6}\]
where \(\mathbf{R}^{Q}\) is the ground-truth rotation matrix of the query voxel. Note that here we only supervise the positive samples. Together, our instance detection loss is defined as:
\[\mathcal{L}_{\mathrm{d}}=w_{1}\mathcal{L}_{\mathrm{cls}}^{\mathrm{RPN}}+w_{2} \mathcal{L}_{\mathrm{reg}}^{\mathrm{RPN}}+w_{3}\mathcal{L}_{\mathrm{cls}}^{ \mathrm{Head}}+w_{4}\mathcal{L}_{\mathrm{reg}}^{\mathrm{Head}}+w_{5}\mathcal{L }_{\mathrm{cls}}^{\mathrm{Ins}}+w_{6}\mathcal{L}_{\mathrm{rot}}^{\mathrm{Ins }}\;, \tag{7}\]
**Remark 1**: In both training stages, we only use the training objects, \(\mathcal{O}_{\mathrm{base}}\). During inference, VoxDet doesn't need any further fine-tuning or optimization for \(\mathcal{O}_{\mathrm{novel}}\).
## 4 Experiments
### Implementation Details
**Datasets:** Our research employs datasets composed of distinct training and test sets, adhering to \(\mathcal{O}_{\mathrm{base}}\cap\mathcal{O}_{\mathrm{novel}}=\phi\) to ensure no overlap between the semantic classes of \(\mathcal{O}_{\mathrm{base}}\) and \(\mathcal{O}_{\mathrm{novel}}\).
_Training:_ In response to the scarcity of instance detection training sets, we've compiled a comprehensive synthetic dataset using 9,901 objects from ShapeNet [14] and ABO [15]. Each instance is rendered into a 40-frame, object-centric \(360^{\circ}\) video via Blenderproc [41]. We then generate a query scene using 8 to 15 randomly selected objects from the entire instance pool, each initialized with a random
| Method | LM-O mAR | LM-O AR50 | LM-O AR75 | YCB-V mAR | YCB-V AR50 | YCB-V AR75 | RoboTools mAR | RoboTools AR50 | RoboTools AR75 | Avg. mAR | Avg. AR50 | Avg. AR75 | Speed |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **VoxDet (Ours)** | **27.4** | **36.9** | **26.4** | **33.3** | **55.0** | **33.4** | **16.1** | **14.0** | **11.8** | **25.60** | **35.30** | **23.87** | **6.5** |
| \(\mathrm{OLN}_{\mathrm{DINO}}\) [13, 9] | 23.6 | 34.2 | 18.3 | 25.6 | 53.5 | 22.7 | 13.9 | 13.6 | 10.6 | 21.03 | 33.77 | 17.20 | 2.8 |
| \(\mathrm{OLN}_{\mathrm{CLIP}}\) [13, 18] | 16.2 | 29.8 | 11.4 | 10.7 | 25.6 | 7.50 | 11.2 | 15.1 | 7.10 | 12.70 | 23.50 | 8.967 | 2.7 |
| \(\mathrm{OLN}_{\mathrm{Corr}}\) [13, 5] | 22.3 | 29.8 | 21.6 | 24.8 | 41.2 | 27.5 | 14.3 | 12.1 | 10.4 | 20.47 | 27.70 | 19.83 | 5.5 |
| Gen6D [5] | 12.0 | 26.8 | 4.80 | 12.2 | 38.2 | 4.70 | 9.20 | 13.7 | 4.40 | 11.13 | 26.23 | 4.633 | 1.3 |
| DTOID [6] | 9.80 | 26.5 | 2.90 | 16.3 | 46.3 | 3.50 | 3.60 | 6.20 | 1.30 | 9.900 | 26.33 | 2.567 | 0.6 |
| OS2D [7] | 0.20 | 4.30 | 0.20 | 5.20 | 14.7 | 1.50 | 2.90 | 3.80 | 1.20 | 2.767 | 7.600 | 0.967 | 5.3 |

Table 1: Overall performance comparison on LM-O [16], YCB-V [17], and the newly built RoboTools benchmarks. Compared with previous 2D methods, including correlation [5], attention [6], and matching [9, 18], our VoxDet holds superiority in both accuracy and efficiency.
| Recon. | R | w/ sup. | Voxel Rel. | mAR | AR50 | AR75 |
|---|---|---|---|---|---|---|
| ✓ | ✓ | ✓ | ✓ | **16.1** | **14.0** | **11.8** |
| ✓ | ✓ | ✓ | 2D Rel. [1] | 12.2 | 11.6 | 9.2 |
| ✗ | ✓ | ✓ | ✓ | 14.4 | 12.8 | 10.9 |
| ✓ | ✗ | ✗ | ✓ | 15.6 | 13.1 | 10.7 |
| ✓ | ✓ | ✗ | ✓ | 15.1 | 13.3 | 11.2 |

Table 3: Ablation study for VoxDet on the RoboTools benchmark. Using the 2D relation [1], the performance drops significantly. Without reconstruction, the geometry can't be effectively learned, resulting in performance degradation. Measuring the query rotation with supervision finally brings the best result.
orientation. This process yielded 55,000 scenes with 180,000 boxes for training and an additional 500 images for evaluation, amounting to 9,800 and 101 instances respectively. We've termed this expansive training set "open-world instance detection" (OWID-10k), signifying our model's capacity to handle unseen instances. To our knowledge, this is the first of its kind.
_Test:_ We utilize three benchmarks for testing. LineMod-Occlusion [16] (LM-O) features 8 texture-less instances and 1,514 box annotations, with the primary difficulty being heavy object occlusion. The YCB-Video [17] (YCB-V) contains 21 instances and 4,125 target boxes, where the main challenge lies in the variance in instance pose. For LM-O and YCB-V, we render reference videos using the provided CAD models, similar to the OWID pipeline. However, these benchmarks lack clutter and background distractions, thereby failing to truly mimic real-world conditions. To address this, we introduced a more complex real-world benchmark, RoboTools, consisting of 20 instances, 9,109 annotations, and 24 challenging scenarios. Each instance in RoboTools was video-recorded on a clean table, with the ground-truth camera extrinsic computed for each frame.
**Baselines:** Our baselines comprise template-driven instance detection methods, such as correlation [5] and attention-based approaches [6]. However, these methods falter in cluttered scenes, like those in LM-O, YCB-V, and RoboTools. Therefore, we've self-constructed several 2D baselines, namely, \(\text{OLN}_{\text{DINO}}\), \(\text{OLN}_{\text{CLIP}}\), and \(\text{OLN}_{\text{Corr}}\). In these models, we initially obtain open-world 2D proposals via our open-world detection module [13]. We then employ different 2D matching methods to identify the proposal with the highest score. In \(\text{OLN}_{\text{DINO}}\) and \(\text{OLN}_{\text{CLIP}}\), we leverage robust features from pre-trained backbones [9, 18] and use cosine similarity for matching. 1 For \(\text{OLN}_{\text{Corr}}\), we designed a 2D matching head using correlation as suggested in [5]. These open-world detection based 2D baselines significantly outperform previous methods [5, 6]. In addition to these instance-specific methods, we also include a class-level one-shot detector, OS2D [7], for comparison.
Footnote 1: we default to the pre-trained ViT-B model due to its similar capacity to ours. DINO [9] and CLIP [18] might already be familiar with the semantics of the test instances, slightly deviating from our setup.
**Hardware and configurations:** The reconstruction stage of VoxDet was trained on a single Nvidia V100 GPU over a period of 6 hours, while the detection training phase utilized four Nvidia V100 GPUs for a span of 36 hours. For the sake of fairness, we trained all the methods referenced [5, 6, 7, 13, 18, 9] on the OWID dataset, adhering to their official configuration. Inferences were conducted on a single V100 GPU to ensure fair efficiency comparison. During testing, we supplied each model with the same set of \(M=10\) template images per instance, and all methods employed the top \(N=500\) ranking proposals for matching. In the initial reconstruction training stage, VoxDet used 98% of all 9,901 instances in the OWID dataset. For each instance, a random set of \(K=4\) images were designated as output \(\mathcal{I}_{\text{o}}^{\text{S}}\), while the remaining \(M-K=6\) images constituted the inputs \(\mathcal{I}_{\text{i}}^{\text{S}}\). For additional configurations of VoxDet, please refer to the Appendix and our code repository.

Figure 3: Number-of-templates analysis of VoxDet and the 2D baseline \(\text{OLN}_{\text{DINO}}\) [13, 9] on the YCB-V benchmark. Thanks to the learned geometry-aware 2D-3D mapping, VoxDet can work well with very few reference images, while the 2D method suffers in such a setting, dropping by up to \(\mathbf{87}\%\).

Figure 4: Top-K analysis of VoxDet and the one-shot object detector [7]. By virtue of the instance-level matching method, QVM, VoxDet can better classify the proposals, so that \(90\%\) of the true positives lie in the Top-10, while for OS2D this ratio is only \(60\%\).
**Metrics:** Given our assumption that precisely one desired instance is present in the query image, we default to selecting the Top-1 proposal as the predicted result. We report the average recall (AR) rate [42] across different IoU thresholds, such as mAR (IoU \(\in 0.25\sim 0.95\)), AR50 (IoU \(0.5\)), and AR75 (IoU \(0.75\)). Under this setup, the AR is equivalent to the average precision (AP).
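A minimal sketch of this evaluation protocol (the box format and threshold grid are our assumptions): with exactly one target per query and the Top-1 proposal as the prediction, the recall at a threshold is the fraction of images whose prediction clears that IoU, and mAR averages it over the thresholds.

```python
import numpy as np

def iou(a, b):
    # boxes as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def average_recall(preds, gts, thresholds=np.arange(0.25, 1.0, 0.05)):
    ious = np.array([iou(p, g) for p, g in zip(preds, gts)])
    return float(np.mean([(ious >= t).mean() for t in thresholds]))

preds = [[10, 10, 50, 50], [0, 0, 40, 40]]     # Top-1 predicted boxes, one per query image
gts = [[12, 12, 48, 52], [30, 30, 80, 80]]     # ground-truth boxes
print(average_recall(preds, gts))              # mAR over IoU thresholds 0.25..0.95
```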
### Quantitative Results
**Overall Performance Comparison:** The comprehensive detection results are detailed in Table 1, demonstrating that VoxDet consistently delivers superior performance across most settings. Notably, VoxDet surpasses the next best baseline, \(\text{OLN}_{\text{DINO}}\), by an impressive margin of up to \(\mathbf{20}\%\) in terms of average mAR. Furthermore, due to its compact voxel representation, VoxDet is observed to be markedly more efficient.
**Efficiency Comparison:** As QVM has a lower model complexity than \(\text{OLN}_{\text{CLIP}}\) and \(\text{OLN}_{\text{DINO}}\), it achieves faster inference speeds, as detailed in Table 2. Compared to correlation-based matching [5], VoxDet leverages the aggregation of multi-view templates into a single compact voxel, thereby eliminating the need for exhaustive 2D correlation and achieving \(\mathbf{2}\times\) faster speed.
In addition to inference speed, VoxDet also demonstrates greater efficiency regarding the number of templates. We tested the methods on the YCB-V dataset [17] using fewer templates than the default, and found that the 2D baseline is highly sensitive to the number of provided references. As illustrated in Fig. 3, for object 8, the performance of the 2D baseline can plummet by \(\mathbf{87}\%\) when the number of templates is reduced from 10 to 2. However, such a degradation rate for VoxDet is \(\mathbf{2}\times\) less. We attribute this capability to the learned 2D-3D mapping, which can effectively incorporate 3D geometry with very few views of the instance.
Figure 5: Detection qualitative results comparison between VoxDet and 2D baselines on the three benchmarks. VoxDet shows better robustness under pose variance (_e.g._ Obj. 5@LM-O first and second columns) and occlusion (_e.g._ Obj. 13@YCB-V second column and Obj. 9@RoboTools).
Figure 6: Visualization of the activation score in the voxels grids. When presented with a query instance in various orientations, the location of the high-activated grids closely corresponds to the actual rotations. The depicted rotation axis serves as a reference for better understanding.
**Top-K Analysis:** Compared to the category-level method [7], VoxDet produces considerably fewer false positives among its Top-10 candidates. As depicted in Fig. 4, we considered Top-\(K=1,5,10,20,30,50,100\) proposals and compared the corresponding AR between VoxDet and OS2D [7]. VoxDet's AR only declines by \(5\sim 10\%\) when \(K\) decreases from 100 to 10, whereas OS2D's AR suffers a drop of up to \(38\%\). This suggests that over \(90\%\) of VoxDet's true positives are found among its Top-10 candidates, whereas this ratio is only around \(60\%\) for OS2D. Hence, VoxDet outperforms category-level methods in classifying the desired instance and differentiating it from the copious background data.
**Ablation Studies:** The results of our ablation studies are presented in Table 3. Initially, we attempted to utilize the 2D relation proposed in [1] for matching. However, this class-level relation proved to be significantly inferior to our proposed instance-level voxel relation. Reconstruction pre-training is crucial for VoxDet's ability to learn to encode the geometry of an instance; without it, the performance can decline by approximately \(10\%\). Additionally, we conducted an ablation on the query rotation measurement module in the QVM, and also tried not supervising the predicted rotation. Both of these alternatives yielded results inferior to our default setting.
### Qualitative Results
**Detection Visualization** The qualitative comparison of detection is depicted in Fig. 5, where we compare VoxDet with the two most robust baselines, \(\text{OLN}_{\text{DINO}}\) and \(\text{OLN}_{\text{Corr}}\). We notice that 2D methods can easily falter if the pose of an instance is not seen in the reference, e.g., 2-nd query image in the 1-st row, while VoxDet still accurately identifies it. Furthermore, 2D matching exhibits less robustness under occlusion, where the instance's appearance could significantly differ, as seen in the 2-nd and 3-rd rows. VoxDet can effectively detect these partially occluded instances thanks to its learned 3D geometry.
**Deep Voxels Visualization** To better validate the geometry awareness of our learned voxel representation, we present the deep visualization in Fig. 6. The gradient of the matching score is backpropagated to the template voxel and we visualize the activation value of each grid. Surprisingly, we discover that as the orientation of the query instance changes, the activated regions within our voxel representations accurately mirror the true rotation. This demonstrates that the voxel representation in VoxDet is aware of the orientation of the instance.
**Reconstruction Visualization** The voxel representation in VoxDet can be decoded to synthesize novel views, even for unseen instances, which is demonstrated in Fig. 7. The voxel, pre-trained on 9500 instances, is capable of approximately reconstructing the geometry of unseen instances.
## 5 Discussion
**Conclusion:** This work introduces, VoxDet, a novel approach to instance detection using multi-view reference images. VoxDet is a pioneering 3D-aware framework that exhibits robustness to occlusions and pose variations. VoxDet's crucial contribution and insight stem from its geometry-aware Template Voxel Aggregation (TVA) module and an exhaustive Query Voxel Matching (QVM) specifically tailored for instances. Owing to the learned instance geometry in TVA and the meticulously designed matching in QVM, VoxDet significantly outperforms various 2D baselines and offers faster inference speed. Beyond methodological contributions, we also introduce the first instance detection training set, OWID, and a challenging RoboTools benchmark for future research.
**Limitations:** Despite its strengths, VoxDet has two potential limitations. Firstly, the model, trained on the synthetic OWID dataset, may exhibit a domain gap when applied to real-world scenarios. Secondly, we assume that the relative rotation matrix for the reference images is known, which may not be straightforward to calculate for texture-less objects against simple backgrounds. However, the TVA module in VoxDet doesn't require an extremely accurate rotation. We present further experiments addressing these issues in the supplementary materials.
**Acknowledgement:** This work was sponsored by SONY Corporation of America under agreement #1012409.
Figure 7: Reconstruct results of VoxDet on unseen instances. The voxel representation in VoxDet can be decoded with a relative rotation and synthesize novel views, which demonstrate the geometry embedded in our learned voxels. |
2305.18113 | General expansion of natural power of linear combination of Bosonic
operators in normal order | In quantum mechanics, bosonic operators are mathematical objects that are
used to represent the creation ($a^\dagger$) and annihilation ($a$) of bosonic
particles. The natural power of a linear combination of bosonic operators
represents an operator $(a^\dagger x+ay)^n$ with $n$ as the exponent and
$x,\,y$ are the variables free from bosonic operators. The normal ordering of
these operators is a mathematical technique that arranges the operators so that
all the creation operators are to the left of the annihilation operators,
reducing the number of terms in the expression. In this paper, we present a
general expansion of the natural power of a linear combination of bosonic
operators in normal order. We show that the expansion can be expressed in terms
of binomial coefficients and the product of the normal-ordered operators using
the direct method and than prove it using the fundamental principle of
mathematical induction. We also derive a formula for the coefficients of the
expansion in terms of the number of bosons and the commutation relation between
the creation and annihilation operators. Our results have important
applications in the study of many-body systems in quantum mechanics, such as in
the calculation of correlation functions and the evaluation of the partition
function. The general expansion presented in this paper provides a powerful
tool for analyzing and understanding the behavior of bosonic systems, and can
be applied to a wide range of physical problems. | Deepak, Arpita Chatterjee | 2023-05-29T14:26:45Z | http://arxiv.org/abs/2305.18113v1 | # General expansion of natural power of linear combination of Bosonic operators in normal order
###### Abstract
In quantum mechanics, bosonic operators are mathematical objects that are used to represent the creation (\(a^{\dagger}\)) and annihilation (\(a\)) of bosonic particles. The natural power of a linear combination of bosonic operators represents an operator \((a^{\dagger}x+ay)^{n}\) with \(n\) as the exponent and \(x,\,y\) as variables free from bosonic operators. The normal ordering of these operators is a mathematical technique that arranges the operators so that all the creation operators are to the left of the annihilation operators, reducing the number of terms in the expression. In this paper, we present a general expansion of the natural power of a linear combination of bosonic operators in normal order. We show that the expansion can be expressed in terms of binomial coefficients and the product of the normal-ordered operators using the direct method and then prove it using the fundamental principle of mathematical induction. We also derive a formula for the coefficients of the expansion in terms of the number of bosons and the commutation relation between the creation and annihilation operators. Our results have important applications in the study of many-body systems in quantum mechanics, such as in the calculation of correlation functions and the evaluation of the partition function. The general expansion presented in this paper provides a powerful tool for analyzing and understanding the behavior of bosonic systems, and can be applied to a wide range of physical problems.
keywords: boson operators; normal ordering; antinormal ordering; Hermite polynomial, binomial coefficients, etc. +
Footnote †: journal: Conference Proceedings
## 1 Introduction
In the quantum theoretic illustration of light [1], the state of a photon field is entirely described by the corresponding density operator, which consists of a mixture of the bosonic operators denoted by photon annihilation
(creation) operators \(a(a^{\dagger})\), [2] having the well-known commutation relation \(aa^{\dagger}-a^{\dagger}a=[a,\ a^{\dagger}]=1\). Various achievable approaches to represent the density operator of the electromagnetic field have been illustrated in [2]. In general, the Fock states as it is a complete set, can be used to expand the density operator as \(\rho=\sum_{s_{1},s_{2}}c_{s_{1},s_{2}}\left|s_{1}\right\rangle\left\langle s_{ 2}\right|\). The coherent states, irrespective of the fact that they form a non-orthogonal set, can be used as a basis to represent the density operator diagonally. The coherent state facilitates a number of probable expressions of the density matrix \(\rho\) in terms of the Wigner function [3], the \(P\)-function [4, 5] and the \(Q\)-function [6]. In each transformation, we need to organize boson operators [7] according to a specific order. For instance, if the density operator is normally (antinormally) ordered, one can obtain the corresponding \(Q\)-function (\(P\)-function) instantly. Adjustment of operators is also performed worldwide for finding out miscellaneous representative states of the optical field and calculating the expected values of various operators in these states, such as normal ordered squeezing operator is needed while constructing squeezed states [8]. In quantum mechanics and quantum field theory, people deals with solutions to quantum problems involving different combinations of bosonic operators [9]. This includes, for example, managing many complicated operators which have various commutation relations with other operators, calculating the expectations of the operators which are functions of \(a\) and \(a^{\dagger}\), etc. In addition, the non-commutativity between \(a\) and \(a^{\dagger}\) causes more complexity in operator handling. Because of the non-commutativity of bosonic operators, simple sequences of first adding and then subtracting, as well as first subtracting and then adding exactly alike particles to any quantum system, produce different results [10]. Sun et al. proposed a cavity-QED-based technique, where two atoms have been sent one after one at the desired levels and detected only if they end at the stages under consideration and thus they proved that \(a\) and \(a^{\dagger}\) are not commuting to each other [11]. Parigi et al. [12] experimentally demonstrated how adding (subtracting) only one photon to (from) a completely classical and fully incoherent thermal light field can affect the system. They have applied creation and annihilation operators in alternate sequences and realized that the resulting states can differ a lot depending on the order in which the two quantum operators have been applied. The same group of people also carried out a single-photon interferometer-based experiment for proving the bosonic commutation relation directly [13]. The quantum algebra behind the commutation relation has a major role in many of the paradoxes, theorems, and applications of quantum physics. In addition, the work is motivated by the fact that, in the recent past, several real-life uses of nonclassical [14] states have been announced. For example, squeezed states functioned as an important character in the phase diffusion problems [15, 16]. It can also be applied for detecting the gravitational waves in LIGO experiments [17, 18], for teleporting different types of quantum states [19], and for continuous-variable quantum cryptography [20]. 
The immediate need for a single-photon source can be met only by an antibunched source of light [21]. Entangled states have proved to be of utmost use in both secure [22] and uncertain [23] quantum communication schemes. The most frequently used nonclassicality witnesses are the \(l\)-th order Mandel function
[24], \((l-1)\)-th order antibunching [25], \((l-1)\)-th order sub-Poissonian statistics [26], \(l\)-th order squeezing [27], the Husimi \(Q\) parameter [28], the Agarwal-Tara criterion [29] and Klyshko's parameter [30]. Since many of these experimentally assessable nonclassicality witnesses can be written in terms of the moments of annihilation and creation operators [31], it is beneficial to find an analytic expression for the most general moment \(\langle a^{\dagger p}a^{q}\rangle\), \(p\), \(q\) being non-negative integers.
A second problem then arises: how to find \(\langle a^{\dagger p}a^{q}\rangle\) for a quantum state which involves the squeezing operator, since the squeezing operator \(S(z)\) acting on the bosonic operators gives a linear combination of bosonic operators, as follows [1]
\[S^{\dagger}(z)a^{\dagger n}S(z)=(a^{\dagger}\cosh(r)-ae^{-2i\phi}\sinh(r))^{n} \tag{1}\]
\[S^{\dagger}(z)a^{n}S(z)=(a\cosh(r)-a^{\dagger}e^{2i\phi}\sinh(r))^{n} \tag{2}\]
\[D^{\dagger}(\alpha)a^{n}D(\alpha)=(a+\alpha)^{n} \tag{3}\]
\[D^{\dagger}(\alpha)a^{\dagger n}D(\alpha)=(a^{\dagger}+\alpha^{*})^{n} \tag{4}\]
Therefore, to apply the displacement (\(D(\alpha)\)) and squeezing (\(S(z)\)) operators multiple times, we have to simplify the expressions in (1)-(4). This type of relation also occurs in many other quantum states. The motivation behind the general expansion of the natural power of a linear combination of bosonic operators in normal order comes from the need to develop a powerful mathematical tool for analyzing and understanding the behavior of bosonic systems in quantum mechanics. Bosonic operators are essential tools for describing the creation and annihilation of bosonic particles, such as photons, phonons, and atomic nuclei. In many-body systems, the number of bosons in different states can be large, making it difficult to analyze the system using the individual operators. The general expansion of the natural power of a linear combination of bosonic operators in normal order provides a way to simplify the calculation of complex systems by reducing the number of terms in the expression. Furthermore, the normal ordering of the operators is an important technique in quantum mechanics that arranges the operators so that all the creation operators are to the left of the annihilation operators, reducing the complexity of the expression. The general expansion in normal order provides a systematic way of calculating the natural power of a linear combination of bosonic operators, and can be used in the calculation of correlation functions and the evaluation of the partition function. Therefore, the motivation behind the general expansion of the natural power of a linear combination of bosonic operators in normal order is to provide a powerful mathematical tool for analyzing and understanding the behavior of bosonic systems, and to simplify the calculation of complex many-body systems in quantum mechanics. In this paper, we therefore find a simplified expression for \((ax+a^{\dagger}y)^{n}\), which can be used in different fields. This relation is used not only in physics but also in pure mathematics, in non-commutative algebra, where one studies expansions of non-commuting numbers or operators.
To the best of our knowledge, the two main approaches for handling operator-ordering related problems are the Lie algebraic method [32, 33] and Louisell's
technique [9] via the differentiation of the coherent state representation [34]. Another unified method for organizing the quantum operators into ordered products (normally ordered, antinormally ordered, Weyl ordered, or symmetrically ordered) is realized via the fascinating IWOP (integration within an ordered product of operators) technique [35; 36]. In this short article, we intend to derive simple mathematical formulae which can transfer different combinations of higher powers of \(a\) and \(a^{\dagger}\) into the desired normally or antinormally ordered form. The basic theoretical tools and methods are shown in Sec. 2. We present our results in Sec. 3, which is followed by a discussion in Sec. 4.
## 2 The basic theoretical terminologies
The basic theoretical terminologies used in the study of Bosonic operators include:
1. **Creation and annihilation operators:** These are the basic mathematical objects that represent the creation and annihilation of bosonic particles in quantum mechanics. They are used to build up the Fock space, which is the space of all possible states of a system of bosonic particles.
2. **Commutation relations:** The commutation relations between the creation and annihilation operators are fundamental in the study of bosonic systems. They determine the algebraic properties of the operators and the behavior of the system under different conditions.
3. **Normal ordering:** Normal ordering is a mathematical technique that arranges the operators so that all the creation operators are to the left of the annihilation operators. This simplifies the calculation of complex many-body systems by reducing the number of terms in the expression.
4. **Wick's theorem:** Wick's theorem is a powerful tool that allows for the calculation of expectation values of products of bosonic operators. It expresses the expectation value as a sum of all possible pairings of the operators in normal order.
5. **Second quantization:** Second quantization is a formalism that allows for the systematic treatment of many-body systems in terms of creation and annihilation operators. It provides a powerful tool for studying the behavior of bosonic systems under different conditions.
6. **Path integrals:** Path integrals are a powerful method for calculating the transition amplitudes between two quantum states. They provide a different perspective on quantum mechanics and can be used to study the behavior of bosonic systems in a variety of contexts.
7. **Principle of Mathematical Induction (PMI):** The Principle of Mathematical Induction(PMI) is a proof technique used to establish a Mathematical statement that holds for all natural numbers. The principle has two parts: **Basis Step:** We first prove that the statement holds for the first natural number, usually 1. **Inductive Step:** We then prove that if the statement holds for some
natural number \(k\), then it also holds for the next natural number, which is \(k+1\).
By proving these two parts, we can conclude that the statement holds for all natural numbers (\(\mathcal{N}\)) greater than or equal to the first natural number. PMI is a powerful tool for proving many important mathematical theorems, such as those in number theory, combinatorics, and algebra.
Overall, the study of bosonic operators and systems in quantum mechanics involves a combination of mathematical tools and methods, including algebraic techniques, normal ordering, Wick's theorem, second quantization, and path integrals. These tools and methods provide a powerful framework for understanding and analyzing the behavior of bosonic systems in a variety of physical contexts.
## 3 Results
We have two relations: \(a^{j}a^{\dagger}=a^{\dagger}a^{j}+ja^{j-1}\) and its conjugate \(aa^{\dagger j}=ja^{\dagger j-1}+a^{\dagger j}a\) which can be easily proved as follows
**Lemma 1**.: \([a,\,a^{\dagger j}]=ja^{\dagger j-1}\)__
Proof.: \[\begin{array}{rcl}aa^{\dagger j}&=&aa^{\dagger}a^{\dagger j-1}\\ &=&(1+a^{\dagger}a)a^{\dagger j-1}\\ &=&a^{\dagger j-1}+a^{\dagger}aa^{\dagger j-1}\\ &=&a^{\dagger j-1}+a^{\dagger}(1+a^{\dagger}a)a^{\dagger j-2}\\ &=&2a^{\dagger j-1}+a^{\dagger 2}aa^{\dagger j-2}\\ &=&\dots\\ &=&ja^{\dagger j-1}+a^{\dagger j}a\,\,\,\mbox{(proceeding similarly $j$ times)}\\ \Rightarrow\,\,aa^{\dagger j}-a^{\dagger j}a&=&ja^{\dagger j-1}\\ \Rightarrow\,\,[a,a^{\dagger j}]&=&ja^{\dagger j-1}\end{array}\] (5)
**Lemma 2**.: \([a^{j},\,a^{\dagger}]=ja^{j-1}\)__
Proof.: To prove this, we compute the conjugate of (5) and obtain
\[\begin{array}{rcl}a^{j}a^{\dagger}&=&ja^{j-1}+a^{\dagger}a^{j}\\ \Rightarrow\,\,a^{j}a^{\dagger}-a^{\dagger}a^{j}&=&ja^{j-1}\\ \Rightarrow\,\,[a^{j},a^{\dagger}]&=&ja^{j-1}\end{array} \tag{6}\]
Using Lemmas 1 and 2, we prove Theorem 1 as follows
**Theorem 1**.: \[(ax+a^{\dagger}y)^{n}=\sum_{m=0}^{\left[\frac{n}{2}\right]}\frac{n!(xy)^{m}}{m!(n-2 m)!2^{m}}\sum_{r=0}^{n-2m}\binom{n-2m}{r}x^{r}y^{n-2m-r}a^{\dagger n-2m-r}a^{r}\] (7)
Proof.: To find the above result we use the direct or hit and trial method as follows
\[(ax+a^{\dagger}y)^{2} =a^{2}x^{2}+a^{\dagger 2}y^{2}+xy(aa^{\dagger}+a^{\dagger}a)\] \[=a^{2}x^{2}+a^{\dagger 2}y^{2}+xy(1+2a^{\dagger}a)\] \[(ax+a^{\dagger}y)^{3} =a^{3}x^{3}+a^{\dagger 3}y^{3}+xy^{2}(aa^{\dagger 2}+a^{\dagger 2 }+2a^{\dagger 2}a)+x^{2}y(a^{\dagger}a^{2}+a+2aa^{\dagger}a)\] \[=a^{3}x^{3}+a^{\dagger 3}y^{3}+xy^{2}(a^{\dagger 2}a+2a^{\dagger }+a^{\dagger}+2a^{\dagger 2}a)\] \[+x^{2}y(a^{\dagger}a^{2}+a+2(1+a^{\dagger}a)a)\] \[=a^{3}x^{3}+a^{\dagger 3}y^{3}+xy^{2}(3a^{\dagger 2}a+3a^{\dagger })+x^{2}y(3a^{\dagger}a^{2}+3a)\] \[(ax+a^{\dagger}y)^{4} =a^{4}x^{4}+a^{\dagger 4}y^{4}+x^{3}y(a^{\dagger}a^{3}+3aa^{ \dagger 2}+3a^{2})\] \[+x^{2}y^{2}(3aa^{\dagger 2}a+3aa^{\dagger}+3a^{\dagger 2}a^{2}+3 a^{\dagger}a)+xy^{3}(aa^{\dagger 3}+3a^{\dagger 3}a+3a^{\dagger 2})\] \[=a^{4}x^{4}+a^{\dagger 4}y^{4}+x^{3}y(a^{\dagger}a^{3}+3(1+a^{ \dagger}a)a^{2}+3a^{2})\] \[+x^{2}y^{2}(3(2a^{\dagger}+a^{\dagger 2}a)a+3(1+a^{\dagger}a)+3a^{ \dagger 2}a^{2}+3a^{\dagger}a)\] \[+xy^{3}(3a^{\dagger 2}+a^{\dagger 3}a+3a^{\dagger 3}a+3a^{\dagger 2})\] \[=a^{4}x^{4}+a^{\dagger 4}y^{4}+x^{3}y(4a^{\dagger}a^{3}+6a^{2})+x^ {2}y^{2}(6a^{\dagger 2}a^{2}+12a^{\dagger}a+3)\] \[+xy^{3}(4a^{\dagger 3}a+6a^{\dagger 2})\] \[(ax+a^{\dagger}y)^{5} =a^{5}x^{5}+a^{\dagger 5}y^{5}+x^{4}y(a^{\dagger}a^{4}+4aa^{ \dagger}a^{3}+6a^{3})\] \[+x^{3}y^{2}(4a^{\dagger 2}a^{3}+6a^{\dagger}a^{2}+6aa^{\dagger 2 }a^{2}+12aa^{\dagger}a+3a)\] \[+x^{2}y^{3}(6a^{\dagger 3}a^{2}+12a^{\dagger 2}a+3a^{\dagger}+4aa^{ \dagger 3}a+6aa^{\dagger 2})+xy^{4}(aa^{\dagger 4}+4a^{\dagger 4}a+6a^{\dagger 3})\] \[=a^{5}x^{5}+a^{\dagger 5}y^{5}+x^{4}y(a^{\dagger}a^{4}+4(1+a^{ \dagger}a)a^{3}+6a^{3})\] \[+x^{3}y^{2}(4a^{\dagger 2}a^{3}+6a^{\dagger}a^{2}+6(2a^{\dagger}+a^{ \dagger 2}a)a^{2}+12(1+a^{\dagger}a)a+3a)\] \[+x^{2}y^{3}(6a^{\dagger 3}a^{2}+12a^{\dagger 2}a+3a^{\dagger}+4(3a^{ \dagger 2}+a^{\dagger 3}a)a+6(2a^{\dagger}+a^{\dagger 2}a))\] \[+xy^{4}(4a^{\dagger 3}+a^{\dagger 4}a+4a^{\dagger 4}a+6a^{\dagger 3})\] \[=a^{5}x^{5}+a^{\dagger 5}y^{5}+x^{4}y(5a^{\dagger}a^{4}+10a^{3})+x^ {3}y^{2}(10a^{\dagger 2}a^{3}+30a^{\dagger}a^{2}+15a)\] \[+x^{2}y^{3}(10a^{\dagger 3}a^{2}+30a^{\dagger 2}a+15a^{\dagger})+xy ^{4}(10a^{\dagger 3}+5a^{\dagger 4}a)\] \[(ax+a^{\dagger}y)^{6} =a^{6}x^{6}+a^{\dagger 6}y^{6}+x^{5}y(a^{\dagger}a^{5}+5aa^{ \dagger}a^{4}+10a^{4})\] \[+x^{4}y^{2}(5a^{\dagger 2}a^{4}+10a^{\dagger}a^{3}+10aa^{\dagger 2 }a^{3}+30aa^{\dagger}a^{2}+15a^{2})\] \[+x^{3}y^{3}(10a^{\dagger 3}a^{3}+30a^{\dagger 2}a^{2}+15a^{\dagger}a+10aa ^{\dagger 3}a^{2}+30aa^{\dagger 2}a+15aa^{\dagger})\] \[+x^{2}y^{4}(10aa^{\dagger 3}+5aa^{\dagger 4}a+10a^{\dagger 4}a^{2}+30 a^{\dagger 3}a+15a^{\dagger 2})\] \[+xy^{5}(aa^{\dagger 5}+10a^{\dagger 4}+5a^{\dagger 5}a)\]
\[=a^{6}x^{6}+a^{\dagger 6}y^{6}+x^{5}y(a^{\dagger}a^{5}+5(1+a^{ \dagger}a)a^{4}+10a^{4})\] \[+x^{4}y^{2}(5a^{\dagger 2}a^{4}+10a^{\dagger}a^{3}+10(2a^{\dagger}+a^{ \dagger 2}a)a^{3}+30(1+a^{\dagger}a)a^{2}+15a^{2})\] \[+x^{3}y^{3}(10a^{\dagger 3}a^{3}+30a^{\dagger 2}a^{2}+15a^{\dagger}a\] \[+10(3a^{\dagger 2}+a^{\dagger 3}a)a^{2}+30(2a^{\dagger}+a^{ \dagger 2}a)a+15(1+a^{\dagger}a))\] \[+x^{2}y^{4}(10(3a^{\dagger 2}+a^{\dagger 3}a)+5(4a^{\dagger 3}+a^{ \dagger 4}a)a+10a^{\dagger 4}a^{2}+30a^{\dagger 3}a+15a^{\dagger 2})\] \[+xy^{5}(5a^{\dagger 4}+a^{\dagger 5}a+10a^{\dagger 4}+5a^{ \dagger 5}a)\] \[=a^{6}x^{6}+a^{\dagger 6}y^{6}+x^{5}y(6a^{\dagger}a^{5}+15a^{4})+x^ {4}y^{2}(15a^{\dagger 2}a^{4}+60a^{\dagger}a^{3}+45a^{2})\] \[+x^{3}y^{3}(20a^{\dagger 3}a^{3}+90a^{\dagger 2}a^{2}+90a^{ \dagger}a+15)\] \[+x^{2}y^{4}(15a^{\dagger 4}a^{2}+60a^{\dagger 3}a+45a^{\dagger 2}) +xy^{5}(15a^{\dagger 4}+6a^{\dagger 5}a)\] \[(ax+a^{\dagger}y)^{7} =a^{7}x^{7}+a^{\dagger 7}y^{7}+x^{6}y(a^{\dagger}a^{6}+6aa^{ \dagger}a^{5}+15a^{5})\] \[+x^{5}y^{2}(6a^{\dagger 2}a^{5}+15a^{\dagger}a^{4}+15aa^{\dagger 2 }a^{4}+60aa^{\dagger}a^{3}+45a^{3})\] \[+x^{4}y^{3}(15a^{\dagger 3}a^{4}+60a^{\dagger 2}a^{3}+45a^{ \dagger}a^{2}\] \[+20aa^{\dagger 3}a^{3}+90aa^{\dagger 2}a^{2}+90aa^{\dagger}a+15a)\] \[+x^{3}y^{4}(20a^{\dagger 4}a^{3}+90a^{\dagger 3}a^{2}+90a^{ \dagger 2}a+15a^{\dagger}\] \[+15aa^{\dagger 4}a^{2}+60aa^{\dagger 3}a+45aa^{\dagger 2})\] \[+x^{2}y^{5}(15a^{\dagger 5}a^{2}+60a^{\dagger 4}a+45a^{\dagger 3 }+15aa^{\dagger 4}+6aa^{\dagger 5}a)\] \[+xy^{6}(aa^{\dagger 6}+15a^{\dagger 5}+6a^{\dagger 6}a)\] \[=a^{7}x^{7}+a^{\dagger 7}y^{7}+x^{6}y(a^{\dagger}a^{6}+6(1+a^{ \dagger}a)a^{5}+15a^{5})\] \[+x^{5}y^{2}(6a^{\dagger 2}a^{5}+15a^{\dagger}a^{4}+15(2a^{\dagger }+a^{\dagger 2}a)a^{4}+60(1+a^{\dagger}a)a^{3}+45a^{3})\] \[+x^{4}y^{3}(15a^{\dagger 3}a^{4}+60a^{\dagger 2}a^{3}+45a^{ \dagger}a^{2}\] \[+20(3a^{\dagger 2}+a^{\dagger 3}a)a^{3}+90(2a^{\dagger}+a^{ \dagger 2}a)a^{2}+90(1+a^{\dagger}a)a+15a)\] \[+x^{3}y^{4}(20a^{\dagger 4}a^{3}+90a^{\dagger 3}a^{2}+90a^{ \dagger 2}a+15a^{\dagger}\] \[+15(4a^{\dagger 3}+a^{\dagger 4}a)a^{2}+60(3a^{\dagger 2}+a^{ \dagger 3}a)a+45(2a^{\dagger}+a^{\dagger 2}a)\] \[+x^{2}y^{5}(15a^{\dagger 5}a^{2}+60a^{\dagger 4}a+45a^{\dagger 3 }+15(4a^{\dagger 3}+a^{\dagger 4}a)+6(5a^{\dagger 4}+a^{\dagger 5}a)a)\] \[+xy^{6}((6a^{\dagger 5}+a^{\dagger 6}a)+15a^{\dagger 5}+6a^{ \dagger 6}a)\] \[=a^{7}x^{7}+a^{\dagger 7}y^{7}+x^{6}y(7a^{\dagger}a^{6}+21a^{5})+x^ {5}y^{2}(21a^{\dagger 2}a^{5}+105a^{\dagger}a^{4}+105a^{3})\] \[+x^{4}y^{3}(35a^{\dagger 3}a^{4}+210a^{\dagger 2}a^{3}+315a^{ \dagger}a^{2}+105a)\] \[+x^{3}y^{4}(35a^{\dagger 4}a^{3}+210a^{\dagger 3}a^{2}+315a^{ \dagger 2}a+105a^{\dagger})\] \[+x^{2}y^{5}(21a^{\dagger 5}a^{2}+105a^{\dagger 4}a+105a^{\dagger 3})+ xy^{6}(7a^{\dagger 6}a+21a^{\dagger 5})\]
Thus the general term is given by
\[(ax+a^{\dagger}y)^{n}=\sum_{m=0}^{\left[\frac{n}{2}\right]}\frac{n!(xy)^{m}}{m! (n-2m)!2^{m}}\sum_{r=0}^{n-2m}\binom{n-2m}{r}x^{r}y^{n-2m-r}a^{\dagger n-2m-r} a^{r}. \tag{8}\]
Now we prove equation (8) using the principle of mathematical induction; let the statement be denoted by \(P(n)\).
Case - 1: Clearly \(P(n)\) is true for \(n=1\).
Case - 2: Let \(P(n)\) be true for \(n=k\) so
\[(ax+a^{\dagger}y)^{k}=\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m! (k-2m)!2^{m}}\sum_{r=0}^{k-2m}\binom{k-2m}{r}x^{r}y^{k-2m-r}a^{\dagger k-2m-r} a^{r}.\]
Case - 3: Now we prove \(P(n)\) for \(n=k+1\) as follows
\[(ax+a^{\dagger}y)^{k+1} =(ax+a^{\dagger}y)(ax+a^{\dagger}y)^{k}\] \[=(ax+a^{\dagger}y)\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy )^{m}}{m!(k-2m)!2^{m}}\sum_{r=0}^{k-2m}\binom{k-2m}{r}x^{r}y^{k-2m-r}a^{\dagger k -2m-r}a^{r}\] \[=\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}x}{m!(k-2m )!2^{m}}\sum_{r=0}^{k-2m}\binom{k-2m}{r}x^{r}y^{k-2m-r}aa^{\dagger k-2m-r}a^{r}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}y}{m!(k-2m )!2^{m}}\sum_{r=0}^{k-2m}\binom{k-2m}{r}x^{r}y^{k-2m-r}a^{\dagger k-2m-r+1}a^{r}\] \[=\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)!2 ^{m}}\sum_{r=0}^{k-2m}\binom{k-2m}{r}x^{r+1}y^{k-2m-r}\] \[\times((k-2m-r)a^{\dagger k-2m-r-1}+a^{\dagger k-2m-r}a)a^{r}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)!2 ^{m}}\sum_{r=0}^{k-2m}\binom{k-2m}{r}x^{r}y^{k-2m-r+1}a^{\dagger k-2m-r+1}a^{r}\] \[=\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)!2 ^{m}}\sum_{r=0}^{k-2m-1}\binom{k-2m}{r}x^{r+1}y^{k-2m-r}(k-2m-r)a^{\dagger k-2 m-r-1}a^{r}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)!2 ^{m}}\sum_{r=0}^{k-2m}\binom{k-2m}{r}x^{r+1}y^{k-2m-r}a^{\dagger k-2m-r}a^{r+1}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)!2 ^{m}}\sum_{r=0}^{k-2m}\binom{k-2m}{r}x^{r}y^{k-2m-r+1}a^{\dagger k-2m-r+1}a^{r}\] (as at \(r=k-2m\), the first summation becomes zero) \[=\sum_{m=1}^{\left[\frac{k}{2}\right]+1}\frac{k!(xy)^{m-1}}{(m-1)!(k-2m+2)!2 ^{m-1}}\sum_{r=0}^{k-2m+1}\binom{k-2m+2}{r}x^{r+1}y^{k-2m+2-r}\]
\[\times(k-2m+2-r)a^{\dagger k-2m+2-r-1}a^{r}\] \[\times(k-2m+2-r)a^{\dagger k-2m+2-r-1}a^{r}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)!2^ {m}}\sum_{r=1}^{k-2m+1}\binom{k-2m}{r-1}x^{r}y^{k-2m-r+1}a^{\dagger k-2m-r+1}a^ {r}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)!2^ {m}}\sum_{r=0}^{k-2m}\binom{k-2m}{r}x^{r}y^{k-2m-r+1}a^{\dagger k-2m-r+1}a^{r}\]
replacing \(m\) by \(m+1\) in first and \(r\) by \(r+1\) in second line of equation
\[=\sum_{m=1}^{\left[\frac{k}{2}\right]+1}\frac{k!(xy)^{m-1}}{(m-1)! (k-2m+2)!2^{m-1}}\sum_{r=0}^{k-2m+1}\binom{k-2m+2}{r}x^{r+1}y^{k-2m+2-r}\] \[\times(k-2m+2-r)a^{\dagger k-2m+1-r}a^{r}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)! 2^{m}}\sum_{r=1}^{k-2m}\left\{\binom{k-2m}{r-1}+\binom{k-2m}{r}\right\}x^{r}y^ {k-2m-r+1}a^{\dagger k-2m-r+1}a^{r}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)! 2^{m}}\left\{y^{k+1-2m}a^{\dagger k+1-2m}+x^{k+1-2m}a^{k+1-2m}\right\}\] \[=\sum_{m=1}^{\left[\frac{k}{2}\right]+1}\frac{k!(xy)^{m-1}}{(m-1)! (k-2m+2)!2^{m-1}}\sum_{r=0}^{k-2m+1}\binom{k-2m+2}{r}x^{r+1}y^{k-2m+2-r}\] \[\times(k-2m+2-r)a^{\dagger k-2m+1-r}a^{r}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)! 2^{m}}\sum_{r=1}^{k-2m}\binom{k-2m+1}{r}x^{r}y^{k-2m-r+1}a^{\dagger k-2m-r+1}a^ {r}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)! 2^{m}}\left\{y^{k+1-2m}a^{\dagger k+1-2m}+x^{k+1-2m}a^{k+1-2m}\right\}\] \[=\sum_{m=1}^{\left[\frac{k}{2}\right]+1}\frac{k!(xy)^{m-1}}{(m-1)! (k-2m+2)!2^{m-1}}\sum_{r=0}^{k-2m+1}\binom{k-2m+2}{r}x^{r+1}y^{k-2m+2-r}\] \[\times(k-2m+2-r)a^{\dagger k-2m+1-r}a^{r}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)! 2^{m}}\sum_{r=0}^{k+1-2m}\binom{k-2m+1}{r}x^{r}y^{k-2m-r+1}a^{\dagger k-2m-r+1 }a^{r}\] \[=\sum_{m=1}^{\left[\frac{k}{2}\right]+1}\frac{k!(xy)^{m}}{(m-1)! (k-2m+2)!2^{m-1}}\sum_{r=0}^{k-2m+1}\binom{k-2m+2}{r}x^{r}y^{k-2m+1-r}\] \[\times(k-2m+2-r)a^{\dagger k-2m+1-r}a^{r}\]
\[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)!2^{m}} \sum_{r=0}^{k+1-2m}\binom{k-2m+1}{r}x^{r}y^{k-2m-r+1}a^{\dagger k-2m-r+1}a^{r}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)!2^ {m}}\sum_{r=0}^{k+1-2m}\binom{k-2m+1}{r}x^{r}y^{k-2m-r+1}a^{\dagger k-2m-r+1}a ^{r}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)!2 ^{m}}\sum_{r=0}^{k+1-2m}\binom{k-2m+1}{r}x^{r}y^{k-2m-r+1}a^{\dagger k-2m-r+1}a ^{r}\] \[=\sum_{m=1}^{\left[\frac{k}{2}\right]+1}\frac{k!(xy)^{m}}{(m-1)!( k-2m+1)!2^{m-1}}\sum_{r=0}^{k-2m+1}\binom{k-2m+1}{r}x^{r}y^{k-2m+1-r}\] \[\times a^{\dagger k-2m+1-r}a^{r}\] \[+\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{m!(k-2m)! 2^{m}}\sum_{r=0}^{k+1-2m}\binom{k-2m+1}{r}x^{r}y^{k-2m-r+1}a^{\dagger k-2m-r+1 }a^{r}\] \[=\sum_{m=1}^{\left[\frac{k}{2}\right]}\frac{k!(xy)^{m}}{(m)!(k-2m+ 1)!2^{m}}(2m+k-2m+1)\sum_{r=0}^{k-2m+1}\binom{k-2m+1}{r}\] \[\times x^{r}y^{k-2m+1-r}a^{\dagger k-2m+1-r}a^{r}+\sum_{r=0}^{k+1 }\binom{k+1}{r}x^{r}y^{k-r+1}a^{\dagger k-r+1}a^{r}\] \[+\left\{\begin{array}{ll}\frac{(2p)!(xy)^{p+1}}{p!(-1)!2^{p}} \sum_{r=0}^{-1}\binom{-1}{r}x^{r}y^{-1-r}a^{\dagger-1-r}a^{r}&\text{ for even $k=2p$}\\ \frac{(2p+1)!(xy)^{p+1}}{p!2^{p}}\sum_{r=0}^{0}\binom{0}{r}x^{r}y^{-r}a^{ \dagger-r}a^{r}&\text{ for odd $k=2p+1$}\end{array}\right.\] \[=\sum_{m=1}^{\left[\frac{k}{2}\right]}\frac{(k+1)!(xy)^{m}}{(m)!(k- 2m+1)!2^{m}}\sum_{r=0}^{k-2m+1}\binom{k-2m+1}{r}\] \[\times x^{r}y^{k-2m+1-r}a^{\dagger k-2m+1-r}a^{r}+\sum_{r=0}^{k+1 }\binom{k+1}{r}x^{r}y^{k-r+1}a^{\dagger k-r+1}a^{r}\] \[+\left\{\begin{array}{ll}0&\text{ for even $k=2p$}\\ \frac{(2p+2)!(xy)^{p+1}}{(p+1)!2^{p+1}}&\text{ for odd $k=2p+1$}\end{array}\right.\] \[=\sum_{m=0}^{\left[\frac{k+1}{2}\right]}\frac{(k+1)!(xy)^{m}}{(m)! (k+1-2m)!2^{m}}\sum_{r=0}^{k+1-2m}\binom{k+1-2m}{r}x^{r}y^{k+1-2m-r}a^{\dagger k +1-2m-r}a^{r}\]
Thus \(P(n)\) is true for \(n=k+1\) if it is true for \(n=k\), and hence by PMI equation (8) is true for all \(n\in\mathcal{N}\).
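As a numerical sanity check of (8) — a sketch that complements, rather than replaces, the proof — one can represent \(a\) and \(a^{\dagger}\) as truncated harmonic-oscillator matrices and compare both sides on the low-lying block of the Fock space, which truncation effects cannot reach for small \(n\); real values of \(x,y\) are used only for simplicity.

```python
import numpy as np
from math import factorial, comb

def ladder(dim):
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # a|n> = sqrt(n)|n-1> in a truncated Fock basis
    return a, a.T                                   # (annihilation, creation)

def lhs(x, y, n, dim):
    a, ad = ladder(dim)
    return np.linalg.matrix_power(a * x + ad * y, n)

def rhs(x, y, n, dim):                              # normal-ordered expansion, equation (8)
    a, ad = ladder(dim)
    out = np.zeros((dim, dim))
    for m in range(n // 2 + 1):
        pref = factorial(n) * (x * y) ** m / (factorial(m) * factorial(n - 2 * m) * 2 ** m)
        for r in range(n - 2 * m + 1):
            coeff = comb(n - 2 * m, r) * x ** r * y ** (n - 2 * m - r)
            out += pref * coeff * (np.linalg.matrix_power(ad, n - 2 * m - r)
                                   @ np.linalg.matrix_power(a, r))
    return out

x, y, n, dim, keep = 0.7, 1.3, 5, 40, 20            # 'keep' marks the block unaffected by truncation
diff = np.abs(lhs(x, y, n, dim) - rhs(x, y, n, dim))[:keep, :keep].max()
print(diff)                                         # machine-precision agreement is expected
```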
**Lemma 3.**
\[aa^{\dagger n}\left|0\right\rangle=na^{\dagger n-1}\left|0\right\rangle \tag{9}\]
_Proof._ We prove this lemma using PMI; let the statement be denoted by \(P(n)\).
Case - 1 First we prove \(P(n)\) for \(n=1\) as follows,
LHS \(=aa^{\dagger}\left|0\right\rangle=1\left|0\right\rangle=\) RHS
Case - 2 Let the result be true for \(n=k\), so \(aa^{\dagger k}\left|0\right\rangle=ka^{\dagger k-1}\left|0\right\rangle\)
Case - 3 Now we prove \(P(n)\) is true for \(n=k+1\), as follows
\[aa^{\dagger k+1}\left|0\right\rangle =aa^{\dagger}a^{\dagger k}\left|0\right\rangle=(1+a^{\dagger}a)a^ {\dagger k}\left|0\right\rangle\] \[=a^{\dagger k}\left|0\right\rangle+a^{\dagger}aa^{\dagger k} \left|0\right\rangle=a^{\dagger k}\left|0\right\rangle+a^{\dagger}ka^{\dagger k -1}\left|0\right\rangle\] \[=a^{\dagger k}\left|0\right\rangle+ka^{\dagger k}\left|0\right\rangle =(k+1)a^{\dagger k}\left|0\right\rangle\]
hence \(P(n)\) is true for \(n=k+1\) if it is true for \(n=k\). Thus by PMI, \(P(n)\) (equation (9)) is true \(\forall n\in\mathcal{N}\). Using equation (9), we can find the following relation
\[aa^{\dagger p}\left|n\right\rangle =aa^{\dagger p}a^{\dagger n}\frac{1}{\sqrt{n!}}\left|0\right\rangle =\frac{1}{\sqrt{n!}}aa^{\dagger p+n}\left|0\right\rangle\] \[=\frac{1}{\sqrt{n!}}(n+p)a^{\dagger p+n-1}\left|0\right\rangle \text{using the equation (\ref{eq:PMI})}\] \[=(n+p)a^{\dagger p-1}\left|n\right\rangle\]
We use the above lemma in the following theorem.
**Theorem 2.**
\[(ax+a^{\dagger}y)^{n}\left|0\right\rangle =n!\sum_{m=0}^{\left[\frac{n}{2}\right]}\frac{a^{\dagger n-2m}}{ m!(n-2m)!2^{m}}x^{m}y^{n-m}\left|0\right\rangle \tag{10}\] \[=n!y^{n}\sum_{m=0}^{\left[\frac{n}{2}\right]}\frac{a^{\dagger n- 2m}}{m!(n-2m)!2^{m}}\left(\frac{x}{y}\right)^{m}\left|0\right\rangle\] \[=n!y^{n}\sum_{m=0}^{\left[\frac{n}{2}\right]}\frac{(-1)^{m}a^{ \dagger n-2m}}{m!(n-2m)!2^{m}}\left(\sqrt{-\frac{y}{x}}\right)^{-2m}\left|0\right\rangle\] \[=n!y^{n}\left(\sqrt{-\frac{y}{x}}\right)^{-n}\sum_{m=0}^{\left[ \frac{n}{2}\right]}\frac{(-1)^{m}a^{\dagger n-2m}}{m!(n-2m)!2^{m}}\left(\sqrt{ -\frac{y}{x}}\right)^{n-2m}\left|0\right\rangle\] \[=n!y^{n}\left(\sqrt{-\frac{x}{y}}\right)^{n}\sum_{m=0}^{\left[ \frac{n}{2}\right]}\frac{(-1)^{m}}{m!(n-2m)!2^{m}}\left(\sqrt{-\frac{y}{x}}a^{ \dagger}\right)^{n-2m}\left|0\right\rangle\] \[=\left(\sqrt{-xy}\right)^{n}H_{e_{n}}\left(\sqrt{-\frac{y}{x}}a^{ \dagger}\right)\left|0\right\rangle \tag{11}\]
_Proof._ To find the above result, we first use the direct (hit-and-trial) method as follows
\[(ax+a^{\dagger}y)\left|0\right\rangle =ya^{\dagger}\left|0\right\rangle\] \[(ax+a^{\dagger}y)^{2}\left|0\right\rangle =(ax+a^{\dagger}y)(ax+a^{\dagger}y)\left|0\right\rangle=(ax+a^{ \dagger}y)ya^{\dagger}\left|0\right\rangle\] \[=y(xaa^{\dagger}+ya^{\dagger 2})\left|0\right\rangle=y(x+ya^{ \dagger 2})\left|0\right\rangle\] \[(ax+a^{\dagger}y)^{3}\left|0\right\rangle =(ax+a^{\dagger}y)(ax+a^{\dagger}y)^{2}\left|0\right\rangle=(ax+a^ {\dagger}y)y(x+ya^{\dagger 2})\left|0\right\rangle\] \[=y(x^{2}a+xya^{\dagger}+xyaa^{\dagger 2}+y^{2}a^{\dagger 3}) \left|0\right\rangle\] \[=y(xya^{\dagger}+2xya^{\dagger}+y^{2}a^{\dagger 3})\left|0\right\rangle =y^{2}(3x+ya^{\dagger 2})a^{\dagger}\left|0\right\rangle\] \[(ax+a^{\dagger}y)^{4}\left|0\right\rangle =(ax+a^{\dagger}y)(ax+a^{\dagger}y)^{3}\left|0\right\rangle=(ax+a^ {\dagger}y)y^{2}(3x+ya^{\dagger 2})a^{\dagger}\left|0\right\rangle\] \[=y^{2}(3x^{2}a+3xya^{\dagger}+xyaa^{\dagger 2}+y^{2}a^{\dagger 3})a^{ \dagger}\left|0\right\rangle\] \[=y^{2}(3x^{2}+3xya^{\dagger 2}+3xya^{\dagger 2}+y^{2}a^{\dagger 4}) \left|0\right\rangle=y^{2}(3x^{2}+6xya^{\dagger 2}+y^{2}a^{\dagger 4})\left|0\right\rangle\]
Now, generalizing the above results, we get
\[(ax+a^{\dagger}y)^{n}\left|0\right\rangle=n!\sum_{m=0}^{\left[\frac{n}{2} \right]}\frac{a^{\dagger n-2m}}{m!(n-2m)!2^{m}}x^{m}y^{n-m}\left|0\right\rangle \tag{12}\]
Now we prove the above result by PMI, so let the above statement be denoted by \(P(n)\).
Case - 1 \(P(n)\) is true for \(n=1\) as LHS \(=ya^{\dagger}\left|0\right\rangle=\) RHS
Case - 2 Let \(P(n)\) be true for \(n=k\), so
\[(ax+a^{\dagger}y)^{k}\left|0\right\rangle=k!\sum_{m=0}^{\left[\frac{k}{2} \right]}\frac{a^{\dagger k-2m}}{m!(k-2m)!2^{m}}x^{m}y^{k-m}\left|0\right\rangle\]
Case - 3 Now we prove the result for \(n=k+1\), assuming it is true for \(n=k\), as follows
\[\mathrm{P}(k+1) =(ax+a^{\dagger}y)^{k+1}\left|0\right\rangle=(ax+a^{\dagger}y)(ax +a^{\dagger}y)^{k}\left|0\right\rangle\] \[=(ax+a^{\dagger}y)k!\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{a^ {\dagger k-2m}}{m!(k-2m)!2^{m}}x^{m}y^{k-m}\left|0\right\rangle\] \[=k!\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{x^{m}y^{k-m}}{m!(k- 2m)!2^{m}}(ax+a^{\dagger}y)a^{\dagger k-2m}\left|0\right\rangle\] \[=k!\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{x^{m}y^{k-m}}{m!(k- 2m)!2^{m}}(x(k-2m)a^{\dagger k-2m-1}+ya^{\dagger k-2m+1})\left|0\right\rangle\] \[=\left(k!\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{x^{m+1}y^{k- m}}{m!(k-2m-1)!2^{m}}a^{\dagger k-2m-1}\right.\]
\[+\] \[+\] \[+ \left.k!\sum_{m=0}^{\left[\frac{k}{2}\right]+1}\frac{x^{m}y^{k-m+1}}{ m!(k-2m)!2^{m}}a^{\left\{k-2m+1\right\}}\right)\left|0\right\rangle\] \[= \left(k!\sum_{m=1}^{\left[\frac{k}{2}\right]+1}\frac{x^{m}y^{k-m +1}}{(m-1)!(k-2m+1)!2^{m-1}}a^{\left\{k-2m+1\right\}}\right.\text{ replacing }m\text{ by }m-1\] \[+ \left.k!\sum_{m=0}^{\left[\frac{k}{2}\right]}\frac{x^{m}y^{k-m+1} }{m!(k-2m)!2^{m}}a^{\left\{k-2m+1\right\}}\right)\left|0\right\rangle\] \[= \left(y^{k+1}a^{\left\{k+1\right\}}+k!\sum_{m=1}^{\left[\frac{k} {2}\right]}\frac{x^{m}y^{k-m+1}}{m!(k-2m+1)!2^{m}}(2m+k-2m+1)a^{\left\{k-2m+1 \right\}}\right.\] \[+ \left.\frac{k!x!\left[\frac{k}{2}\right]+1}{\left[\frac{k}{2} \right]!(k-2\left(\left[\frac{k}{2}\right]+1\right)+1)!2^{\left[\frac{k}{2} \right]}}a^{\left\{k-2\left(\left[\frac{k}{2}\right]+1\right)+1\right\}} \right)\left|0\right\rangle\] \[= \left\{\begin{array}{l}\left(y^{2r+1}a^{\left\{2r+1\right\}}+(2 r+1)!\sum_{m=1}^{r}\frac{x^{m}y^{2r-m+1}}{m!(2r-2m+1)!2^{m}}a^{\left\{2r-2m+1 \right\}}\right.\\ \left.\text{ since }1/(-1)!=0\text{
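As a quick sanity check of equation (12), the identity can be verified numerically by representing \(a\) and \(a^{\dagger}\) as matrices on a truncated Fock space. The sketch below is not part of the original derivation; the truncation dimension \(N\), the test values of \(x\), \(y\), \(n\) and all function names are our own choices.

```python
import numpy as np
from math import factorial

N = 40                                    # truncated Fock-space dimension (assumption: large enough for n)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator: a|n> = sqrt(n)|n-1>
ad = a.T                                  # creation operator a† (real matrix, so the transpose suffices)
vac = np.zeros(N); vac[0] = 1.0           # vacuum state |0>

def mpow(M, p):
    return np.linalg.matrix_power(M, p)

x, y, n = 0.7, 1.3, 6                     # arbitrary test values
lhs = mpow(x * a + y * ad, n) @ vac       # (ax + a†y)^n |0>

# right-hand side of Eq. (12): n! sum_m a†^{n-2m} x^m y^{n-m} / (m!(n-2m)! 2^m) |0>
rhs = sum(factorial(n) * x**m * y**(n - m)
          / (factorial(m) * factorial(n - 2 * m) * 2**m) * (mpow(ad, n - 2 * m) @ vac)
          for m in range(n // 2 + 1))

print(np.allclose(lhs, rhs))              # expected: True
```

Since \((ax+a^{\dagger}y)^{n}\left|0\right\rangle\) only involves Fock levels up to \(n\), the truncation has no effect as long as \(n\) is well below \(N\).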
**Lemma 4**.: \(a^{j}a^{\dagger k}=\sum_{s=0}^{\min(j,k)}\frac{j!k!}{s!(j-s)!(k-s)!}a^{\dagger k-s}a^{j-s}\)

_Proof._ We have
\[\begin{array}{rcl}aa^{\dagger j}a^{j}a^{\dagger}&=&aa^{\dagger j}(ja^{j-1}+a^{\dagger}a^{j})\;\;\mbox{(using (\ref{eq:2}))}\\ &=&jaa^{\dagger j}a^{j-1}+aa^{\dagger j+1}a^{j}\\ &=&j(ja^{\dagger j-1}+a^{\dagger j}a)a^{j-1}\\ &+&\{(j+1)a^{\dagger j}+a^{\dagger j+1}a\}a^{j}\;\;\mbox{(using (\ref{eq:2}))}\\ &=&j^{2}a^{\dagger j-1}a^{j-1}+(2j+1)a^{\dagger j}a^{j}+a^{\dagger j+1}a^{j+1}\end{array} \tag{13}\]
To prove this lemma, we consider the following cases:
Case 1: If \(j<k\), we have
\[\begin{array}{rcl}aa^{\dagger k}&=&ka^{\dagger k-1}+a^{\dagger k}a\\ &=&\sum_{s=0}^{1}\frac{1!k!}{s!(k-s)!(1-s)!}a^{\dagger k-s}a^{1-s}\\ \Rightarrow\ a^{2}a^{\dagger k}&=&kaa^{\dagger k-1}+aa^{\dagger k}a\\ &=&k\{(k-1)a^{\dagger k-2}+a^{\dagger k-1}a\}+(ka^{\dagger k-1}+a^{\dagger k}a)a\\ &=&k(k-1)a^{\dagger k-2}+2ka^{\dagger k-1}a+a^{\dagger k}a^{2}\\ &=&\sum_{s=0}^{2}\frac{2!k!}{s!(k-s)!(2-s)!}a^{\dagger k-s}a^{2-s}\\ \Rightarrow\ a^{3}a^{\dagger k}&=&a\{k(k-1)a^{\dagger k-2}+2ka^{\dagger k-1}a+a^{\dagger k}a^{2}\}\\ &=&k(k-1)\{(k-2)a^{\dagger k-3}+a^{\dagger k-2}a\}\\ &+&2k\{(k-1)a^{\dagger k-2}+a^{\dagger k-1}a\}a\\ &+&(ka^{\dagger k-1}+a^{\dagger k}a)a^{2}\\ &=&k(k-1)(k-2)a^{\dagger k-3}+3k(k-1)a^{\dagger k-2}a\\ &+&3ka^{\dagger k-1}a^{2}+a^{\dagger k}a^{3}\\ &=&\sum_{s=0}^{3}\frac{3!k!}{s!(k-s)!(3-s)!}a^{\dagger k-s}a^{3-s}\\ &=&\ldots\;\;\mbox{(proceeding similarly $j$ times)}\\ \Rightarrow\ a^{j}a^{\dagger k}&=&\sum_{s=0}^{j}\frac{j!k!}{s!(j-s)!(k-s)!}a^{\dagger k-s}a^{j-s}\end{array} \tag{14}\]
Case 2: If \(j>k\), then proceeding as above and using (6), we can find
\[a^{j}a^{\dagger k}=\sum_{s=0}^{k}\frac{j!k!}{s!(j-s)!(k-s)!}a^{\dagger k-s}a^{ j-s} \tag{15}\]
Case 3: If \(j=k\), then substituting \(j=k\) in either of (14) or (15), we get
\[a^{k}a^{\dagger k}=\sum_{s=0}^{k}\frac{k!^{2}}{s!(k-s)!^{2}}a^{\dagger k-s}a^{k-s} \tag{16}\]
which along with (14) and (15) proves that \(a^{j}a^{\dagger k}=\sum_{s=0}^{\min(j,k)}\frac{j!k!}{s!(j-s)!(k-s)!}a^{\dagger k -s}a^{j-s}\).
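The normal-ordering identity of Lemma 4 can also be checked numerically on a truncated Fock space. In the hedged sketch below (our code, not the paper's), only the upper-left block of the matrices is compared so that the finite truncation does not bias the test; the dimension and the indices are arbitrary choices.

```python
import numpy as np
from math import factorial

N = 40                                    # truncated Fock-space dimension (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # a|n> = sqrt(n)|n-1>
ad = a.T                                  # a†

def mpow(M, p):
    return np.linalg.matrix_power(M, p)

j, k = 3, 5                               # arbitrary test values
lhs = mpow(a, j) @ mpow(ad, k)
rhs = sum(factorial(j) * factorial(k)
          / (factorial(s) * factorial(j - s) * factorial(k - s))
          * (mpow(ad, k - s) @ mpow(a, j - s))
          for s in range(min(j, k) + 1))

# compare only on low Fock states, where the finite truncation plays no role
m = 20
print(np.allclose(lhs[:m, :m], rhs[:m, :m]))  # expected: True
```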
**Lemma 5**.: \(a^{\dagger j}a^{k}=\sum_{s=0}^{min(j,k)}\frac{j!k!}{s!(j-s)!(k-s)!}(-1)^{s}a^{k -s}a^{\dagger j-s}\)__
_Proof._ Here we consider the following cases:
* If \(j>k\), the relation is true for \(k=1\) as \(a^{\dagger j}a=aa^{\dagger j}-ja^{\dagger j-1}\). Let us assume the result is true for \(k=l\). Then \(a^{\dagger j}a^{l}=\sum_{s=0}^{\min(j,l)}\frac{j!l!}{s!(j-s)!(l-s)!}(-1)^{s}a^{l-s}a^{\dagger j-s}\). Now \[\begin{array}{l}a^{\dagger j}a^{l+1}\\ =a^{\dagger j}a^{l}a\\ =\sum_{s=0}^{\min(j,l)}\frac{j!l!}{s!(j-s)!(l-s)!}(-1)^{s}a^{l-s}a^{\dagger j-s}a\\ =\sum_{s=0}^{\min(j,l)}\frac{j!l!}{s!(j-s)!(l-s)!}(-1)^{s}a^{l-s}\{aa^{\dagger j-s}-(j-s)a^{\dagger j-s-1}\}\\ =\sum_{s=0}^{\min(j,l)}\frac{j!l!}{s!(j-s)!(l-s)!}(-1)^{s}a^{l-s+1}a^{\dagger j-s}\\ -\sum_{s=0}^{\min(j,l)}\frac{j!l!}{s!(j-s-1)!(l-s)!}(-1)^{s}a^{l-s}a^{\dagger j-s-1}\\ =\sum_{s=0}^{\min(j,l)}\frac{j!l!}{s!(j-s)!(l-s)!}(-1)^{s}a^{l-s+1}a^{\dagger j-s}\\ -\sum_{s=1}^{\min(j,l+1)}\frac{j!l!}{(s-1)!(j-s)!(l-s+1)!}(-1)^{s-1}a^{l-s+1}a^{\dagger j-s}\\ =a^{l+1}a^{\dagger j}+\sum_{s=1}^{\min(j,l)}\Big\{\frac{j!l!}{s!(j-s)!(l-s)!}+\frac{j!l!}{(s-1)!(j-s)!(l-s+1)!}\Big\}\\ \times(-1)^{s}a^{l-s+1}a^{\dagger j-s}\\ +\frac{j!}{(j-l-1)!}(-1)^{l+1}a^{\dagger j-l-1}\\ =\sum_{s=0}^{\min(j,l+1)}\frac{j!(l+1)!}{s!(j-s)!(l+1-s)!}(-1)^{s}a^{l+1-s}a^{\dagger j-s}.\end{array}\] (17)
Thus if the result is true for \(k=l\), then it holds for \(k=l+1\). Hence, by the principle of mathematical induction, the result is true for all \(k\). The other cases, \(j<k\) and \(j=k\), can be proved similarly.
**Lemma 6**.: \(a^{\dagger j}a^{k}\left|m\right>=\frac{\sqrt{m!(m-k+j)!}}{(m-k)!}\left|m-k+j\right>\)_, where \(\left|m\right>\) is the usual Fock state with \(m\) number of photons._
Proof.: \[\begin{array}{l}a^{\dagger j}a^{k}\left|m\right>\\ =a^{\dagger j}a^{k-1}\sqrt{m}\left|m-1\right>\\ =a^{\dagger j}a^{k-2}\sqrt{m(m-1)}\left|m-2\right>\\ =\ldots\ (\mbox{proceeding $k$ times})\\ =a^{\dagger j}\sqrt{m(m-1)\ldots(m-k+1)}\left|m-k\right>\\ =\sqrt{\frac{m!}{(m-k)!}}a^{\dagger j}\left|m-k\right>\\ =\sqrt{\frac{m!}{(m-k)!}}\sqrt{m-k+1}a^{\dagger j-1}\left|m-k+1\right>\\ =\sqrt{\frac{m!}{(m-k)!}}\sqrt{(m-k+1)(m-k+2)}a^{\dagger j-2}\left|m-k+2\right> \\ =\ldots\ (\mbox{proceeding $j$ times})\\ =\sqrt{\frac{m!}{(m-k)!}}\sqrt{(m-k+1)(m-k+2)\ldots(m-k+j)}\\ \times\ \left|m-k+j\right>\\ =\sqrt{\frac{m!}{(m-k)!}}\sqrt{\frac{(m-k+j)!}{(m-k)!}}\left|m-k+j\right>\\ =\frac{\sqrt{m!(m-k+j)!}}{(m-k)!}\left|m-k+j\right>\end{array}\] (18)
**Lemma 7**.: \(a^{j}a^{\dagger k}\left|m\right\rangle=\frac{(m+k)!}{\sqrt{m!(m+k-j)!}}\left|m+k-j\right\rangle\)_, where \(\left|m\right\rangle\) is the usual Fock state with \(m\) number of photons._
Proof.: Proceeding similarly to (18), we can establish this result as well.
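Both Fock-state relations can be verified in the same truncated-matrix setting. The sketch below (our code) checks Lemmas 6 and 7 for small, arbitrarily chosen \(j\), \(k\), \(m\).

```python
import numpy as np
from math import factorial, sqrt

N = 40                                    # truncated Fock-space dimension (assumption)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # a|n> = sqrt(n)|n-1>
ad = a.T                                  # a†

def mpow(M, p):
    return np.linalg.matrix_power(M, p)

def fock(m):
    v = np.zeros(N); v[m] = 1.0           # Fock state |m>
    return v

j, k, m = 4, 2, 7                         # arbitrary test values with m >= k

# Lemma 6: a†^j a^k |m> = sqrt(m!(m-k+j)!)/(m-k)! |m-k+j>
lhs6 = mpow(ad, j) @ (mpow(a, k) @ fock(m))
rhs6 = sqrt(factorial(m) * factorial(m - k + j)) / factorial(m - k) * fock(m - k + j)
print(np.allclose(lhs6, rhs6))            # expected: True

# Lemma 7: a^j a†^k |m> = (m+k)!/sqrt(m!(m+k-j)!) |m+k-j>
lhs7 = mpow(a, j) @ (mpow(ad, k) @ fock(m))
rhs7 = factorial(m + k) / sqrt(factorial(m) * factorial(m + k - j)) * fock(m + k - j)
print(np.allclose(lhs7, rhs7))            # expected: True
```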
## 4 Discussion
In summary, we have used the mathematical induction process together with a direct (hit-and-trial) method to deal with the operator ordering problem. As one can see, our derivation is straightforward and more concise. These relationships can be used to calculate the expected values of any arbitrary product of annihilation and creation operators with respect to any quantum state \(\left|\psi\right\rangle\)[37; 38; 39], as well as to study the dynamics of the atom-cavity field system in interacting Fock space [40]. The results discussed above also facilitate an analytical understanding of different nonclassicality features, namely HOA, HOSPS, HOS, etc. These theorems are also used in calculations of the fidelity of quantum teleportation [41]. Moreover, the two theorems stated above are very useful in computing the expectation values of the normal-ordered product \(a^{\dagger m}a^{n}\) for quantum states obtained by applying the squeezing operator, as well as for other states. Finally, these theorems allow us to convert the \(n^{\text{th}}\) power of a linear combination of Bosonic operators into a product of Bosonic operators in normal order.
In conclusion, the study of Bosonic operators and systems in quantum mechanics is a fundamental and essential part of modern physics. Bosonic operators are used to represent the creation and annihilation of bosonic particles, such as photons, phonons, and atomic nuclei, and are essential for describing the behavior of many-body systems in quantum mechanics. The general expansion of natural power of the linear combination of Bosonic operators in normal order provides a powerful mathematical tool for analyzing and understanding the behavior of bosonic systems. The expansion can be expressed in terms of binomial coefficients and the product of normal-ordered operators, and it can be used to calculate correlation functions and the partition function of complex many-body systems. The basic theoretical tools and methods used in the study of Bosonic operators and systems include creation and annihilation operators, commutation relations, normal ordering, Wick's theorem, second quantization, and path integrals. These tools and methods provide a powerful framework for understanding and analyzing the behavior of bosonic systems in a variety of physical contexts. Overall, the study of Bosonic operators and systems is an important and active area of research in modern physics, with applications in fields such as condensed matter physics, atomic physics, and quantum field theory.
**ACKNOWLEDGEMENT**
Deepak acknowledges the financial support by the Council of Scientific and Industrial Research (CSIR), Govt. of India (Award no. 09/1256(0006) /2019-EMR-1). |
2307.07892 | Multitemporal SAR images change detection and visualization using
RABASAR and simplified GLR | Understanding the state of changed areas requires that precise information be
given about the changes. Thus, detecting different kinds of changes is
important for land surface monitoring. SAR sensors are ideal to fulfil this
task, because of their all-time and all-weather capabilities, with good
accuracy of the acquisition geometry and without effects of atmospheric
constituents for amplitude data. In this study, we propose a simplified
generalized likelihood ratio ($S_{GLR}$) method assuming that corresponding
temporal pixels have the same equivalent number of looks (ENL). Thanks to the
denoised data provided by a ratio-based multitemporal SAR image denoising
method (RABASAR), we successfully applied this similarity test approach to
compute the change areas. A new change magnitude index method and an improved
spectral clustering-based change classification method are also developed. In
addition, we apply the simplified generalized likelihood ratio to detect the
maximum change magnitude time, and the change starting and ending times. Then,
we propose to use an adaptation of the REACTIV method to visualize the
detection results vividly. The effectiveness of the proposed methods is
demonstrated through the processing of simulated and SAR images, and the
comparison with classical techniques. In particular, numerical experiments
proved that the developed method has good performances in detecting farmland
area changes, building area changes, harbour area changes and flooding area
changes. | Weiying Zhao, Charles-Alban Deledalle, Loïc Denis, Henri Maître, Jean-Marie Nicolas, Florence Tupin | 2023-07-15T22:11:34Z | http://arxiv.org/abs/2307.07892v1 | # Multitemporal SAR images change detection and visualization using RABASAR and simplified GLR
###### Abstract
Understanding the state of changed areas requires that precise information be given about the changes. Thus, detecting different kinds of changes is important for land surface monitoring. SAR sensors are ideal to fulfil this task, because of their all-time and all-weather capabilities, with good accuracy of the acquisition geometry and without effects of atmospheric constituents for amplitude data. In this study, we propose a simplified generalized likelihood ratio (\(S_{GLR}\)) method assuming that corresponding temporal pixels have the same equivalent number of looks (ENL). Thanks to the denoised data provided by a ratio-based multitemporal SAR image denoising method (RABASAR), we successfully applied this similarity test approach to compute the change areas. A new change magnitude index method and an improved spectral clustering-based change classification method are also developed. In addition, we apply the simplified generalized likelihood ratio to detect the maximum change magnitude time, and the change starting and ending times. Then, we propose to use an adaptation of the REACTIV method to visualize the detection results vividly. The effectiveness of the proposed methods is demonstrated through the processing of simulated and SAR images, and the comparison with classical techniques. In particular, numerical experiments proved that the developed method has good performances in detecting farmland area changes, building area changes, harbour area changes and flooding area changes.
Multitemporal synthetic aperture radar change detection clustering visualization
## 1 Introduction
Timely, accurate and continuous monitoring of land cover and land use changes is important for land resource management. According to [1], land cover changes can be classified into several categories: conversion from one land cover class to another, change of shape, shrinkage or transformation, change of position, and fragmentation or merging of adjacent regions. Based on the change reason or change type, changes can be classified into: short-term change (synoptic weather events), cyclic change (seasonal phenology), directional change (urban development), multidirectional change (deforestation & regeneration), and event change (catastrophic fires). Changes happen both spatially and temporally. The increasing availability of multispectral, multitemporal, multisensor satellite data enhances the capability of detecting, identifying, mapping and monitoring these changes [2, 3].
Optical remote sensing images with high spatial and spectral resolutions can be easily acquired and have been widely used for land cover monitoring [4]. However, the passive acquisition mode of optical sensors, the use of near-visible wavelengths, and the requirements of sun illumination (or thermal radiation) and cloud-free weather heavily limit their wide application.
However, SAR imagery has the main advantage of being acquired by an all-time and all-weather sensor, with good accuracy of the acquisition geometry and with few atmospheric effects for amplitude data. It has been widely used for environmental change detection, urban area change detection and disaster monitoring. The past few years have seen the launch of numerous synthetic aperture radar (SAR) sensors, and new sensors will also be launched soon, such as TSX-NG, Cosmo-SkyMed second generation, Radarsat constellation and NISAR. The increasing availability of SAR data enables highly accurate detection of changes, such as abrupt (step) changes, seasonal changes and longer-term developments. Unlike optical images, SAR images are seriously affected by speckle noise, which makes SAR image change detection much harder. SAR image change detection can be applied to image pairs or multitemporal series. Different SAR image characteristics can be used for change detection. The current change detection methods are mainly based on likelihood ratios [5; 6; 7; 8], coherence [9], image texture and structure analysis [10], and deep learning [11; 12]. Due to the multiplicative noise in coherent SAR images, the likelihood ratio test is popularly used for change detection. A number of studies have applied change detection methods based on the mean ratio [13], image ratio [14; 15] and log-ratio operator [16; 17] to SAR data. The true distribution of these ratio images depends on the relative change of the SAR reflectivities [18]. These methods are easily applied and the associated threshold can be calculated automatically.
Visualizing the changes is also important. Changes usually represent transitions that occur between states [19]. Using a colourful image can ease the interpretation of the results by highlighting changed areas. In the change detection field, a number of studies have used different colours to show different changes. Su et al. [6] associate different colours to highlight different change types of time series, Mou et al. [20] propose using different colours to represent different change phenomena, and Dominguez et al. [21] use different fusion strategies to illustrate the real changes and false alarms. With short time series, Nielsen et al. [22] propose to use RGB colours to represent different change times and the black colour to represent unchanged areas. In addition, Amitrano et al. [23] proposed using new bi-temporal and multitemporal RGB combination frameworks to illustrate temporal SAR images. The effectiveness of these two RGB visualization approaches has been verified by change detection and classification, respectively. However, none of them associates the colours with the times of change in a long time series. Recently, the Rapid and EAsy Change detection in radar TIme-series by Variation coefficient (REACTIV) method has been proposed by Koeniguer et al. [24]. It is a simple and highly efficient time series change detection and visualization algorithm. It is based on the HSV space and exploits only time domain estimates without any spatial estimation. The colour saturation is coded by the temporal coefficient of variation. However, the detection results are corrupted by speckle noise. Even when using some state-of-the-art denoising methods, the estimation bias in vegetation areas still prevents REACTIV from providing accurate performance. In addition, the colour in REACTIV results only represents the appearing date of the maximum values.
The speckle inherent to any coherent imaging system affects the analysis and interpretation of synthetic aperture radar (SAR) images. Thanks to the recently proposed Ratio-Based Multitemporal SAR Images Denoising (RABASAR) method [25], we can significantly suppress the negative effect of SAR speckle. In this paper, we derive a simplified generalized log-likelihood ratio criterion (\(S_{GLR}\)) based on the gamma distribution using RABASAR-denoised data. Based on the obtained similarity function, strategies to apply this similarity criterion to image pair change detection, cumulative change detection and change classification are introduced. In addition, we adapt the REACTIV method to integrate RABASAR denoising results. Then, we apply the simplified GLR function in this framework to detect different change times of interest, such as the change starting and ending times under a predefined threshold, or the time at which the maximum change magnitude appears.
## 2 Image pair change area detection and change magnitude visualization
In this section, we derive a generalized log-likelihood ratio criterion based on gamma distribution using denoised data. Based on the obtained similarity functions, strategies to apply this similarity criterion to image pair change detection, cumulative change detection, change classification and change time detection are introduced.
### Change area detection
Under Goodman's hypothesis [26], the fully developed intensity speckle follows a Gamma distribution \(\mathcal{G}[u,L]\) depending on the number of looks \(L\) and the mean reflectivity \(u\) of the scene:
\[\mathcal{G}[u,L](y)=\frac{L}{u\Gamma(L)}\bigg{(}\frac{Ly}{u}\bigg{)}^{L-1}e^{ -\frac{Ly}{u}} \tag{1}\]
To compare the similarity of two gamma distributed variables \((y_{t},y_{t^{\prime}})\), a log-likelihood ratio test can be used. Speckle is an inherent problem for SAR image interpretation which brings many drawbacks for traditional SAR image change
detection methods. When dealing with denoised images \(\hat{u}_{t}\) and \(\hat{u}_{t^{\prime}}\), with associated ENL \(L_{t}\) and \(L_{t^{\prime}}\), we have:
\[S_{GLR}(\hat{u}_{t},\hat{u}_{t^{\prime}})=L_{t}\log\frac{L_{t}\hat{u}_{t}+L_{t^{ \prime}}\hat{u}_{t^{\prime}}}{\hat{u}_{t}(L_{t}+L_{t^{\prime}})}+L_{t^{\prime} }\log\frac{L_{t}\hat{u}_{t}+L_{t^{\prime}}\hat{u}_{t^{\prime}}}{\hat{u}_{t^{ \prime}}(L_{t}+L_{t^{\prime}})} \tag{2}\]
where \(\hat{u}\) is the estimated reflectivity, \(t\) represents the time index in the time series.
Unlike the \(CGLRT\) and \(AGLRT\) methods [6], we fully trust the denoising results and do not take the noisy data into account anymore. In practice, we directly calculate the similarity of the multi-looked SAR data using their corresponding ENL. Supposing that the corresponding pixels in the despeckled images have the same ENL \(L_{t}=L_{t^{\prime}}=\hat{L}\), the simplified GLR criterion [27; 28] becomes:
\[S_{GLR}(\hat{u}_{t},\hat{u}_{t^{\prime}})=2\hat{L}\log\left(\sqrt{\frac{\hat{ u}_{t}}{\hat{u}_{t^{\prime}}}}+\sqrt{\frac{\hat{u}_{t^{\prime}}}{\hat{u}_{t}}} \right)-2\hat{L}\log 2 \tag{3}\]
Defining a global threshold is a simple and widely used approach to distinguish changes from unchanged points. To detect the changed areas, we used a thresholding function:
\[\varphi[S_{GLR}(\hat{u}_{t},\hat{u}_{t^{\prime}})]=\left\{\begin{array}{ll}1,&\mbox{if }S_{GLR}(\hat{u}_{t},\hat{u}_{t^{\prime}})\geq\tau\\ 0,&\mbox{otherwise}\end{array}\right. \tag{4}\]
Threshold definition methods relying on the same no-change hypothesis framework can be used here. As introduced by [29], when the sample size approaches infinity, the log-likelihood ratio statistic asymptotically converges to a chi-squared distribution under the null hypothesis. Thus, we can use the chi-square cumulative distribution function to estimate the change probability of \(S_{GLR}(\hat{u}_{t},\hat{u}_{t^{\prime}})\) with:
\[\rho=1-\frac{1}{4\hat{L}} \tag{5}\]
\[\omega_{2}=-\frac{1}{4}(1-\frac{1}{\rho})^{2} \tag{6}\]
\[P\{2\rho S_{GLR}(\hat{u}_{t},\hat{u}_{t^{\prime}})\leq\delta\}\simeq P\{\chi^{ 2}(1)\leq\delta\}+\omega_{2}[P\{\chi^{2}(5)\leq\delta\}-P\{\chi^{2}(1)\leq \delta\}] \tag{7}\]
where \(\delta\) is the statistical significance. For the detailed derivation of this probability calculation method, we recommend referring to [7; 30]. Unlike Conradsen's PolSAR change detection analysis, we mainly pay attention to spatially adaptive denoised single-channel SAR images with the same estimated ENL. To robustly estimate \(\hat{L}\) in the denoised image, the log-cumulant method is used [31]. In the following sections, we mainly use this approach to define the threshold.
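For illustration, the criterion of Eq. (3) and the chi-square approximation of Eqs. (5)-(7) can be implemented in a few lines. The sketch below is ours (Python with NumPy/SciPy), assumes a spatially constant \(\hat{L}\), and uses made-up toy values; it is not the authors' code.

```python
import numpy as np
from scipy.stats import chi2

def s_glr(u_t, u_tp, L_hat):
    """Simplified GLR dissimilarity of Eq. (3) between two denoised reflectivities."""
    r = np.sqrt(u_t / u_tp)
    return 2.0 * L_hat * np.log(r + 1.0 / r) - 2.0 * L_hat * np.log(2.0)

def change_probability(u_t, u_tp, L_hat):
    """Chi-square approximation of the no-change statistic, Eqs. (5)-(7)."""
    rho = 1.0 - 1.0 / (4.0 * L_hat)                      # Eq. (5)
    omega2 = -0.25 * (1.0 - 1.0 / rho) ** 2              # Eq. (6)
    z = 2.0 * rho * s_glr(u_t, u_tp, L_hat)
    return chi2.cdf(z, 1) + omega2 * (chi2.cdf(z, 5) - chi2.cdf(z, 1))   # Eq. (7)

# toy example: two denoised images, flag pixels whose change probability exceeds 99%
u_t = np.full((4, 4), 1.0); u_tp = np.full((4, 4), 1.0); u_tp[1, 1] = 6.0
change_map = change_probability(u_t, u_tp, L_hat=20.0) > 0.99
print(change_map.astype(int))
```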
### Change magnitude index for visualization
To distinguish appearing from disappearing changes, we used a signum function \(\texttt{sign}(x)\) to convert \(S_{GLR}(\hat{u}_{t},\hat{u}_{t^{\prime}})\) to positive and negative values. In this case, if we set the image acquired at time \(t\) as the reference image, the positive and negative values correspond to the increase and decrease of the object backscattering values.
\[\texttt{sign}(x)\begin{cases}-1&\texttt{if }\ x<0\\ 0&\texttt{if }\ x=0\\ 1&\texttt{if }\ x>0\end{cases} \tag{8}\]
\[x=\log\left(\sqrt{\frac{\hat{u}_{t^{\prime}}}{\hat{u}_{t}}}\right) \tag{9}\]
To clearly illustrate the temporal changes, the similarity ratio is normalized and transformed to values within the range [0; 255].
\[S_{GLR}^{conv}(\hat{u}_{t},\hat{u}_{t^{\prime}})=\left\{\begin{array}{ll}255,&\mbox{if }\frac{2(S_{GLR}(\hat{u}_{t},\hat{u}_{t^{\prime}})-\alpha_{1})}{ \alpha_{2}-\alpha_{1}}\geq 2\\ 127\frac{2(S_{GLR}(\hat{u}_{t},\hat{u}_{t^{\prime}})-\alpha_{1})}{\alpha_{2}- \alpha_{1}}+1,&\mbox{otherwise}\end{array}\right. \tag{10}\]
where \(\alpha_{1}\) and \(\alpha_{2}\) represent the minimum and maximum values in the temporal dissimilarities \(S_{GLR}(\hat{u}_{t},\hat{u}_{t^{\prime}})\). To suppress the outliers, we empirically set \(\alpha_{1}=-2\) and \(\alpha_{2}=2\). The value range will be converted to [-255; 255] by
multiplying \(S_{GLR}^{conv}(\hat{u}_{t},\hat{u}_{t^{\prime}})\) with \(\texttt{sign}(x)\). The rainbow index colour is used to represent different change magnitudes (appearing, disappearing and slow changes).
In practice, we can arbitrarily combine the reference and slave image through RGB composition (R: slave image, G: reference image, B: slave image). Although different colours could indicate the increase and decrease of backscattering values, the illustration performance is not as good as the aforementioned strategy.
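A compact sketch of the signed change magnitude index of Section 2.2 is given below; the function name and the handling of array inputs are our own, and the default bounds follow the empirical choice \(\alpha_{1}=-2\), \(\alpha_{2}=2\) given above.

```python
import numpy as np

def change_magnitude_index(u_t, u_tp, L_hat, alpha1=-2.0, alpha2=2.0):
    """Signed change index: S_GLR stretched as in Eq. (10), then multiplied by
    sign(log sqrt(u_t'/u_t)) from Eqs. (8)-(9).  Output values lie in [-255, 255]."""
    r = np.sqrt(u_t / u_tp)
    s = 2.0 * L_hat * np.log(r + 1.0 / r) - 2.0 * L_hat * np.log(2.0)   # Eq. (3)
    stretched = 2.0 * (s - alpha1) / (alpha2 - alpha1)
    conv = np.where(stretched >= 2.0, 255.0, 127.0 * stretched + 1.0)    # Eq. (10)
    sign = np.sign(np.log(np.sqrt(u_tp / u_t)))                          # Eqs. (8)-(9)
    return sign * conv
```

The resulting signed index can then be displayed with a rainbow colormap, as described above, so that increases and decreases of backscatter receive opposite colours.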
### Time series change type classification
During the time series acquisition, changes may occur multiple times and with different magnitudes. To detect the change types, we propose an improved change classification method inspired by the NORCAMA method [6] and by spectral clustering [32; 33]. In practice, change type detection is transformed into a partitioning problem and solved using spectral clustering.
#### 2.3.1 Change Criterion Matrix (CCM)
By making use of the eigenvalues of the similarity matrix of the data, spectral clustering techniques perform dimensionality reduction before clustering in fewer dimensions. It has been successfully used to cluster the temporal pixels based on their similarity symmetric matrix [33]. Given a time series \(\{\hat{u}_{1},\hat{u}_{2}\cdots\hat{u}_{M}\}\), the affinity matrix is defined as a symmetric matrix \(A(s)\), with elements \(S(\hat{u}_{t},\hat{u}_{t^{\prime}})\) representing the change criterion between different data points.
\[A(s)=\begin{bmatrix}S(\hat{u}_{1},\hat{u}_{1})&S(\hat{u}_{1},\hat{u}_{2})&S(\hat{u}_{1},\hat{u}_{3})&\dots&S(\hat{u}_{1},\hat{u}_{M})\\ S(\hat{u}_{2},\hat{u}_{1})&S(\hat{u}_{2},\hat{u}_{2})&S(\hat{u}_{2},\hat{u}_{3})&\dots&S(\hat{u}_{2},\hat{u}_{M})\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ S(\hat{u}_{M},\hat{u}_{1})&S(\hat{u}_{M},\hat{u}_{2})&S(\hat{u}_{M},\hat{u}_{3})&\dots&S(\hat{u}_{M},\hat{u}_{M})\end{bmatrix} \tag{11}\]
where \(A(s)\) is a symmetric change criterion matrix with size \(M\times M\), \(s\) is the location in one image, \(t\) and \(t^{\prime}\) are time index with \(1\leq t\leq M\) and \(1\leq t^{\prime}\leq M\).
To avoid the overlapping of different clusters, Xin \(et\ al.\) [6] proposed to binarize the change criterion matrix \(A(s)\). The binarization can tighten the clusters, but it makes the clustering results depend heavily on the chosen thresholds. In practice, we use a binarized change criterion matrix. In addition, the \(k\)-nearest neighbours algorithm could be used to classify this change criterion matrix as well. To suppress the temporal variance caused by the residual speckle, we can apply an exponentially weighted moving average [34] to the time series.
#### 2.3.2 Clustering by spectral clustering method
Based on the acquired change criterion matrix, the Laplacian matrix \(A^{L}(s)\) is computed by:
\[A^{L}(s)=D(s)-A(s) \tag{12}\]
\[D(s)=\begin{bmatrix}\sum S(\hat{u}_{1},\hat{u}_{t^{\prime}})&0&\dots&0\\ 0&\sum S(\hat{u}_{2},\hat{u}_{t^{\prime}})&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&\sum S(\hat{u}_{M},\hat{u}_{t^{\prime}})\end{bmatrix} \tag{13}\]
\[\sum S(\hat{u}_{t},\hat{u}_{t})=\sum_{t^{\prime}=1}^{M}S(\hat{u}_{t},\hat{u}_ {t^{\prime}}) \tag{14}\]
The preceding steps (Eqs. (11)\(\sim\)(14)) are the same as in NORCAMA [6], except for the similarity calculation. Then, the Laplacian matrix \(A^{L}(s)\) is normalized using:
\[A^{L}_{norm}=D^{-1/2}A^{L}D^{-1/2} \tag{15}\]
The eigenvalues \(\lambda\) are computed through:
\[A^{L}_{norm}V=\lambda V \tag{16}\]
where \(V\) is the eigenvector. After sorting the acquired eigenvalues \(\{\lambda_{1},\lambda_{2},\cdots,\lambda_{M}\}\) in ascending order, the clustering number \(k\) is calculated using the eigengap heuristic method [35]:
\[k=\operatorname*{arg\,max}_{1\leq t<M}(\lambda_{t+1}-\lambda_{t}) \tag{17}\]
To reduce the data dimension, only the \(k\) eigenvectors \(\upsilon_{t}\) (\(M\times 1\) column vectors) which correspond to the \(k\) largest eigenvalues of \(A^{L}_{norm}\) are used, with \(U=[\upsilon_{1},\upsilon_{2}\cdots\upsilon_{k}]\). To obtain unit norms, we re-normalize the matrix rows. Finally, the \(k\)-means method is used to cluster each row \(u_{t}\) of \(U\), and the cluster labels \(\{l_{1},l_{2},...l_{M}\}\) are assigned to the cluster elements with \(1\leq l_{t}\leq k\).
The aforementioned method is similar to the normalized cut method [32, 6]. However, in that method the rows of \(A(s)\) are normalized to sum to 1 and its eigenvectors are used, instead of those of the normalized Laplacian matrix computed using equation (15). In addition, the rows of \(U\) are not re-normalized to unit length there [33].
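The per-pixel clustering pipeline of Eqs. (12)-(17) can be sketched as follows. This is our reading of the procedure: we follow the standard normalized-Laplacian recipe of [35] and keep the eigenvectors associated with the \(k\) smallest eigenvalues of \(A^{L}_{norm}\) (the wording "largest" above would apply when working with a normalized affinity matrix instead); scikit-learn's \(k\)-means is used for the final step, and the numerical guards are our additions.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_time_series(A):
    """Cluster the M temporal samples of one pixel from its M x M change-criterion
    matrix A(s), following Eqs. (12)-(17).  Returns (k, labels)."""
    M = A.shape[0]
    D = np.diag(A.sum(axis=1))                       # degree matrix, Eq. (13)
    L = D - A                                        # Laplacian, Eq. (12)
    d = np.clip(np.diag(D), 1e-12, None)             # guard against zero degrees
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    L_norm = D_isqrt @ L @ D_isqrt                   # Eq. (15)
    evals, evecs = np.linalg.eigh(L_norm)            # eigenvalues in ascending order
    k = max(int(np.argmax(np.diff(evals))) + 1, 1)   # eigengap heuristic, Eq. (17)
    U = evecs[:, :k]                                 # eigenvectors of the k smallest eigenvalues
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)  # row re-normalization
    labels = (KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)
              if k > 1 else np.zeros(M, int))
    return k, labels
```

In practice, \(A(s)\) would be built from the \(S_{GLR}\) dissimilarities (possibly binarized, as discussed above) between all pairs of dates for the pixel \(s\).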
#### 2.3.3 Change type recognition
Based on the number of clusters \(k\) and cluster labels \(\{l_{1},l_{2},...l_{M}\}\) acquired by the \(k\)-means algorithm, the change type of the time series points can be recognized [6] according to Table 1.
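For completeness, a possible mapping from the cluster number \(k\) and the temporal label sequence to the classes of Table 1 is sketched below; the decision rules are our interpretation of the table, not code from the paper.

```python
def change_type(k, labels):
    """Map (number of clusters k, temporal label sequence) to the classes of Table 1.
    The rules below are our reading of the table (assumption)."""
    # number of positions where the label changes between consecutive dates
    switches = sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    if k == 1:
        return "unchanged"
    if k >= 3:
        return "complex"
    # k == 2
    if switches == 1:
        return "step"
    if switches == 2 and labels[0] == labels[-1]:
        return "impulse"
    return "cycle"

print(change_type(2, [1, 1, 2, 2, 1, 1]))   # impulse
print(change_type(2, [1, 2, 1, 2, 1, 2]))   # cycle
```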
## 3 Change times of interest detection and visualization with extended REACTIV
In this section, our objective is to adapt the REACTIV method to integrate RABASAR denoising results. We then use the proposed simplified GLR function to detect the change times of interest.
The principle of REACTIV is to exploit the HSV colour space and a temporal stack of SAR images. The hue channel H represents the time, the saturation channel S corresponds to the temporal coefficient of variation, and the value V corresponds to the maximum radar intensity of the temporal series in each pixel [36].
### Times of interest (Hue)
As introduced in [36], one can associate a colour with a particular time according to the change. During the procedure, different change types can be considered, such as abrupt change, seasonal change, deforestation & regeneration, etc. The REACTIV visualization method chooses to highlight the appearing time of the maximum value. Although the REACTIV visualization method can associate the appearing time of the maximum value with a colour, the first and last dates have very similar colours because the HSV colour palette is continuous and loops on itself. Thus, we propose to highlight the time of interest using the normalized time in the interval:
\[f_{t}=\frac{5}{6}\times\frac{t-t_{1}}{t_{2}-t_{1}} \tag{18}\]
where \(t_{1}\) and \(t_{2}\) are the first and last image acquisition times in the time series, and \(t\) is the time of interest. The factor \(5/6\) is used to compress the time interval, so as to avoid reusing the starting colour of the looping palette.
With the time series \(\{\hat{u}_{1}(s),\hat{u}_{2}(s),\cdots,\hat{u}_{M}(s)\}\), we can use \(S_{GLR}\) to detect times of interest:
* Start changing time When detecting the start changing time in the time series, point similarities \(S_{GLR}(\hat{u}_{1}(s),\hat{u}_{t}(s))\) are calculated with reference to the value of the first date \(\hat{u}_{1}(s)\). After transforming the similarity into a change probability \(P\{2\rho S_{GLR}(\hat{u}_{1},\hat{u}_{t^{\prime}})\}\), we can decide whether there is a change or not based on a predefined change probability \(\tau\) (such as \(99\%\)). \[T_{start}=\begin{cases}t&\texttt{if}\ P\{2\rho S_{GLR}(\hat{u}_{1},\hat{u}_{t^{\prime}})\}>\tau\\ 0&\texttt{else}\end{cases}\] (19) where \(t\) is the start changing time with \(1<t\leq M\). It corresponds to the first time at which the change probability is larger than the threshold \(\tau\).
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Classes** & **Types** & \(k\) & **Label series \(\{l_{1},l_{2},...l_{M}\}\)** \\ \hline
1 & Unchanged & 1 & 1, 1,...1 \\
2 & Step & 2 & 1, 1,...1, 2, 2,...2 \\
3 & Impulse & 2 & 1, 1,...1, 2, 2,...2, 1, 1,...1 \\
4 & Cycle & 2 & 1,...1, 2,...2, 1,...1, 2,...2,... \\
5 & Complex & \(\geq\)3 & 1, 1,..., 2,..., 3,...4, 4... \\ \hline \hline \end{tabular}
\end{table}
Table 1: Label of different change types
* Maximum changing time Generally, abrupt changes are associated with a large change magnitude [37]. Here \(t\) and \(t^{\prime}\) are set to adjacent times, and \(t^{\prime}\) is reported as the maximum change time when \(S_{GLR}(\hat{u}_{t}(s),\hat{u}_{t^{\prime}}(s))\) is maximal. \[T_{max}=\begin{cases}t^{\prime}&\text{if }S_{GLR}(\hat{u}_{t}(s),\hat{u}_{t^{\prime}}(s))=\tau_{max}\\ 0&\text{else}\end{cases}\] (20) where \(\tau_{max}\) equals the maximum dissimilarity \(\tau_{max}=\max_{t^{\prime}}S_{GLR}(\hat{u}_{t}(s),\hat{u}_{t^{\prime}}(s))\).
* Stop changing time For the detection of the stop changing time, the last image of the time series is used as the reference point. In practice, the calculation is carried out in reverse order, as illustrated in the sketch after this list. \[T_{stop}=\begin{cases}t&\text{if }P\{2\rho(S_{GLR}(\hat{u}_{t}(s),\hat{u}_{M}(s)))\}>\tau\\ 0&\text{else}\end{cases}\] (21) where \(t\) corresponds to the dates going from \(M,M-1,\cdots\) to 1.
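The three detection rules above (Eqs. (19)-(21)) can be combined into one small routine. The sketch below reuses the `s_glr` and `change_probability` helpers from the earlier sketch, works on a single pixel's denoised series, assumes a constant \(\hat{L}\), and returns 0 for the start/stop times when no date exceeds the probability threshold \(\tau\); indices are 0-based, unlike the equations.

```python
import numpy as np

def detect_change_times(series, L_hat, tau=0.99):
    """Times of interest for one pixel's denoised time series (Section 3.1):
    start of change (Eq. (19)), maximum change between consecutive dates (Eq. (20)),
    and stop of change (Eq. (21))."""
    M = len(series)
    # start: first date whose change probability w.r.t. the first image exceeds tau
    t_start = next((t for t in range(1, M)
                    if change_probability(series[0], series[t], L_hat) > tau), 0)
    # maximum: date t' maximizing S_GLR between consecutive images
    diss = [s_glr(series[t], series[t + 1], L_hat) for t in range(M - 1)]
    t_max = int(np.argmax(diss)) + 1
    # stop: scanned backwards, first date whose change probability w.r.t. the last image exceeds tau
    t_stop = next((t for t in range(M - 2, -1, -1)
                   if change_probability(series[t], series[M - 1], L_hat) > tau), 0)
    return t_start, t_max, t_stop

def hue(t, t1, t2):
    """Normalized hue of Eq. (18)."""
    return (5.0 / 6.0) * (t - t1) / (t2 - t1)
```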
### Saturation (S)
This colour component indicates whether or not there are changes in the time series. Unlike popularly used change detection methods, which mainly pay attention to intensity value changes, the REACTIV method uses the dynamics of the coefficient of variation. Based on the Rayleigh-Nakagami distribution [38], it is possible to derive the expressions of the empirical moments of pure speckle [39]:
\[m_{1}=u_{A}\frac{\Gamma(L+\frac{1}{2})}{\sqrt{L}\Gamma(L)} \tag{22}\] \[m_{2}=u_{A}^{2} \tag{23}\]
Based on the ratio of standard deviation and the amplitude average, the coefficient of variation can be calculated through:
\[\gamma=\frac{\sigma}{u_{A}}=\frac{\sqrt{m_{2}-m_{1}^{2}}}{m_{1}}=\sqrt{\frac{ \Gamma(L)\Gamma(L+1)}{\Gamma(L+1/2)^{2}}-1} \tag{24}\]
To know the behaviour of this parameter, the variance of this estimator can be calculated according to [39, 40]:
\[\operatorname{Var}(\gamma)=\frac{1}{4M}\frac{4m_{2}^{3}-m_{2}^{2}m_{1}^{2}+m_ {1}^{2}m_{4}-4m_{1}m_{2}m_{3}}{m_{1}^{4}(m_{2}-m_{1}^{2})} \tag{25}\]
where \(M\) is the number of samples used to compute the estimate, which corresponds here to the number of temporal images considered. In addition, based on the moments of order 1 to 4 of the Nakagami distribution and on \(L\), the variance of this estimator can be written:
\[\operatorname{Var}(\gamma)=\frac{1}{4M}\frac{L\Gamma(L)^{4}(4L^{2}\Gamma(L)^ {2}-4L\Gamma(L+\frac{1}{2})^{2}-\Gamma(L+\frac{1}{2})^{2})}{\Gamma(L+\frac{1} {2})^{4}(L\Gamma(L)^{2}-\Gamma(L+\frac{1}{2})^{2})} \tag{26}\]
Usually, the number of available images on the same area varies. In order to overcome the dependency of the coefficient of variation on the number of images, it is possible to normalize the distribution with the theoretical mean and standard deviation. Koeniguer et al. [36] propose to use the following empirical normalization:
\[\gamma\leftarrow\frac{\gamma-\mathbb{E}[\gamma]}{10\sigma(\gamma)}+0.25 \tag{27}\]
with the theoretical mean and standard deviation values for a "stable" speckle and \(L=4.9\) for Sentinel-1 GRD data. This empirical normalization aims to bring the saturation values of the stable zones down to a low value around 0.25 and to spread the changes over higher saturations.
To speed up the processing, we first use the coefficient of variation to detect changes in the time series, and then detect the different change times of interest.
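A sketch of the saturation channel follows: the empirical temporal coefficient of variation is normalized with the speckle-only statistics of Eqs. (24) and (26) and the empirical rule of Eq. (27). The clipping to \([0,1]\) is our addition so the result is a valid HSV saturation; \(L=4.9\) is the value quoted above for Sentinel-1 GRD data.

```python
import numpy as np
from scipy.special import gamma as G

def saturation(stack, L=4.9):
    """Temporal coefficient of variation of an amplitude stack (time on axis 0),
    normalized as in Eq. (27) with the speckle-only mean/std of Eqs. (24) and (26)."""
    M = stack.shape[0]
    cv = stack.std(axis=0) / stack.mean(axis=0)                      # empirical coefficient of variation
    mean_cv = np.sqrt(G(L) * G(L + 1) / G(L + 0.5) ** 2 - 1.0)       # Eq. (24)
    var_cv = (L * G(L) ** 4 *
              (4 * L ** 2 * G(L) ** 2 - 4 * L * G(L + 0.5) ** 2 - G(L + 0.5) ** 2)
              ) / (4 * M * G(L + 0.5) ** 4 * (L * G(L) ** 2 - G(L + 0.5) ** 2))  # Eq. (26)
    return np.clip((cv - mean_cv) / (10.0 * np.sqrt(var_cv)) + 0.25, 0.0, 1.0)   # Eq. (27)
```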
### Value (V)
The REACTIV visualization method uses the maximum amplitude value of each time series as the value channel. The hue colour component is computed using equation (18) with time \(t\) corresponding to the maximum amplitude value appearing time. This choice is particularly suitable for an abrupt event (such as the presence of a boat).
Although using maximum values can highlight abruptly appearing objects, the acquired results seem too noisy. Apart from this choice, we can use the denoising results or the temporal average image to have a clear vision of the ground. We only use the maximum time series values in the following experiments.
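Putting the three channels together, a possible composition routine is sketched below; it reuses the `saturation` helper from the previous sketch, takes the hue from the date of the temporal maximum as in the original REACTIV choice, and applies a simple percentile stretch to the value channel (the stretch is our choice, not specified in the text).

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def reactiv_composite(stack, times, t1, t2, L=4.9):
    """HSV visualization of Section 3: hue = normalized time of the temporal maximum
    (Eq. (18)), saturation = normalized coefficient of variation, value = temporal
    maximum.  `times` gives the acquisition time of each image in the stack."""
    idx = stack.argmax(axis=0)                                   # date index of the temporal maximum
    h = (5.0 / 6.0) * (np.take(np.asarray(times), idx) - t1) / (t2 - t1)   # hue, Eq. (18)
    s = saturation(stack, L)                                     # from the previous sketch
    v = stack.max(axis=0)
    v = np.clip(v / np.percentile(v, 99), 0.0, 1.0)              # simple stretch (our choice)
    return hsv_to_rgb(np.dstack([h, s, v]))
```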
## 4 Experimental results and discussion
To illustrate the proposed methods and compare them with state-of-the-art change detection methods, simulated and real SAR images are tested in this section. All the data are despeckled by RABASAR before change analysis: image pair change detection in section 4.2, continuous change monitoring in section 4.3, change classification experiments in section 4.4 and change time detection in section 4.5.
### Experimental data introduction
To test the improved algorithms, we prepared four kinds of data. In this paper, we mainly consider SAR images acquired through the same orbit with similar incidence angles. During the RABASAR denoising, we mainly use arithmetic mean (AM), denoised arithmetic mean (DAM) and denoised binary weighted arithmetic mean (DBWAM) of the time series.
* Simulated SAR data Many SAR image simulations are based on reflectivity maps obtained from optical images. However, real SAR images exhibit strong and persistent scatterers, especially in urban areas, which can hardly be simulated using optical images. Therefore, we propose to use the arithmetic mean image of a long time series of SAR images, considered as a noise-free image (a reflectivity map \(u\)), to create realistic simulations of SAR images. With the acquired arithmetic mean image \(y^{AM}\), we add different kinds of object changes according to their values in real SAR images. Then, we simulate the temporal images by multiplying \(y^{AM}\) with different simulated Gamma distributed noises \(v_{t}\).
* TerraSAR-X data The TerraSAR-X images are acquired over the harbour area of \(Sendai\). 9 temporal well-registered SAR images are used for the preparation of denoised data, both for 2SPPB [28] and for RABASAR. Only the two images which were acquired on 06/05/2011 and 08/06/2011 are used for the change detection.
* Sentinel-1 single look SAR data The used Sentinel-1 IW VV single look SAR data are acquired over the Saclay area, South of Paris. All the images are registered using a geometric-based subpixel image registration method [41]. This area mainly contains farmland, forest and building areas, etc. The Saclay area has been chosen since it was selected to host the future scientific campus of the University of Paris-Saclay. Starting in 2010, constructions and public works were decided to convert agricultural terrains into research and education buildings, mostly 2 to 5 storey compact and geometrical structures made of concrete, steel and glass. Many plots have been cleared of vegetation and excavated, and heavy plant machinery and trucks have been parked in some places. All these elements greatly influence the SAR reflectivity.
* Sentinel-1 GRD data The Sentinel-1 GRD data are acquired over Saddle Dam D, Southern Laos. All the images are coregistered using a geometric-based registration method1. All the historical Sentinel-1 GRD VV descending images acquired over this area are used to compute the temporal mean image, which is used by the RABASAR method for image denoising. Google Earth Engine is used during the image preparation.
Footnote 1: SNAP: [http://step.esa.int/main/toolboxes/snap/](http://step.esa.int/main/toolboxes/snap/)
### Image pair change detection
To evaluate the change detection performances and validate the effectiveness of \(S_{GLR}\) method, the image pair change detection results are compared with Conradsen's method [7; 8] and \(CGLRT\) method [6].
During the acquisition of SAR time series, changes may happen for different kinds of objects, such as buildings, farmland and forest, etc. Generally, different kinds of objects have different change magnitudes. To comprehensively and quantitatively evaluate the performances of different methods, we processed the simulated SAR images which have different kinds of object changes.
Because of the small change magnitude in SAR intensity images, identifying forest area changes is much harder than identifying farmland and building changes. It is obvious that the \(S_{GLR}\) method obtains the best detection results. In addition, the \(S_{GLR}\) method obtains better results with RABASAR-DAM provided data. Compared with other methods, the \(CGLRT\) method is good at detecting building area changes, which have a large change magnitude. This characteristic causes the \(CGLRT\) method to provide worse results when the false positive rate is larger than \(1\%\). According to the results shown in Figure 1, Conradsen's method always has more false positives because it uses multilooked data.
To compare fairly with the state-of-the-art change detection methods, we processed popularly used TerraSAR-X images acquired over \(Sendai\). Although MIMOSA [42] detects all the changes directly using the noisy data, there are too many wrong detections in the unchanged areas. The detections are seriously influenced by the noise. \(CGLRT\) can provide good results, but the changed area boundaries are blurred compared with \(S_{GLR}\). In addition, \(CGLRT\) provides some wrong results in the water area.
Figure 1: False positive vs true positive curves comparison based on simulated SAR images. (c) ROC curve results using simulated Sentinel-1 data (a), (d) ROC curve results using simulated TerraSAR-X data (b). Multilooked data with \(3\times 3\) window size, 2SPPB and RABASAR provided data are used for the comparison of Conradsen’s method, \(CGLRT\) method and \(S_{GLR}\) method, respectively. Different colours represent different object changes: green=farmland, yellow=forest, red=appearing, blue=appearing then disappearing, cyan=disappearing.
Figure 2: \(Sendai\) SAR image pair change detection comparison. Pink represents disappearing areas, while green represents appearing areas. The comparison areas are indexed using red circles with associated numbers.
Figure 3: Continuous change monitoring results. (a) the reference image divided by the other images, (b) changes detection results, (c) changes with change magnitude weights. The thresholds were chosen for column (b) with a false alarm rate equal to 0.54% and 1% for column (c). The time intervals of the changes are shown on the left. Different colours represent the decrease and increase in the magnitude of the backscattering values. The temporal images are denoised using RABASAR-DAM.
\(S_{GLR}\) method obtains the best results both with RABASAR-DAM and RABASAR-DBWAM. They all precisely detect the 5 typical changed areas (Figure 2 (c)). However, there is some noise inside the changed areas when using RABASAR-DBWAM. \(S_{GLR}\) method obtained better results for the objective change detection with RABASAR-DAM.
### Continuous change monitoring
Continuous change monitoring is a good way to track the development of object changes. In this section, 5 Sentinel-1 images are processed using the \(S_{GLR}\) method, which provides satisfactory multitemporal change detection results. Yellow lines are used to highlight the farmland area boundaries, as shown in the first column of Figure 3. Compared with the farmland backscattering values in the reference image, the other images seem to have smaller values. Thus, we create the background image by dividing the reference image by the other images, so as to highlight the changed farmland areas.
There are valleys in this area, and even the farmland areas are not flat. The threshold is defined empirically, according to the change types or changed areas to be detected. For example, large threshold values lead to the detection of areas with a high change magnitude. With the false alarm rate equal to 0.54% (computed using the denoised simulated SAR images without change), we can detect the farmland area changes (Figure 3).
The positive values (red) indicate the increase of backscattering values according to the reference data, and the negative values (blue) represent the decrease of the backscattering values. The appearing or disappearing buildings always have a large change magnitude. Seasonal changed areas (like farmland areas, and some kinds of forest areas) have dynamic changes during the timeline. With the false alarm rate equal to 0.54% (Figure 3), the proposed method detects 83.23% of the appearing and disappearing buildings.
Since farmland areas and building areas have different mean intensity values, we could suppress the detection of farmland area changes by adding this information as a weight, with \(\exp\big((\sqrt{\hat{u}_{t}}+\sqrt{\hat{u}_{t^{\prime}}})/2\big)S_{GLR}(\hat{u}_{t},\hat{u}_{t^{\prime}})\). The exponential function is used to enlarge the backscattering value differences between different objects.
In addition, these two kinds of areas usually have different change magnitudes. For example, with weights calculated using the logarithm of the distance between the corresponding amplitude values:
\[S_{GLR}^{D}(\hat{u}_{t},\hat{u}_{t^{\prime}})=\log(|\sqrt{\hat{u}_{t}}-\sqrt{ \hat{u}_{t^{\prime}}}|)S_{GLR}(\hat{u}_{t},\hat{u}_{t^{\prime}}) \tag{28}\]
we could acquire new change detection results (third column of Figure 3). However, after multiplying the weights, the threshold has to be defined empirically.
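The two weighting schemes can be written as a small variant of the basic criterion; the sketch below reuses `s_glr` from Section 2, interprets the intensity weight as the exponential of the mean amplitude, and adds a small constant inside the logarithm of Eq. (28) to avoid \(-\infty\) on identical pixels (both choices are ours).

```python
import numpy as np

def weighted_s_glr(u_t, u_tp, L_hat, mode="intensity"):
    """Weighted criteria of Section 4.3: an intensity weight exp((sqrt(u_t)+sqrt(u_t'))/2)
    to favour bright (e.g. building) areas, or the log-distance weight of Eq. (28)."""
    s = s_glr(u_t, u_tp, L_hat)                      # from the earlier sketch
    if mode == "intensity":
        w = np.exp((np.sqrt(u_t) + np.sqrt(u_tp)) / 2.0)
    else:                                            # Eq. (28)
        w = np.log(np.abs(np.sqrt(u_t) - np.sqrt(u_tp)) + 1e-12)
    return w * s
```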
### Change classification
In this section, the proposed change classification method is compared with NORCAMA [6] with the use of multitemporal Sentinel-1 images. This process can distinguish farmland area changes from building area changes, which have seasonal and non-seasonal changes, respectively.
#### 4.4.1 Change classification with Sentinel-1 data
With high-frequency acquisition data, the whole duration of the building changes can be monitored. During the construction of a building, its backscattering values may keep changing, which leads to complex change monitoring results. To suppress the complex or high-frequency changes in construction areas and keep the cyclic changes of farmland areas, we undersampled the time series. The acquisition times of the Sentinel-1 time series are 24/12/2014, 05/05/2015, 25/11/2015, 05/04/2016, 02/10/2016 and 18/01/2017.
However, all the Sentinel-1 IW VV polarization SAR images acquired through 110 orbit from 24/12/2014 to 18/01/2017 are used during the speckle reduction process, so as to acquire better denoising results. RABASAR-DAM denoised images and all change classification results are illustrated in Figure 4. Since the actual backscattering values of the farmland areas are controlled by the surface roughness and soil moisture, we could observe the weak backscatter fields in SAR images acquired in spring. This phenomenon also reflects the seasonal changes in the time series.
Compared to the results provided by NORCAMA, the \(S_{GLR}\)-based change classification method provides much better results. All the detected changed areas correspond well to the previous cumulative change detection results demonstrated in Figure 3. Visually, \(S_{GLR}\) gives better detection results (Figure 4(h)) when using RABASAR-DAM provided data. There are fewer isolated points in the detection results and the changed farmland areas are very smooth. The change types are similar to those obtained using RABASAR-DBWAM provided data.
A global threshold is used for the \(S_{GLR}\)-based change type detection method, so as to speed up the processing. The denoising results of RABASAR seem good when using the denoised binary weighted arithmetic mean image, but the change type detection results are not better than those using the denoised arithmetic mean image. When using RABASAR provided data, the NORCAMA method can obtain much better results than when using 2SPPB provided data.
When the same crop with similar growth periods is monitored in different years with the same SAR sensor, it shows similar backscattering values. It seems that using few images is not enough to detect all kinds of interannual vegetation variations. It is recommended to use the \(S_{GLR}\) change type detection method with RABASAR-DAM provided data.
### Change time detection and visualization with extended REACTIV method
In this section, the time series generated from Sentinel-1 single-look complex images and Sentinel-1 GRD images are separately used to illustrate the capability of the proposed method.
Figure 4: Sentinel-1 time series change classification. (a-f) Sentinel-1 images, (g) NORCAMA, (h) \(S_{GLR}\) with RABASAR-DAM provided data, (i) \(S_{GLR}\) with RABASAR-DBWAM provided data. 6 images are used for the change type detection. The change type results are: white: no change, red: step change, green: impulse change, blue: cycle change and cyan: complex change.
Figure 5: Maximum amplitude value time provided by REACTIV method. 10 Sentinel-1 images are used for the comparison. Ground truth map is prepared according to the arithmetic mean image, with different colours representing different objects. We mainly pay attention to the changed building areas (blue) and farmland areas (yellow, green).
Figure 6: Different change time detection comparison with improved REACTIV method. 10 Sentinel-1 images are used. The time series data are denoised by RABASAR-DAM with 69 time series images. A threshold is set at \(99\%\) on the change probability.
Figure 7: Flooding area change time detection comparison with 9 Sentinel-1 GRD data. 9 noisy images are shown above. The images are acquired over Xe-Pian Xe-Nammoy dam in the southeastern province of Attapeu in Laos. 87 temporal Sentinel-1 GRD images are used for the preparation of the arithmetic mean image. All the test images are provided by RABASAR-AM.
#### 4.5.1 Farmland area and building area monitoring
To make a comprehensive evaluation of the method, we processed the temporal Sentinel-1 SAR images with the original REACTIV method. The detection results are shown in Figure 5(a). Although the changing area can be roughly detected by the test statistics with original SAR data, the results are seriously affected by the noise. The previously prepared ground truth map is shown in Figure 5(b) to help the interpretation and evaluation of change detection results.
With the RABASAR provided data, the different change time detection strategies described in Section 3.1 are utilized to process the Sentinel-1 time series. The spatially adaptive denoising leads to a spatially variable ENL. During the change time detection test, we suppose the pixels have the same ENL so as to speed up the process. In addition, the red colour at the end of the colour bar is removed so as to avoid mixing red colours. All the times are compressed to the range of the colour index bar.
Compared with the original method, the improved REACTIV method can obtain much better change detection results (Figure 6(a)). According to the ground truth, the improved REACTIV method can detect the changed building areas and farmland areas. With the default parameters, it cannot detect the changes in forest areas, which have a low change magnitude in the temporal amplitude SAR images.
With reference to different object changes, the maximum change magnitude time and start changing time are the same for disappearing building areas. The detection results in the appearing building areas are much more complex. Based on the detection results, we can even distinguish different kinds of farmlands. Since most farmland areas are not totally flat, this may cause different parts of the same farmland to reach their maximum value at different times. Therefore, the detection results in these areas are not smooth enough. Generally, construction areas are larger than the under-construction building, and this area may keep changing before the building is completed. All these phenomena lead to complex change shapes in the detection results.
#### 4.5.2 Monitoring abrupt floods in Southern Laos
Change detection is a significant application of remote sensing technology. In this section, we apply the improved method to flooding area monitoring. There exists a large number of Sentinel-1 GRD images over the test site; all the images with a similar acquisition geometry are used to prepare the super-image.
Sentinel-1 GRD images have the ability to monitor large-area changes. We only test the improved method over the water storage area. The amount of flooding water can be estimated according to the changes in the water area and local digital elevation model. As shown in Figure 7, most of the areas are changed between the image pairs which were acquired on 17/07/2018 and 29/07/2018. The black areas surrounded by blue areas are the final water area during the image acquisition period. The detection results have high similarity with the results provided by ESA2.
Footnote 2: ESA: [https://www.esa.int/Our_Activities/Observing_the_Earth/Copernicus/Sentinel-1/Sentinel-1_maps_flash_](https://www.esa.int/Our_Activities/Observing_the_Earth/Copernicus/Sentinel-1/Sentinel-1_maps_flash_) floods_in_Laos
## 5 Conclusion
In this paper, we proposed a simple \(S_{GLR}\)-based similarity test which can be applied to and benefit any denoised SAR image. The simple \(S_{GLR}\) similarity is based on the gamma distribution and is used for the calculation of the criteria map. Based on the prefiltered data, this method has been used for image pair change detection, continuous change monitoring and change classification. In particular, we mainly used RABASAR provided data in this paper. The processing results of simulated and real SAR images show that the \(S_{GLR}\)-based change detection method provides good results both for image pairs and for temporal stacks. The \(S_{GLR}\) method gives better change classification results compared to the NORCAMA method. Using RABASAR-DAM provided data, \(S_{GLR}\) acquired much better change classification results, with smooth changed areas and fewer noisy points.
In addition, we used the \(S_{GLR}\) function and RABASAR denoising data to improve REACTIV method. Based on the detection areas acquired by REACTIV method (dynamics of time series coefficient of variation), we associated the colours with different kinds of change times and changed the background with a denoised image or arithmetic mean image. By only using part of the hue colour channel, we successfully avoid the mixture of the red colour index. The results obtained by the improved method provided useful information and allow extended interpretation. The change time detection is much more effective for homogeneous area changes and for abrupt changes, which is suitable for monitoring farmland areas, flooding areas and some human activities (harbour activities, urbanization and airport dynamics). However, the method has less capability to detect seasonal changes in forest areas.
Future work will take into account the object attributes of the changed areas, so as to acquire better analysis results. To precisely identify the seasonal change of vegetation areas, we will pay attention to the multitemporal coherence maps. |
2305.11889 | An Automated Power Conservation System (APCS) using Particle Photon and
Smartphone | Nowadays, people use electricity in all aspects of their lives so that
electricity consumption increases gradually. There can be wastage of
electricity due to various reasons, such as human negligence, daylighting, etc.
Hence, conservation of energy is the need of the day. This paper deals with the
fabrication of an "Automated Power Conservation System (APCS)" that has
multiple benefits like saving on power consumption there by saving on
electricity bills of the organization, eliminating human involvement and
manpower which is often required to manually toggle the lights and electrical
devices on/off, and last but most importantly conserve the precious natural
resources by reducing electrical energy consumption. Two IR sensors are used in
this project and these two sensors are used for detecting the presence of a
person in the classroom. When the existence of the person is detected by the
APCS it automatically turns on the fans and lights in that classroom and during
the absence they will be automatically turned off, thus paving the easiest way
to conserve power. This hardware is integrated with the Android app, where the
user can get data on his smartphone regarding the number of fans and lights
that are turned on at a particular instance of time. The user can also switch
on/off the fans and lights from anywhere in the world by using the Android App. | Chandra Sekhar Sanaboina, Harish Bommidi | 2023-05-12T01:55:13Z | http://arxiv.org/abs/2305.11889v1 | # An Automated Power Conservation System (APCS) using Particle Photon and Smartphone
###### Abstract
Nowadays, people use electricity in all aspects of their lives, so electricity consumption increases gradually. Electricity can be wasted for various reasons, such as human negligence, daylighting, etc. Hence, conservation of energy is the need of the day. This paper deals with the fabrication of an "Automated Power Conservation System (APCS)" that has multiple benefits: saving on power consumption and thereby on the electricity bills of the organization, eliminating the human involvement and manpower often required to manually toggle lights and electrical devices on/off, and, last but most importantly, conserving precious natural resources by reducing electrical energy consumption. Two IR sensors are used in this project for detecting the presence of a person in the classroom. When a person is detected, the APCS automatically turns on the fans and lights in that classroom, and when the room is empty they are automatically turned off, thus paving the easiest way to conserve power. This hardware is integrated with an Android app, where the user can see on his smartphone the number of fans and lights that are turned on at a particular instant of time. The user can also switch the fans and lights on/off from anywhere in the world by using the Android app.
Internet of Things, Particle Photon, Android App, Thingspeak, Automated Power Conservation System
## Introduction
An Automated Power Conservation System (APCS) is a system that monitors and operates the electrical appliances in accordance with the presence of a person in the classroom, and it can be accessed from a remote place through the internet and an Android app (UI); i.e., APCS turns the fans and lights in the classroom on/off based on the presence of people in the classroom. APCS also maintains a count of the number of persons present in the classroom, and the data obtained are pushed to the cloud. It can also generate reports and give the status of the electrical appliances (on/off) on a real-time basis.
APCS consists of a microcontroller for monitoring and coordinating the sensors and appliances of a room. Here the Particle Photon chip is used as the microcontroller. The Particle Photon has an inbuilt Broadcom Wi-Fi device that allows it to connect to the internet. The Photon is connected to two IR sensors (inputs) and a relay module (output). A Web IDE (Integrated Development Environment) is used to program the Particle Photon. The program specifies the task of the Particle Photon based on inputs read from the two sensors and uploads the data to the cloud. The admin can monitor these data from a remote place by using his/her smartphone via the internet. Android Studio is used for developing the Android app (UI). The IR sensors are used for detecting person entry/exit, and the relay module is used to turn the electrical appliances ON/OFF. Moreover, the user is also able to operate the appliances remotely.
APCS avoids unwanted power consumption in the university or any other organization, eliminating the human involvement and manpower that are often required to manually switch the lights and electrical devices on/off.
The rest of the paper is structured as follows. Section II presents a literature survey of IoT and automation. Section III elucidates the proposed model along with a description of all the components used in the experiment. Section IV describes the system design and implementation. Section V deals with the algorithm and flowchart for APCS. Section VI shows the experimental setup. Section VII gives the experimental results. Finally, Section VIII discusses the conclusion and future work.
## II Literature Survey of IoT and Automation
The Internet of Things (IoT) conceptually embodies the vision of automating day-to-day activities [1]. Ideally, IoT will optimize our future routines with intelligent and robust systems that make our lives not only easier but also faster, based on our preferences and priorities such as morning alarms, coffee timing, medicine intake, etc. Its applications can also make travel arrangements intelligently by giving frequent updates and weather data. In short, IoT has the power to meet our every need before we even realize what we want. Interconnectedness and automation are the real power of IoT solutions. IoT has not only made our lives easier but also has great potential to drive economic value and social change [2]. Still, 85% of things remain unconnected and security threats are pervasive, so industry has yet to realize the full potential of IoT.
Automation is a technology which enables the user to control a process or procedure with minimum human assistance [3]. Automation, otherwise called automatic control, uses various control systems for operating equipment such as switching in telephone networks, processes in factories, stabilization of ships and aircraft, and heat-treating ovens such as boilers, with very minimal human intervention [4]. Some processes are partly automated, while others have been completely automated. Automation covers applications ranging from a simple household thermostat controlling an air conditioning unit or a boiler to a large industrial control system that can control tens of thousands of input and output devices [5].
One branch of automation is the home automation system, which makes the operation of various home appliances more expedient and saves energy [6]. Home automation or building automation makes life very simple nowadays and also saves a lot of energy. It involves automatic control of all electrical or electronic devices in homes, even remotely through wireless communication [7]. Centralized control of security systems, lighting equipment, kitchen appliances, air conditioning units, heating devices, audio/video systems and all other equipment used in home systems is possible with such a system. Automation can also be extended to universities, wherein one can automate the electrical appliances in the classrooms [8].
## III Proposed Model and its Components
The proposed model APCS is a power-efficient, Wi-Fi based, intelligent automated system for a room that monitors and controls the electrical appliances without human intervention. The model is deployed in classrooms to conserve power and prevent unwanted wastage. The system can work in two modes, namely Automatic mode and Manual mode, which are provided for convenience. By default, the system works in automatic mode. This mode saves the power lost to human negligence in situations where persons leave the room without turning off the lights and fans. It works on the presence of persons and the person count. As soon as a person enters the classroom, the lights and fans are automatically turned on, and the system also keeps track of the number of persons present in the room. If the person count in the room reaches zero, it automatically turns off all the electrical appliances. In some situations, the user needs to turn the electrical appliances on/off manually. To cater to these needs, APCS is also built to operate in a manual mode, wherein the user can operate the electrical appliances in the room manually from anywhere in the world using an Android app.
Figure 1 shows the architecture of the proposed system. The system consists of three parts: the hardware, consisting of sensors, a relay module and the Particle Photon chip; the user interface (Android app); and the cloud, which stores the data on a real-time basis and for which Thingspeak is used.
The most commonly known cause of energy wastage is human negligence. In most cases, humans tend to forget to turn off the electrical appliances as they leave the classroom. A smart room should be able to automatically turn off the lights and fans when it detects no person in the room. Table 1 gives a clear picture of how much power is wasted in a month for a single room (i.e., the trial room in this paper) having 4 lights and 4 fans. Ignoring the remaining factors, this paper considers only one factor for the wastage of power, namely human negligence.
The calculations are based on the theoretical power that a particular fan (60 watts) or light (40 watts) will consume. It is not practically possible to determine exactly how much electricity will be wasted, because the wastage is directly proportional to the degree of human negligence. Hence, this paper considers only three cases (i.e., 90% negligence (worst case), 50% negligence (average case) and 10% negligence (best case)) on a trial basis, and the results are tabulated above. Each case is based on the percentage of human negligence. The table also depicts the power usefully consumed by the appliances versus the power wasted due to human negligence. Practically speaking, human negligence cannot be avoided, and hence there is a need to conserve power by developing automated tools/devices. This is the principal motivation behind the development of APCS. The calculations of power consumption in our trial room, which contains 4 fans and 4 lights, are elucidated in the following section. The calculations are based on the worst-case negligence (i.e., 90% negligence).
**Calculation of Power Consumption of a trial room**
By definition, Power is defined as the amount of energy used per unit time and 1 watt = 1 Joule/sec
**For tube light:**
Energy consumed is 40 Joules/sec (Since tube light is 40 watts )
For 1 hour, Energy usage = 40*3600=144000J.
1 standard unit of electricity i.e. 1 kwh= 3600000Joules.
So that would be 4% of a single unit; in other words, lighting the tube light for 25 hours would cost 1 unit of electricity. A consumption of 0.0513 kWh per hour is taken for a single 40 W tube light in the calculations below.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
 & **Worst Case (90\% Negligence)** & **Average Case (50\% Negligence)** & **Best Case (10\% Negligence)** \\ \hline
**Wastage of Power in kWh (Units)** & 163.0368 & 90.576 & 18.1152 \\ \hline
**Utilization of Power in kWh (Units)** & 18.1152 & 90.576 & 163.0368 \\ \hline \end{tabular}
\end{table}
Table 1: Wastage Of Power due to Human Negligence
Figure 1: Architecture of the Automated Power Conservation System
For 30 days working at 8 hours per day and 90% human negligence:
Hours of negligent operation = (30*8*90)/100 = 216 hours
Power wastage = 216*0.0513 = 11.0808 kWh (for one tube light)
Power wastage = 216*0.0513*4 = 44.3232 kWh (for four tube lights)
**For fan:**
Energy consumed is 60 Joules/sec (since the fan is 60 watts).
For 1 hour, energy usage = 60*3600 = 216000 J.
1 standard unit of electricity, i.e. 1 kWh = 3600000 Joules.
So that would be 6% of a single unit; in other words, running the fan for about 17 hours would cost 1 unit of electricity. A consumption of 0.1374 kWh per hour is taken for a single 60 W fan in the calculations below.
For 30 days working at 8 hours per day and 90% human negligence:
Hours of negligent operation = (30*8*90)/100 = 216 hours
Power wastage = 216*0.1374 = 29.6784 kWh (for one fan)
Power wastage = 216*0.1374*4 = 118.7136 kWh (for four fans)
Hence, the total wastage of power for the 4 tube lights and 4 fans is the sum of the wastage for the 4 tube lights and the wastage for the 4 fans:
Total wastage = 44.3232 + 118.7136 = 163.0368 kWh
The same value is given in Table 1 under wastage of power with 90% negligence. The remaining values given in the table are self-explanatory.
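For reference, the monthly figures in Table 1 can be reproduced with the short Python sketch below; the per-hour consumption values 0.0513 kWh (tube light) and 0.1374 kWh (fan) are taken from the calculations above, and the function name is ours.

```
# Reproduces the Table 1 figures from the hourly consumption values used above
# (0.0513 kWh per 40 W tube light, 0.1374 kWh per 60 W fan).
LIGHT_KWH_PER_HOUR = 0.0513
FAN_KWH_PER_HOUR = 0.1374
NUM_LIGHTS = 4
NUM_FANS = 4
DAYS = 30
HOURS_PER_DAY = 8

def monthly_wastage_kwh(negligence_percent):
    """kWh wasted in a month in the trial room for a given negligence level."""
    wasted_hours = DAYS * HOURS_PER_DAY * negligence_percent / 100.0
    per_hour = NUM_LIGHTS * LIGHT_KWH_PER_HOUR + NUM_FANS * FAN_KWH_PER_HOUR
    return wasted_hours * per_hour

for case, pct in [("Worst", 90), ("Average", 50), ("Best", 10)]:
    print(f"{case} case ({pct}% negligence): {monthly_wastage_kwh(pct):.4f} kWh wasted")
# The worst case prints 163.0368 kWh, matching Table 1.
```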
**3.1.Hardware used in APCS**
The hardware part of APCS consists of the sensors, the relay module, and the microcontroller. The sensors read data from the real world and send it to the microcontroller. The microcontroller processes the received data and then controls the appliances through the relay module.
A brief description of the various components used in APCS is given below.
#### 3.1.1 Particle Photon
Particle combines a powerful ARM Cortex M3 microcontroller with a Broadcom Wi-Fi chip in a tiny thumbnail-sized module called the P0 (P-zero). The microcontroller is the brain of the Particle device. It runs the program and tells the hardware prototype what to do. Unlike a computer, it can only run one application (often called firmware or an embedded application). This application can be simple (just a few lines of code) or very complex (from thousands to lakhs of lines of code). The microcontroller interacts with the outside world using pins. Pins are the input and output parts of the microcontroller that are exposed on the sides of the Particle device. GPIO pins can be hooked to sensors or buttons to listen to the world, or they can be hooked to lights and buzzers to act upon the world. There are pins for Serial/UART communication and a pin for resetting the Particle device. Figure 2 shows the top view of the Particle Photon.
As shown in Figure 2, the Particle Photon has 6 analog I/O ports with 2 ports for RX and TX, one DAC (Digital to Analog Converter), and 8 digital I/O ports with an inbuilt LED at port D7. Particle has its own cloud through which the data on our Photon can be accessed over the Internet. Because of this cloud, the data can also be accessed by IFTTT and used for trigger and action purposes, for example to send an email or to receive a command from the user easily.
Features
* Broadcom BCM43362 Wi-Fi chip
* 802.11b/g/n Wi-Fi
* 1MB flash, 128KB RAM
* On-board RGB status LED (ext. drive provided)
* 18 Mixed-signal GPIO and advanced peripherals
* Real-time operating system (FreeRTOS)
#### 3.1.2 Two-Channel Relay module
A relay is an electrically operated switch. Many relays use an electromagnet to mechanically operate a switch, but other operating principles are also used, such as solid-state relays. Relays are used where it is necessary to control a circuit by a low-power signal (with complete electrical isolation between control and controlled circuits), or where
Figure 2: Particle Photon Chip
several circuits must be controlled by one signal. Since relays are switches, the terminology applied to switches also applies to relays. Figure 3 shows the physical view of the two-channel relay module. A relay can operate in two states:
**Normally - open (NO)** contacts connect the circuit when the relay is activated; the circuit is disconnected when the relay is inactive. It is also called a FORM A contact or make contact.
**Normally - closed (NC)** contacts disconnect the circuit when the relay is activated; the circuit is connected when the relay is inactive. It is also called FORM B contact or break contact.
**Features**
* Number of Relays: 2
* Control signal: TTL level
* Rated load: 7A/240VAC 10A/125VAC 10A/28VDC
#### 3.1.3 IR Obstacle Sensor
The infrared obstacle sensor module has a built-in IR transmitter and IR receiver that send out IR energy and look for reflected IR energy to detect the presence of any obstacle in front of the sensor module. Figure 4 shows the IR obstacle sensor. The module has an onboard potentiometer that lets the user adjust the detection range. The sensor has a very good and stable response even in ambient light or in complete darkness. Infrared photodiodes are different from normal photodiodes as they detect only infrared radiation. When the IR transmitter emits radiation, it reaches the object and some of the radiation reflects back to the IR receiver. Based on the intensity of the reception by the IR receiver, the output of the sensor is defined.
### Mobile Application
The mobile application is developed under the Android platform. It works on all Android mobile phones which are enabled by the internet. This application requires authorization for accessing or controlling of particle device.
The mobile application fetches from the Thingspeak cloud the data already uploaded by the Particle Photon, and it also sends commands to the Particle device. The user interface of the application is designed in a way that enables both monitoring and control of the device. The mobile application interface is shown in Figure 5.
### Thingspeak
Thingspeak is a web-based, open-API IoT information platform that can store the sensor data of a wide variety of IoT applications [9]. It is also used to combine different varieties of sensor data for analysis and thus helps the user in making the right decisions. The data from Thingspeak can be output in graphical format on the web. The internet enables communication between Thingspeak and other devices. Thingspeak can analyze, retrieve, store, observe and work on data sensed by devices such as the Raspberry Pi, Arduino, Particle Photon, Intel Galileo, etc.
Thingspeak helps in sensor-based logging applications, social networking of objects/things with updated status, and location-tracing applications. It can also be used in home automation products that are connected to the internet. The primary feature of the Thingspeak functionality is the "channel", which has various fields for holding the varied
Figure 4: IR Obstacle Sensor
Figure 5: Mobile Application for APCS
Figure 3: 2-Channel Relay module
sensed data, location and status information. The data from Thingspeak can be processed and analyzed only after creating channels. The data stored in Thingspeak can be used for visualization purposes using MATLAB, and Thingspeak can respond to the data with tweets and other forms of alerts. It also provides a feature to create a public channel so that the data can be analyzed and examined by the public. All these activities are features of a cloud, and hence Thingspeak is treated as a cloud.
The IoT helps to bring all things together and permits us to communicate with our own things and, even more interestingly, allows objects/things to interact with other 'things'.
## IV System Design and Implementation
The implementation of the proposed system (APCS) is shown in Figure 6. It shows the connectivity of the various devices to the Particle Photon chip. The IR and relay devices get power from the GND and +5V pins of the Particle device. The IR sensor is usually used for obstacle detection, but in APCS it is used for detecting the direction of motion. The figure shows how the two IR sensors are placed at the entrance of the room. The phase difference between the readings of the two sensors can be used to detect the direction of movement. The Particle Photon continuously reads both sensor values. A person moving into or out of the room will pass these sensors one by one. If the IR1 sensor gives +5V before the IR2 sensor, then the person is moving INTO the room. If the IR2 sensor gives +5V before the IR1 sensor, then the person is moving OUT of the room. Whenever a person is detected entering or leaving the room, the count is incremented or decremented respectively. Thus we know the number of persons inside the room at any moment and can check whether the room is empty or occupied. Accordingly, APCS switches the electrical appliances on or off.
## V Algorithm and Flowchart for APCS
The algorithmic approach for APCS is given below for a better understanding by the user. The Particle.publish() method uploads the count data to the Thingspeak cloud every time an event occurs.
```
Input: Two IR sensor inputs IR1 and IR2
Output: Light and fan controlled via the relay module
Initialization: count ← 0
1.  Loop
2.    if IR1 == HIGH then
3.      while 15 sec have not elapsed do
4.        if IR2 == HIGH then
5.          count++ ;
6.          particle.publish(count) ;
7.        end
8.      end
9.    end
10.   if IR2 == HIGH then
11.     while 15 sec have not elapsed do
12.       if IR1 == HIGH then
13.         count-- ;
14.         particle.publish(count) ;
15.       end
16.     end
17.   end
18.   if count <= 0 then
19.     light ← LOW ;
20.     fan ← LOW ;
21.   else
22.     light ← HIGH ;
23.     fan ← HIGH ;
24.   end
25. EndLoop
```
**Algorithm 1** An Automated Power Conservation System
Figure 7 gives the flowchart of APCS while it is operating in automatic mode. Initially, both sensors are waiting for an obstacle detection. If the IR1 sensor detects a signal first and the IR2 sensor detects a signal within the following 15 seconds, it means a person entered the room, so the count variable is incremented and the count is uploaded to Thingspeak. If the count equals zero, the microcontroller turns off the lights and fans. If IR2 detects first and then IR1, a person has left the room, so the count is decremented. On every action, the Particle uploads the data to the cloud.
Figure 6: Implementation of Automated Power Conservation System
The Particle Photon continuously reads values from the sensors and uploads them to Thingspeak using a webhook that is configured in the Particle web interface. The mobile application gets these data from Thingspeak as JSON objects.
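As an illustration of this data exchange, the sketch below reads the latest feed entries of a ThingSpeak channel over its REST/JSON interface; the channel ID, read API key and the assumption that the person count is stored in field1 are placeholders rather than the actual APCS configuration.

```
import requests

# Hypothetical channel credentials; replace with the actual APCS channel values.
CHANNEL_ID = "000000"
READ_API_KEY = "XXXXXXXXXXXXXXXX"

# ThingSpeak exposes channel feeds as JSON; fetch the latest 10 entries.
url = f"https://api.thingspeak.com/channels/{CHANNEL_ID}/feeds.json"
resp = requests.get(url, params={"api_key": READ_API_KEY, "results": 10}, timeout=10)
resp.raise_for_status()

feeds = resp.json()["feeds"]
if feeds:
    latest = feeds[-1]
    # Assuming the person count is published to field1 of the channel.
    print("Last update:", latest["created_at"], "person count:", latest["field1"])
```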
## VI Experimental Setup
The experimental setup for the proposed system (APCS) is shown in Figure 8. The APCS program is flashed onto the Particle Photon, and the APCS Android app is installed on the smartphone. The user gets all the information about the devices with the help of the APCS mobile app. The user interacts with the system by simply logging into the app with the help of a unique user id and password.
## VII Experimental Results
The proposed system APCS has been tested successfully for both manual and automatic operation of the electrical appliances based on the presence of persons in the classroom. APCS can be treated as a novel approach to conserving power, and it is calculated that almost 15% of power can be saved per month. Conserving power also has a direct impact on money savings. Even though APCS was developed with the university in mind, it is also well suited to home and industrial automation, where it can prevent even more power from being wasted.
## VIII Conclusion and Future Work
This paper proposes a low-cost, secure, ubiquitously accessible, remotely controlled solution. The approach discussed in the paper is novel and has achieved the target of controlling electrical appliances remotely, using Wi-Fi technology to connect the system parts and satisfying user needs and requirements. Looking at the current scenario, we have chosen the Android platform so that most people can benefit. The technology is easy to use and can benefit users that have no technical background. The proposed system is better than commercially available automation systems from the scalability and flexibility point of view.
In the future, proposals can be made to build a cross-platform system that can be deployed on various platforms like iOS, Windows, etc. The system can be extended to a larger number of electrical appliances, and the inputs and outputs can be extended by using basic logic gates. Many other devices, such as security cameras, can be controlled, allowing the user to observe activity around a room. Security systems can include motion sensors that detect any kind of unauthorized movement and notify the user. The scope of this project can be expanded from a single room to an entire organization.
|
2309.01797 | Accuracy and Consistency of Space-based Vegetation Height Maps for
Forest Dynamics in Alpine Terrain | Monitoring and understanding forest dynamics is essential for environmental
conservation and management. This is why the Swiss National Forest Inventory
(NFI) provides countrywide vegetation height maps at a spatial resolution of
0.5 m. Its long update time of 6 years, however, limits the temporal analysis
of forest dynamics. This can be improved by using spaceborne remote sensing and
deep learning to generate large-scale vegetation height maps in a
cost-effective way. In this paper, we present an in-depth analysis of these
methods for operational application in Switzerland. We generate annual,
countrywide vegetation height maps at a 10-meter ground sampling distance for
the years 2017 to 2020 based on Sentinel-2 satellite imagery. In comparison to
previous works, we conduct a large-scale and detailed stratified analysis
against a precise Airborne Laser Scanning reference dataset. This stratified
analysis reveals a close relationship between the model accuracy and the
topology, especially slope and aspect. We assess the potential of deep
learning-derived height maps for change detection and find that these maps can
indicate changes as small as 250 $m^2$. Larger-scale changes caused by a winter
storm are detected with an F1-score of 0.77. Our results demonstrate that
vegetation height maps computed from satellite imagery with deep learning are a
valuable, complementary, cost-effective source of evidence to increase the
temporal resolution for national forest assessments. | Yuchang Jiang, Marius Rüetschi, Vivien Sainte Fare Garnot, Mauro Marty, Konrad Schindler, Christian Ginzler, Jan D. Wegner | 2023-09-04T20:23:57Z | http://arxiv.org/abs/2309.01797v1 | Accuracy and Consistency of Space-based Vegetation Height Maps for Forest Dynamics in Alpine Terrain
###### Abstract
Monitoring and understanding forest dynamics is essential for environmental conservation and management. This is why the Swiss National Forest Inventory (NFI) provides countrywide vegetation height maps at a spatial resolution of 0.5 \(m\). Its long update time of 6 years, however, limits the temporal analysis of forest dynamics. This can be improved by using spaceborne remote sensing and deep learning to generate large-scale vegetation height maps in a cost-effective way. In this paper, we present an in-depth analysis of these methods for operational application in Switzerland. We generate annual, countrywide vegetation height maps at a 10-meter ground sampling distance for the years 2017 to 2020 based on Sentinel-2 satellite imagery. In comparison to previous works, we conduct a large-scale and detailed stratified analysis against a precise Airborne Laser Scanning reference dataset. This stratified analysis reveals a close relationship between the model accuracy and the topology, especially slope and aspect. We assess the potential of deep learning-derived height maps for change detection and find that these maps can indicate changes as small as 250 \(m^{2}\). Larger-scale changes caused by a winter storm are detected with an F1-score of 0.77. Our results demonstrate that vegetation height maps computed from satellite imagery with deep learning are a valuable, complementary, cost-effective source of evidence to increase the temporal resolution for national forest assessments.
keywords: Vegetation height mapping, Deep learning, Remote sensing, Stratified analysis
## 1 Introduction
Forest conservation and management requires accurate monitoring of its structure and dynamics. Vertical forest structure indicators, such as vegetation height, support biodiversity studies and help forest planning to preserve its functions under increasing ecosystem stress (Dubayah et al., 2020). Airborne campaigns for vegetation height mapping, while delivering very accurate data, are time-consuming and costly. This leads to a six-year repetition rate of countrywide vegetation height maps in Switzerland, a relatively small territory. An alternative to accurate yet costly maps derived from airborne data is vegetation height maps derived from spaceborne data (Lang et al., 2019; Potapov et al., 2021; Lang et al., 2022). These space-based maps allow for much higher repetition rates at the cost of lower spatial resolution and accuracy. In this paper, we present an in-depth analysis of space-based vegetation height maps to explore their practical application in countrywide forest monitoring. In particular, we compute countrywide maps over multiple years and assess their value for detecting structural change in forests over subsequent years.
A mature, well-established technology to provide detailed and accurate height maps of different types of forests is Airborne Laser Scanning (ALS; White et al., 2016). A common workflow consists in processing the denoised point clouds into a digital terrain model (DTM) using the classified ground points and a digital surface model (DSM) using the highest points. The vegetation height map (VHM) is then computed by subtracting the DTM from the DSM. This standard approach is widely applied in practice in different countries, including Norway (Naesset, 1997) and Denmark (Nord-Larsen and Riis-Nielsen, 2010). If a DTM from ALS data exists, additional DSMs can be derived with digital photogrammetry and regular updates are possible (Ginzler and Hobi, 2015). While ALS and photogrammetry certainly do provide high-resolution, accurate data, these techniques come with high operational costs proportional to the area size of the campaign. This ultimately leads to rather low repetition rates. In Switzerland for instance, a still relatively small country of \(41,285\ km^{2}\), forests are revisited only every six years.
In contrast, Earth observation satellites facilitate more frequent and faster data acquisition at high spatial and temporal resolution. Piermattei et al. (2019) demonstrate that it is possible to apply photogrammetric methods to Pleiades satellite imagery to produce a 1-meter resolution VHM in Alpine terrain. While showing promising results, a limiting factor of country-scale VHM generation from Pleiades imagery is the high cost of buying the satellite images in multi-view stereo configurations. Conversely, the Sentinel-2 mission of the European Space Agency (ESA) provides public and free multi-spectral, optical satellite imagery at \(10-60\ m\) spatial resolution and a revisit time of under six days (Drusch et al., 2012). Although the Sentinel-2 mission does not come with single-pass stereo capability like Pleiades, an alternative approach for VHM generation, inspired by monocular depth estimation in computer vision, is to rely solely on single satellite images without any multi-view configuration (Lang et al., 2019). The underlying idea is to directly regress vegetation height per pixel in the Sentinel-2 image based on spectral and textural evidence. Because the physical effects that translate vegetation heights to specific spectral values in the satellite images are hard to model directly with sufficient accuracy, data-driven approaches provide a promising alternative. Supervised deep learning approaches are particularly promising. Lang et al. (2019) propose a convolutional neural network to compute VHM at 10-meter grid spacing from Sentinel-2 satellite images and present countrywide maps for Switzerland and Gabon. Waldeland et al. (2022) propose a deep learning method to compute a map for the African continent from Sentinel-2 imagery. Becker et al. (2021) extend the approach by combining Sentinel-2 images with Sentinel-1 synthetic aperture radar (SAR) images to estimate a range of different forest variables for the entire country of Norway. Although the inclusion of SAR data has been shown to enhance model performance, as highlighted by Becker et al. (2021), their findings have also revealed that the optical images from Sentinel-2 play a more crucial role, and the exclusion of SAR data only resulted in a minor decrease in accuracy. Given the limited improvement brought by Sentinel-1 in their study and the additional computational requirements, we concentrate solely on utilizing the optical images from Sentinel-2 for this study. Scaling further to global maps of forest structure variables needs globally distributed reference data to train and validate supervised machine learning approaches. The Global Ecosystem Dynamics Investigation (GEDI; Dubayah et al., 2020) mission, acquiring full-waveform LiDAR data at almost global scale, can provide this kind of reference data. Potapov et al. (2021) are the first to use GEDI data as reference to train a supervised, bagged regression tree ensemble to produce a global vegetation height map for 2019 with 30-meter resolution. A 10-meter, global VHM of 2020 has been computed by Lang et al. (2022) from Sentinel-2 images and GEDI data. The authors propose a new deep ensemble approach that regresses vegetation heights along with well-calibrated uncertainty estimates per grid cell.
These recent methods that densely regress vegetation height maps with supervised machine learning from individual multi-spectral, optical satellite images motivate our work. We investigate to what extent they can be useful in practice to monitor the state of forests at a national scale. To that end, we present an in-depth stratified analysis of estimated vegetation heights to understand how the five factors elevation, slope, aspect, forest mix rate, and tree cover density impact performance over multiple years. We use two different ALS-based reference datasets of vegetation heights for our analysis: one is used as reference data for training our model whereas the other independent ALS dataset serves as a hold-out dataset to quantify model predictions.
We focus on vegetation height as a forest structure variable in this research because its temporal evolution
over multiple years can benefit a wider range of applications in forest dynamics (Kugler et al., 2015). One possible application is tracking structural changes in forests. These changes happen due to anthropogenic forest management practices or natural disturbances such as windthrow, fire, or large-scale bark beetle infestations (Sebald et al., 2021). For example, Honkavaara et al. (2013) conducted change detection on remote sensing derived vegetation height maps to spot the disturbed forest area in Finland caused by a storm. According to Hall et al. (2011), analyzing changes in vertical forest structure also supports biomass change estimation over time. We thus believe that computing annual VHMs from individual Sentinel-2 satellite images at country-scale with 10-meter grid spacing has good potential to complement existing airborne campaigns in a meaningful way.
In this paper, we aim to provide a comprehensive and critical evaluation of the capabilities and limitations of deep learning-based vegetation height mapping, for dynamic forest monitoring. We argue that contributing such valuable insights is necessary to advance its practical applications. To achieve this, we set out the following specific objectives:
* Generate annual countrywide vegetation height maps for Switzerland in the years 2017, 2018, 2019, and 2020, using state-of-the-art deep learning methods inspired by the works of Lang et al. (2019) and Becker et al. (2021).
* Thoroughly validate the accuracy of these vegetation height maps against an independent ALS dataset of vegetation heights.
* Evaluate the temporal and spatial generalization ability of our model on unseen years and regions because understanding the generalizability of the model is important in cases where aerial data are not available for a specific year or region.
* Analyze how our vegetation height maps can effectively support the detection of annual forest structural changes, and identify the limitations concerning the size of the area for which these maps can provide accurate results.
## 2 Materials and methods
### Study area
Our study covers Switzerland, a mountainous country with a total surface area of \(41,285\ km^{2}\). Switzerland's territory is composed of \(31.3\%\) forest and woods, \(23.4\%\) agricultural, \(12.4\%\) alpine farmland and the remaining are urban or unproductive areas (Swiss Federal Statistical Office (FSO)). According to the fourth survey of the NFI (Brandli et al., 2020), the forest in Switzerland includes: \(41\%\) pure coniferous forests, \(19\%\) mixed coniferous forests, \(14\%\) mixed deciduous forests, and \(24\%\) pure deciduous forests. Almost two thirds of the forest area in Switzerland is regularly managed. The management is small-scale, large-scale clear-cutting does not occur. However, natural events such as storms or rare fires can lead to large-scale damage. The proportion of old-growth forests is high by European standards. The southern part of Switzerland is mostly occupied by the Alps, while the northern side has many urban areas on the Swiss plateau. The elevation of Switzerland ranges from \(193\) meters to \(4634\) meters above sea level. This wide range of elevation, topography, and the diversity of forest cover types make Switzerland a challenging study area, suitable for an in-depth analysis of the model predictions.
### Data
#### 2.2.1 Reference data
We use an ALS-based reference dataset for model training and testing. Data were acquired as part of the ongoing national ALS campaign swissSURFACE3D with a mean point density of \(15-20\ pts/m^{2}\)(swissotopo, 2022b). The data are delivered as point clouds semantically classified using the TerraScan software and checked by visual interpretation. The data used in this study cover the years 2017-2020 for \(66.5\%\) of Switzerland (Figure 1). To derive heights above ground, the point clouds are normalised using the information of the ground classified points (ASPRS class 2 (The American Society for Photogrammetry & Remote
Sensing, 2019)). Then only points classified as vegetation (ASPRS classes 3, 4, 5) by swissSURFACE3D are chosen for the generation of the map containing the vegetation height with a pixel spacing of 1 \(m\). As the last step, we resample this reference data of 1-meter resolution, to match the 10-meter ground sampling distance (GSD) of Sentinel-2 images. We use average and maximum pooling to produce two different outputs (\(VHM_{ALS}\)): the vegetation height mean and the vegetation height maximum value in the 100 \(m^{2}\) pixel. The distributions of the resampled mean and maximum vegetation heights are shown in Figure 2, revealing an inherent imbalance in our data: a considerable number of samples correspond to low vegetation heights.
#### 2.2.2 Input data for VHM estimation
We use the Sentinel-2 level-2A product (European Space Agency, 2022), which provides bottom-of-atmosphere land surface reflectances corrected for atmospheric effects. Each Sentinel-2 tile covers a \(100\times 100\)\(km^{2}\) area and there are multiple observations per tile per year due to a revisit cycle of 3-5 days. We use those six tiles that cover 96% of the Swiss territory for model training and testing, and 13 tiles to generate complete countrywide maps, as shown in Figure 3. We refer to the VHM predicted by our model as \(VHM_{S2}\).
From all available Sentinel-2 images of a year, we select those acquired during the leaf-on season from May to September. We use the cloud probability mask of the Level-2A product to select the ten observations with the lowest cloud probability per year per tile. We use the four spectral channels with 10-meter spatial resolution: red, green, blue, and near-infrared as they provide high spatial resolution and have been demonstrated to be effective for vegetation height estimation in Lang et al. (2019).
We also make use of the swissALTI3D DTM provided by swissotopo (swissotopo, 2022) as additional input data to the model. For completeness, we note that swissALTI3D is generated from ALS data and resampled into 1-meter pixels. To match the Sentinel-2 GSD, swissALTI3D is resampled to a 10-meter grid by averaging across \(10\times 10\) meter neighborhoods.
#### 2.2.3 Validation data
For the evaluation of the predicted \(VHM_{S2}\), we use an independent ALS-based validation dataset. The data was acquired by six Swiss cantons in the years 2018 and 2019 with a mean point density of
Figure 1: Overview of the reference data (\(VHM_{ALS}\)) and validation data (\(VHM_{V}\)). The reference data is based on the acquisition of ALS data in the years 2017 to 2020. The validation data shown in the red rectangle is the cantonal ALS data captured in 2018 and 2019. The DTM of Switzerland is shown as the background.
Figure 3: Overview of the Sentinel-2 tiles and reference data used for training and testing. Although we need 13 Sentinel-2 tiles (blue tiles) to cover Switzerland completely, six Sentinel-2 tiles are enough to cover most of the country. To reduce the computational burden, we use the areas covered by those six tiles for model training and testing (areas inside the orange rectangle). A north-south transect of 1800 \(km^{2}\) is defined as a hold-out test area unseen by the model during training.
Figure 2: Distribution of mean and maximum vegetation heights over different years. Note that we are dealing with a highly imbalanced distribution of vegetation heights.
\(25-30~{}pts/m^{2}\)(Canton of Aargau, 2019; Flotron Ingenieure, 2018, 2019,b). The overview of the areal coverage of this cantonal validation data is shown in Figure 1. These cantonal ALS datasets are processed and then resampled into a \(VHM_{V}\) with a pixel spacing of \(10~{}m\) in the same way as the swissSURFACE3D. We deem the slightly higher mean point density of the cantonal validation data as neglectable since the vegetation height information was aggregated to a pixel spacing of \(10~{}m\) in both cases. Coincidentally, in the case of the Canton of Aargau, the cantonal data \(VHM_{V}\) covers the region in 2019, and the same area was also captured by the national swissSURFACE3D campaign in 2020. The correlation analysis between the ALS data of Aargau, acquired by the Canton in 2019, and the national swissSURFACE3D campaign reveals a robust correlation with an \(R^{2}\) of \(0.92\), demonstrating consistent alignment between our reference and independent validation datasets. Therefore, the ALS data of Aargau captured in 2019 and 2020 enables an analysis concerning the sensitivity of \(VHM_{S2}\) to detect structural changes in the forest of the Canton of Aargau with a forest area of approximately \(500~{}km^{2}\). A comparison between the biannual differences captured by the ALS and \(VHM_{S2}\) is possible. In these two years, the ALS survey was conducted in spring, from March to April in 2019 and from February to March in 2020. Consequently, the time period is not exactly the same as that of \(VHM_{S2}\). Most structural changes should, nevertheless, appear in both \(VHM_{S2}\) and ALS data. Forestry interventions occur mainly in winter and major disturbances like windthrows were not reported for the region between March and the leaf-on season 2020: we assume that the different acquisition times of the ALS data and \(VHM_{S2}\) can be ignored.
We use additional countrywide products to mask out non-forested areas for the analysis and define different strata to thoroughly analyse the accuracy of the \(VHM_{S2}\) in 2018 and 2019. For masking, we use the forest mask of the Swiss NFI (Waser et al., 2015). Raster products of topography, forest mix rate regarding the share of broad-leaved vs. coniferous trees, and tree cover density are used for the strata analysis. We compute information on elevation, slope and aspect of the terrain from swissALTI3D and obtain the forest mix rate product from the NFI (Waser et al., 2021), and the tree cover density from the Copernicus High Resolution Layer 2018 (Copernicus, 2020). To achieve a consistent spatial resolution for all the data in the analysis, all raster products are resampled to the \(VHM_{S2}\) grid of 10 meters with bilinear interpolation.
As it is common to encounter new or updated data in vegetation height mapping applications, utilizing an independent dataset allows for the evaluation of the model's performance on previously unseen data, providing a crucial assessment of its ability to handle new information. In addition to assessing generalization, this independent dataset provides a substantial amount of data for stratified analysis and assists in evaluating the detection of structural changes.
### Methods
We show our overall experimental setup in Figure 4. We first train and test the model on the reference data \(VHM_{ALS}\). We then generate complete countrywide maps and perform a detailed analysis based on the independent validation data \(VHM_{V}\).
#### 2.3.1 Model architecture
We use a fully convolutional neural network based on the previous work of Becker et al. (2021) and the ResNeXt architecture (Xie et al., 2017) for our study. We do, however, modify the input and output layers of the model architecture proposed in Becker et al. (2021) to adapt to our settings (but keep all other parts unchanged): we remove the input layers designed for Synthetic Aperture Radar data and also the output layers for uncertainty estimation. As shown in Figure 5, our model consists of an entry block, four ResNeXt stages, a low-level feature extractor, and a regression head to generate the final output. The Sentinel-2 input and the elevation layer are concatenated and given as input to the model. The entry block processes the combined input of the satellite image and DTM. It consists of one 1x1 convolution layer, batch normalization, and applies ReLU as an activation function. Four stages of ResNeXt modules that extract high-level features follow this entry block. We provide a detailed description of the ResNeXt stage in Figure 12 in the Appendix. Another feature extractor then extracts low-level image features from raw pixels. It consists of three layers: two 1x1 convolutional layers with a single activation layer (ReLU) in the middle.
After concatenating low-level and high-level features, the regression head produces the final refined mean and maximum height maps. It comprises one convolutional layer, a ReLU activation function, and another convolutional layer.
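To make the preceding description concrete, the following PyTorch sketch shows how the entry block, a ResNeXt-style backbone, the low-level feature extractor and the regression head can be wired together. It is a simplified stand-in, not the exact implementation of Becker et al. (2021): the four ResNeXt stages are reduced to a few grouped-convolution residual blocks, and the channel width is an assumption.

```
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Simplified ResNeXt-style residual block with grouped convolutions."""
    def __init__(self, channels, groups=32, width_per_group=4):
        super().__init__()
        width = groups * width_per_group
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 1, bias=False), nn.BatchNorm2d(width), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1, groups=groups, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(),
            nn.Conv2d(width, channels, 1, bias=False), nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))

class VHMNet(nn.Module):
    """Sketch of the height-regression network: entry block, ResNeXt-style stages,
    low-level pixel features, and a regression head for mean and max height."""
    def __init__(self, in_channels=5, channels=128, n_stages=4):
        super().__init__()
        # Entry block: 1x1 conv + BN + ReLU on the concatenated S2 bands and DTM.
        self.entry = nn.Sequential(
            nn.Conv2d(in_channels, channels, 1), nn.BatchNorm2d(channels), nn.ReLU())
        # High-level feature extractor (stand-in for the four ResNeXt stages).
        self.stages = nn.Sequential(*[ResNeXtBlock(channels) for _ in range(n_stages)])
        # Low-level feature extractor working directly on raw pixels.
        self.lowlevel = nn.Sequential(
            nn.Conv2d(in_channels, channels, 1), nn.ReLU(), nn.Conv2d(channels, channels, 1))
        # Regression head producing two maps: mean and max vegetation height.
        self.head = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(), nn.Conv2d(channels, 2, 1))

    def forward(self, x):
        high = self.stages(self.entry(x))
        low = self.lowlevel(x)
        return self.head(torch.cat([high, low], dim=1))

# Example: 4 Sentinel-2 bands + 1 DTM channel on a 15x15 patch.
out = VHMNet()(torch.randn(1, 5, 15, 15))   # -> shape (1, 2, 15, 15)
```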
#### 2.3.2 Model training
We frame our task as a mono-temporal, pixel-based regression problem. Our model \(f_{\theta}\) regresses the vegetation height value \(y_{i}\) of a 10-meter pixel from an input patch \(x_{i}\) of shape \(H\times W\). The input patch \(x_{i}\) contains one Sentinel-2 observation concatenated with the DTM. We empirically choose the mean absolute error (MAE) loss with L2-penalty to supervise the model's parameters \(\theta\) :
\[L(\mathbf{\theta})=\frac{1}{N}\sum_{i=1}^{N}|f_{\mathbf{\theta}}(x_{i})-y_{i}|+\lambda\|\mathbf{\theta}\|_{2}^{2}\enspace, \tag{1}\]
where \(N\) is the number of training samples and \(\lambda\) is the hyperparameter of weight decay.
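As an illustration only, Equation (1) could be written in PyTorch as follows; the function name and the default value of the weight-decay coefficient are ours, and in practice the same L2 penalty can equivalently be applied through the optimizer's weight-decay setting.

```
import torch

def mae_loss_with_l2(pred, target, model, weight_decay=1e-4):
    """Sketch of Eq. (1): mean absolute error plus an L2 penalty on the parameters."""
    mae = torch.mean(torch.abs(pred - target))              # 1/N sum |f(x_i) - y_i|
    l2 = sum(p.pow(2).sum() for p in model.parameters())    # ||theta||_2^2
    return mae + weight_decay * l2                          # + lambda * ||theta||_2^2
```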
We split the reference data into two geographically separate areas for training and testing. During training, we decompose the \(100\times 100\)\(km^{2}\) Sentinel-2 image into patches of size \(15\times 15\) pixels (i.e., \(22,500\)\(m^{2}\) area per patch on the ground) following Lang et al. (2019). We select those patches where valid target \(y_{i}\) exists for the center pixel and the cloud probability of the center pixel is under 10% to avoid noisy training samples. We refer to these patches as _valid_ patches. We consider different valid patches of the same location as distinct samples during training and we supervise all images of the same location in one given year with the same target value to neglect intra-annual changes. Although using all images of the same location helps to reduce the atmospheric effect introduced by individual observations, it also brings redundant information and slows down the model training process. As a compromise between robustness and efficiency, we rank the images by the number of valid patches and select two Sentinel-2 images of the same location for each year.
#### 2.3.3 Implementation details
We normalize per channel for all input images for training and testing datasets based on statistics computed solely from the training dataset. During training, we randomly sample a subset of patches (\(64,000\) patches) among the dataset in each epoch due to the large dataset (\(869,183\) patches). We implement our model in PyTorch (Paszke et al., 2017) and train with \(batch\_size=64\), \(learning\_rate=10^{-5}\). For the ResNeXt stages in the model architecture, we define \(groups=32\), \(width\_per\_group=4\), \(N_{block}\) for the four stages \([2,3,5,3]\), following Becker et al. (2021). Here _groups_ and \(width\_per\_group\) are the hyperparameter of grouped convolutions inside the ResNeXt block and \(N_{block}\) defines the number of ResNeXt blocks inside each ResNeXt stage. We show the detailed configuration of the model in Table 3 in the Appendix. We
Figure 4: Overview of our study design: (1) train and test the model based on reference data \(VHM_{ALS}\) and six Sentinel-2 (S2) tiles, and (2) use the trained model to generate countrywide map \(VHM_{S2}\) on 13 S2 tiles; independent analysis based on validation data \(VHM_{V}\).
apply Adam (Kingma and Ba, 2015) for optimization with parameters \(\beta_{1}=0.9\), \(\beta_{2}=0.999\) and \(\epsilon=10^{-8}\). We further split the training dataset into training and validation sets with an 80:20 ratio to monitor the model training and to avoid over-fitting. We train the model for 500,000 iterations until it converges on the validation set, which takes approximately 50 hours on a single NVIDIA GeForce RTX 2080 Ti GPU.
#### 2.3.4 Evaluation of vegetation height estimation
We first evaluate our model's predictions against the test set obtained from the same source of reference data (\(VHM_{ALS}\)). For inference, we normalise the test set using the normalisation values of the train set. We split each image into patches of \(512\times 512\) pixels with 16 pixels overlap to produce a large-scale map efficiently. After model prediction, we recompose patches into the original image size and mask out cloudy (cloud probability \(>10\%\)) and non-vegetated (classified as water or snow according to Sentinel-2 land cover classification) pixels. Lastly, we take the median of the predictions obtained from different images of the same location in a given year to produce the final annual map. We thus neglect the height variations within a single year, which are outside of the scope of our study.
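The tiled inference described above can be sketched roughly as follows; the function is illustrative only (it simply overwrites the overlapping margins rather than blending them, and the cloud/land-cover masking and median compositing over acquisitions are indicated only in the closing comment).

```
import numpy as np
import torch

def predict_tiled(model, image, tile=512, overlap=16):
    """Sliding-window prediction over a (C, H, W) image with a fully convolutional model."""
    _, H, W = image.shape
    out = np.zeros((2, H, W), dtype=np.float32)
    step = tile - overlap
    model.eval()
    with torch.no_grad():
        for y in range(0, H, step):
            for x in range(0, W, step):
                # Shift the last tiles so they stay inside the image.
                y0, x0 = min(y, max(H - tile, 0)), min(x, max(W - tile, 0))
                patch = torch.from_numpy(
                    np.ascontiguousarray(image[:, y0:y0 + tile, x0:x0 + tile]))[None].float()
                pred = model(patch)[0].numpy()
                out[:, y0:y0 + tile, x0:x0 + tile] = pred
    return out

# Annual map: mask cloudy / non-vegetated pixels, then take the median over the
# per-acquisition predictions of the same location, e.g.
# annual_map = np.median(np.stack(preds_per_image), axis=0)
```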
We evaluate our model's performance based on the following metrics: mean absolute error (MAE), root mean squared error (RMSE), and mean bias error (MBE). Although we can assess the performance with MAE alone, the RMSE can make outlier analysis more transparent. The MBE is important to detect biases in model predictions like underestimation and overestimation.
\[MAE=\frac{1}{N}\sum_{i=1}^{N}|f_{\mathbf{\theta}}(x_{i})-y_{i}| \tag{2}\]
\[RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(f_{\mathbf{\theta}}(x_{i})-y_{i})^{2}} \tag{3}\]
\[MBE=\frac{1}{N}\sum_{i=1}^{N}f_{\mathbf{\theta}}(x_{i})-y_{i} \tag{4}\]
\[MAEr=\frac{\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}|f_{\mathbf{\theta}}(x_{i})-y_{i}|}{\frac{1}{N_{s}}\sum_{i=1}^{N_{s}}f_{\mathbf{\theta}}(x_{i})} \tag{5}\]
where \(N\) is the total number of test samples, \(N_{s}\) is the number of test samples in specific strata, \(f_{\mathbf{\theta}}(x_{i})\) is the model prediction, and \(y_{i}\) is the reference or validation data.
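For reference, Equations (2)-(5) translate into a few lines of NumPy; the function name is ours and the stratification (selecting the \(N_{s}\) samples of a stratum) is assumed to be done before the call.

```
import numpy as np

def evaluation_metrics(pred, ref):
    """MAE, RMSE, MBE and relative MAE for paired height estimates (in metres)."""
    err = pred - ref
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mbe = np.mean(err)
    maer = mae / np.mean(pred)     # MAE relative to the mean estimated height
    return {"MAE": mae, "RMSE": rmse, "MBE": mbe, "MAEr": maer}
```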
Figure 5: Our model combines a deep feature extractor based on the ResNeXt backbone with a pixel-level feature extractor to account for low-level and high-level features. The two feature maps are concatenated and processed by the regression head to predict the mean and max vegetation height.
#### 2.3.5 Independent assessment of countrywide vegetation height maps
We evaluate the generated vegetation height maps against the independent validation dataset \(VHM_{V}\). We compare the predicted \(VHM_{S2}\) with \(VHM_{V}\), focusing on forest areas only within the forest mask. As we have observed a few topography-induced erroneous vegetation height values higher than 50 \(m\) in the \(VHM_{V}\) in areas with steep slopes e.g. at the edge of riverbeds, we also mask out these 177 pixels in the mean and 7,654 pixels in the maximum \(VHM_{V}\) from the comparison. We then split both mean and maximum \(VHM_{V}\) into two annual \(VHM_{V}\), each containing only data captured within one of the years 2018 or 2019 (see Figure 1). We compute the density scatter plots for both year pairs of VHM. We also calculate the linear models between the \(VHM_{V}\) and \(VHM_{S2}\) values in combination with the calculation of the mean estimated \(VHM_{S2}\), MBE, MAE, RMSE and the relative MAE (MAEr). The MAEr is defined as MAE divided by the mean estimated \(VHM_{S2}\).
Additionally, we calculate these statistical metrics for different strata to perform a thorough analysis for the \(VHM_{S2}\) of the years 2018 and 2019. We analyze the following strata on the additional countrywide products of topography, forest mix rate, and tree cover density. We use the DTM to define the topography strata for elevation, slope and aspect in Table 4. Three forest mix rate strata are defined based on the product obtained from NFI: 0-24.9%, 25-74.9% and 75-100% broad-leaved trees. Tree cover density information is sourced from the Copernicus High Resolution Layer. Based on this product, the two strata 'open forest' (0-79.9%) and 'dense forest' (80-100%) are defined. We present an overview of the defined strata and the number of pixel pairs compared for each stratum in Table 4 and 5 in Appendix.
#### 2.3.6 Detection of structural changes in forest
We analyse the capability of the \(VHM_{S2}\) biannual difference to detect the structural changes in the forest of the Canton of Aargau in two ways. First, on pixel level, we compare the biannual \(VHM_{S2}\) difference within the forest directly to the \(VHM_{V}\) difference using a density scatter plot and the above-mentioned statistics. Second, on object level, we identify those connected pixels with a vegetation height decrease greater than 10 \(m\) as areas with structural change in the biannual difference of the 1-meter \(VHM_{V}\) (before resampling). We filter these areas using a minimum size of 25 \(m^{2}\) and define those as change objects, which results in 47,917 objects with area sizes of 25\(m^{2}\) - 3.91 \(ha\). We calculate the mean \(VHM_{S2}\) difference within the respective object area for all objects. To estimate the minimal area for change to be detected with \(VHM_{S2}\), we then generate box plots of these mean values for different object area ranges and compare them to the value distribution of the \(VHM_{S2}\) biannual difference of the unchanged forested area.
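The object-level change extraction can be sketched as follows; this is an illustrative NumPy/SciPy reconstruction using the thresholds stated above (a height decrease of more than 10 \(m\) and a minimum object size of 25 \(m^{2}\)), not the exact processing chain used for the analysis.

```
import numpy as np
from scipy import ndimage

def change_objects(vhm_diff_1m, drop_threshold=10.0, min_area_m2=25.0, pixel_area_m2=1.0):
    """Connected areas whose vegetation height decreased by more than the threshold."""
    change_mask = vhm_diff_1m < -drop_threshold          # height decrease > 10 m
    labels, n = ndimage.label(change_mask)               # 4-connected components
    objects = []
    for obj_id in range(1, n + 1):
        area = np.sum(labels == obj_id) * pixel_area_m2
        if area >= min_area_m2:                          # keep objects of at least 25 m^2
            objects.append((obj_id, area))
    return labels, objects
```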
## 3 Results
In this section, we first evaluate the overall performance of our model against the held-out test area on the reference dataset \(VHM_{ALS}\) during the years 2017-2020 in Section 3.1. Then in Section 3.2, we extend the evaluation to the second validation dataset\(VHM_{V}\) covering the year 2018 and 2019, which enables a finer stratified analysis of the performance, as well as a feasibility study for structural change detection based on our model's prediction.
### Vegetation height estimation
**Overall performance.** We evaluate the trained model on the held-out test areas with the reference data of the years 2017 to 2020. Additionally, we perform an ablation study to examine the impact of including the DTM as input in the model. The results shown in Table 1 include the evaluation of the mean and maximum height for different years in two model settings: with and without DTM. Both models' predictions show an MAE under 2 \(m\) for mean height and under 2.5 \(m\) for maximum height. The errors of the model with DTM are slightly smaller. The MBE of both models for mean and maximum vegetation height are near zero. Comparing the errors of mean and maximum height, the errors of maximum height are always higher in all years and models. Errors are noticeably different across these four years though: the RMSE of the mean height for the model with DTM in 2019 is 2.98 \(m\) while the RMSE in 2017 is 2.22 \(m\), for example. As the reference data from these years cover different parts of Switzerland, both the temporal and spatial differences may contribute to different errors here.
**Performance across the height range.** As visible in Figure 6, which shows the detailed performance of the model, residuals can reach 20% of the reference height when the vegetation height in the reference data is around 40 \(m\) in both model settings. This indicates a bias towards underestimating high vegetation values. In general, residuals fluctuate around zero in most areas but can become rather large, with values above 5 \(m\) in areas with vegetation higher than 30 \(m\). Figure 6 also exhibits the positive contribution of using the DTM as additional input. Indeed, the residuals of the model with DTM are significantly closer to zero for the taller trees, e.g., \(\sim 1\)m smaller residuals for trees higher than 25m. Since these taller trees are less frequent, the improvement achieved on this part of the distribution only entails a small improvement on the aggregated metrics of Table 1. However, Figure 6 clearly shows how the DTM helps mitigate the underestimation problem. Therefore, we utilize the model that incorporates the DTM for the subsequent independent assessment of vegetation height maps.
**Qualitative analysis.** In Figure 7, we present a qualitative result of a 20 \(km^{2}\) sample area from the test set for the year 2017, showcasing the model's successful distinction of various land cover types, such as dense forests and meadows, along with corresponding vegetation height predictions. Yet, we can see that finer variations within individual land cover types are not recovered as well by our model's predictions. For example, our model detects dense forests quite well, but it tends to underestimate trees higher than 35 \(m\).
### Independent assessment of vegetation height maps
#### 3.2.1 Analysis for the years 2018 and 2019
In the top rows of Table 2, we present the results of the comparison of \(VHM_{S2}\) with the independent validation dataset \(VHM_{V}\). These results indicate higher errors in the independent evaluation compared to Table 1. For example, the RMSE values in Table 1 range from 2.22 to 4.43 \(m\) while they range from
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c} \hline \hline & with & \multicolumn{3}{c}{**2017**} & \multicolumn{3}{c}{**2018**} & \multicolumn{3}{c}{**2019**} & \multicolumn{3}{c}{**2020**} & \multicolumn{3}{c}{**Overall**} \\ & DTM & MBE & MAE & RMSE & MBE & MAE & RMSE & MBE & MAE & RMSE & MBE & MAE & RMSE & MBE & MAE & RMSE \\ \hline
**Mean** & yes & -0.30 & 0.97 & 2.22 & -0.47 & 1.57 & 2.91 & -1.02 & 1.62 & 2.98 & -0.19 & 1.39 & 2.55 & -0.52 & 1.44 & 2.74 \\
**Max** & yes & -0.76 & 1.73 & 3.52 & -0.75 & 2.03 & 3.44 & -1.64 & 2.50 & 4.43 & -0.31 & 1.97 & 3.45 & -0.88 & 2.09 & 3.74 \\ \hline
**Mean** & no & -0.39 & 1.05 & 2.41 & -0.80 & 1.69 & 3.11 & -0.66 & 1.52 & 2.81 & -0.38 & 1.42 & 2.62 & -0.59 & 1.47 & 2.81 \\
**Max** & no & -1.09 & 1.89 & 3.82 & -1.43 & 2.29 & 3.82 & -1.29 & 2.37 & 4.29 & -0.66 & 2.00 & 3.54 & -1.15 & 2.18 & 3.89 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation results of mean and max vegetation heights with reference data VHM for models with or without DTM as input. We report the Mean Bias Error (MBE), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE), expressed in meters.
Figure 6: Residuals of models with (**DTM**) and without DTM (**w/o DTM**) per 5 \(m\) height intervals. Note how the performance degrades for high vegetation. In comparison, the model achieves low residuals on the more frequent part of the height distribution (from 5 to 25 \(m\)).
Figure 7: Qualitative result of the model with DTM for a 20 \(km^{2}\) sample region within the test set in the year 2017. From left to right, the columns show the input image, the prediction of the model, the reference data, and the resulting error (prediction - reference data). In the error plot, blue indicates negative error and red indicates positive error. Note that there is an underestimation problem for high vegetation heights, as indicated inside the red rectangle. A zoomed-in view of the area inside the red rectangle is also shown for better visualisation.
5.38 to 5.61 \(m\) in the top rows of Table 2. The MBEs are slightly below zero with values between \(-0.71\) and \(-1.28\)\(m\). The reported metrics are similar for the two years. Because no data in that area of the year 2018 are used for model training (see Fig. 1), this outcome suggests that training on a multi-year dataset leads to good spatial generalization across unseen areas. To further assess temporal generalization, we withhold the training data from the years 2018 and 2019 separately and conduct independent evaluations for each based on the independent \(VHM_{V}\). The results of temporal generalization are shown in the bottom rows of Table 2. We observe that the errors are slightly larger. Specifically, the overall mean absolute error (MAE) for mean vegetation height increases from 4.30 to 4.87 \(m\). Moreover, we note that the increase in errors is more prominent in the year 2018 compared to 2019. This can be attributed to the model's need to generalize across both space and time, which presents additional challenges.
#### 3.2.2 Stratified analysis for the years 2018 and 2019
Figure 8 shows the result of the stratified analysis of the mean \(VHM_{S2}\) with DTM for the years 2018 and 2019. The detailed stratified analysis including the values of all metrics is given in Tables 4 and 5 in the Appendix. Looking at the elevation strata, higher MAEr values are observed with higher elevation. Both MBE and MAEr values show a negative trend towards steeper terrain in the six slope strata. In the aspect strata, a change from higher MBE in north-west facing slopes (around 300°) to lower ones in south-east facing slopes (around 100°) is observed. The MAEr values show the same trend with a tendency for higher values in south-east facing slopes and lower values in north-west facing slopes.
The underestimation of the vegetation height is equal in forests dominated by coniferous or broad-leaved trees with MBE values of \(-1.26\)\(m\) for each. The MAEr values are slightly lower for coniferous forests than for broad-leaved or mixed forests. The two tree cover density strata indicate that underestimation is increased for open forest stands with an MBE of \(-1.46\)\(m\) instead of \(-0.67\)\(m\). Also, the MAEr is much higher at 0.30 compared with the denser stands at 0.21.
### Detection of structural changes in forest
In Figure 9, we present a pixel-level analysis of structural change detection, revealing that the biannual \(VHM_{S2}\) difference exhibits a weak relationship with the biannual difference of the ALS data, as indicated by an \(R^{2}\) of 0.22. The white trend line indicates an underestimation of the height changes of ALS, both in the negative and positive change range. Furthermore, a vertical \(VHM_{S2}\) difference value distribution around 0 in the ALS VHM difference is observed indicating pixels containing VH change not matching the ALS reference. However, the analysis at the object level shows a dependency of the \(VHM_{S2}\) mean difference on the size of the change object (Fig. 10). For ALS change objects with a size between 25 \(m^{2}\) and 250 \(m^{2}\) the mean \(VHM_{S2}\) difference does not differ much compared to unchanged forest. This improves, however, for change objects larger than 250 \(m^{2}\), where the difference is substantially lower than for unchanged forests. This indicates that most of these objects should be detectable using the \(VHM_{S2}\). The larger the change objects, the larger the vegetation height changes. So for change objects larger than 0.1 \(ha\) and 0.5 \(ha\), the median values are around -10 \(m\) and -12 \(m\), respectively. This highlights the capability of \(VHM_{S2}\) to
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Trained on} & \multicolumn{3}{c}{**2018**} & \multicolumn{3}{c}{**2019**} & \multicolumn{3}{c}{**Overall**} \\ & full training set & _MBE_ & _MAE_ & _RMSE_ & _MBE_ & _MAE_ & _RMSE_ & _MBE_ & _MAE_ & _RMSE_ \\ \hline \multirow{2}{*}{**Mean**} & \multirow{2}{*}{yes} & -1.28 & 4.45 & 5.61 & -0.71 & 4.23 & 5.38 & -0.88 & 4.30 & 5.45 \\ & & -0.90 & 4.32 & 5.49 & -0.99 & 4.27 & 5.50 & -0.96 & 4.28 & 5.49 \\ \hline \multirow{2}{*}{**Mean**} & \multirow{2}{*}{no} & -2.69 & 5.24 & 6.57 & -2.43 & 4.71 & 5.97 & -2.51 & 4.87 & 6.15 \\ & & -2.15 & 5.10 & 6.37 & -3.24 & 5.05 & 6.32 & -2.92 & 5.07 & 6.33 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Result of the comparison of the \(VHM_{S2}\) with the independent ALS vegetation height reference. Top rows: full training set results. Bottom rows: temporal generalization with data from 2018 or 2019 withheld. The statistics shown are Mean Bias Error (MBE), Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE), in meters.
Figure 8: Result of the comparison of the S2 mean VHM with DTM with the independent ALS vegetation height reference within different strata based on _terrain types_ and _forest properties_. The Mean Bias Error (MBE) and relative MAE (MAEr) are shown for different strata.
Figure 10: Box plots of the \(VHM_{S2}\) difference (2020-2019) within the unchanged forest area of the Canton of Aargau and the mean \(VHM_{S2}\) difference in the ALS change objects (vegetation height decrease of \(>\)10 m) for varying object area size ranges. The boxplots show median, interquartile range (IQR), whiskers (maximum of 1.5 * IQR) and outliers.
Figure 9: Comparison of the \(VHM_{S2}\) and \(VHM_{ALS}\) differences (2020-2019) for the forested area of the Canton of Aargau, Switzerland. Mean Bias Error (MBE) and Mean Absolute Error (MAE) in meters are shown. The black line indicates the equivalence line, whereas the white one shows the trend line of the derived linear model.
effectively capture and detect significant changes, although it may exhibit limitations in accurately detecting small change areas.
While the biannual \(VHM_{S2}\) difference primarily captures large changes, it still holds utility for applications such as detecting structural changes caused by windthrows. As shown in Figure 11, the structural changes due to the windthrow caused by the winter storm "Burglind" on 3/1/2018 are well detected by the \(VHM_{S2}\) difference between 2017 and 2018. An ad-hoc change mask defined by pixels with a decrease in vegetation height larger than 10 \(m\) results in an F1 score of 0.77 for the pixel-level mapping of the field-referenced windthrows. This result indicates the potential of using the produced maps for real applications related to forest dynamics.
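A minimal sketch of this pixel-level evaluation, not taken from the authors' code: `diff_s2` is assumed to be the 2017-2018 \(VHM_{S2}\) difference raster and `windthrow_ref` a boolean raster rasterised from the delineated field reference areas; both names are illustrative.

```python
import numpy as np

def pixel_f1(pred_mask, ref_mask):
    # Pixel-level F1 score of a boolean change mask against a boolean reference.
    tp = np.logical_and(pred_mask, ref_mask).sum()
    fp = np.logical_and(pred_mask, ~ref_mask).sum()
    fn = np.logical_and(~pred_mask, ref_mask).sum()
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0

# Ad-hoc change mask: vegetation height decrease larger than 10 m.
# f1 = pixel_f1(diff_s2 < -10.0, windthrow_ref)
```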
## 4 Discussion
### Model evaluation
Our deep learning model achieves an overall MAE \(\leq\) 2.5 \(m\) across different years based on reference data \(VHM_{ALS}\). This relatively low error does make VHM regression with modern deep learning and Sentinel-2 images a valuable, complementary tool for countrywide dense mapping at annual repetition rates. However, the model tends to underestimate very high vegetation, with the MAE of vegetation heights in
Figure 11: \(VHM_{S2}\) difference between 2017 and 2018 within forest. Dark red indicates a strong decrease in vegetation height. The black lines indicate delineated field reference areas for windthrows of the storm "Burglind". The background aerial image is from 2018. © swisstopo
the \([30~{}m,40~{}m]\) interval reaching seven meters. This saturation towards the high vegetation heights aligns with the previous works (Hansen et al., 2016; Potapov et al., 2021; Lang et al., 2019). We argue that there are two main reasons for the underestimation of tall trees. One is the imbalanced VHM reference data used for training. As shown in Figure 2, the reference data is dominated by vegetation height lower than \(10~{}m\) and there are only a few samples for vegetation height higher than \(30~{}m\). The model is biased towards vegetation with lower heights, such as bushes or non-vegetated areas. The second reason is the smoothing effect inherent to convolutional neural networks. As discussed in Lang et al. (2019), a convolutional neural network approach extracts evidence from neighborhoods for a certain pixel, which is prone to smooth out high-frequency information. Another observation is that max heights are more underestimated than mean heights in short forests, while the reverse holds true for tall forests. This discrepancy can be attributed to the distinct distribution of mean and max vegetation heights. As evident in Figure 2, both histograms exhibit another peak, apart from the dominant near-zero values, with mean vegetation heights peaking around \(10m\) and max vegetation heights peaking around \(25m\). Consequently, the predictions of max height are more prone to underestimation in short forests, as they are far away from the peak of max height, whereas this order is reversed in tall forests, which are closer to the peak of max height.
### Independent evaluation of countrywide vegetation height maps
To refine our analysis, we compare our model's prediction to an independent vegetation height dataset. The observed errors are larger when compared with the independent \(VHM_{V}\). This is mainly due to the reason that only forest areas are considered in this analysis. As Figure 6 shows, the error increases with higher VH typically found in forests. The results also indicate that the inclusion of the DTM does decrease the errors with better MBE and MAE values observed for both years. Therefore, we recommend including it in the model, given the low additional computation cost it represents. Furthermore, our detailed analysis against the independent ALS dataset leads to the following conclusions:
**Spatial generalization.** Interestingly, the errors of 2018 are in the same range as 2019. This observation demonstrates the model's good spatial generalization capability. The fact that the area covered by \(VHM_{V}\) in 2018, which is not included in the model training using \(VHM_{ALS}\), still exhibits good performance indicates that the model is able to effectively generalize to unseen spatial regions.
**Temporal generalization.** Overall, the results of temporal generalization demonstrate that it is feasible to transfer the model to a year without training data while maintaining similar prediction accuracy. Notably, when we exclude the training data from a specific year and conduct independent evaluations, we observe a slight decrease in accuracy. The decrease is relatively larger than the decrease in spatial generalization, suggesting that temporal generalization plays a more dominant role in performance compared to spatial generalization. The higher errors observed in 2018 compared to 2019 can be attributed to two main factors. Firstly, the model needs to generalize across both spatial and temporal dimensions, which presents a more challenging task. Secondly, the dry summer of 2018 resulted in atypical vegetation behavior, including early wilting, as reported in previous studies (Brun et al., 2020). This suggests that incorporating meteorological data as additional input to the model, although increasing computational load, could potentially provide benefits in capturing such environmental influences.
**Stratified analysis.** The strata analysis reveals varying accuracy depending on the topography, forest mix rate or forest density. Two hypotheses can explain the higher MAEr values observed in higher elevations and steep areas. Firstly, the limited availability of training data in these regions could contribute to the increased errors. Secondly, forests in high mountains may be affected by shadows, which can vary depending on the capture time and satellite angle (Wangchuk and Bolch, 2020). These factors can introduce additional complexities in estimating vegetation height accurately. An interesting observation is made concerning the effect of the terrain aspect. The higher underestimation in south-east facing and lower underestimation in north-west facing slopes could be explained by the use of Sentinel-2 data in the L2A processing level. The sen2cor workflow has a tendency to over-correct north-facing slopes, resulting in artificially inflated reflectance values (Simonetti et al., 2021). Replacing the sen2cor by other Sentinel-2 Level 2 processing
tools like Force (Frantz, 2019) may relieve the problem. The reason behind the slightly higher accuracy (lower MBE and MAEr) for forests consisting of broad-leaved trees remains unclear. A possible explanation would be a stronger correlation in broad-leaved trees between the diameter of the tree crown and the height. However, this would have to be investigated in more depth. The observed higher MAEr in open forests is not surprising, as open forests usually show a general tendency to lower VHs. Due to the reasons already mentioned above, lower VHs are overestimated by our model.
### Suitability to detect structural changes in forest
Our annual VHMs provide possibilities to analyze forest dynamics between two different years. We compare the biannual differences calculated from the generated maps and the independent ALS data and assess the ability of \(VHM_{S2}\) to detect structural changes in forests. Our results reveal that it is indeed possible to detect structural changes in forests between two consecutive years if the area of the change is large enough. Smaller changes, like structural changes of individual trees, are, however, not detectable. The relation between the \(VHM_{S2}\) and \(VHM_{V}\) differences on pixel level implies that changes cannot be reliably detected for individual 10-meter grid cells. Successful change detection requires areas \(>250\ m^{2}\) (Figure 10). An inherent reason lies in our measurement system: the original spatial resolution of the Sentinel-2 data and the smoothing effect introduced by convolutional neural networks. Each individual pixel of the utilized Sentinel-2 channels covers an area of 100 \(m^{2}\), but due to the convolutional kernels applied during processing, fine textural details are smoothed, leading to a final prediction that may not accurately represent these fine details. The larger the changed areas become, the less they are impacted by this smoothing effect. Note that this finding also suggests that \(VHM_{S2}\) differences detected by comparing two VHMs produced by our model should rather be limited to detecting locations and measuring the spatial extent of structural change. The current method cannot measure the absolute change in vegetation height with reasonable accuracy. Subtle structural changes such as the forest growth between two consecutive years cannot be monitored using the \(VHM_{S2}\). Error rates, as shown in Table 2, are too high to capture the annual forest growth rate of at most 1 \(m\). Nevertheless, our results demonstrate successful support for applications such as detecting structural changes caused by windthrows. In addition to windthrow detection, our study's findings hold valuable potential to support the detection of other forest disturbances related to vegetation height changes. In their study, Senf and Seidl (2021) have successfully generated forest disturbance maps of Europe from 1986 to 2016, revealing that most forest disturbances have a size larger than 0.5 \(ha\), which is large enough to be detected by our derived vegetation height maps (\(>250\ m^{2}\)). Consequently, our derived results can potentially detect other large-scale forest disturbances as well, consistent with findings by Hansen et al. (2013). For instance, vegetation height change maps can play a significant role in forest fire monitoring by assessing the extent of damage caused by fires and monitoring the recovery process, e.g., every 5 years. As mentioned above, the error rates of the \(VHM_{S2}\) prevent it from accurately capturing annual regrowth. The integration of vegetation height change data empowers forest managers to foster resilient forest ecosystems and ensure the sustainable use of forest resources. Overall, our study offers products that can positively impact various forest management activities.
## 5 Conclusion and Outlook
In this work, we set out to explore the potential of satellite data-derived vegetation height maps as a complementary, faster and more homogeneous alternative to maps based on aerial campaigns. Our results indicate that our \(VHM_{S2}\), computed densely for several consecutive years at 10-meter grid spacing for Switzerland, can indeed be put to good use. Evaluation of our vegetation height maps on independent, accurate ALS data reveals pros and cons. On the one hand, it demonstrates relatively high mapping accuracy for most parts of the map; moreover, when predicting for different years, the transferability of a trained model between years significantly lowers the computational burden. It also allows for computing longer VHM time series. On the other hand, our detailed, stratified analysis also reveals that VHMs produced with our deep learning approach using Sentinel-2 satellite imagery have biases. Errors are generally higher for tall trees, high altitudes and steep slopes, and differ as a function of the aspect. These findings are very valuable |
2304.04613 | On Evaluation of Bangla Word Analogies | This paper presents a high-quality dataset for evaluating the quality of
Bangla word embeddings, which is a fundamental task in the field of Natural
Language Processing (NLP). Despite being the 7th most-spoken language in the
world, Bangla is a low-resource language and popular NLP models fail to perform
well. Developing a reliable evaluation test set for Bangla word embeddings are
crucial for benchmarking and guiding future research. We provide a
Mikolov-style word analogy evaluation set specifically for Bangla, with a
sample size of 16678, as well as a translated and curated version of the
Mikolov dataset, which contains 10594 samples for cross-lingual research. Our
experiments with different state-of-the-art embedding models reveal that Bangla
has its own unique characteristics, and current embeddings for Bangla still
struggle to achieve high accuracy on both datasets. We suggest that future
research should focus on training models with larger datasets and considering
the unique morphological characteristics of Bangla. This study represents the
first step towards building a reliable NLP system for the Bangla language1. | Mousumi Akter, Souvika Sarkar, Shubhra Kanti Karmaker Santu | 2023-04-10T14:27:35Z | http://arxiv.org/abs/2304.04613v1 | # On Evaluation of Bangla Word Analogies
###### Abstract
This paper presents a high-quality dataset for evaluating the quality of Bangla word embeddings, which is a fundamental task in the field of Natural Language Processing (NLP). Despite being the 7th most spoken language in the world, Bangla is a low-resource language and popular NLP models fail to perform well. Developing a reliable evaluation test set for Bangla word embeddings is crucial for benchmarking and guiding future research. We provide a Mikolov-style word-analogy evaluation set specifically for Bangla, with a sample size of 16678, as well as a translated and curated version of the Mikolov dataset, which contains 10594 samples for cross-lingual research. Our experiments with different state-of-the-art embedding models reveal that Bangla has its own unique characteristics, and current embeddings for Bangla still struggle to achieve high accuracy on both datasets. We suggest that future research should focus on training models with larger datasets and considering the unique morphological characteristics of Bangla. This study represents the first step towards building a reliable NLP system for the Bangla language1.
Footnote 1: Dataset can be found here: [https://paperswithcode.com/dataset/bangla-word-analogy](https://paperswithcode.com/dataset/bangla-word-analogy) and we plan to publish the code after acceptance.
## 1 Introduction
The Bangla language, with over 300 million native speakers and ranking as the sixth most spoken language in the world, is often regarded as a language with limited resources Joshi et al. (2020). Despite the breakthrough in the field of Natural Language Processing (NLP), it has been observed that popular NLP models fail to perform well for low-resource languages like Bangla, despite showing human-like performance in high-resource languages like English.
Due to its significance in NLP tasks, there is a growing need for high-quality Bangla word embeddings, and several efforts have been made to create word embeddings for Bangla by training with a large corpus Artetxe and Schwenk (2019); Bhattacharjee et al. (2022); Feng et al. (2022). There have been very few efforts to create high-quality evaluation test sets for other NLP tasks like Sentiment Analysis, Machine Translation and Summarization Hasan et al. (2020, 2021); Akil et al. (2022); Bhattacharjee et al. (2022). Despite the availability of some word analogy evaluation datasets for other languages Gurevych (2005); Hassan and Mihalcea (2009); Sak et al. (2010); Joubarne and Inkpen (2011); Panchenko et al. (2016), there is a noteworthy lack of datasets for evaluating the efficacy of word embeddings in low-resource languages, such as Bangla.
We emphasize that the evaluation of Bangla word embeddings is a fundamental task that needs to be addressed. Word embeddings have a direct impact on the performance of different NLP tasks. Therefore, creating a high-quality evaluation test set for Bangla word embeddings will enable researchers to benchmark the performance of existing embedding models and guide further research.
In this paper, we present a Mikolov-style Mikolov et al. (2013) high-quality word-analogy evaluation set specifically for Bangla, with a sample size of 16678. To the best of our knowledge, we are the first to do so. We have also translated and curated Mikolov's dataset for Bangla, resulting in 10594 samples for cross-lingual research. Our test set includes a variety of tasks that evaluate the quality of Bangla word embeddings, such as word similarity and analogy detection. We also provide an analysis of the performance of several state-of-the-art embedding models: Word2Vec Mikolov et al. (2013), GloVe Pennington et al. (2014), fastText Bojanowski et al. (2017), LASER Artetxe and Schwenk (2019), LaBSE Feng et al. (2022), bnBERT Bhattacharjee et al. (2022),
bnBART Wolf et al. (2020) along with our and translated Mikolov dataset.
The lack of available datasets for evaluating the quality of Bangla word embeddings is a significant gap in the NLP research community. Experimental results demonstrate that Bangla has its own unique characteristics, and state-of-the-art techniques show very poor performance in both our and translated Mikolov datasets, highlighting the need for further research in low-resource languages like Bangla.
## 2 Dataset
To evaluate the quality of word vectors, a comprehensive test set was developed that contains three types of semantic questions and nine types of syntactic questions, as shown in Table 1. The semantic questions contain word pairs that are similar in meaning, while the syntactic questions involve relationships between words such as pluralization and verb tense. Overall, the comprehensive test set contains 5036 semantic and 11642 syntactic questions, providing a robust evaluation of the quality of word vectors for the Bangla language. The test set was created by first manually generating lists of similar word pairs, and then forming questions by connecting two pairs. For example, we made a list of 7 divisions and the 64 districts belonging to them, and formed 2160 questions by picking two word pairs of the division-district relation. Following Mikolov's approach Mikolov et al. (2013), word analogies were created using the format \(word_{B}\) - \(word_{A}=word_{D}\) - \(word_{C}\). The goal was to determine the \(word_{D}\) that is similar to \(word_{C}\) in the same way that \(word_{A}\) is similar to \(word_{B}\).
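As an illustration of this construction (not the authors' code), the sketch below expands a manually curated list of word pairs of one relationship type into analogy questions by combining every ordered pair of distinct word pairs; with \(n\) pairs this yields \(n(n-1)\) questions.

```python
from itertools import permutations

def make_questions(pairs):
    """pairs: list of (word_a, word_b) tuples for one relationship type,
    e.g. (division, district) pairs. Each question asks: a : b :: c : ?"""
    return [(a, b, c, d) for (a, b), (c, d) in permutations(pairs, 2)]

# Generic placeholder pairs (the actual lists are in Bengali):
pairs = [("division_1", "district_1"), ("division_2", "district_2"), ("division_3", "district_3")]
print(make_questions(pairs))  # 3 * 2 = 6 questions
```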
The creation of the test set also took into account the unique characteristics of the Bangla language. For example, the Bangla language has different forms for numbers, including ordinal forms, female forms, and forms with prefixes and suffixes that change the meaning of the word. Additionally, Bangla has colloquial forms that are present in ancient literature, stories, and novels. A total of 2844 word pairs were formed that reflect these unique characteristics of the Bangla language. Furthermore, 3776 word pairs that are unique relative to Mikolov's word analogy set, such as division-district pairs and number pairs with different forms, were included in the test set. These additional word pairs further demonstrate the diverse and complex nature of the Bangla language.
Furthermore, we also translated Mikolov's dataset2 and manually removed English words that do not have Bangla translations. We also removed word pairs that were duplicated in Bangla, such as present participles and plural verbs. This resulted in a dataset of 10594 samples from the original 19544 samples. It is worth noting that the capital-city and currency relationships present in Mikolov's dataset can also be applied to the Bangla language. In summary, while our dataset focuses on the linguistic specifics of the Bangla language, Mikolov's dataset is more focused on common words in both Bangla and English. The translation and curation of Mikolov's dataset provide a useful resource for cross-lingual research and analysis.
Footnote 2: www.fit.vutbr.cz/~imikolov/rnnlm/word-test.v1.txt
## 3 Experimental Setup
**Methods:** We evaluated the quality of word vectors on both datasets using both traditional and transformer-based models. We utilized various models, including Word2Vec3, GloVe4, and fastText5, to estimate the quality of word representations. Additionally, we employed transformer-based models, such as LASER6, LaBSE7, bnBERT8, and bnBART9.
Footnote 3: huggingface.co/sagorsarker/bangla_word2vec
Footnote 4: huggingface.co/sagorsarker/bangla-glove-vectors
Footnote 5: github.com/facebookresearch/fastText/
Footnote 6: github.com/facebookresearch/LASER
Footnote 7: huggingface.co/sentence-transformers/LaBSE
Footnote 8: huggingface.co/csebuetnlp/banglabert
Footnote 9: github.com/sagorbrur/btransformer
Footnote 10: bn.wikipedia.org/
For GloVe, we used the Bengali GloVe model that was trained on Bangla Wikipedia10 and Bangla news articles. Specifically, we utilized the model trained with 39M tokens. Additionally, for Word2Vec, we used the Bengali Word2Vec model trained with Bengali Wikipedia data Sarker (2021). We also used fastText, which provides a model for Bangla word embedding, to further evaluate the quality of word vectors.
For the transformer-based models, we used LaBSE, LASER, bnBERT, and bnBART, which provide sentence embeddings. To obtain the word embedding for a particular word, we passed the word to the model and collected the token embeddings for all the tokens of that word, which were then averaged to obtain the word embedding. We created an exhaustive embedding dictionary for 178152 Bengali words and used the word embeddings from that dictionary to perform the word
analogy task. Table 2 shows dimensions of different embedding used. Overall, our study provides a comprehensive evaluation of word vectors learned using various traditional and transformer-based models both for our dataset and translated Mikolov dataset.
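A minimal sketch of this token-averaging procedure (illustrative only; the Hugging Face model identifier below is an assumption, and any of the encoders listed above could be substituted):

```python
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "csebuetnlp/banglabert"   # assumed identifier for the bnBERT encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)
model.eval()

def word_embedding(word):
    # Tokenize the single word into sub-word tokens and average their embeddings.
    inputs = tokenizer(word, return_tensors="pt", add_special_tokens=False)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # shape (1, n_tokens, dim)
    return hidden.mean(dim=1).squeeze(0)

# Building the embedding dictionary over the 178,152-word vocabulary:
# emb = {w: word_embedding(w).numpy() for w in vocabulary}
```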
**Task Description:** Representing words as vectors to measure similarity has mainly been used for languages with established embeddings, leaving it unexplored for low-resource languages like Bangla. We explore the use of vector space models for measuring similarity in Bangla, using the approach proposed by Mikolov et al. (2013). Specifically, we focus on more complex similarity tasks, such as identifying words similar to a given number in the context of a date. For instance, we investigate the question of which Bangla words are similar to দুই (Two) in the same way that এক (One) is similar to প্রথম (First).
We achieve this by computing a vector X as the difference between the vector representations of প্রথম (First) and এক (One), added to the vector representation of দুই (Two). We then search for the word closest to X in the vector space, measured by cosine distance, and use it as the answer to the question. We discard the input question words during the search process. When the word vectors are properly trained, this method can correctly identify the most similar word, which in this case is দ্বিতীয় (Second).
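A minimal sketch of this procedure (illustrative, not the authors' code), where `emb` is assumed to be a dictionary mapping each vocabulary word to a NumPy vector:

```python
import numpy as np

def solve_analogy(emb, a, b, c, topk=5):
    """Answer a : b :: c : ? by cosine similarity, e.g. a=One, b=First, c=Two."""
    x = emb[b] - emb[a] + emb[c]
    x = x / np.linalg.norm(x)
    scored = []
    for word, vec in emb.items():
        if word in (a, b, c):               # discard the input question words
            continue
        scored.append((float(vec @ x) / np.linalg.norm(vec), word))
    return [w for _, w in sorted(scored, reverse=True)[:topk]]
```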
## 4 Result
We examine the impact of embedding dimension on word analogy accuracy, the impact of training data variations, and the accuracy for analogies described in our dataset and the translated Mikolov dataset. Our dataset focuses on unique characteristics of the Bengali language, while the translated Mikolov dataset aids in determining model accuracy for cross-lingual words. In our experiments, we used strict matching on our dataset for both Top-1(%) and Top-5(%) accuracy, so achieving maximum accuracy was challenging and not expected.
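A sketch of how such Top-k accuracies can be computed over a question set (illustrative only; it reuses the `solve_analogy` helper sketched above and skips questions with out-of-vocabulary words, which is an assumption rather than the authors' documented choice):

```python
def topk_accuracy(emb, questions, k=5):
    """questions: iterable of (a, b, c, gold) tuples; strict string match on gold."""
    usable = [q for q in questions if all(w in emb for w in q)]
    hits = sum(gold in solve_analogy(emb, a, b, c, topk=k) for a, b, c, gold in usable)
    return hits / len(usable) if usable else 0.0
```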
**Embedding Dimension Variation:** LaBSE provided a 768-dimensional vector that gave better Top-5% accuracy for semantic relationships around 12%, while bnBART's 1024-dimensional vector gave better syntactic accuracy around 25%, as demonstrated in Figure 1. Top-1% accuracy was generally low in all cases, except for bnBART, which achieved around 21% accuracy.
**Training Data Variation:** Word2Vec, GloVe, fastText, and bnBERT were trained with Bengali Wikipedia and crawled Bangla news articles. In contrast, LaBSE and LASER were trained with multilingual data, with LaBSE trained on data from 109 languages and LASER trained on data from 97 languages (Feng et al., 2022). bnBART11, on the other hand, was trained on data for different inference tasks for the Bengali language, such as QA, NER, MASK generation, Translation, and Text generation. Table 3 demonstrates that training data plays a critical role in the quality of learned word embeddings. Hence, LaBSE and bnBART performed well for different semantic and syntactic relationships and were the overall win
\begin{table}
\begin{tabular}{l|c} \hline \hline
**Type of Relationship** & **Sample \#** \\ \hline
Division-District & 2160 \\
Gender & 1260 \\
Number-ordinal & 380 \\
Number-date & 930 \\ \hline
Antonym-objective & 4692 \\
Antonym-misc. & 3732 \\ \hline
Tense & 144 \\
Comparative & 552 \\
Superlative & 600 \\
Prefix & 66 \\
Suffix & 94 \\
Affix & 38 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Types of semantic and syntactic relationships in the test set with their sample counts. The example Bengali word pairs of the original table are omitted here, as the Bengali script was not preserved in the extraction.
ners. Task-specific learning appears to aid in the learning of better word embeddings as well.
**Dataset Variation:** We compared our dataset and the translated Mikolov dataset in Figure 2. LaBSE and bnBART were the overall winners again, indicating that more training data, including multilingual and task-specific learning, helps the models learn specific characteristics of the Bangla language. For more results, please refer to Appendix 7.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
**Embedding** & Word2Vec & GloVe & fastText & LaBSE & bnBERT & LASER & bnBART \\ \hline
**Dimension** & 100 & 100 & 300 & 768 & 768 & 1024 & 1024 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Dimensions of the different embeddings used.
Figure 1: Top-1(%) and Top-5(%) accuracy for different embeddings in our dataset.
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline & \multicolumn{7}{c}{**Embedding**} \\ \hline
**Relationship** & Word2Vec & GloVe & fastText & LaBSE & bnBERT & LASER & bnBART \\ \hline Division-District & **9.8** & 9.5 & 0.0 & 0.1 & 0.6 & 0.0 & 0.0 \\ Gender & 11.2 & 8.5 & 3.3 & **21.9** & 0.1 & 12.7 & 12.6 \\ Number & 5.9 & 5.1 & 0.6 & **16.3** & 0.4 & 3.4 & 3.7 \\ \hline Antonym & 11.0 & 7.6 & **14.5** & 3.5 & 0.7 & 1.0 & 3.3 \\ Tense & 6.3 & 2.8 & 4.2 & 13.9 & 0.0 & **16.7** & 14.6 \\ Comparative & 5.1 & 1.1 & 6.5 & 12.5 & 2.2 & 5.3 & **14.7** \\ Superlative & 3.5 & 1.3 & 3.3 & **10.2** & 0.0 & 6.3 & 5.2 \\ Prefix & 1.5 & 0.0 & 3.0 & 9.1 & 0.0 & 3.0 & **43.9** \\ Suffix & 0.0 & 0.8 & 0.8 & 30.4 & 0.0 & 22.8 & **53.6** \\ Affix & 21.1 & 23.7 & 5.3 & 63.2 & 0.0 & 50.0 & **79.0** \\ Plural & 7.3 & 2.5 & 1.5 & **41.6** & 0.0 & 18.4 & 11.1 \\ Colloquial-Standard & 9.8 & 3.9 & 1.9 & **23.5** & 0.0 & 6.6 & 9.4 \\ \hline
**Overall Accuracy** & 7.7 & 5.6 & 3.7 & 20.5 & 0.3 & 12.2 & 20.9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Top-5 (%) accuracy on the three types of semantic and nine types of syntactic relationship sets. The highest accuracy for each relationship type is in bold. Results indicate that certain embeddings perform strongly, moderately, or weakly depending on the relationship type, highlighted in green, blue, and red, respectively.
Figure 2: Comparison of Top-1(%) and Top-5(%) accuracy on translated Mikolov dataset and our dataset.
## 5 Conclusion
Despite being the 7th most spoken language12 in the world, Bangla is still considered a low-resource language in terms of NLP research. Previous studies have attempted to develop NLP tasks for Bangla, but there is a lack of high-quality datasets to evaluate the effectiveness of learned word embeddings. To address this gap, we have presented a high-quality dataset for evaluating Bangla-specific word analogy, similar to the Mikolov dataset. Additionally, we have manually filtered and translated the Mikolov dataset into Bangla, potentially enabling cross-lingual research. Our experiments with different word embeddings suggest that current word embeddings for Bangla still struggle to achieve high accuracy on both datasets. To improve performance, future research should focus on training models with larger datasets and taking into account the unique morphological characteristics of Bangla when developing models for different NLP tasks.
Footnote 12: en.wikipedia.org/wiki/List of languages
## 6 Limitations
It is important to acknowledge that our study has certain limitations. Firstly, we may have unintentionally dropped some word analogy relations while creating the dataset for Bangla. Therefore, the dataset may not be completely comprehensive, and some relevant relationships might be missing. Additionally, we considered a translated dataset from English only and did not consider other languages. Therefore, the applicability of our findings to other languages may be limited. Future research can be conducted with translated datasets from other languages to explore the potential differences in word analogy relations across languages. Furthermore, our study did not explore the impact of other factors such as cultural or contextual differences, which may also influence word analogy relations.
|
2306.12608 | DP-BREM: Differentially-Private and Byzantine-Robust Federated Learning
with Client Momentum | Federated Learning (FL) allows multiple participating clients to train
machine learning models collaboratively while keeping their datasets local and
only exchanging the gradient or model updates with a coordinating server.
Existing FL protocols are vulnerable to attacks that aim to compromise data
privacy and/or model robustness. Recently proposed defenses focused on ensuring
either privacy or robustness, but not both. In this paper, we focus on
simultaneously achieving differential privacy (DP) and Byzantine robustness for
cross-silo FL, based on the idea of learning from history. The robustness is
achieved via client momentum, which averages the updates of each client over
time, thus reducing the variance of the honest clients and exposing the small
malicious perturbations of Byzantine clients that are undetectable in a single
round but accumulate over time. In our initial solution DP-BREM, DP is achieved
by adding noise to the aggregated momentum, and we account for the privacy cost
from the momentum, which is different from the conventional DP-SGD that
accounts for the privacy cost from the gradient. Since DP-BREM assumes a
trusted server (who can obtain clients' local models or updates), we further
develop the final solution called DP-BREM+, which achieves the same DP and
robustness properties as DP-BREM without a trusted server by utilizing secure
aggregation techniques, where DP noise is securely and jointly generated by the
clients. Both theoretical analysis and experimental results demonstrate that
our proposed protocols achieve better privacy-utility tradeoff and stronger
Byzantine robustness than several baseline methods, under different DP budgets
and attack settings. | Xiaolan Gu, Ming Li, Li Xiong | 2023-06-22T00:11:53Z | http://arxiv.org/abs/2306.12608v3 | # DP-BREM: Differentially-Private and Byzantine-Robust Federated Learning
###### Abstract
Federated Learning (FL) allows multiple participating clients to train machine learning models collaboratively by keeping their datasets local and only exchanging the gradient or model updates with a coordinating server. Existing FL protocols were shown to be vulnerable to attacks that aim to compromise data privacy and/or model robustness. Recently proposed defenses focused on ensuring either privacy or robustness, but not both. In this paper, we focus on simultaneously achieving differential privacy (DP) and Byzantine robustness for cross-silo FL, based on the idea of learning from history. The robustness is achieved via client momentum, which averages the updates of each client over time, thus reduces the variance of the honest clients and exposes the small malicious perturbations of Byzantine clients that are undetectable in a single round but accumulate over time. In our initial solution DP-BREM, the DP property is achieved via adding noise to the aggregated momentum, and we account for the privacy cost from the momentum, which is different from the conventional DP-SGD that accounts for the privacy cost from gradient. Since DP-BREM assumes a trusted server (who can obtain clients' local models or updates), we further develop the final solution called DP-BREM\({}^{+}\), which achieves the same DP and robustness properties as DP-BREM without a trusted server by utilizing secure aggregation techniques, where DP noise is securely and jointly generated by the clients. Our theoretical analysis on the convergence rate and experimental results under different DP guarantees and attack settings demonstrate that our proposed protocols achieve better privacy-utility tradeoff and stronger Byzantine robustness than several baseline methods.
## 1 Introduction
Federated learning (FL) [29] is an emerging paradigm that enables multiple clients to collaboratively learn models without explicitly sharing their data. The clients upload their local model updates to a coordinating server, who then shares the global average with the clients in an iterative process. This offers a promising solution to mitigate the potential privacy leakage of sensitive information about individuals (since the data stays local with each client), such as typing history, shopping transactions, geographical locations, and medical records. However, recent works have demonstrated that FL may not always provide sufficient _privacy_ and _robustness_ guarantees. In terms of privacy leakage, exchanging the model updates throughout the training process can still reveal sensitive information [31, 4] and cause deep leakage such as pixel-wise accurate image recovery [50, 47], either to a third-party (including other participating clients) or the central server. In terms of robustness, the decentralization design of FL systems opens up the training process to manipulation by malicious clients, aiming to either prevent the convergence of the global model (a.k.a. Byzantine attacks) [3, 15, 44], or implant a backdoor trigger into the global model to cause targeted misclassification (a.k.a. backdoor attacks) [2, 43].
To mitigate the privacy leakage in FL, Differential Privacy (DP) [12, 13] has been adopted as a rigorous privacy notion. Existing frameworks [30, 26, 18] applied DP in FL to provide _client-level_ privacy under the assumption of a trusted server: whether a client has participated in the training process cannot be inferred by a third party from the released global model. Other works in FL [45, 48, 26, 40] focused on _record-level_ privacy: whether a data record at a client has participated during training cannot be inferred by the server or other adversaries that have access to the model updates or the global model. Record-level privacy is more relevant in cross-silo (as versus cross-device) FL scenarios, such as multiple hospitals collaboratively learn a prediction model for COVID-19, in which case what needs to be protected is the privacy of each patient (corresponding to each record in a hospital's dataset). In this paper, we focus on cross-silo FL with _record-level_ DP, where each client possesses a set of raw records, and each record corresponds to an individual's private data.
On the other hand, to defend against Byzantine attacks, robust FL protocols have been proposed to ensure that the training procedure is robust to a fraction of potentially malicious
clients. This problem has received significant attention from the community. Most existing approaches replace the averaging step at the server with a robust aggregation rule, such as the median [5, 8, 32, 46]. However, recent state-of-the-art attacks [3, 44] have empirically demonstrated the failure of the above robust aggregators. Furthermore, a more recent work [22] shows that there exist realistic scenarios where these robust aggregators fail to converge, even if there are no Byzantine attackers and the data distribution is identical (i.i.d.) across the clients, and proposed a new solution called Learning From History (LFH) to address this issue. LFH achieves robustness via client momentum with the motivation of averaging the updates of each client over time, thus reducing the variance of the honest clients and exposing the small malicious perturbations of Byzantine clients that are undetectable in a single round but accumulate over time.
In this paper, we focus on achieving record-level DP and Byzantine robustness simultaneously in cross-silo FL. Existing differentially-private FL protocols based on DP-SGD [1] do not achieve the robustness property intrinsically, and directly implementing an existing robust aggregator over the privatized client gradients will lead to poor utility due to the large aggregated DP noise (added by the clients). On the other hand, it is desirable to achieve DP guarantees based on average-based aggregators, because other aggregators (such as the median [5, 32, 46]) usually have large sensitivity for DP and therefore poor utility (due to large DP noise). Although LFH [22] is an average-based Byzantine-robust FL protocol, it aggregates client momentum instead of gradients, thus it is non-trivial to achieve DP on top of LFH. We show that a direct combination of LFH with DP-SGD momentum has several limitations, leading to both poor utility and poor robustness. Therefore, we aim to address these limitations in our solution.
To achieve an enhanced privacy-utility tradeoff, we start from the assumption that the server is trusted, and develop a **D**ifferentially-**P**rivate and **B**yzantine-**R**obust f**E**derated learning algorithm with client **M**omentum (**DP-BREM**), which essentially is a differentially privatized version of the Byzantine-robust method LFH [22]. Instead of adding DP noise to the gradient and then aggregating momentum as post-processing, we add DP noise to the aggregated momentum with carefully computed sensitivity to account for the privacy cost. Since the noise is added to the final aggregate (instead of the intermediate local gradients), our basic privacy-preserving solution DP-BREM maintains the non-private LFH's robustness as much as possible, which we show both theoretically (via convergence analysis) and empirically (via experimental results). Then, we relax our trust assumption to a malicious server (for privacy only) and develop our final solution called DP-BREM\({}^{+}\). It utilizes secure multiparty computation (MPC) techniques, including secure aggregation and secure noise generation, to achieve the same DP and robustness guarantees as in DP-BREM.
Our main contributions are summarized as follows:
1) We propose a novel differentially-private and Byzantine-robust FL protocol called DP-BREM, which adds DP noise to the aggregated client momentum with carefully computed sensitivity. Our privacy analysis (shown in Theorem 1) accounts for the privacy cost from momentum, which is different from the conventional DP-SGD that accounts for the privacy cost from gradient. We also provide the convergence analysis of DP-BREM (shown in Theorem 3), which indicates that there is only a small sacrifice on the convergence rate to satisfy DP (as versus a large convergence sacrifice of the baseline solution shown in Section 3.2).
2) Considering that DP-BREM is developed under the assumption of a trusted server, we propose the final solution called DP-BREM\({}^{+}\) (in Section 5), which achieves the same privacy and robustness properties as DP-BREM, even under a _malicious_ server (for privacy only), by utilizing secure multiparty computation techniques. DP-BREM\({}^{+}\) is built on the framework of secure aggregation with verifiable inputs (SAVI) [35], but extends it to guarantee the integrity of DP noise via a novel secure distributed noise generation protocol. Our extended SAVI protocol is general enough to be applied to other DP and robust FL protocols that are average-based.
3) We conduct extensive experiments using the MNIST and CIFAR-10 datasets (in Section 6) to demonstrate the effectiveness of our protocols. The results show that they can achieve better utility under the same record-level DP guarantees, as well as strong robustness against Byzantine clients under different attacks, compared with several baseline methods.
## 2 Preliminaries
### Differential Privacy (DP)
Differential Privacy (DP) is a rigorous mathematical framework for the release of information derived from private data. Applied to machine learning, a differentially-private training mechanism allows the public release of model parameters with a strong privacy guarantee: adversaries are limited in what they can learn about the original training data based on analyzing the parameters, even when they have access to arbitrary side information. The formal definition is as follows:
**Definition 1** (\((\epsilon,\delta)\)-DP [12, 13]).: _For \(\epsilon\in[0,\infty)\) and \(\delta\in[0,1)\), a randomized mechanism \(\mathcal{M}:\mathcal{D}\rightarrow\mathcal{R}\) with a domain \(\mathcal{D}\) (e.g., possible training datasets) and range \(\mathcal{R}\) (e.g., all possible trained models) satisfies \((\epsilon,\delta)\)-Differential Privacy (DP) if for any two neighboring datasets \(D,D^{\prime}\in\mathcal{D}\) that differ in only one record and for any subset of outputs \(S\subseteq\mathcal{R}\), it holds that_
\[\mathbb{P}[\mathcal{M}(D)\in S]\leqslant e^{\epsilon}\cdot\mathbb{P}[\mathcal{M}(D^{\prime})\in S]+\delta\]
_where \(\epsilon\) and \(\delta\) are privacy parameters (or privacy budget), and a smaller \(\epsilon\) and \(\delta\) indicate a more private mechanism._
**Gaussian Mechanism.** A common paradigm for approximating a deterministic real-valued function \(f:\mathcal{D}\rightarrow\mathbb{R}\) with
a differentially-private mechanism is via additive noise calibrated to \(f\)'s sensitivity \(s_{f}\), which is defined as the maximum of the absolute distance \(|f(D)-f(D^{\prime})|\). The Gaussian Mechanism is defined by \(\mathcal{M}(D)=f(D)+\mathcal{N}(0,s_{f}^{2}\cdot\sigma^{2})\), where \(\mathcal{N}(0,s_{f}^{2}\cdot\sigma^{2})\) is the normal (Gaussian) distribution with mean 0 and standard deviation \(s_{f}\sigma\). It was shown that the mechanism \(\mathcal{M}\) satisfies \((\varepsilon,\delta)\)-DP if \(\delta\geq\frac{4}{5}e^{-(\sigma\varepsilon)^{2}/2}\) and \(\varepsilon<1\)[13]. Note that we use an advanced privacy analysis tool proposed in [11], which works for all \(\varepsilon>0\).
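A minimal sketch of the mechanism and the classic noise calibration (illustrative, not part of the paper's protocol); the calibration \(\sigma\geq\sqrt{2\ln(1.25/\delta)}/\varepsilon\) used below is the standard one that implies the condition quoted above for \(\varepsilon<1\):

```python
import numpy as np

def gaussian_mechanism(f_value, s_f, sigma, rng=None):
    """Release f(D) with Gaussian noise of standard deviation s_f * sigma."""
    if rng is None:
        rng = np.random.default_rng()
    return f_value + rng.normal(loc=0.0, scale=s_f * sigma)

def sigma_for(epsilon, delta):
    # Classic calibration for (epsilon, delta)-DP with epsilon < 1.
    return np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
```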
**DP-SGD Algorithm.** The most well-known differentially-private algorithm in machine learning is DP-SGD [1], which introduces two modifications to the vanilla stochastic gradient descent (SGD). First, a _clipping step_ is applied to the gradient so that the gradient is in effect bounded. This step is necessary to have a finite sensitivity. The second modification is _Gaussian noise augmentation_ on the summation of clipped gradients, which is equivalent to applying the Gaussian mechanism to the updated iterates. The privacy accountant of DP-SGD is shown in Appendix F.
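The two modifications can be sketched as follows (illustrative NumPy pseudocode, not the implementation used in this paper); `per_example_grads` is assumed to be a \((B,d)\) array of per-record gradients for a batch of size \(B\), and `theta` a \(d\)-dimensional parameter vector:

```python
import numpy as np

def dpsgd_step(theta, per_example_grads, lr, clip_C, sigma, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # 1) Clip each per-record gradient to L2 norm at most clip_C.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_C / np.maximum(norms, 1e-12))
    # 2) Add Gaussian noise to the clipped sum (sensitivity clip_C).
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma * clip_C, size=theta.shape)
    return theta - lr * noisy_sum / per_example_grads.shape[0]
```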
### Federated Learning (FL) with DP
Federated Learning (FL) [29, 21] is a collaborative learning setting to train machine learning models. We consider the horizontal cross-silo FL setting. Horizontal FL involves multiple clients, each holding their own private dataset of the same set of features, and a central server who implements the aggregation. Unlike the traditional centralized approach, data is not stored at a central server; instead, clients train models locally and exchange updated parameters with the server, who aggregates the received local model parameters and sends them to the clients. Based on the participating clients and scale, federated learning can be classified into two types: _cross-device_ FL where clients are typically mobile devices and the client number can reach up to a scale of millions; _cross-silo_ FL (our focus) where clients are organizations or companies and the client number is usually small (e.g., within a hundred).
**FL with DP.** In FL, the _neighboring datasets_ \(D\) and \(D^{\prime}\) in Definition 1 can be defined at two distinct levels: _record-level_ and _client-level_. In cross-device FL, each device usually stores one individual's data, so the device's entire dataset should be protected. This corresponds to client-level DP, where \(D^{\prime}\) is obtained by adding or removing one client/device's whole training dataset from \(D\). In cross-silo FL, each record corresponds to one individual's data, so record-level DP should be provided, where \(D^{\prime}\) is obtained by adding or removing a single training record/example from \(D\). Since we consider cross-silo FL, achieving _record-level_ DP is our privacy goal.
### Byzantine Attacks and Defenses
A Byzantine attack is one in which the adversary aims to prevent the convergence of the global model. Due to their decentralized design, FL systems are vulnerable to Byzantine clients, who do not obey the protocol and can send arbitrary messages to the server. Moreover, such clients may have complete knowledge of the system and learning algorithms, and can collude with each other. Most state-of-the-art defense mechanisms [32, 3, 5, 46] rely on median-based statistics of the gradient contributions. However, recent attacks [3, 44] have empirically demonstrated the failure of these robust aggregations.
**LFH: Non-private Byzantine-Robust Defense.** Recently, Karimireddy et al. [22] showed that most state-of-the-art robust aggregators require strong assumptions and can fail in real settings even in the complete absence of Byzantine attackers. Then, they proposed a new Byzantine-robust scheme called "learning from history" (LFH) that essentially utilizes two simple strategies: _client momentum_ (during local update) and _centered clipping_ (during server aggregation). For presentation simplicity, we assume the momentum parameter \(\beta\in[0,1)\) is the same for all clients in all iterations. In each iteration \(t\), client \(\mathsf{C}_{i}\) receives the global model parameter \(\mathbf{\theta}_{t-1}\) from the server, and computes the local gradient of the random dataset batch \(\mathcal{D}_{t,i}\subseteq\mathcal{D}_{i}\) by
\[\mathbf{g}_{t,i}=\frac{1}{|\mathcal{D}_{t,i}|}\sum\nolimits_{\mathbf{x}\in\mathcal{D}_{t,i}}\nabla_{\mathbf{\theta}}\ell(\mathbf{x},\mathbf{\theta}_{t-1}) \tag{1}\]
where \(\nabla_{\mathbf{\theta}}\ell(\mathbf{x},\mathbf{\theta}_{t-1})\) is the per-record gradient w.r.t. the loss function \(\ell(\cdot)\). The client momentum can be computed via
\[\mathbf{m}_{t,i}=\begin{cases}\mathbf{g}_{t,i},&\text{if }t=1\\ (1-\beta)\mathbf{g}_{t,i}+\beta\mathbf{m}_{t-1,i},&\text{if }t>1\end{cases} \tag{2}\]
After receiving \(\mathbf{m}_{t,i}\) from all clients, the server implements aggregation with centered clipping via
\[\mathbf{m}_{t}=\mathbf{m}_{t-1}+\frac{1}{n}\sum\nolimits_{i=1}^{n}\text{Clip}_{C}(\bm {m}_{t,i}-\mathbf{m}_{t-1}) \tag{3}\]
where \(\text{Clip}_{C}(\cdot)\) with scalar \(C>0\) is the clipping function:
\[\text{Clip}_{C}(\mathbf{x})\coloneqq\mathbf{x}\cdot\min\{1,\;C/\|\mathbf{x}\|\} \tag{4}\]
and \(\|\mathbf{x}\|\) is the L2-norm of a vector \(\mathbf{x}\). The clipping operation \(\text{Clip}_{C}(\mathbf{m}_{t,i}-\mathbf{m}_{t-1})\) essentially bounds the distance between the client's local momentum \(\mathbf{m}_{t,i}\) and the previous aggregated momentum \(\mathbf{m}_{t-1}\), thus restricting the impact of Byzantine clients. Then, the global model \(\mathbf{\theta}_{t}\) can be updated by \(\mathbf{\theta}_{t}=\mathbf{\theta}_{t-1}-\eta_{t}\mathbf{m}_{t}\) with learning rate \(\eta_{t}\). The convergence rate under Byzantine attacks is given by the following lemma.
**Lemma 1** (Convergence Rate of LFH [22]).: _With some parameter tuning, the convergence rate of Byzantine-robust algorithm LFH is asymptotically (ignoring constants and higher order terms) of the order_
\[\frac{1}{T}\sum\nolimits_{t=1}^{T}\mathbb{E}\|\nabla\ell(\mathbf{\theta}_{t-1})\|^ {2}\lesssim\sqrt{\frac{\rho^{2}}{T}\frac{1+|\mathcal{B}|}{n}} \tag{5}\]
_where \(\ell(\cdot)\) is the loss function, \(T\) is the total number of training iterations, \(|\mathcal{B}|\) is the number of Byzantine clients, \(n\) is the
number of all clients, and \(\rho\) is a parameter that quantifies the variance of honest clients' stochastic gradients:_
\[\mathbb{E}\|\mathbf{g}_{t,i}-\mathbb{E}[\mathbf{g}_{t,i}]\|^{2}\leqslant\rho^{2} \tag{6}\]
**Interpretation of Lemma 1.** When there are no Byzantine clients, LFH recovers the optimal rate of \(\frac{\rho}{\sqrt{nT}}\). In the presence of a \(|\mathcal{B}|/n\) fraction of Byzantine clients, the rate has an additional term \(\rho\sqrt{\frac{|\mathcal{B}|/n}{T}}\), which depends on the fraction \(|\mathcal{B}|/n\) and does not improve as the total number of clients increases.
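To make the LFH update rule concrete, the sketch below implements one round of Eqs. (2)–(4) in NumPy; function and variable names are ours, and Byzantine behavior and the learning-rate schedule are omitted.

```python
import numpy as np

def clip_to(x, C):
    """Clip_C(x) = x * min(1, C / ||x||_2), Eq. (4)."""
    return x * min(1.0, C / (np.linalg.norm(x) + 1e-12))

def lfh_round(prev_momentum, client_grads, client_momenta, beta, C):
    """One LFH round for t > 1: update client momenta (Eq. 2), then center-clip
    each around the previous aggregate and average (Eq. 3).
    For t = 1, the client momentum is simply the client gradient."""
    new_momenta = [(1 - beta) * g + beta * m for g, m in zip(client_grads, client_momenta)]
    deltas = [clip_to(m - prev_momentum, C) for m in new_momenta]
    aggregate = prev_momentum + np.mean(deltas, axis=0)
    return aggregate, new_momenta
```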
## 3 Problem Statement and Motivation
### Problem Statement
**System Model.** Our system model follows the general setting of Fed-SGD [29]. There are multiple parties in the FL system: one aggregation server and \(n\) participating clients \(\{\mathsf{C}_{1},\cdots,\mathsf{C}_{n}\}\). The server holds a global model \(\mathbf{\theta}_{t}\in\mathbb{R}^{d}\) and each client \(\mathsf{C}_{i}\), \(i\in\{1,\cdots,n\}\), possesses a private training dataset \(\mathcal{D}_{i}\). The server communicates with each client through a secure (private and authenticated) channel. During the iterative training process, the server broadcasts the current global model to all clients and aggregates the received gradients/momenta from all clients (or a subset of clients) to update the global model until convergence. The final global model is returned as the output of the training process.
**Threat Model.** The considered adversary aims to perform a 1) privacy attack and/or 2) Byzantine attack with the following threat model respectively.
1) Privacy Attack. Following the FL setting, we assume the server has no access to the clients' local training data, but may have an incentive to infer clients' private information. In our initial solution DP-BREM, we assume a trusted server who can obtain clients' local models/updates. The adversary is a third party or the participating clients (which can be any set of clients) who have access to the intermediate and final global models and may use them to infer the private data of an honest client \(\mathsf{C}_{i}\). Hence, the privacy goal is to ensure that the global models (and their updates) satisfy DP. In our final solution DP-BREM\({}^{+}\), in addition to third parties and clients, the adversary also includes the server, which tries to infer additional information from the local updates (and may deviate from the protocol for privacy inference). Such a model is also adopted in previous work [35]. In this case, we assume a minority of malicious clients who can deviate from the protocol arbitrarily.
2) Byzantine Attack. We only consider a minority of malicious clients as the adversary for Byzantine attacks, because the server's primary goal is to train a robust model (thus it has no incentive to launch Byzantine attacks). These malicious clients can deviate from the protocol arbitrarily and have full control over both their local training data and their submissions to the server, but have no influence on other honest clients.
**Objectives.** The goal of this paper is to achieve both record-level DP and Byzantine robustness at the same time. We aim to provide high utility (i.e., high accuracy of the global model) with the required DP guarantee in the presence of Byzantine attacks from malicious clients. Our ultimate privacy goal is to provide record-level DP guarantees against an untrusted server and other clients (but we start by assuming a trusted server in our basic solution).
### Challenges and Baseline
**Challenges: replacing the average-based aggregator leads to large DP sensitivity.** Though there are many works on achieving either DP or Byzantine robustness, it is nontrivial to achieve both with high utility. The main reason is that most Byzantine-robust methods replace the averaging aggregator with median-based strategies or other complex robust aggregators, which leads to a large DP sensitivity compared to the averaging operation, as illustrated in Example 1.
**Example 1** (Sensitivity Computation: Average vs. Median).: _Consider a one-dimensional dataset of size 5, such as \(\mathcal{D}=\{1,3,5,7,9\}\), and a neighboring dataset \(\mathcal{D}^{\prime}\) obtained by changing one value in \(\mathcal{D}\) by at most 1, such as \(\mathcal{D}^{\prime}=\{1,3,\mathbf{4},7,9\}\). Then, the sensitivity of the average query is \(\max_{\mathcal{D},\mathcal{D}^{\prime}}|\mathsf{avg}(\mathcal{D})-\mathsf{avg}(\mathcal{D}^{\prime})|=1/5=0.2\). However, the sensitivity of the median query is \(\max_{\mathcal{D},\mathcal{D}^{\prime}}|\mathsf{median}(\mathcal{D})-\mathsf{median}(\mathcal{D}^{\prime})|=1\)._
_Moreover, as the dataset size increases, the sensitivity of the average query decreases (so less noise needs to be added), while the sensitivity of the median query stays the same._
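The gap in Example 1 can also be checked numerically; the sketch below perturbs one entry of \(\mathcal{D}\) by at most 1 and reports the worst-case change of the average and the median (a brute-force illustration, not a general sensitivity computation).

```python
import numpy as np
from itertools import product

D = np.array([1, 3, 5, 7, 9], dtype=float)

def worst_case_gap(stat):
    """Worst-case |stat(D) - stat(D')| over neighbors that change one entry by at most 1."""
    gap = 0.0
    for i, delta in product(range(len(D)), np.linspace(-1, 1, 201)):
        D_prime = D.copy()
        D_prime[i] += delta
        gap = max(gap, abs(stat(D) - stat(D_prime)))
    return gap

print(worst_case_gap(np.mean))    # 0.2  (= 1/5)
print(worst_case_gap(np.median))  # 1.0
```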
**DP-LFH: baseline via direct combination of LFH and DP-SGD.** As shown in Section 2.3, the Byzantine-robust scheme LFH [22] utilizes an _average-based_ aggregator, which can be regarded as a non-private robust solution that avoids the disadvantage of _median-based_ aggregators. A straightforward method to add DP protection on top of LFH is to combine it with the DP-SGD algorithm. However, LFH requires each client to compute the local momentum \(\mathbf{m}_{t,i}\) for server aggregation, while DP-SGD aggregates gradients and accounts for the privacy cost via composition over iterative gradient updates. In the central setting (all data are stored on a trusted server), the server can compute the privatized momentum by first implementing DP-SGD, where the gradient of each data batch satisfies DP, and then directly computing the momentum from these privatized gradients (with no additional privacy cost due to post-processing). In the FL setting (clients' data are stored locally), since gradients are computed by each client in LFH, the direct combination with DP-SGD is to privatize each client's local gradient \(\mathbf{g}_{t,i}\) in Eq. (1) via
\[\mathbf{g}_{t,i}=\frac{1}{|\mathcal{D}_{t,i}|}\left[\sum_{\mathbf{x}\in\mathcal{D}_{t,i}}\mathsf{Clip}_{R}(\nabla_{\mathbf{\theta}}\ell(\mathbf{x},\mathbf{\theta}_{t-1}))+\mathcal{N}(0,R^{2}\sigma^{2}\mathbf{I}_{d})\right], \tag{7}\]
where \(\mathbf{I}_{d}\) is the \(d\times d\) identity matrix (\(d\) is the model size, i.e., \(\mathbf{\theta}_{t}\in\mathbb{R}^{d}\)), the record-level clipping \(\text{Clip}_{R}(\cdot)\) with bound \(R\) restricts the sensitivity when adding/removing one record from the local dataset, and the Gaussian noise \(\mathcal{N}(0,R^{2}\sigma^{2}\mathbf{I}_{d})\) makes \(\boldsymbol{g}_{t,i}\) differentially private. Since DP is immune to post-processing, the remaining steps can be implemented in the same way as in the original LFH without incurring additional privacy cost. This baseline solution DP-LFH achieves record-level DP against a semi-honest server. However, it has several limitations, which lead to both a poor privacy-utility tradeoff and weak robustness.
**Limitation 1: large aggregated noise.** Since each client locally adds noise (for DP), the overall noise after aggregation is larger than in the central setting under the same privacy budget (i.e., the same value of \(\epsilon\)), because only the server adds noise in the central setting. Therefore, DP-LFH has a poor privacy-utility tradeoff.
**Limitation 2: large impact on Byzantine robustness.** Since the DP noise is added locally to each client's gradient before centered clipping, it has a negative impact on Byzantine robustness: the noisy client momentum \(\boldsymbol{m}_{t,i}\) has larger variance than the noise-free one, which leads to larger bias and variance in the clipping step \(\text{Clip}_{C}(\boldsymbol{m}_{t,i}-\boldsymbol{m}_{t-1})\). Furthermore, this impact is enlarged when there are more Byzantine clients, which is explained as follows. Since the parameter \(\rho^{2}\) defined in (6) quantifies the variance of clients' gradients, and the DP noise is added to the local gradient in (7), the parameter \(\rho\) in the convergence rate shown in (5) is replaced by \(\rho+\sqrt{d}\sigma\) (ignoring constants) for DP-LFH, i.e., the convergence rate of DP-LFH is asymptotically of the order
\[\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\|\nabla\ell(\mathbf{\theta}_{t-1})\|^{2}\lesssim\sqrt{\frac{(\rho+\sqrt{d}\sigma)^{2}}{T}\frac{1+|\mathcal{B}|}{n}} \tag{8}\]
Therefore, either a large \(d\) (i.e., large model) or a large \(\sigma\) (i.e., small privacy budget \(\epsilon\)) will enlarge the impact from Byzantine clients due to the order \(O(\sqrt{d\sigma^{2}|\mathcal{B}|})\) of convergence rate. We note that Guerraoui et al.'s work [19] also shares a similar insight: they show that DP with local noise and Byzantine robustness are incompatible, especially when the dimension of model parameters \(d\) is large.
**Limitation 3: no privacy amplification from client-level sampling due to momentum.** According to the recursive representation \(\boldsymbol{m}_{t,i}=(1-\beta)\boldsymbol{g}_{t,i}+\beta\boldsymbol{m}_{t-1,i}\), client \(\text{C}_{i}\)'s momentum in \(t\)-th iteration \(\boldsymbol{m}_{t,i}\) is essentially a weighted summation of all previous privatized client gradients:
\[\boldsymbol{m}_{t,i}=(1-\beta)(\boldsymbol{g}_{t,i}+\beta\boldsymbol{g}_{t-1,i}+\cdots+\beta^{t-2}\boldsymbol{g}_{2,i})+\beta^{t-1}\boldsymbol{g}_{1,i} \tag{9}\]
where \(\boldsymbol{g}_{1,i},\boldsymbol{g}_{2,i},\cdots,\boldsymbol{g}_{t,i}\) are already privatized via local noise. Assume the server samples a subset of clients for aggregation in each iteration. Assume that client \(\text{C}_{i}\)'s momentum \(\boldsymbol{m}_{t,i}\) is not aggregated (thus the aggregate in the \(t\)-th iteration is independent of \(\boldsymbol{g}_{t,i}\)). However, in a later iteration (i.e., \(\tau>t\)), if client \(\text{C}_{i}\)'s momentum \(\boldsymbol{m}_{\tau,i}\) is involved in the aggregation, it will depend on \(\boldsymbol{g}_{t,i}\) according to (9). Therefore, we need to account for the privacy cost of \(\boldsymbol{g}_{t,i}\) in all iterations. There is no privacy amplification benefit from sampling clients, leading to high privacy cost or low utility.
## 4 Dp-Brem
To address the limitations of DP-LFH, we start from the assumption of a trusted server who can obtain clients' local models/updates and generates the DP noise, and propose an initial solution called DP-BREM (in Section 4.1). It is a differentially-private version of LFH with carefully designed enhancements, achieving a similar level of robustness as the non-private LFH. Since DP-BREM adds DP noise to the momentum (as opposed to adding noise to the gradient as in DP-SGD), our privacy accountant shown in Section 4.2 differs from the traditional privacy accountant of DP-SGD. We also provide the convergence analysis in Section 4.3, where the provable convergence of LFH is maintained with only a small difference. Based on DP-BREM, we then relax the server's trust assumption in our final solution DP-BREM\({}^{+}\) (in Section 5) by adopting secure multiparty computation techniques, including secure aggregation with input validation and joint noise generation, which achieves the same DP guarantee with the same amount of noise as in DP-BREM, without trusting the server.
### Algorithm Design
The mathematical notations involved in our algorithm design and theoretical analysis are summarized in Table 2 (see Appendix A). The illustration of our design is shown in Figure 1, and the algorithm is shown in Algorithm 1, where all clients need to implement local updates (in Line-3), but only a subset of their momentum vectors are aggregated by the server (in Line-4). The details of client updates and server aggregation are described below.
**Client Updates.** Client \(\text{C}_{i}\) first samples a random batch \(\mathcal{D}_{t,i}\) from the local dataset \(\mathcal{D}_{i}\) with sampling rate \(p_{i}\), clips the per-record gradient \(\nabla_{\boldsymbol{\theta}}\ell(\boldsymbol{x},\boldsymbol{\theta}_{t-1})\) by \(R\) and multiplies the sum by a constant factor \(\frac{1}{p_{i}|\mathcal{D}_{i}|}\) to get the averaged gradient
\[\bar{\boldsymbol{g}}_{t,i}=\frac{1}{p_{i}|\mathcal{D}_{i}|}\sum _{\boldsymbol{x}\in\mathcal{D}_{t,i}}\text{Clip}_{R}(\nabla_{\boldsymbol{ \theta}}\ell(\boldsymbol{x},\boldsymbol{\theta}_{t-1})) \tag{10}\]
Figure 1: Illustration of our DP-BREM algorithm.
where \(\mathsf{Clip}_{R}(\cdot)\) is the clipping function defined in (4), but is used here to bound the sensitivity for DP (cf. DP-SGD discussed in Section 2.1). Note that the batch size \(|\mathcal{D}_{t,i}|\) is random and \(\mathbb{E}[|\mathcal{D}_{t,i}|]=p_{i}|\mathcal{D}_{i}|\). Then, the local momentum can be computed by
\[\boldsymbol{\bar{m}}_{t,i}=\begin{cases}\boldsymbol{\bar{g}}_{t,i},&\text{if }t=1 \\ (1-\beta)\boldsymbol{\bar{g}}_{t,i}+\beta\boldsymbol{\bar{m}}_{t-1,i},&\text{if }t>1 \end{cases} \tag{11}\]
where \(\beta\in[0,1)\) is the momentum parameter.
**Server Aggregation.** The server implements centered clipping (for robustness) with clipping bound \(C>0\) and adds Gaussian noise with standard deviation \(R\sigma\) (thus the variance is \(R^{2}\sigma^{2}\)) to the sum of clipped terms to get the noisy global momentum \(\boldsymbol{\bar{m}}_{t}\)
\[\boldsymbol{\bar{m}}_{t}=\boldsymbol{\bar{m}}_{t-1}+\frac{1}{|I_{t}|}\left[\sum\nolimits_{i\in I_{t}}\mathsf{Clip}_{C}(\boldsymbol{\bar{m}}_{t,i}-\boldsymbol{\bar{m}}_{t-1})+\mathcal{N}(0,R^{2}\sigma^{2}\mathbf{I}_{d})\right] \tag{12}\]
where \(\mathbf{I}_{d}\) is the \(d\times d\) identity matrix, and only the momenta of the sampled clients in \(I_{t}\) (obtained in Line-2 of Algorithm 1 with sampling rate \(q\)) are aggregated in the \(t\)-th iteration. Note that adding noise \(\mathcal{N}(0,R^{2}\sigma^{2}\mathbf{I}_{d})\) to the _summation_ of clipped client momenta \(\sum\nolimits_{i\in I_{t}}\mathsf{Clip}_{C}(\boldsymbol{\bar{m}}_{t,i}-\boldsymbol{\bar{m}}_{t-1})\) is equivalent to adding noise \(\frac{1}{|I_{t}|}\mathcal{N}(0,R^{2}\sigma^{2}\mathbf{I}_{d})\) to the _average_ result \(\frac{1}{|I_{t}|}\sum\nolimits_{i\in I_{t}}\mathsf{Clip}_{C}(\boldsymbol{\bar{m}}_{t,i}-\boldsymbol{\bar{m}}_{t-1})\). Then, the server updates the global model \(\boldsymbol{\theta}_{t}\) with learning rate \(\eta_{t}\)
\[\boldsymbol{\theta}_{t}=\boldsymbol{\theta}_{t-1}-\eta_{t}\boldsymbol{\bar{m}} _{t} \tag{13}\]
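Putting Eqs. (10)–(13) together, a minimal NumPy sketch of one DP-BREM round (with a trusted server adding the noise, as in Algorithm 1) might look as follows; per-record gradients are assumed to be precomputed, all names and parameter values are illustrative, and the final descent step is the usual update of Eq. (13).

```python
import numpy as np

def clip_to(x, bound):
    return x * min(1.0, bound / (np.linalg.norm(x) + 1e-12))

def client_update(per_record_grads, prev_client_momentum, p_i, n_i, R, beta, t):
    """Eqs. (10)-(11): average of R-clipped per-record gradients, then momentum."""
    g_bar = sum(clip_to(g, R) for g in per_record_grads) / (p_i * n_i)
    if t == 1:
        return g_bar
    return (1 - beta) * g_bar + beta * prev_client_momentum

def server_aggregate(prev_global_momentum, sampled_client_momenta, R, sigma, C, rng):
    """Eq. (12): center-clip sampled client momenta, add Gaussian noise to the sum, average."""
    deltas = [clip_to(m - prev_global_momentum, C) for m in sampled_client_momenta]
    d = prev_global_momentum.shape[0]
    noisy_sum = np.sum(deltas, axis=0) + rng.normal(0.0, R * sigma, size=d)
    return prev_global_momentum + noisy_sum / len(sampled_client_momenta)
```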
**Remark: clipping bounds and sampling rates.** In our algorithm, we use two clipping bounds and two sampling rates. For clipping bounds, each client uses record-level bound \(R\) to bound the _per-record_ gradient in (10) for a finite sensitivity in record-level DP; while the server uses client-level bound \(C\) to bound the difference between _client momentum_\(\boldsymbol{\bar{m}}_{t,i}\) and the previous noisy global momentum \(\boldsymbol{\bar{m}}_{t-1}\) in (12), which achieves Byzantine robustness as in LFH. For sampling rates, client \(\mathsf{C}_{i}\) samples a batch of records \(\mathcal{D}_{t,i}\) from the local dataset \(\mathcal{D}_{i}\) with sampling rate \(p_{i}\), which provides privacy amplification for DP from _record-level_ sampling; while the server samples a subset of clients with sampling rate \(q\) (where the sampled clients set is denoted by \(I_{t}\)), which provides privacy amplification for DP from _client-level_ sampling.
**Remark: comparison with non-private LFH.** Compared with the original non-private Byzantine-robust method LFH [22] (see Section 2.3), our differentially-private version has three differences. First, compared with (1), the client gradient in (10) is computed by averaging the _clipped_ per-record gradients (with clipping bound \(R\)), which bounds the sensitivity of the final aggregation when adding/removing one record from the local dataset. Second, compared with (3), the server adds Gaussian noise when computing the aggregated global momentum \(\boldsymbol{\bar{m}}_{t}\) in (12) to guarantee DP. Third, instead of aggregating all clients' momenta, our method only aggregates a subset of them (reflected by the index set \(I_{t}\) in (12)), which provides additional privacy amplification from _client-level_ sampling (the original privacy amplification is provided by _record-level_ sampling).
### Privacy Analysis
Before presenting the final privacy analysis of DP-BREM, we first show how to compute the sensitivity of the summation of clipped client momenta in Eq. (12), which yields the privacy analysis of one iteration, shown in Lemma 2. We note that clients may have local datasets \(\mathcal{D}_{i}\) of different sizes and can use different record-level sampling rates \(p_{i}\); thus the record-level sensitivity (denoted by \(S_{i}\)) can differ across clients.
**Lemma 2** (Sensitivity Computation).: _We use \(\|\cdot\|\) to denote \(L2\)-norm \(\|\cdot\|_{2}\). In \(t\)-th round, denote the query function \(Q_{t}(\mathcal{D}):=\sum_{j\in I_{t}}\mathsf{Clip}_{C}(\boldsymbol{m}_{t,j}- \boldsymbol{\bar{m}}_{t-1})\), where \(\boldsymbol{\bar{m}}_{t-1}\) is public and \(\mathcal{D}=\{\mathcal{D}_{j}\}_{j\in I_{t}}\). Consider the neighboring dataset \(\mathcal{D}^{\prime}=\{\mathcal{D}_{j}\}_{j\neq i,j\in I_{t}}\cup\mathcal{D} _{i}^{\prime}\) that differs in one record from client \(\mathsf{C}_{i}\)'s local data (\(i\in I_{t}\)), i.e., \(|\mathcal{D}_{i}-\mathcal{D}_{i}^{\prime}|=1\), then the sensitivity with respect to client \(\mathsf{C}_{i}\) is_
\[S_{i}\coloneqq\max_{\mathcal{D},D^{\prime}}\|Q_{t}(\mathcal{D})-Q_{t}(\mathcal{ D}^{\prime})\|=\min\left\{2C,\frac{R}{p_{i}|\mathcal{D}_{i}|}\right\} \tag{14}\]
Proof.: (Sketch) According to (10), the sensitivity of \(\boldsymbol{\bar{g}}_{t,i}\) is \(\frac{R}{p_{i}|\mathcal{D}_{i}|}\) because each clipped term \(\mathsf{Clip}_{R}(\cdot)\) has bounded L2-norm, i.e., \(\|\mathsf{Clip}_{R}(\cdot)\|\leqslant R\). Then, due to the recursive representation of local momentum in (11), the sensitivity of \(\boldsymbol{m}_{t,i}\) is \(\frac{R}{p_{i}|\mathcal{D}_{i}|}\). Finally, the client-level clipping \(\mathsf{Clip}_{C}(\cdot)\) introduces another upper-bound for the sensitivity. Refer to Appendix A for the full-version proof.
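For reference, the sensitivity in Eq. (14) is straightforward to evaluate once the clipping bounds and sampling parameters are fixed; the values below are illustrative only.

```python
def record_level_sensitivity(R, C, p_i, n_i):
    """S_i = min{2C, R / (p_i * |D_i|)}, Eq. (14)."""
    return min(2 * C, R / (p_i * n_i))

print(record_level_sensitivity(R=1.0, C=2.0, p_i=0.01, n_i=600))  # -> min(4, 1/6)
```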
**Remark: comparison with the privacy accountant of DP-SGD momentum.** As discussed in Section 3.2, the privacy accountant of DP-SGD with momentum (i.e., accounting for the privacy cost of gradients and treating momentum as post-processing) requires clients to add noise to their local gradients, which leads to poor utility, especially when Byzantine attacks exist. In Lemma 2, we account for the privacy cost of the aggregated momentum, where the _sensitivity_ is carefully computed from the bounded record-level gradient. Therefore, our scheme addresses the three limitations shown in Section 3.2, as explained below. First, only the server adds noise (as in the central setting), thus the privacy-utility tradeoff is not impacted. Second, the noise is added after the centered clipping \(\mathsf{Clip}_{C}(\mathbf{m}_{t,i}-\mathbf{m}_{t-1})\), thus it only introduces unbiased error. We also show in Section 4.3 that the impact of the added noise is separate from the impact of Byzantine attacks, whereas in DP-LFH the impact of the local noise is amplified by Byzantine attacks (see Section 3.2). Third, since privacy is accounted on the momentum, and only the aggregated momentum leaks privacy, our solution enjoys privacy amplification from client-level sampling.
The final privacy analysis of DP-BREM is shown in Theorem 1. It shows how to compute the privacy budget \(\epsilon\) and privacy parameter \(\delta\) when the parameters of the algorithm (such as \(T\), \(\sigma\), \(q\), etc.) are given, and involves the composition of multiple iterations and the privacy amplification from sampling. We use an advanced privacy accountant tool called Gaussian DP (GDP) [11] (refer to Appendix F), and then convert the result to \((\epsilon,\delta)\)-DP. Note that in our privacy analysis, clients can use different record-level sampling rates \(p_{i}\), and thus have different sensitivities \(S_{i}\) shown in (14). Therefore, the final privacy budget (denoted by \(\epsilon_{i}\)) of DP-BREM may differ across clients, which in fact provides personalized privacy.
**Theorem 1** (Privacy Analysis).: _DP-BREM (in Algorithm 1) satisfies record-level \((\epsilon_{i},\delta)\)-DP for an honest client \(\mathsf{C}_{i}\) with \(\epsilon_{i}\) and \(\delta\) satisfying_
\[\delta=\Phi\left(-\frac{\epsilon_{i}}{\mu_{i}}+\frac{\mu_{i}}{2}\right)-e^{\epsilon_{i}}\cdot\Phi\left(-\frac{\epsilon_{i}}{\mu_{i}}-\frac{\mu_{i}}{2}\right), \tag{15}\]
_where \(\Phi(\cdot)\) denotes the cumulative distribution function (CDF) of standard normal distribution, and \(\mu_{i}\) is defined by_
\[\mu_{i}=qp_{i}\sqrt{T(e^{1/(2\sigma_{i}^{2})}-1)},\quad\text{with }\sigma_{i}=\sigma\cdot\max\left\{\frac{R}{2C},p_{i}|\mathcal{D}_{i}|\right\} \tag{16}\]
Proof.: See Appendix B.
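Given concrete parameter values, Eqs. (15)–(16) can be evaluated numerically. The sketch below uses SciPy's normal CDF; all parameter values are illustrative, and the formulas are taken directly from Theorem 1.

```python
import numpy as np
from scipy.stats import norm

def delta_from_eps(eps, q, p_i, T, sigma, R, C, n_i):
    """Evaluate Eqs. (15)-(16): per-client noise multiplier, GDP parameter mu_i,
    then the corresponding delta for a target epsilon."""
    sigma_i = sigma * max(R / (2 * C), p_i * n_i)
    mu_i = q * p_i * np.sqrt(T * (np.exp(1.0 / (2 * sigma_i ** 2)) - 1.0))
    return norm.cdf(-eps / mu_i + mu_i / 2) - np.exp(eps) * norm.cdf(-eps / mu_i - mu_i / 2)

# Illustrative parameters only.
print(delta_from_eps(eps=3.0, q=1.0, p_i=0.01, T=1000, sigma=1.0, R=1.0, C=2.0, n_i=600))
```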
### Convergence Analysis
Before presenting the final convergence analysis of our solution, we first show the aggregation error for one iteration in Theorem 2.
**Theorem 2** (Aggregation Error).: _Denote \(\mathbf{m}_{t}^{*}\coloneqq\frac{1}{|\mathcal{H}|}\sum_{i\in\mathcal{H}}\mathbf{m}_{t,i}\), where \(\mathbf{m}_{t,i}\) is the client momentum computed from gradients without record-level clipping and \(\mathcal{H}\) is the set of honest clients. Assume the local momenta of all honest clients \(\{\mathbf{m}_{t,i}\}_{i\in\mathcal{H}}\) are i.i.d. with expectation \(\mathbf{\mu}\coloneqq\mathbb{E}[\mathbf{m}_{t,i}]\), and the variance is bounded (in terms of L2-norm)_
\[\mathbb{E}\|\mathbf{m}_{t,i}-\mathbf{\mu}\|^{2}\leqslant\rho^{2} \tag{17}\]
_After some parameter tuning (the detailed tuning is shown under (21) in Appendix C) of_
\[R\propto O\left(\rho\sqrt{n/(|\mathcal{B}|+\sqrt{d}\sigma/q)} \right),\quad C\propto O(R) \tag{18}\]
_we have the following aggregation error due to clipping, DP noise, and Byzantine clients:_
\[\mathbb{E}\|\tilde{\mathbf{m}}_{t}-\mathbf{m}_{t}^{*}\|^{2}\leqslant O \left(\frac{\rho^{2}(|\mathcal{B}|+\sqrt{d}\sigma/q)}{n}\right) \tag{19}\]
_where \(|\mathcal{B}|\) is the number of Byzantine clients, \(d\) is the dimension of model parameter \(\mathbf{\theta}_{t}\), \(\sigma\) is the noise multiplier (for DP) shown in (12), \(q\) is the client-level sampling rate shown in Line-2 of Algorithm 1, and \(\rho\) is defined in (17). The formal version of (19) is shown in (23) of Appendix C._
Proof.: (Sketch) Directly bounding \(\mathbb{E}\|\tilde{\mathbf{m}}_{t}-\mathbf{m}_{t}^{*}\|^{2}\) is not easy, thus we utilize the upper bounds of \(\mathbb{E}\|\tilde{\mathbf{m}}_{t}-\mathbf{\mu}\|^{2}\) and \(\mathbb{E}\|\mathbf{\mu}-\mathbf{m}_{t}^{*}\|^{2}\) to get the final result, where \(\mathbf{\mu}\coloneqq\mathbb{E}[\mathbf{m}_{t,i}]\) is the expected local momentum (we assume clients' local momentum are i.i.d.). When upper bounding \(\mathbb{E}\|\tilde{\mathbf{m}}_{t}-\mathbf{\mu}\|^{2}\), we can decompose it to three types of errors: error of honest clients (due to randomness and bias introduced by clipping), error of Byzantine clients (due to Byzantine perturbation), and error introduced by the added DP noise. Furthermore, we can have the optimized parameter tuning of \(C\) and \(R\) with the goal of minimizing the summation of the above three types of errors. Refer to Appendix C for the full-version proof.
**Interpretation of Theorem 2.** The value of \(\mathbb{E}\|\tilde{\mathbf{m}}_{t}-\mathbf{m}_{t}^{*}\|^{2}\) quantifies the aggregation error, i.e., how the aggregated privatized momentum \(\tilde{\mathbf{m}}_{t}\) (with clipping, DP noise, and Byzantine clients' impact) differs from the "pure" momentum aggregation \(\mathbf{m}_{t}^{*}\), where only honest clients participate and there is no clipping or DP noise. According to (19), the aggregation error is proportional to \(\rho^{2}\) and to \(\frac{|\mathcal{B}|}{n}+\frac{\sqrt{d}\sigma}{nq}\), where \(\rho^{2}\) quantifies the variance of honest clients' local momenta, \(\frac{|\mathcal{B}|}{n}\) is the fraction of Byzantine clients, and \(\frac{\sigma}{nq}=O(1/\epsilon)\) for \(\epsilon\)-DP. In other words, the aggregation error is enlarged when honest clients' variance is large, the Byzantine attacker corrupts more clients, the training model is complex (i.e., the model dimension \(d\) is large), or stronger privacy is required (i.e., a smaller \(\epsilon\)). Furthermore, due to the additive form of \(\frac{|\mathcal{B}|}{n}+\frac{\sqrt{d}\sigma}{nq}\), the impact of the DP noise is independent of the number of Byzantine clients \(|\mathcal{B}|\) (versus Limitation 2 of DP-LFH in Section 3.2). On the other hand, according to the parameter tuning in (18), we could theoretically set a larger record-level clipping bound \(R\) when \(\rho\) is large, or when \(d\), \(|\mathcal{B}|\), and \(\sigma\) are small (i.e., when the privacy budget \(\epsilon\) is large). The tuning of the client-level clipping bound \(C\) should be adjusted according to the value of \(R\). Recall that \(R\) is for DP, while \(C\) is for robustness.
By following the convergence analysis in [22] and using the result in (19), we have the convergence rate shown below.
**Theorem 3** (Convergence Rate of DP-BREM).: _The convergence rate of DP-BREM in Algorithm 1 is asymptotically (ignoring constants and higher order terms) of the order_
\[\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\|\nabla\ell(\mathbf{\theta}_{t-1})\|^{2} \lesssim\sqrt{\frac{\rho^{2}}{T}\frac{|\mathcal{B}|+(1+\sqrt{d}\sigma)/q}{n}} \tag{20}\]
_where \(\ell(\cdot)\) is the loss function, \(T\) is the total number of training iterations, and other parameters are the same as in (19)._
Proof.: See Appendix D.
**Remark: comparison with LFH and DP-LFH.** The convergence rates of the non-private LFH, DP-LFH, and the proposed solution DP-BREM, shown in (5), (8), and (20) respectively, are summarized in Table 1. Though both DP-LFH and DP-BREM pay an additional term of \(\sqrt{d}\sigma/q\) to obtain the DP property, this term impacts the convergence differently. As discussed in Limitation 2 of Section 3.2, the additional term of DP-LFH (due to DP noise added to clients' gradients) enters through \(\rho\), and thus multiplies the impact of Byzantine clients (i.e., the term \(|\mathcal{B}|\)). However, the additional term of our solution DP-BREM (due to DP noise added to the aggregated momentum) enters additively alongside the term \(1+|\mathcal{B}|\) and only at a square-root order. Therefore, the DP noise has only a limited impact on the convergence of DP-BREM when there are Byzantine clients. We validate the above theoretical analysis via experimental results in Section 6.
## 5 DP-BREM\({}^{+}\) with Secure Aggregation
The private and robust FL solution DP-BREM (in Section 4) assumes a _trusted_ server which can access clients' momenta. In this section, we propose DP-BREM\({}^{+}\), which assumes a _malicious_ server and utilizes secure aggregation techniques, achieving the same DP and robustness guarantees as DP-BREM. As discussed in Section 3.1, we consider the server malicious only with respect to data privacy, while clients can be malicious for both data privacy and Byzantine attacks.
### Challenges
Considering that the server is malicious with respect to data privacy, the noisy aggregation of momenta with centered clipping shown in (12) must be implemented securely with the goals of 1) privacy, i.e., each party, including clients and the server, learns nothing but the differentially-private output; and 2) integrity, i.e., the output is correctly computed. Since the noisy aggregated momentum of the previous iteration \(\mathbf{\tilde{m}}_{t-1}\) is already differentially-privatized, we can regard it as public information and only need to focus on securely computing the term \(\sum_{i\in I_{t}}\mathsf{Clip}_{C}(\mathbf{\tilde{m}}_{t,i}-\mathbf{\tilde{m}}_{t-1})+\mathcal{N}(0,R^{2}\sigma^{2}\mathbf{I}_{d})\) in (12).
**Secure Aggregation with Verified Inputs (SAVI).** The key cryptographic technique we leverage to achieve the above objectives is SAVI [35], a class of protocols that securely aggregate only well-formed inputs. The security goals include both _privacy_ and _integrity_. Specifically, privacy means that no party should be able to learn anything about the raw input of an honest client, other than what can be learned from the final aggregation result. Integrity means that the protocol returns the correct aggregate of well-formed inputs, where an input \(u\) passes the integrity check with a public validation predicate \(\mathsf{Valid}(\cdot)\) if and only if \(\mathsf{Valid}(u)=1\), and the aggregation is correctly computed. An instantiation of a SAVI protocol is EIFFeL [35], which is described in Appendix G.
**Challenge: Secure Generation of Gaussian Noise.** A SAVI protocol can potentially solve the problem of securely aggregating the clipped vectors (by enforcing a norm bound on the client momentum difference). However, the Gaussian noise \(\mathcal{N}(0,R^{2}\sigma^{2}\mathbf{I}_{d})\) needs to be securely generated and aggregated as well. In DP-BREM with a trusted server, the Gaussian noise \(\mathcal{N}(0,R^{2}\sigma^{2}\mathbf{I}_{d})\) is generated by the server to guarantee DP. However, when the server is assumed to be malicious, since all parties (including the server) learn the final noisy aggregation result, if the server knows the noise, then the exact aggregation result (without the Gaussian noise) can be directly recovered. Therefore, the Gaussian noise added for DP cannot be generated by the server alone.
A straightforward solution is to follow [36], which assumes the existence of another semi-honest server (that does not collude with the original server) to generate the DP noise and execute the privacy engine. However, the assumption of another non-colluding server may not be practical, so we assume only a single server.
Another alternative solution is to leverage Distributed DP (DDP) [38], where the Gaussian noise is generated by clients in a distributed way: each client generates Gaussian noise locally, and the aggregate of these noise terms also follows a Gaussian distribution with an enlarged standard deviation. Since only the aggregated result is released (with the help of crypto techniques), each client can add smaller noise with the guarantee that the _aggregated_ noise satisfies the required DP. However, this solution has two limitations in our scenario. First, distributed noise generation needs to add more noise
\begin{table}
\begin{tabular}{c|c c}
\hline
 & Where to add noise & Convergence Rate \\
\hline
LFH [22] & None & \(O(\rho\sqrt{1+|\mathcal{B}|})\) \\
DP-LFH & Clients' gradients & \(O((\rho+\sqrt{d}\sigma)\sqrt{1+|\mathcal{B}|})\) \\
DP-BREM & Aggregated momentum & \(O(\rho\sqrt{1+|\mathcal{B}|+\sqrt{d}\sigma})\) \\
\hline
\end{tabular}
\end{table}
Table 1: Comparison of Convergence Rate
to achieve the same privacy compared with server-side noise generation due to the collusion of malicious clients. Second, malicious clients can generate arbitrary values as the local Gaussian noise, which has a large impact on the robustness.
A possible solution to address the first limitation is to _jointly_ generate the Gaussian noise as in [34], where no entity learns or controls the true value of the noise (or a portion of the noise). However, the protocol in [34] is designed only for an additive secret sharing scheme, which only works for honest-but-curious parties and does not tolerate malicious parties. Moreover, in [34], the Gaussian noise is jointly generated by honest-but-curious and non-colluding parties, which does not address the second limitation, as clients can be malicious in our threat model discussed in Section 3.1.
**Overview of DP-BREM\({}^{+}\).** To achieve secure aggregation with verified inputs and secure Gaussian noise generation under the threat model of a malicious server and a malicious minority of clients, our DP-BREM\({}^{+}\) 1) leverages an existing SAVI protocol called EIFFeL [35] to achieve secure input validation; and 2) introduces a new protocol to achieve secure noise generation that is compatible with EIFFeL. The idea of _jointly_ generating Gaussian noise in DP-BREM\({}^{+}\) is inspired by [34], but our design is based on Shamir's secret sharing [37] with robust reconstruction, which guarantees security under a malicious minority. We present the preliminaries of Shamir's secret sharing and the EIFFeL protocol in Appendix G.
### Design of DP-BREM\({}^{+}\)
As discussed in Section 5.1, the main task of DP-BREM\({}^{+}\) is to securely compute the term \(\sum_{i\in I_{t}}\mathsf{Clip}_{C}(\tilde{\mathbf{m}}_{t,i}-\tilde{\mathbf{m}}_{t-1})+\mathcal{N}(0,R^{2}\sigma^{2}\mathbf{I}_{d})\) shown in (12). In DP-BREM\({}^{+}\), after computing local momentum \(\tilde{\mathbf{m}}_{t,i}\) via (11), each client \(\mathsf{C}_{i}\) first implements centered clipping to get \(\mathbf{z}_{i}\coloneqq\mathsf{Clip}_{C}(\tilde{\mathbf{m}}_{t,i}-\tilde{\mathbf{m}}_{t-1})\), which is the private input for validation and aggregation.
**Three-Phase Design.** In DP-BREM\({}^{+}\), clients and the server jointly implement three phases: 1) secure input validation, to verify that each client's momentum is properly center-clipped with bound \(C\); 2) secure noise generation, where clients generate shares of Gaussian noise that can be aggregated in Phase 3 to ensure DP; and 3) aggregation of valid inputs and noise to obtain the noisy global model. We assume the arithmetic circuit is computed over a finite field \(\mathbb{F}_{2K}\). The illustration of DP-BREM\({}^{+}\) is shown in Figure 2. Due to limited space, we present the detailed steps 1-7 in Appendix H.
**Phase 1: Secure Input Validation.** The validation function for an input \(\mathbf{z}_{i}\) considered in DP-BREM\({}^{+}\) is defined as \(\mathsf{Valid}(\mathbf{z}_{i})\coloneqq\mathbb{I}\left(\|\mathbf{z}_{i}\|\leqslant C\right)\), where \(\mathsf{Valid}(\mathbf{z}_{i})=1\) if and only if the condition \(\|\mathbf{z}_{i}\|\leqslant C\) holds. Since honest clients compute \(\mathbf{z}_{i}=\mathsf{Clip}_{C}(\tilde{\mathbf{m}}_{t,i}-\tilde{\mathbf{m}}_{t-1})\), verifying via \(\mathsf{Valid}(\cdot)\) that every client's \(\mathbf{z}_{i}\) is well-formed, i.e., has bounded L2-norm, ensures centered clipping of the client momenta \(\tilde{\mathbf{m}}_{t,i}\) (to achieve the same robustness as DP-BREM). We follow the design in EIFFeL [35] for secure input validation, which returns the validation result \(\mathsf{Valid}(\mathbf{z}_{i})\) (either 1 or 0) for client \(\mathsf{C}_{i}\)'s private input \(\mathbf{z}_{i}\), corresponding to steps 1, 2, and 3 shown in Figure 2. Then, clients and the server can jointly verify all inputs \(\{\mathbf{z}_{i}\}_{i\in I_{t}}\) and obtain the set of valid inputs \(I_{\mathsf{Valid}}\), where \(\mathsf{Valid}(\mathbf{z}_{i})=1\) for all \(i\in I_{\mathsf{Valid}}\). In the later steps, only inputs in \(I_{\mathsf{Valid}}\) will be aggregated.
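In plaintext, the Phase-1 predicate is simply an L2-norm check (the actual protocol verifies the same condition over secret-shared inputs); a trivial sketch is given below for reference.

```python
import numpy as np

def valid(z, C):
    """Plaintext version of the Phase-1 predicate: Valid(z) = 1 iff ||z||_2 <= C."""
    return int(np.linalg.norm(z) <= C)

def clip_to(x, C):
    return x * min(1.0, C / (np.linalg.norm(x) + 1e-12))

# An honest client's input always passes, since it is produced by Clip_C(.)
z_honest = clip_to(np.random.default_rng(0).normal(size=8), C=1.0)
assert valid(z_honest, C=1.0) == 1
```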
**Phase 2: Secure Noise Generation.** We develop a new protocol for secure distributed Gaussian noise generation, which returns the shares (held by each client) of a random vector \(\mathbf{\xi}\) of length \(d\) drawn from the Gaussian distribution \(\mathcal{N}(0,R^{2}\sigma^{2}\mathbf{I}_{d})\), corresponding to steps 4 and 5 shown in Figure 2. The shares of noise can be reconstructed into a single Gaussian noise vector (for ensuring DP) with the guarantee that no party knows or controls the generated noise, which protects the information of private inputs after the noisy aggregate is released.
**Phase 3: Aggregation of Valid Inputs and Noise.** Finally, the server and clients can aggregate the valid inputs (obtained in Phase 1) and the generated Gaussian noise (obtained in Phase 2) by implementing steps 6 and 7 shown in Figure 2, ensuring nothing except the noisy aggregate can be learned.
**Remark on Efficiency.** DP-BREM\({}^{+}\)'s usage of EIFFeL's secure input validation is due to efficiency considerations. Instead of using secure input validation, one alternative is to use standard secure multi-party computation (MPC) for the entire aggregation to directly compute the clipped result. However, doing this under MPC would result in a very large computation/communication overhead due to the multiplication, min-operation, division, and L2-norm computation in the clipping operation \(\mathsf{Clip}_{C}(\cdot)\) defined in (4). In contrast, the secure input validation protocol only requires the verifiers to check all the multiplication gates very efficiently with just one identity test. We also note that other Byzantine robust aggregators, such as median-based methods, cannot directly use secure input validation to achieve secure aggregation because median requires cross-client computation, while secure input validation is done individually. The compatibility with secure input validation is one of the advantages of DP-BREM.
**Complexity.** According to EIFFeL [35], the computation/communication complexity of secure aggregation
Figure 2: Illustration of DP-BREM\({}^{+}\) (see Appendix H for detailed steps 1–7)
with input validation is \(O(mnd)\) for clients and \(O(n^{2}+md\min\{n,m^{2}\})\) for the server in terms of the number of clients \(n\), number of malicious clients \(m\), and data dimension \(d\). For the proposed secure noise generation (only clients are involved), the computation/communication complexity for clients is \(O(mnd)\).
### Security Analysis
In comparison, EIFFeL [35] is a secure aggregation protocol with verified inputs (without guaranteeing DP), while our solution DP-BREM\({}^{+}\) is a secure noisy aggregation protocol with verified inputs and jointly generated Gaussian noise, which provides DP on the aggregated results. Therefore, the only difference is the Gaussian noise that will be aggregated to the final result. We show the formal security guarantee of DP-BREM\({}^{+}\) in the following theorem.
**Theorem 4** (Security Guarantees of DP-BREM\({}^{+}\)).: _For the validation function \(\mathsf{Valid}(\cdot)\) considered in Section 5.2, given a security parameter \(\kappa\), the secure noisy aggregation protocol in DP-BREM\({}^{+}\) satisfies:_
_1) Integrity. The output of the protocol returns the noisy aggregate of a subset of clients \(I_{\mathsf{Valid}}\) and Gaussian noise \(\xi\), such that all clients in \(I_{\mathsf{Valid}}\) have well-formed inputs:_
\[\Pr[output=\sum\nolimits_{i\in I_{\mathsf{Valid}}}\mathbf{z}_{i}+\mathbf{\xi}]\geqslant 1-\text{negl}(\kappa)\]
_where the random vector \(\mathbf{\xi}\sim\mathcal{N}(0,R^{2}\sigma^{2}\mathbf{I}_{d})\), and \(\mathsf{Valid}(\mathbf{z}_{i})=1\) for all \(i\in I_{\mathsf{Valid}}\). Note that the set \(I_{\mathsf{Valid}}\) contains all honest clients (denoted by \(I_{H}\)) and the malicious clients who submitted well-formed inputs (denoted by \(I_{M}^{*}\)), i.e., \(I_{\mathsf{Valid}}=I_{H}\cup I_{M}^{*}\)._
_2) Privacy. For a set of malicious clients \(I_{M}\) and a malicious server \(\mathsf{S}\), there exists a probabilistic polynomial-time (P.P.T.) simulator \(\mathsf{Sim}(\cdot)\) such that:_
\[\mathsf{Real}\left(\{\mathbf{z}_{i}\}_{i\in I_{H}},\Omega_{I_{M}\cup\mathsf{S}}\right)\equiv_{\mathsf{C}}\mathsf{Sim}\left(\sum\nolimits_{i\in I_{H}}\mathbf{z}_{i}+\mathbf{\xi},\;I_{H},\;\Omega_{I_{M}\cup\mathsf{S}}\right)\]
_where \(\{z_{i}\}_{i\in I_{H}}\) denotes the input of all the honest clients, \(\mathsf{Real}\) denotes a random variable representing the joint view of all the parties in the protocol's execution, \(\Omega_{I_{M}\cup\mathsf{S}}\) indicates a polynomial-time algorithm implementing the "next-message" function of the parties in \(I_{M}\cup\mathsf{S}\) (see [35, Appendix 11.5]), and \(\equiv_{\mathsf{C}}\) denotes computational indistinguishability._
Proof.: See Appendix I.
## 6 Experimental Evaluation
In this section, we demonstrate the effectiveness of the proposed DP-BREM/DP-BREM\({}^{+}\) in achieving both a good privacy-utility tradeoff and Byzantine robustness via experimental results on the MNIST [25] and CIFAR-10 [24] datasets. All experiments are implemented in PyTorch1.
Footnote 1: Our source code is available in an anonymized GitHub repository: [https://anonymous.4open.science/r/DP-BREM-CSED](https://anonymous.4open.science/r/DP-BREM-CSED)
### Experimental Setup
**Datasets (non-IID) and Model Architecture.** We use two datasets for our experiments: MNIST [25] and CIFAR-10 [24], where the default number of total clients is \(n=100\). For the MNIST dataset, we use the CNN model from the PyTorch example2. For the CIFAR-10 dataset, we use the CNN model from the TensorFlow tutorial3, as in previous works [30, 48]. To simulate heterogeneous data distributions, we make non-i.i.d. partitions of the datasets, a setup similar to [48], described below:
Footnote 2: [https://github.com/pytorch/opacus](https://github.com/pytorch/opacus)
Footnote 3: [https://www.tensorflow.org/tutorials/images/cnn](https://www.tensorflow.org/tutorials/images/cnn)
1) Non-IID MNIST: The MNIST dataset contains 60,000 training images and 10,000 testing images of 10 classes. There are 100 clients, each holds 600 training images. We sort the training data by digit label and evenly divide it into 400 shards. Each client is assigned four random shards of the data, so that most of the clients have examples of three or four digits.
2) Non-IID CIFAR-10: The CIFAR-10 dataset contains 50,000 training images and 10,000 test images of 10 classes. There are 100 clients, each holding 500 training images. We sample the training images for each client using a Dirichlet distribution with hyperparameter 0.9 (a sketch of this partitioning is given below).
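One common way to realize such a Dirichlet-based partition is sketched below; this is an illustration of the partitioning idea rather than our exact script, and the seed and array shapes are arbitrary.

```python
import numpy as np

def dirichlet_partition(labels, n_clients=100, n_classes=10, alpha=0.9, seed=0):
    """Split example indices across clients with per-client class proportions
    drawn from a symmetric Dirichlet(alpha) distribution (leftover examples are dropped)."""
    rng = np.random.default_rng(seed)
    class_idx = [np.where(labels == c)[0] for c in range(n_classes)]
    for idx in class_idx:
        rng.shuffle(idx)
    proportions = rng.dirichlet(alpha * np.ones(n_classes), size=n_clients)
    client_indices = [[] for _ in range(n_clients)]
    for c in range(n_classes):
        # split class c's examples among clients proportionally to column c
        counts = (proportions[:, c] / proportions[:, c].sum() * len(class_idx[c])).astype(int)
        start = 0
        for i, cnt in enumerate(counts):
            client_indices[i].extend(class_idx[c][start:start + cnt])
            start += cnt
    return [np.array(ix) for ix in client_indices]
```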
**Byzantine Attacks.** We consider the following three different Byzantine attacks in our experiments.
1) ALIE ("a little is enough") [3]. The attacker uses the empirical variance (estimated from the data of corrupted clients) to determine the perturbation range, in which the attack can deviate from the mean without being detected or filtered out.
2) IPM (inner-product manipulation) [44]. The attacker manipulates the submitted gradient to be the negative direction of the mean of the other honest clients' gradients, so that the negative inner product between the true gradient and the aggregate prevents the descent of the loss. Note that the original IPM attack assumes an _omniscient_ attacker (i.e., one who knows the data/gradients of all other clients), which contradicts our assumption that the attacker only has access to the data of the corrupted clients (otherwise, privacy is already leaked and there is no need to provide DP). Thus, in the experiments, we use the data of the corrupted clients to estimate the aggregated gradient of honest clients, and then manipulate the inner product (i.e., a non-omniscient attack); a simplified sketch is given after this list.
3) LF (label-flipping). The attacker modifies the labels of all examples in the corrupted clients' data and trains a new model for multiple iterations, then uses a model replacement strategy [2] to enhance the impact on the global model.
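As referenced above, the following is a simplified sketch of the non-omniscient IPM variant we evaluate: the corrupted clients estimate the honest aggregate from their own data and submit its negation; the scaling factor and the estimation step are our simplifications.

```python
import numpy as np

def ipm_attack(byzantine_local_grads, scale=1.0):
    """Non-omniscient inner-product-manipulation sketch: estimate the honest
    aggregate by the mean of the corrupted clients' own gradients and submit
    its negation, so the inner product with the true descent direction is negative."""
    estimate = np.mean(byzantine_local_grads, axis=0)
    malicious_update = -scale * estimate
    return [malicious_update for _ in byzantine_local_grads]  # all colluders submit the same vector
```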
**Byzantine Defenses with DP.** We compare the performance of four schemes against Byzantine attacks. All of them satisfy record-level DP via record-level clipping and DP noise added by the trusted server (in DP-BREM, DP-CM, and DP-FedSGD) or clients (in DP-LFH), or jointly generated by clients (in DP-BREM\({}^{+}\)), where DP-LFH and DP-BREM\({}^{+}\) do not require a trusted server. Note that privacy budget \(\epsilon\) in
Theorem 1 is the same for different clients, because clients have local datasets of the same size \(|\mathcal{D}_{i}|\) and use the same record-level sampling rate (i.e., the same \(|\mathcal{D}_{i}|\) and \(p_{i}\) for all clients \(\mathsf{C}_{i}\)). We fix \(\delta=10^{-6}\) for \((\epsilon,\delta)\)-DP in all experiments. For the setting of other parameters, refer to Appendix J.
1) DP-BREM\({}^{+}\). Since DP-BREM\({}^{+}\) achieves the same DP and robustness guarantees as DP-BREM, we do not perform the empirical experiments with secure aggregation, because the accuracy results would be exactly the same as for DP-BREM. We use DP-BREM/\({}^{+}\) to denote both DP-BREM and DP-BREM\({}^{+}\), and the empirical implementation follows Algorithm 1. We consider two different client-level sampling rates (\(q=1\) or \(0.2\)) for DP-BREM/\({}^{+}\).
2) DP-LFH. The baseline (shown in Section 3.2) that directly combines DP-SGD based momentum with LFH, where each client adds DP noise to the local gradient to compute the momentum that will be aggregated with centered clipping by the server. Since this method does not have privacy amplification from client-level sampling (discussed in Limitation 3 of Section 3.2), we set \(q=1\).
3) DP-CM. As a baseline that adds DP to median-based robust aggregators (discussed in Section 3.2), we implement the Byzantine-robust aggregator Coordinate-wise Median (CM) [46] with DP noise added to the median result. Note that only DP-CM uses median-based aggregation, while the other methods use average-based aggregation. As discussed in Section 3.2 and Example 1, median-based aggregation has large sensitivity and a poor privacy-utility tradeoff. Since median-based aggregation cannot get a sensitivity benefit from a larger population, we set \(q=0.2\).
4) DP-FedSGD. As a baseline that only achieves DP under trusted server without robustness, we implement DP-FedSGD in which the server directly averages clients' submitted gradient (without Byzantine-robust design) and adds noise. Note that the original DP-FedSGD in [30] clips the client gradient to achieve _client-level_ DP. However, the DP-FedSGD in our experiments clips the per-record gradient as in (10) to achieve _record-level_ DP. We set \(q=1\) or \(0.2\) like in DP-BREM/\({}^{+}\).
Figure 4: MNIST: Varying privacy budget \(\epsilon\) (with \(\delta_{B}=30\%\) Byzantine clients).
Figure 5: MNIST: Varying record-level clipping bound \(R\) for DP-BREM (\(q=1\)) under different settings.
Figure 3: MNIST: Varying the percentage of Byzantine clients \(\delta_{B}\) (with \(\epsilon=3\)).
**Evaluation Metric.** We evaluate the testing accuracy of the global model. Note that both DP and Byzantine attacks reduce the accuracy. We say a method has good Byzantine robustness if the accuracy does not drop much as the number of Byzantine clients increases.
### Results of MNIST Dataset
**Varying \(\delta_{B}\).** Figure 3 shows how the accuracy changes w.r.t. the percentage of Byzantine clients with fixed privacy budget \(\epsilon=3\), i.e., the Byzantine robustness when DP is guaranteed. We consider different percentages of Byzantine clients \(\delta_{B}=\frac{|\mathcal{B}|}{n}\times 100\%\in\{0\%,10\%,20\%,30\%\}\) and fix the privacy budget \(\epsilon=3\), where the total number of clients is \(n=100\). When \(\delta_{B}=0\), DP-BREM/\({}^{+}\) achieves almost the same accuracy as DP-FedSGD, indicating that the Byzantine-robust design (client momentum with centered clipping) has almost no impact on utility when there is no attack. However, DP-LFH and DP-CM have reduced accuracy due to local noise and large sensitivity, respectively. As \(\delta_{B}\) increases, our DP-BREM/\({}^{+}\) only shows a small accuracy decrease, indicating its success in providing Byzantine robustness. However, the accuracy of DP-LFH drops sharply, which validates that local noise amplifies Byzantine attacks, as discussed in Limitation 2 of Section 3.2.
**Varying \(\epsilon\).** Figure 4 shows how the accuracy changes w.r.t. the privacy budget \(\epsilon\), i.e., the privacy-utility tradeoff when there is a Byzantine attack (with a fixed percentage of Byzantine clients \(\delta_{B}=30\%\)). We consider different privacy budgets \(\epsilon\in\{1,3,8,\infty\}\). DP-FedSGD has very low accuracy in all cases because it has no defense against Byzantine attacks. DP-BREM/\({}^{+}\) outperforms the other methods in all cases, which validates its success in achieving a good privacy-utility tradeoff and strong Byzantine robustness simultaneously. Note that when \(\epsilon=\infty\), i.e., the noise multiplier \(\sigma=0\), both DP-BREM/\({}^{+}\) and DP-LFH reduce to the non-private LFH, which was shown to be more robust than non-private median-based methods in the original paper [22].
**Varying \(R\) for DP-BREM/\({}^{+}\).** Figure 5 shows how the accuracy changes w.r.t. the record-level clipping bound \(R\) in DP-BREM/\({}^{+}\) (with \(q=1\)). The experimental results show that when there are fewer Byzantine clients (i.e., smaller \(\delta_{B}\)) or the noise multiplier \(\sigma\) is smaller (i.e., larger \(\epsilon\)), we need to set a larger \(R\) to obtain a better accuracy. This observation is consistent with the theoretical parameter tuning discussed in Theorem 2 and its interpretation. Furthermore, the experimental results show that the case of \(\epsilon=1\) is very sensitive to the value of \(R\).
### Results of CIFAR-10 Dataset
Since the model for training CIFAR-10 needs more iterations to converge, we set \(T=5000\) (versus \(T=1000\) for MNIST). Due to the larger \(T\), we set the privacy budget \(\epsilon\) to be larger than that for MNIST.
The experimental results of CIFAR-10 are similar to the results of MNIST. Figure 6 shows the accuracy changes w.r.t. \(\delta_{B}\) (when fixing \(\epsilon=9\)), and Figure 7 shows the accuracy changes w.r.t. \(\epsilon\) (when fixing \(\delta_{B}=20\%\)). Similar to the results of MNIST, DP-BREM/\({}^{+}\) outperforms other methods in almost
Figure 6: CIFAR-10: Different percentage of Byzantine clients \(\delta_{B}\) (with \(\epsilon=9\)).
Figure 7: CIFAR-10: Different privacy budget \(\epsilon\) (with \(\delta_{B}=20\%\) Byzantine clients).
all cases for CIFAR-10 (except the cases of no Byzantine attacks, i.e., \(\delta_{B}=0\%\), or weak attacks, such as \(\delta_{B}=10\%\) for the ALIE attack). For DP-LFH and DP-CM, the DP noise added during model training on CIFAR-10 has a larger impact on utility than on MNIST, because the CIFAR-10 dataset contains more complex information.
**Influence of \(q\) in DP-BREM/\({}^{+}\).** By comparing DP-BREM/\({}^{+}\) with different client-level sampling rates of \(q=1\) vs. \(0.2\), we observe that DP-BREM/\({}^{+}\) (\(q=1\)), i.e., without privacy amplification from client-level sampling, is better in most cases, especially when the Byzantine client percentage \(\delta_{B}\) increases (in Figure 6). A possible reason is that client-level sampling can have a large negative impact under Byzantine scenarios. However, for the ALIE attack in Figure 7, we observe that DP-BREM/\({}^{+}\) (\(q=0.2\)) performs better. This is because the ALIE attack uses a larger scale of adversarial perturbation when the number of participating clients is larger.
## 7 Related Work
**FL with DP.** Differential Privacy (DP) was originally designed for the centralized scenario, where a trusted database server, which has direct access to all clients' data in the clear, wishes to answer queries or publish statistics in a privacy-preserving manner by randomizing query results. In FL, McMahan et al. [30] proposed DP-FedSGD and DP-FedAvg, which provide client-level privacy with a trusted server. Geyer et al. [18] use an algorithm similar to DP-FedSGD for the architecture search problem, and the privacy guarantee is also at the client level with a trusted server. Li et al. [26] study online transfer learning and introduce a notion called task global privacy that works at the record level. However, the online setting assumes the client only interacts with the server once and does not extend to the federated setting. Zheng et al. [48] introduced two privacy notions that describe record-level privacy guarantees against an individual malicious client and against a group of malicious clients (but not against the server), based on a new privacy notion called \(f\)-differential privacy. Note that our solutions achieve record-level DP under either a trusted server or a malicious server.
**Byzantine-Robust FL.** Recently, there have been extensive works on Byzantine-robust federated/distributed learning with a trustworthy server, and most of them rely on median-based statistics of gradient contributions. Blanchard et al. [5] proposed Krum, which uses the Euclidean distance to determine which gradient contributions should be removed. Yin et al. [46] proposed two robust distributed gradient descent algorithms, one based on the coordinate-wise median and the other on the coordinate-wise trimmed mean. Mhamdi et al. [32] proposed a meta-aggregation rule called Bulyan, a two-step algorithm based on Krum and the trimmed median, which first filters malicious updates and then computes the trimmed median of the remaining updates.
**Private and Byzantine-Robust FL.** Recently, some works tried to simultaneously achieve both privacy and robustness of FL. He et al. [20] proposed a Byzantine-resilient and privacy-preserving solution, which makes distance-based robust aggregation rules (such as Krum [5]) compatible with secure aggregation via MPC and secret sharing. So et al. [39] developed a similar scheme based on Krum, but rely on different cryptographic techniques, such as verifiable Shamir's secret sharing and Reed-Solomon code. Velicheti et al. [41] achieved both privacy and Byzantine robustness via incorporating secure averaging among randomly clustered clients before filtering malicious updates through robust aggregation. However, these works only ensure the security of the aggregation step and do not achieve DP for the aggregated model.
**Byzantine-Robust FL with DP.** Wang et al. [42] proposed an FL scheme to provide Distributed DP (via encryption) and robustness (via range proof technologies); however, this scheme only verifies whether the local model weights are in a bounded range, which provides weak robustness. In comparison, our solution utilizes client momentum and centered clipping to guarantee Byzantine robustness with a provable convergence analysis. Zhu et al. [49] replace vanilla stochastic gradient descent (SGD) by sign-SGD to provide robustness to Byzantine attacks, and then perturb the signs to satisfy DP. Since sign-SGD only aggregates the element-wise sign (instead of the value) of clients' gradients, it usually has degraded convergence compared with the original SGD. Also, [49] only accounts for the privacy cost of one iteration, instead of the composition of all iterations in FL. Thus, the privacy cost (i.e., the value of privacy budget \(\epsilon\)) shown in [49] is underestimated. As a comparison, our solution is based on the original SGD (with momentum), and we account for the privacy cost of all iterations.
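To make the aggregation idea concrete, the following is a minimal, illustrative sketch (ours, not the exact DP-BREM protocol) of a server-side aggregator that combines centered clipping of client momenta with Gaussian noise added to the aggregate. The function names, the number of clipping iterations, and the noise calibration `sigma = noise_mult * tau / n_clients` are simplifying assumptions made only for this example.

```python
import numpy as np

def centered_clip(momenta, center, tau, n_iters=3):
    """Iteratively re-center and clip client momenta around a running center.

    momenta: (n_clients, d) array of per-client momentum vectors.
    center:  (d,) starting center, e.g. the previous round's aggregate.
    tau:     clipping radius for each client's deviation from the center.
    """
    for _ in range(n_iters):
        diffs = momenta - center                        # deviations from the center
        norms = np.linalg.norm(diffs, axis=1, keepdims=True)
        scale = np.minimum(1.0, tau / np.maximum(norms, 1e-12))
        center = center + (diffs * scale).mean(axis=0)  # robust mean update
    return center

def dp_robust_aggregate(momenta, prev_center, tau, noise_mult):
    """Robust aggregation followed by Gaussian noise on the aggregate;
    the clipping radius tau bounds each client's influence, so the noise
    standard deviation is calibrated to tau (simplified calibration)."""
    agg = centered_clip(momenta, prev_center, tau)
    sigma = noise_mult * tau / len(momenta)
    return agg + np.random.normal(0.0, sigma, size=agg.shape)
```

In this sketch the noise is added once to the already-clipped aggregate, which illustrates why adding noise after robust aggregation, rather than to each client's update, can preserve utility.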
## 8 Conclusions
In this paper, we aim to develop an FL protocol in the cross-silo setting that is both differentially private (under an untrusted server for privacy) and Byzantine robust. In order to achieve a high privacy-utility tradeoff without impacting robustness, we first propose DP-BREM, a DP version of the LFH-based FL protocol with a robust aggregator based on client momentum, where the server adds noise on the aggregated momentum. Based on that, we develop DP-BREM\({}^{+}\), which relaxes the server's trust assumption by combining secure aggregation techniques with verifiable inputs and a new protocol for secure joint noise generation. DP-BREM\({}^{+}\) achieves the same DP and robustness properties as DP-BREM, under a malicious server (for privacy) and malicious minority clients. We theoretically analyzed the error and convergence of DP-BREM, and conducted extensive experiments that empirically show the advantage of DP-BREM/\({}^{\star}\) in terms of privacy-utility tradeoff and Byzantine robustness versus three baseline protocols. For future work, we will extend our work to handle other types of robust aggregators. |
2306.05374 | Towards Ultrasound Tongue Image prediction from EEG during speech
production | Previous initial research has already been carried out to propose
speech-based BCI using brain signals (e.g. non-invasive EEG and invasive sEEG /
ECoG), but there is a lack of combined methods that investigate non-invasive
brain, articulation, and speech signals together and analyze the cognitive
processes in the brain, the kinematics of the articulatory movement and the
resulting speech signal. In this paper, we describe our multimodal
(electroencephalography, ultrasound tongue imaging, and speech) analysis and
synthesis experiments, as a feasibility study. We extend the analysis of brain
signals recorded during speech production with ultrasound-based articulation
data. From the brain signal measured with EEG, we predict ultrasound images of
the tongue with a fully connected deep neural network. The results show that
there is a weak but noticeable relationship between EEG and ultrasound tongue
images, i.e. the network can differentiate articulated speech and neutral
tongue position. | Tamás Gábor Csapó, Frigyes Viktor Arthur, Péter Nagy, Ádám Boncz | 2023-05-22T08:23:51Z | http://arxiv.org/abs/2306.05374v2 | # Towards Ultrasound Tongue Image prediction from EEG during speech production
###### Abstract
Previous initial research has already been carried out to propose speech-based BCI using brain signals (e.g. non-invasive EEG and invasive sEEG / ECoG), but there is a lack of combined methods that investigate non-invasive brain, articulation, and speech signals together and analyze the cognitive processes in the brain, the kinematics of the articulatory movement and the resulting speech signal. In this paper, we describe our multimodal (electroencephalography, ultrasound tongue imaging, and speech) analysis and synthesis experiments, as a feasibility study. We extend the analysis of brain signals recorded during speech production with ultrasound-based articulation data. From the brain signal measured with EEG, we predict ultrasound images of the tongue with a fully connected deep neural network. The results show that there is a weak but noticeable relationship between EEG and ultrasound tongue images, i.e. the network can differentiate articulated speech and neutral tongue position.
Tamas Gabor Csapo\({}^{1}\), Frigyes Viktor Arthur\({}^{1}\), Peter Nagy\({}^{2,3}\), Adam Boncz\({}^{3}\)\({}^{1}\)Department of Telecommunications and Media Informatics,
Budapest University of Technology and Economics (BME), Budapest, Hungary
\({}^{2}\)Department of Measurement and Information Systems, BME, Budapest, Hungary
\({}^{3}\)Sound and speech perception Research Group, Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
{csapot,arthur}@tmit.bme.hu, [email protected], [email protected]
**Index Terms**: ultrasound, EEG, brain-computer interface
## 1 Introduction
Brain-Computer Interfaces (BCIs) can allow computers to be controlled directly without physical activity. Augmentative and Alternative Communication (AAC) technologies (e.g. BCI) can directly read brain signals to compensate for lost speech ability [1]. In the future, the use of speech neuroprostheses may help patients with neurological or speech disorders [2]. For recording the brain signal, several technologies are available: e.g., electroencephalography (EEG) [3], stereotactic deep electrodes (sEEG) [4], intracranial electrocorticography (ECoG) [5], magnetoencephalography (MEG) [6], Local Field Potential (LFP) [5]. Among these brain signal recording methods, EEG may be the most suitable for BCI, as it is affordable, involves significantly less risk than invasive methods, and can be portable [7]. Initial research has already been carried out to develop EEG and speech-based BCI [8, 4, 9], but this has not yet resulted in clearly intelligible speech. The reason is that EEG only measures the brain signal on the scalp; therefore, it is less accurate than invasive technologies. Using invasive methods, it has already been possible to create speech-like synthesized speech based on brain signals, e.g. ECoG [10, 11] and sEEG [12, 4, 13], but due to the above disadvantage (primarily the invasive nature), the latter are not expected to be widespread.
### Brain signals and articulatory movement
Articulatory movements have only sporadically been studied in parallel with brain signals during speech production, according to our knowledge. Articulation-based strategies require a two-step approach: a neural decoder for articulation and an articulation-to-speech model. There is a single study by Lesaja et al. that investigates neural correlates of lip movements, and predicts lip-landmark position from ECoG input [14]. Besides, the other related studies all use estimated articulatory data, i.e. they take into account the articulatory information inferred from the speech signal or from textual contents [15, 16, 17, 18, 11, 19, 20].
As early as a decade ago, Guenther et al. suggested using articulatory information in speech-based BCIs [15]. They state that articulatory data might be easier and more useful to predict from brain data than formant frequencies. In a follow-up study [16], they classified American English phonemes from intracortical microelectrode recordings, considering the articulatory characteristics of vowels and consonants - e.g., place and manner of articulation. In the research of Carey et al. [17], MRI of the vocal tract and functional MRI of the brain were recorded with the same speakers, in separate sessions. Since the two signals cannot be recorded simultaneously, the speakers repeated the same stimulus several times for the two modalities, so the relationship between the brain signal and articulation can be examined by aligning through the speech signal. Chartier and his colleagues [18] inferred the articulatory kinematic trajectories via Acoustic-to-Articulatory Inversion (AAI) methods, from speech acoustics. They used this estimated information to investigate neural mechanisms underlying articulation. The MOCHA-TIMIT database was used to train the AAI model (with speakers independent from the brain signal recordings), in which the articulatory movement was recorded with an electromagnetic articulograph (EMA) [21]. Next, in a follow-up [11], they estimated the kinematic information about the vocal tract (e.g. lip movement, tongue movement, and jaw position) as well as other physiological characteristics (e.g. manner of articulation) from the speech signal, using AAI. They showed that intermediate articulatory representations enhanced performance of speech BCI even with limited data. Similarly, the aim of [20] was to analyze the brain signal measured with ECoG device and the articulation information during speech production. He derived the indirect articulatory data from other speakers based on the BY2014 database, using EMA [22]. Since the same sentences but different speakers were used when the brain signal and the articulation signals were recorded, it was possible to calculate EMA-based indirect articulation information for the ECoG data based on the Dynamic Time Warping (DTW) calculated from the speech. Thus, ECoG-based speech synthesis was successfully supplemented with DTW-derived articulation information; although the synthesized speech samples are not yet
intelligible [20]. The latest related study [19] also used estimation methods, but here the articulation information is not based on real measurements, but on the so-called TADA features calculated from the speech signal [23].
The conclusion of the above studies is that for patients whose cortical / neural processing of articulation is still intact, a speech-based BCI decoder using articulatory information can be more intuitive or more natural, and easier to learn to use. According to the overview above, there is a lack of combined methods that would examine the non-invasive EEG, articulation, and speech together, and analyze the interaction between the cognitive process in the brain, the articulatory movement, and the speech signal. In the current paper, we extend the analysis of brain signals during speech production with ultrasound tongue image-based articulatory data in order for more biosignals to be available for relationship analysis. From the input brain signal measured with EEG, we use deep neural networks to predict articulatory movement information, in the form of ultrasound tongue image sequences.
## 2 Methods
### Recordings
The recordings were made in an electromagnetically shielded quiet room of the ELKH Research Centre for Natural Science, Budapest, Hungary. The EEG signal was recorded with a 64-channel Brain Products actiCHamp type amplifier, using actiCAP active electrodes. Four channels were used to track horizontal and vertical eye movements. The electrodes were placed according to the international 10-20 arrangement [24]. The impedance of the electrodes was kept below 15 kOhm. During the recording, the FCz electrode played the role of the reference electrode. The signal was sampled at a frequency of 1000 Hz.
The midsagittal movement of the tongue was recorded using the "Micro" system (AAA V220.02 software, Articulate Instruments Ltd.) with a 2-4 MHz (penetration depth), 64-element, 20 mm radius convex ultrasound probe at 81.67 fps, and we also used a headset for probe fixing. The metal headset was placed above the EEG sensors so that the devices did not interfere with each other. The recording arrangement is shown in Fig. 2.
The speech was recorded with a Beyerdynamic TG H56c tan omnidirectional condenser microphone and digitized with an M-Audio M-Track 2x2 / FocusRite Scarlett 2i USB external sound card at 44,100 Hz. The speaker's face and mouth movements were recorded with a Logitech C925e webcam (but lip data was not used in the current study).
The output of the sound card (which contains the synchronizing signal of the "Micro" ultrasound, i.e., 'frame sync', and the speech signal from the microphone) was connected to the AUX channel of the EEG - so the brain and articulation signals were recorded on separate computers, but after the session, we can synchronize the data. The EEG signal was recorded continuously, while the ultrasound and speech were recorded sentence-by-sentence. Since the speech signal and the ultrasound synchronization signal (thus the beginning and end of the recording of the given sentence) also appear on one of the EEG channels, we can automatically synchronize the signals afterward. Fig. 1 shows an example of the synchronized signals: EEG (a), speech (b and c), frame sync (d), and ultrasound (e). The latter is a 'kymogram' [25, Fig. 8], i.e., a kind of 'articulatory signal over time': the middle slices (midline) of the ultrasound tongue images were cut (approximately corresponding to the middle of the tongue) and plotted as a function of time, similarly to a spectrogram; thus the tongue movement is roughly visible together with the speech spectrogram.
For the current feasibility study, we recorded approximately 15 minutes of data from a single native Hungarian male speaker (the first author), which will be expanded with additional speakers in the future. The sentences were selected from PPBA [26].
### Preprocessing the data
The EEG signal was pre-processed based on [4] ([https://github.com/neuralinterfacinglab/SingleWordProductionDutch/](https://github.com/neuralinterfacinglab/SingleWordProductionDutch/)). We calculated the Hilbert envelope for each channel of the EEG signal (except EEG AUX) in four frequency bands: 1-50 Hz, 51-100 Hz,
101-150 Hz, and 151-200 Hz. Notch filters were used to filter out the 50 Hz line noise and its harmonics. The envelope was averaged every 50 ms and offset by 12 ms to be consistent with the ultrasound tongue images (which were recorded at 81.67 fps). In order to take temporal information into account, we used 4 preceding and 4 following blocks of the Hilbert-transformed EEG signals. The left side of Fig. 3 shows an example of such an input signal.
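As an illustration of this preprocessing pipeline, the sketch below (ours, not the authors' released code) computes per-band Hilbert envelopes, removes 50 Hz line noise, averages over 50 ms windows hopped at the ultrasound frame rate, and stacks \(\pm 4\) context frames. The filter order, notch quality factor, and edge padding are assumptions made only for this example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, iirnotch

FS = 1000                          # EEG sampling rate (Hz)
BANDS = [(1, 50), (51, 100), (101, 150), (151, 200)]
FRAME_SHIFT = FS / 81.67           # hop in samples, matching the ultrasound frame rate
WIN = int(0.05 * FS)               # 50 ms averaging window

def notch_line_noise(x, fs=FS, base=50, n_harmonics=4):
    """Suppress 50 Hz line noise and its harmonics with IIR notch filters."""
    for k in range(1, n_harmonics + 1):
        b, a = iirnotch(base * k, Q=30, fs=fs)
        x = filtfilt(b, a, x)
    return x

def band_envelope(x, lo, hi, fs=FS):
    """Band-pass one EEG channel and return its Hilbert envelope."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def frame_features(eeg, context=4):
    """eeg: (n_samples, n_channels) array.  Returns (n_frames, n_features):
    per-band envelopes averaged over 50 ms windows, with +-4 context frames stacked."""
    eeg = np.apply_along_axis(notch_line_noise, 0, eeg)
    envs = np.stack([np.apply_along_axis(band_envelope, 0, eeg, lo, hi)
                     for lo, hi in BANDS], axis=-1)      # (samples, channels, bands)
    starts = np.arange(0, len(eeg) - WIN, FRAME_SHIFT).astype(int)
    frames = np.stack([envs[s:s + WIN].mean(axis=0).ravel() for s in starts])
    padded = np.pad(frames, ((context, context), (0, 0)), mode="edge")
    return np.hstack([padded[i:i + len(frames)] for i in range(2 * context + 1)])
```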
Ultrasound tongue images were stored as 8-bit grayscale pixels, in the raw format of the "Micro" system. The original 64x842 pixel images were resized to 64x128 pixels (Fig. 3, right side), as this does not cause significant information loss [27] while reducing the amount of data to be processed.
### Predicting articulatory information from EEG input
In the current initial phase of the research, we performed a simple experiment: we trained a fully connected deep 'rectifier' neural network (FC-DNN) [28] to predict the ultrasound tongue images from the Hilbert-transformed EEG input (Fig. 3). The MSE loss was used for training. During our experiments, we used a neural network structure with 5 hidden layers, each layer containing 1000 neurons, with ReLU activations, and a linear output layer (similar to earlier ultrasound-based speech synthesis studies, e.g. [26]). The input EEG values and the output ultrasound pixels were normalized to 0-1 before training. We trained for up to 100 epochs and applied early stopping with a patience of 3.
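For readers who prefer code, the following is a minimal Keras sketch of the network described above (ours; the official implementation is linked at the end of the paper). The input and output dimensions, the optimizer, and the batch size are illustrative assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers, callbacks

n_input = 60 * 4 * 9      # assumed: EEG channels x frequency bands x (2*4+1) context frames
n_output = 64 * 128       # flattened, resized ultrasound tongue image

model = keras.Sequential(
    [keras.Input(shape=(n_input,))]
    + [layers.Dense(1000, activation="relu") for _ in range(5)]  # 5 hidden layers of 1000 ReLU units
    + [layers.Dense(n_output, activation="linear")]              # linear output layer
)
model.compile(optimizer="adam", loss="mse")                      # MSE training loss

early_stop = callbacks.EarlyStopping(patience=3, restore_best_weights=True)
# model.fit(X_train, Y_train, validation_data=(X_val, Y_val),
#           epochs=100, batch_size=128, callbacks=[early_stop])
```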
## 3 Experiments and results
We performed training from the 155 sentences, using 80% of the data for training the network, 10% for validation, and the remaining 10% for testing (31 000 / 3900 / 3900 sample points).
### Demonstration samples
After the DNN training, ultrasound tongue image prediction was performed from EEG input, on the test set. Fig. 4 shows some original and EEG-estimated ultrasound images from the test data of the speaker, in the 'raw' representation of the ultrasound machine. The contour of the tongue is not always visible even in the original images - this is due to the speaker-dependence of ultrasound tongue images - it seems that this subject has a tongue that is difficult to image. In the images estimated from EEG input (i.e., the result of EEG-to-UTI prediction), the contour of the tongue is blurred, and the change in the position of the tongue from frame to frame is also difficult to observe - i.e., the DNN was able to learn the general shape of the tongue (the average image), but the fine details of the tongue movement cannot be seen. However, some general change of brightness is visible as a function of time: if the original images were darker, then this is also mapped on the predicted images (e.g., around frames 169, 172, and 175). The one-image offset in the DNN-predicted image sequence might be the result of windowing the EEG signal.
The same series of images are shown in the 'wedge' representation in Fig. 5, which were plotted using [https://github.com/UltraSuite/ultrasuite-tools](https://github.com/UltraSuite/ultrasuite-tools). In these images, a similar trend can be noticed as in Fig. 4: the upper surface of the tongue can be roughly seen in the original images, but in the images estimated based on the EEG, the ultrasound pixels are blurred, and the contour of the tongue is not visible. However, between frames 169-175, the change in light intensity can be noticed in the DNN-predicted case.
If we look at the results image by image (as in Figs. 4 and 5), then the longer-term trend is less visible. For this reason, we also show the results in a different arrangement, as a 'kymogram' representation: we cut out the middle vertical line from each ultrasound tongue image and plotted the change of this line over time. Fig. 6 shows the result of this: at the top is the spectrogram belonging to speech, in the middle is the ultrasound image center line sequence as a function of time (belonging to the same utterance), and at the bottom is the ultrasound tongue center line predicted by the DNN. The similarity between a) mel-spectrogram and b) articulatory movement is clearly noticeable: the formant movements in speech and the vertical movement of the tongue can be roughly observed in the figures. On the other hand, in c) DNN-predicted tongue ultrasound, tongue movement is not visible on the midline, i.e., the FC-DNN could not learn the relation between EEG and ultrasound tongue images well. At the same time, some information can still be seen in the DNN-predicted images: at the end of the 170th frame, one sentence ends, and the next begins, which can be clearly seen in the original ultrasound (b) and also in the estimated ultrasound (c). Overall, we can say that according to this visualization, there is a weak but noticeable relationship between EEG and ultrasound tongue images, i.e. the network can differentiate articulated speech vs. neutral tongue position.
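The kymogram itself is straightforward to compute; a small sketch (ours) is given below. Which image axis holds the 64 scan lines depends on how the raw data are exported, so the `mid` index below is an assumption.

```python
import numpy as np
import matplotlib.pyplot as plt

def kymogram(frames):
    """frames: (n_frames, 64, 128) resized raw ultrasound images.
    Take the middle scan line of every frame and stack these lines over time."""
    mid = frames.shape[1] // 2            # assumed: axis 1 holds the 64 scan lines
    return frames[:, mid, :].T            # (depth samples, n_frames)

# Example: compare original and DNN-predicted sequences side by side.
# fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
# ax1.imshow(kymogram(orig_frames), aspect="auto", origin="lower", cmap="gray")
# ax2.imshow(kymogram(pred_frames), aspect="auto", origin="lower", cmap="gray")
# plt.show()
```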
### Objective measures
The mean squared error (MSE) values achieved with the above FC-DNN network are: train error: 0.0052, validation error: 0.0054, test error: 0.0055. The values themselves are difficult to interpret, but for example, in previous UTI-based acoustic-to-articulatory inversion experiments (during which ultrasound tongue images were predicted from the speech signal [29, 30]), the obtained NMSE validation error values were in the order of 0.0053-0.0088; and in this case, the ultrasound tongue video generated from the speech input approximated the original articulatory movement. It can also be seen from this that the MSE value in the present research is not sufficient to judge the quality of the results, and a visual inspection is necessary. In the case of previous ultrasound research, there have been experiments examining other error measures, such as Structural Similarity Index (SSIM) [31] and Complex Wavelet Structural Similarity (CW-SSIM) [32], on ultrasound [33, 34, 30]. However, due to the above visually weak results in Figs. 4-5-6, we did not investigate SSIM and CW-SSIM here for the case of EEG-to-UTI prediction, as we do not expect they would be helpful.
## 4 Discussion and conclusions
Through the multimodal (brain, speech, and articulation) analysis and synthesis described in this initial research phase, we go beyond the latest international state of the art. In our scientific overview of Sec. 1, we have seen many previous attempts at initial speech BCI research based on EEG (or brain signals measured with other invasive devices) [10, 11, 8, 13]. However, so far, it has not been possible to create clearly intelligible synthesized speech based on brain signals. An obvious solution seems to be the examination of articulation as an intermediate representation between the brain signal and the resulting final speech, which we dealt with in this article. In previous speech-BCI research studies, articulatory information was only included indirectly (i.e., not measured with a piece of equipment), during the investigation of the brain signal and speech [15, 16, 17, 18, 11, 19, 20]. Although this indirect articulation information also helped to improve the results, measuring and analyzing articulation with real equipment could result in further advantages and improvements in the long term.

Figure 3: _Input (left) and output (right) of the DNNs._
In the current research, we extended the investigation of the brain signal measured with EEG and speech recorded with a microphone, with articulatory recordings using ultrasound tongue imaging. We made sure that all the signals were in good synchrony, by using hardware sync for ultrasound, and by connecting the sync signals and recording the microphone within the EEG equipment as well. We trained a deep neural network (FC-DNN) to estimate ultrasound images, based on EEG input. According to the results, the generated ultrasound tongue images are still far from the original sequence, but the relationship between EEG and ultrasound tongue images was clearly demonstrated, i.e. the network can differentiate articulated speech and neutral tongue position, like Voice Activity Detection.
In the future, we plan to compare the prediction results between articulatory and speech data, i.e. whether the articulatory data can yield better predictions. The long-term goal of this research is to contribute to speech-based brain-computer interfaces. The results can potentially be used during rehabilitation, e.g. as a communication aid.
The Keras implementation of the DNN experiments presented above is available at the following address: [https://github.com/BME-SmartLab/EEG-to-UTI](https://github.com/BME-SmartLab/EEG-to-UTI).
## 5 Acknowledgements
This research was funded by the National Research, Development and Innovation Office of Hungary (FK 142163 grant). T.G.Cs. was supported by the Bolyai Janos Research Fellowship of the Hungarian Academy of Sciences and by the UNKP-22-5-BME-316 New National Excellence Program of the Ministry for Culture and Innovation from the source of the NRDIF.
Figure 4: _Original (above) and EEG-predicted (below) ultrasound tongue images, in ‘raw’ representation._
Figure 5: _Original (left) and EEG-predicted (right) ultrasound tongue images, in ‘wedge’ representation._
Figure 6: _Demonstration sample: a) 80-dimensional mel-spectrogram of the original speech sample, b) original ultrasound kymogram, c) DNN-predicted ultrasound kymogram._ |
2307.12521 | Steinberg's cross-section of Newton strata | In this note, we introduce a natural analogue of Steinberg's cross-section in
the loop group of an unramified reductive group $\mathbf G$. We show this loop
Steinberg's cross-section provides a simple geometric model for the poset
$B(\mathbf G)$ of Frobenius-twisted conjugacy classes (referred to as Newton
strata) of the loop group. As an application, we confirm a conjecture by Ivanov
on loop Deligne-Lusztig varieties of Coxeter type. This geometric model also
leads to new and direct proofs of several classical results, including the
converse to Mazur's inequality, Chai's length formula on $B(\mathbf G)$, and a
key combinatorial identity in the study of affine Deligne-Lusztig varieties with
finite Coxeter parts. | Sian Nie | 2023-07-24T04:33:59Z | http://arxiv.org/abs/2307.12521v2 | # Steinberg's cross-section of Newton strata
###### Abstract.
In this note, we introduce a natural analogue of Steinberg's cross-section in the loop group of a reductive group \(\mathbf{G}\). We show this loop Steinberg's cross-section provides a simple geometric model for the poset \(B(\mathbf{G})\) of Frobenius-twisted conjugacy classes (referred to as Newton strata) of the loop group. As an application, we confirm a conjecture by Ivanov on decomposing loop Deligne-Lusztig varieties of Coxeter type. This geometric model also leads to new and direct proofs of several classical results, including the converse to Mazur's inequality, Chai's length formula on \(B(\mathbf{G})\), and a key combinatorial identity in the study of affine Deligne-Lusztig varieties with finite Coxeter parts.
Key words and phrases:Steinberg's cross-section; Coxeter element; Frobenius-twisted conjugacy class; affine Deligne-Lusztig varieties 2020 Mathematics Subject Classification: 20G25
## Introduction
### Steinberg's cross-section
Let \(G\) be a reductive group over an algebraically closed field. Recall that an element of \(G\) is called regular if the dimension of its centralizer in \(G\) equals the rank \(r\) of \(G\). In his seminal work [26], Steinberg constructed a cross-section \(N\) to regular (conjugacy) classes of \(G\), and proved the following remarkable properties:
(1) \(N\) consists of regular elements;
(2) each regular class of \(G\) intersects \(N\) transversely;
(3) if \(G\) is semisimple and simply connected, each regular class intersects \(N\) at a single point.
By construction, the cross-section \(N\) is an affine \(r\)-space attached to a minimal Coxeter element in the Weyl group of \(G\). In recent works [10], [20], [24], He, Lusztig, and Sevostyanov discovered that for a wide class of Weyl group elements, an analogous construction also produces transversal slices to conjugacy classes of \(G\).
### The main result
In this note we focus on a loop group version of Steinberg's result mentioned above. On the one hand, Steinberg's cross-section has a straightforward analogue in a loop group. On the other hand, a loop group is stratified by its Frobenius-twisted conjugacy classes. So it is natural to ask how the loop Steinberg's cross-section intersects the various Frobenius-twisted conjugacy classes. The answer turns out to be simple and
purely combinatorial. This suggests that the loop Steinberg's cross-section serves as a "transversal slice" to Frobenius-twisted conjugacy classes.
To formulate the main results, we introduce more notations. Let \(\mathbf{G}\) be an unramified reductive group over a non-archimedean local field \(k\). Fix a uniformizer \(\varpi\in k\). Let \(\breve{k}\) be the completion of a maximal unramified extension of \(k\). Denote by \(\sigma\) the Frobenius automorphism \(\breve{k}/k\) and the induced automorphism of the loop group \(\mathbf{G}(\breve{k})\). We fix two opposite Borel subgroups \(\mathbf{B}\) and \(\mathbf{B}^{-}\), and let \(\mathbf{T}=\mathbf{B}\cap\mathbf{B}^{-}\) be a maximal torus. Let \(\mathbf{N}\) be the normalizer of \(\mathbf{T}\) in \(\mathbf{G}\). The Weyl group of \(\mathbf{G}\) is defined by
\[W=\mathbf{N}(\mathcal{O}_{\breve{k}})/\mathbf{T}(\mathcal{O}_{\breve{k}}) \cong\mathbf{N}(\breve{k})/\mathbf{T}(\breve{k}),\]
where \(\mathcal{O}_{\breve{k}}\) is the integer ring of \(\breve{k}\). For \(w\in W\) we set
\[\mathbf{U}_{w}=\mathbf{U}\cap{}^{w^{-1}}(\mathbf{U}^{-}),\]
where \(\mathbf{U},\mathbf{U}^{-}\) are the unipotent radicals of \(\mathbf{B},\mathbf{B}^{-}\) respectively.
Fix a representative set \(\{\alpha_{1},\ldots,\alpha_{r}\}\) for the \(\sigma\)-orbits of simple roots appearing in \(\mathbf{B}\). Denote by \(s_{\alpha_{i}}\in W\) the simple reflection corresponding to \(\alpha_{i}\). Set \(c=s_{\alpha_{1}}\cdots s_{\alpha_{r}}\in W\), which is a \(\sigma\)-Coxeter element. Let \(\dot{c}\in\mathbf{N}(\breve{k})\) be a lift of \(c\). Then there exist a unique cocharacter \(\mu\in X_{*}(\mathbf{T})\) and some lifts \(\tilde{s}_{\alpha_{i}}\in\mathbf{N}(\mathcal{O}_{\breve{k}})\) of \(s_{\alpha_{i}}\) such that \(\dot{c}=\varpi^{\mu}\tilde{s}_{\alpha_{1}}\cdots\tilde{s}_{\alpha_{r}}\), where \(\varpi^{\mu}=\mu(\varpi)\in\mathbf{G}(k)\). Following [26, Theorem 1.4], we define the loop Steinberg's cross-section attached to \(\dot{c}\) by
\[\dot{c}\mathbf{U}_{c}(\breve{k})=\varpi^{\mu}\tilde{s}_{\alpha_{1}}\mathbf{U }_{\alpha_{1}}(\breve{k})\cdots\tilde{s}_{\alpha_{r}}\mathbf{U}_{\alpha_{r}}( \breve{k}),\]
where \(\mathbf{U}_{\alpha_{i}}\) denotes the root subgroup of \(\alpha_{i}\).
For \(b\in\mathbf{G}(\breve{k})\) the corresponding \(\sigma\)-twisted conjugacy class is defined by
\[[b]=\{g^{-1}b\sigma(g);g\in\mathbf{G}(\breve{k})\}.\]
Thanks to Kottwitz [17], \([b]\) is determined by two invariants: the Kottwitz point \(\kappa(b)\in\pi_{1}(\mathbf{G})_{\sigma}\) and the dominant Newton point \(\nu(b)\in X_{*}(\mathbf{T})_{\mathbb{Q}}\), see §1.1. We denote by \(B(\mathbf{G})\) the set of \(\sigma\)-twisted conjugacy classes of \(\mathbf{G}(\breve{k})\).
The main result of this note is a combinatorial description of the intersections \([b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})\) for \([b]\in B(\mathbf{G})\).
**Theorem 0.1**.: _Let \(b\) and \(\dot{c}=\varpi^{\mu}\tilde{s}_{\alpha_{1}}\cdots\tilde{s}_{\alpha_{r}}\) be as above. Then we have \([b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})\neq\emptyset\) if and only if \(\kappa(\dot{c})=\kappa(b)\). In this case,_
\[[b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})=\varpi^{\mu}\tilde{s}_{\alpha_{1}}H_{\mu,b,\alpha_{1}}\cdots\tilde{s}_{\alpha_{r}}H_{\mu,b,\alpha_{r}},\]
_where_
\[H_{\mu,b,\alpha_{i}}=\begin{cases}\mathbf{U}_{\alpha_{i}}(\varpi^{\langle\mu-\nu(b),\omega_{i}\rangle}\mathcal{O}_{\breve{k}}^{\times}),&\text{if }\langle\nu(b),\alpha_{i}\rangle>0,\\ \mathbf{U}_{\alpha_{i}}(\varpi^{\lceil\langle\mu-\nu(b),\omega_{i}\rangle\rceil}\mathcal{O}_{\breve{k}}),&\text{otherwise.}\end{cases}\]
_Here \(\omega_{i}\) is the sum of the fundamental weights corresponding to the simple roots in the \(\sigma\)-orbit of \(\alpha_{i}\), and \(\langle,\rangle:X_{*}(\mathbf{T})_{\mathbb{Q}}\times X^{*}(\mathbf{T})_{\mathbb{Q}}\to\mathbb{Q}\) is the natural pairing._
We remark that \(\langle\mu-\nu(b),\omega_{i}\rangle\in\mathbb{Z}\) if \(\langle\nu(b),\alpha_{i}\rangle>0\), see Definition 1.4.
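To see what Theorem 0.1 says in the simplest case, consider (as an illustration of ours, obtained by unwinding the definitions) the split group \(\mathbf{G}=\mathrm{GL}_{2}\), with \(\mu=(1,0)\), \(c=s_{\alpha}\) and \(\dot{c}=\varpi^{\mu}\tilde{s}_{\alpha}\). The classes \([b]\) with \(\kappa(b)=\kappa(\varpi^{\mu})\) have Newton points \(\nu=(m,1-m)\) with \(m\in\mathbb{Z}_{\geqslant 1}\), together with the basic point \(\nu=(\frac{1}{2},\frac{1}{2})\). For \(\nu=(m,1-m)\) we have \(\langle\nu,\alpha\rangle=2m-1>0\) and \(\langle\mu-\nu,\omega_{\alpha}\rangle=1-m\), so Theorem 0.1 gives

\[[b]\cap\dot{c}\mathbf{U}_{\alpha}(\breve{k})=\varpi^{\mu}\tilde{s}_{\alpha}\mathbf{U}_{\alpha}(\varpi^{1-m}\mathcal{O}_{\breve{k}}^{\times}),\]

while for the basic class \(\langle\nu,\alpha\rangle=0\) and \(\lceil\langle\mu-\nu,\omega_{\alpha}\rangle\rceil=\lceil\frac{1}{2}\rceil=1\) give

\[[b]\cap\dot{c}\mathbf{U}_{\alpha}(\breve{k})=\varpi^{\mu}\tilde{s}_{\alpha}\mathbf{U}_{\alpha}(\varpi\mathcal{O}_{\breve{k}}).\]

In terms of the coordinate \(z\) of \(\mathbf{U}_{\alpha}(\breve{k})\), these pieces are exactly the loci \(\mathrm{val}(z)=1-m\) (\(m\geqslant 1\)) and \(\mathrm{val}(z)\geqslant 1\), so they partition the cross-section, as they must.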
**Remark 0.2**.: Note that \([b]\) is an admissible set in the sense of [13] and [9]. In particular, it makes sense to define topological invariants of \([b]\), such as the closure of \([b]\) in \(\mathbf{G}(\breve{k})\), the irreducible/connected components of \([b]\), and the relative dimension of \([b]\).
By construction, the loop Steinberg's cross-section \(\check{c}\mathbf{U}_{c}(\breve{k})\) can be viewed as an infinite dimensional affine space. It follows from Theorem 0.1 that the intersections \([b]\cap\check{c}\mathbf{U}_{c}(\breve{k})\) are admissible subsets of \(\check{c}\mathbf{U}_{c}(\breve{k})\). So it also makes sense to study their topological properties.
Let \(\leq\) denote the usual dominance order on \(B(\mathbf{G})\), see §5.1. Our second result shows that the following Newton decomposition
\[\check{c}\mathbf{U}_{c}(\breve{k})=\bigsqcup_{[b]\in B(\mathbf{G})}[b]\cap \check{c}\mathbf{U}_{c}(\breve{k})\]
is a stratification of \(\check{c}\mathbf{U}_{c}(\breve{k})\), whose closure relation is given by \(\leq\).
**Theorem 0.3**.: _Let \(\mu\in X_{*}(\mathbf{T})\) and \([b]\in B(\mathbf{G})\) such that \(\kappa(\varpi^{\mu})=\kappa(b)\). Then the closure of \([b]\cap\check{c}\mathbf{U}_{c}(\breve{k})\) in \(\check{c}\mathbf{U}_{c}(\breve{k})\) is_
\[\overline{[b]\cap\check{c}\mathbf{U}_{c}(\breve{k})}=\bigsqcup_{[b^{\prime}] \leq[b]}[b^{\prime}]\cap\check{c}\mathbf{U}_{c}(\breve{k}).\]
_Moreover, \(\overline{[b]\cap\check{c}\mathbf{U}_{c}(\breve{k})}-([b]\cap\check{c} \mathbf{U}_{c}(\breve{k}))\) is pure of codimension one in \(\overline{[b]\cap\check{c}\mathbf{U}_{c}(\breve{k})}\)._
The Newton stratification of \(\check{c}\mathbf{U}_{c}(\breve{k})\) provides a simple geometric model of the poset \((B(\mathbf{G}),\leq)\). This leads to new interpretations of several classical results. Here are two immediate examples.
The first one is a geometric characterization of the dominance order on \(B(\mathbf{G})\) proved by He [9].
**Theorem 0.4**.: _Let \([b],[b^{\prime}]\in B(\mathbf{G})\). Then \([b^{\prime}]\subseteq\overline{[b]}\) if and only if \([b^{\prime}]\leq[b]\). Here \(\overline{[b]}\) denotes the closure of \([b]\) in \(\mathbf{G}(\breve{k})\)._
The implication \((\Rightarrow)\) follows from a general result of Rapoport and Richartz [23]. The implication \((\Leftarrow)\) is proved by He [9] using a deep purity result due to Viehmann [27] and Hamacher [8]. Now the implication \((\Leftarrow)\) is a direct consequence of Theorem 0.3.
The second one is the Mazur's inequality criterion on the emptiness/non-emptiness of \([b]\cap\mathcal{K}\varpi^{\mu}\mathcal{K}\), where \(\mathcal{K}=\mathbf{G}(\mathcal{O}_{\breve{k}})\subseteq\mathbf{G}(\breve{k})\).
**Theorem 0.5**.: _Let \(\mu\in X_{*}(\mathbf{T})\) and \([b]\in B(\mathbf{G})\). Then \([b]\cap\mathcal{K}\varpi^{\mu}\mathcal{K}\neq\emptyset\) if and only if \([b]\in B(\mathbf{G},\mu)\), namely, \([b]\leq[\varpi^{\mu}]\)._
The implication \((\Rightarrow)\) is again due to Rapoport and Richartz [23]. The implication \((\Leftarrow)\) is a conjecture of Kottwitz and Rapoport, which is proved by Gashi [7] in a purely combinatorial way. Now we give a new proof of
the implication (\(\Leftarrow\)). First we can assume that \(\mu\) is dominant. Then \(\nu(\varpi^{\mu})\) equals the \(\sigma\)-average \(\mu^{\diamond}\) of \(\mu\). By definition, for any \([b]\in B(\mathbf{G},\mu)\) we have \(\kappa(b)=\kappa(\varpi^{\mu})\) and \(\langle\mu-\nu(b),\omega_{i}\rangle=\langle\mu^{\diamond}-\nu(b),\omega_{i} \rangle\geqslant 0\) for \(1\leqslant i\leqslant r\). Hence it follows from Theorem 0.1 that
\[\emptyset\neq[b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})\subseteq\varpi^{\mu}\tilde{s}_{\alpha_{1}}\mathbf{U}_{\alpha_{1}}(\mathcal{O}_{\breve{k}})\cdots\tilde{s}_{\alpha_{r}}\mathbf{U}_{\alpha_{r}}(\mathcal{O}_{\breve{k}})\subseteq\varpi^{\mu}\mathcal{K}.\]
So the implication (\(\Leftarrow\)) follows.
### Further consequences
We continue to discuss further consequences of the main results.
In [2], Chai deduced an explicit length formula for the poset \((B(\mathbf{G}),\leq)\) in a purely combinatorial way. We will give a geometric proof of Chai's formula.
**Theorem 0.6**.: _The poset \((B(\mathbf{G}),\leq)\) is ranked, and the length between two elements \([b^{\prime}]\leq[b]\in B(\mathbf{G})\) is given by_
\[\operatorname{length}([b],[b^{\prime}])=\sum_{i=1}^{r}\lceil\langle\mu-\nu(b^{\prime}),\omega_{i}\rangle\rceil-\lceil\langle\mu-\nu(b),\omega_{i}\rangle\rceil,\]
_where \(\mu\in X_{*}(\mathbf{T})\) is any cocharacter such that \(\kappa(\varpi^{\mu})=\kappa(b)=\kappa(b^{\prime})\)._
In [12], He, Yu and the author show that an affine Deligne-Lusztig variety with a finite Coxeter part has a simple geometric structure, see also [25]. A key ingredient of our approach is a combinatorial identity, whose proof is quite involved. We will give a new and direct proof of the identity by counting \(k\)-rational points on the (truncated) loop Steinberg's cross-section. The same idea also guides us to discover a new combinatorial identity.
**Theorem 0.7**.: _Let \(\mu\in X_{*}(\mathbf{T})\). Then we have_
\[\sum_{[b]\in B(\mathbf{G},\mu)}(\mathbf{q}-1)^{\sharp J_{\nu(b)}}\mathbf{q}^{- \sharp J_{\nu(b)}-\operatorname{length}([\varpi^{\mu}],[b])}=1,\]
_where \(J_{\nu(b)}=\{1\leqslant i\leqslant r;\langle\nu(b),\alpha_{i}\rangle\neq 0\}\). Moreover, if \(B(\mathbf{G},\mu)_{\operatorname{irr}}\neq\emptyset\), then_
\[\sum_{[b]\in B(\mathbf{G},\mu)_{\operatorname{irr}}}(\mathbf{q}-1)^{\sharp J_ {\nu(b)}}\mathbf{q}^{r-\sharp J_{\nu(b)}-\operatorname{length}([\varpi^{\mu}],[b])}=1.\]
_Here \(B(\mathbf{G},\mu)_{\operatorname{irr}}\) is the set of Hodge-Newton irreducible elements in \(B(\mathbf{G},\mu)\), see §5.3._
**Remark 0.8**.: The first identity is new. The second identity is the one from [12], which is also proved by Lim [19] based on a probability-theoretic interpretation.
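For orientation (an illustration of ours, not part of the original statements), take again the split group \(\mathrm{GL}_{2}\) with \(\mu=(1,0)\), so that \(r=1\) and \(B(\mathbf{G},\mu)\) consists of \([\varpi^{\mu}]\), with \(\nu=(1,0)\), and the basic class, with \(\nu=(\frac{1}{2},\frac{1}{2})\). By Theorem 0.6, \(\operatorname{length}([\varpi^{\mu}],[b_{\mathrm{basic}}])=\lceil\frac{1}{2}\rceil-0=1\). The first identity of Theorem 0.7 then reads

\[(\mathbf{q}-1)\mathbf{q}^{-1}+\mathbf{q}^{-1}=1,\]

where the first term comes from \([\varpi^{\mu}]\) (with \(\sharp J_{\nu(b)}=1\) and length \(0\)) and the second from the basic class (with \(\sharp J_{\nu(b)}=0\) and length \(1\)). For the second identity, only the basic class contributes, and its term is \(\mathbf{q}^{1-0-1}=1\), assuming the usual convention that Hodge-Newton irreducibility requires \(\mu-\nu(b)\) to have strictly positive coefficients on every simple coroot.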
The last application is devoted to loop Deligne-Lusztig varieties. These varieties were first introduced by Lusztig [22]. As in the classical Deligne-Lusztig theory [6], the cohomologies of loop Deligne-Lusztig varieties are expected to realize interesting representations of \(p\)-adic groups. In the case of
general linear groups and their inner forms, Boyarchenko, Weinstein, Chan, and Ivanov carried out an extensive study of loop Deligne-Lusztig varieties of Coxeter type, see [1], [3], [4], [5] and references therein. As a desirable consequence, they obtained a purely local, purely geometric and explicit realization of local Langlands and Jacquet-Langlands correspondences for a wide class of irreducible supercuspidal representations.
A fundamental step in the works mentioned above is to decompose the loop Deligne-Lusztig variety into a union of translates of parahoric level Deligne-Lusztig varieties. By Theorem 0.1 we have the following general result.
**Theorem 0.9**.: _The decomposition theorem for loop Deligne-Lusztig varieties of Coxeter type holds for all unramified reductive groups, see [15, Theorem 1.1] for the precise formulation. In particular, these varieties are representable by perfect schemes._
**Remark 0.10**.: Theorem 0.9 is also proved in [16] using a different method. The last statement of the theorem confirms a conjecture of Ivanov [14, Conjecture 1.1] in the basic case. If \(\mathbf{G}\) is of classical type, the theorem is proved by Ivanov [15].
### Outline
The note is organized as follows. In §1 we introduce basic notions/constructions, and reduce Theorem 0.1 to a certain translation case, see Theorem 1.6. In §2 we study the intersection \([b]\cap\dot{c}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}_{c}(\breve{k})\) when \(b\) is a translation, and show it equals the image of a certain explicit map, see Proposition 2.4. In §3 we make a digression to introduce an algorithm to compute the image. In §4 we finish the proof of Theorem 0.1 using the algorithm. In the last section, we prove Theorem 0.6 and Theorem 0.7 as applications of Theorem 0.1.
### Acknowledgement
We would like to thank Xuhua He, Alexander Ivanov and Qingchao Yu for helpful comments and discussions.
## 1. Preliminary
In this section, we introduce basic notations and reduce Theorem 0.1 to the translation case, namely the case where \(b=\varpi^{\chi}\) for certain \(\chi\in X_{*}(\mathbf{T})\).
We keep the notations \(\mathbf{G},\mathbf{B},\mathbf{U},\mathbf{T},\mathbf{N},W\) introduced in the introduction. Let \(\Phi^{+}\subseteq\Phi\) be the set of (positive) roots of \(\mathbf{T}\) that appear in \(\mathbf{B}\). We have \(\Phi=\Phi^{+}\sqcup\Phi^{-}\) with \(\Phi^{-}=-\Phi^{+}\). Let \(\Pi\subseteq\Phi^{+}\) be the set of simple roots. We say \(v\in X_{*}(\mathbf{T})\) is dominant if \(\langle v,\alpha\rangle\geqslant 0\) for all \(\alpha\in\Phi^{+}\). Here \(\langle,\rangle:X_{*}(\mathbf{T})\times X^{*}(\mathbf{T})\to\mathbb{Z}\) is the natural pairing. Denote by \(\mathbb{S}\subseteq W\) the corresponding set of simple reflections of \(W\). Then \((W,\mathbb{S})\) is a Coxeter system.
Let \(\alpha\in\Phi\). We denote by \(\alpha^{\vee}\) the coroot of \(\alpha\), and let \(s_{\alpha}\in W\) be the reflection sending \(\lambda\) to \(\lambda-\langle\lambda,\alpha\rangle\alpha^{\vee}\) for \(\lambda\in X_{*}(\mathbf{T})\). Choose root subgroups
\(\mathbf{U}_{\alpha}:\mathbb{G}_{a}\to\mathbf{G}\) such that for \(z\in\breve{k}^{\times}\) we have
(*) \[\mathbf{U}_{-\alpha}(-z^{-1})\mathbf{U}_{\alpha}(z)\mathbf{U}_{-\alpha}(-z^{-1}) =z^{\alpha^{\vee}}\tilde{s}_{-\alpha}\in\mathbf{N}(\breve{k}),\]
where \(\tilde{s}_{-\alpha}=\mathbf{U}_{-\alpha}(-1)\mathbf{U}_{\alpha}(1)\mathbf{U}_ {-\alpha}(-1)\in\mathbf{N}(\mathcal{O}_{\breve{k}})\).
Let \(b\in\mathbf{G}(\breve{k})\). Then the Kottwitz point \(\kappa(b)=\kappa_{\mathbf{G}}(b)\) is the image of \(b\) under the natural projection \(\kappa:\mathbf{G}(\breve{k})\to\pi_{1}(\mathbf{G})_{\sigma}=\pi_{1}(\mathbf{G})/(1-\sigma)\pi_{1}(\mathbf{G})\). To define the Newton point, note that there exists a representative \(x=\varpi^{\lambda}\dot{w}\in[b]\), where \(\lambda\in X_{*}(\mathbf{T})\) and \(\dot{w}\in\mathbf{N}(\mathcal{O}_{\breve{k}})\) is a lift of some element \(w\in W\). Let \(\nu_{x}\in X_{*}(\mathbf{T})_{\mathbb{Q}}\) be the \((w\sigma)\)-average of \(\lambda\). Then \(\nu(b)=\nu_{\mathbf{G}}(b)\) equals the unique dominant \(W\)-conjugate of \(\nu_{x}\). Note that \(\nu(b)=\sigma(\nu(b))\) is independent of the choice of \(x\in\mathbf{N}(\breve{k})\cap[b]\).
Let \(\{\alpha_{1},\ldots,\alpha_{r}\}\) be a representative set for the \(\sigma\)-orbits of \(\Pi\). Let \(c=s_{\alpha_{1}}s_{\alpha_{2}}\cdots s_{\alpha_{r}}\in W\), which is a minimal \(\sigma\)-Coxeter element.
Let \(\xi=\sum_{i=1}^{r}n_{i}\alpha_{i}^{\vee}\) with \(n_{i}\in\mathbb{Z}\). We define
\[K_{\xi}=\mathbf{U}_{-\alpha_{1}}(\varpi^{m_{1}}\mathcal{O}_{\breve{k}}^{\times })\cdots\mathbf{U}_{-\alpha_{r}}(\varpi^{m_{r}}\mathcal{O}_{\breve{k}}^{\times }),\]
where \(m_{i}=-n_{i}-\sum_{j=i+1}^{r}n_{j}\langle\alpha_{j}^{\vee},\alpha_{i}\rangle\) for \(1\leqslant i\leqslant r\).
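For instance (an illustration of ours), if \(\mathbf{G}\) is split of type \(A_{2}\) and \(\xi=n_{1}\alpha_{1}^{\vee}+n_{2}\alpha_{2}^{\vee}\), then \(\langle\alpha_{2}^{\vee},\alpha_{1}\rangle=-1\) gives \(m_{1}=-n_{1}+n_{2}\) and \(m_{2}=-n_{2}\), so

\[K_{\xi}=\mathbf{U}_{-\alpha_{1}}(\varpi^{n_{2}-n_{1}}\mathcal{O}_{\breve{k}}^{\times})\mathbf{U}_{-\alpha_{2}}(\varpi^{-n_{2}}\mathcal{O}_{\breve{k}}^{\times}).\]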
**Lemma 1.1**.: _Let \(\mu,\eta\in X_{*}(\mathbf{T})\). Let \(\dot{c}\in\varpi^{\mu}\mathbf{N}(\mathcal{O}_{\breve{k}})\) be a lift of \(c\). Then_
\[\varpi^{\eta}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}_{-\alpha_{1}}( \breve{k}^{\times})\cdots\mathbf{U}_{-\alpha_{r}}(\breve{k}^{\times})\cap \mathbf{U}(\breve{k})\dot{c}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}( \breve{k})=\varpi^{\eta}\mathbf{T}(\mathcal{O}_{\breve{k}})K_{\mu-\eta}.\]
_Here we set \(K_{\mu-\eta}=\emptyset\) if \(\mu-\eta\notin\sum_{i=1}^{r}\mathbb{Z}\alpha_{i}^{\vee}\)._
Proof.: We argue by induction on \(r\). If \(r=0\) the statement is trivial. Let \(z_{1},\ldots,z_{r}\in\breve{k}^{\times}\) such that
(a) \[\varpi^{\eta}\mathbf{U}_{-\alpha_{1}}(z_{1})\cdots\mathbf{U}_{-\alpha_{r}}(z_{ r})\in\mathbf{U}(\breve{k})\dot{c}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}( \breve{k}).\]
Let \(n_{r}\in\mathbb{Z}\) such that \(z_{r}\in\varpi^{-n_{r}}\mathcal{O}_{\breve{k}}^{\times}\). By SS1.1 (*) we have
\[\varpi^{\eta}\mathbf{U}_{-\alpha_{1}}(z_{1})\cdots\mathbf{U}_{- \alpha_{r}}(z_{r})\] \[= \varpi^{\eta}\mathbf{U}_{-\alpha_{1}}(z_{1})\cdots\mathbf{U}_{- \alpha_{r-1}}(z_{r-1})\mathbf{U}_{\alpha_{r}}(z_{r}^{-1})z_{r}^{-\alpha_{r}^{ \vee}}\tilde{s}_{\alpha_{r}}\mathbf{U}_{\alpha_{r}}(z_{r}^{-1})\] \[\subseteq\mathbf{U}_{\alpha_{r}}(\breve{k})\varpi^{\eta+n_{r} \alpha_{r}^{\vee}}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}_{-\alpha_{1}} (z_{r}^{-\langle\alpha_{r}^{\vee},\alpha_{1}\rangle}z_{1})\cdots\mathbf{U}_{- \alpha_{r-1}}(z_{r}^{-\langle\alpha_{r}^{\vee},\alpha_{r-1}\rangle}z_{r-1}) \tilde{s}_{\alpha_{r}}\mathbf{U}_{\alpha_{r}}(\breve{k}).\]
As the simple reflections \(s_{\alpha_{i}}\) are distinct, (a) is equivalent to the following inclusion
\[\varpi^{\eta+n_{r}\alpha_{r}^{\vee}}\mathbf{U}_{-\alpha_{1}}(z_{r}^{-\langle \alpha_{r}^{\vee},\alpha_{1}\rangle}z_{1})\cdots\mathbf{U}_{-\alpha_{r-1}}(z_{r }^{-\langle\alpha_{r}^{\vee},\alpha_{r-1}\rangle}z_{r-1})\in\mathbf{U}(\breve{k })\dot{c}\tilde{s}_{\alpha_{r}}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}( \breve{k}).\]
Now the statement follows by induction hypothesis.
For \(w\in W\) we set \(\Phi^{+}_{w\sigma}=\Phi^{+}\cap(w\sigma)^{-1}(\Phi^{-})\). Then \(\Phi^{+}_{c\sigma}=\{\beta_{1},\ldots,\beta_{r}\}\), where \(\beta_{i}=\sigma^{-1}s_{\alpha_{r}}\cdots s_{\alpha_{i+1}}(\alpha_{i})\in\Phi^{+}\) for \(1\leqslant i\leqslant r\). Note that \(\mathbf{U}_{c}=\prod_{i=1}^{r}\mathbf{U}_{\sigma(\beta_{i})}\).
**Lemma 1.2**.: _The root subgroups \(\mathbf{U}_{\beta_{i}}\) for \(1\leqslant i\leqslant r\) commute with each other._
Proof.: Assume otherwise. Then \(\beta_{i}+\beta_{i^{\prime}}\in\Phi^{+}_{c\sigma}\) for some \(1\leqslant i<i^{\prime}\leqslant r\). So \(\beta_{i}+\beta_{i^{\prime}}=\beta_{i^{\prime\prime}}\) for some \(1\leqslant i^{\prime\prime}\leqslant r\). As \(c\) is product of distinct simple reflections, it follows that
\[\sigma(\beta_{l})\in\alpha_{l}+\sum_{l+1\leqslant j\leqslant r}\mathbb{Z}_{ \geqslant 0}\alpha_{j}.\]
Thus \(i^{\prime\prime}=i\), which is impossible.
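To illustrate the roots \(\beta_{i}\) and Lemma 1.2 in a concrete case (our example): if \(\mathbf{G}\) is split of type \(A_{2}\), so that \(\sigma\) acts trivially on \(\Phi\), and \(c=s_{\alpha_{1}}s_{\alpha_{2}}\), then

\[\beta_{1}=s_{\alpha_{2}}(\alpha_{1})=\alpha_{1}+\alpha_{2},\qquad\beta_{2}=\alpha_{2},\]

so \(\Phi^{+}_{c\sigma}=\{\alpha_{1}+\alpha_{2},\alpha_{2}\}\). Since the sum of these two roots is not a root, the root subgroups \(\mathbf{U}_{\alpha_{1}+\alpha_{2}}\) and \(\mathbf{U}_{\alpha_{2}}\) indeed commute, as Lemma 1.2 asserts.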
Let \(J\subseteq\{1,\ldots,r\}\) and \(\lambda\in X_{*}(\mathbf{T})_{\mathbb{Q}}\) such that \(\langle\lambda,\beta_{i}\rangle\in\mathbb{Z}\) for \(i\in J\). We define
\[H^{\lambda}_{c\sigma,J}=\prod_{i\in J}\mathbf{U}_{\beta_{i}}(\varpi^{\langle\lambda,\beta_{i}\rangle}\mathcal{O}_{\breve{k}}^{\times})\prod_{i\in\{1,\ldots,r\}-J}\mathbf{U}_{\beta_{i}}(\varpi^{\lceil\langle\lambda,\beta_{i}\rangle\rceil}\mathcal{O}_{\breve{k}}),\]
which is independent of the choice of the product order by Lemma 1.2.
For a simple root \(\alpha\in\Pi\) let \(\omega_{\alpha}\in\mathbb{R}\Phi\) be the fundamental weight corresponding to \(\alpha\). For \(1\leqslant i\leqslant r\) let \(\omega_{i}=\sum_{\alpha}\omega_{\alpha}\), where \(\alpha\) ranges over the \(\sigma\)-orbit of \(\alpha_{i}\). Let \(\Phi^{\vee}\subseteq X_{*}(\mathbf{T})\) denote the set of coroots of \(\mathbf{T}\) in \(\mathbf{G}\).
**Lemma 1.3**.: _Let \(\mu\in X_{*}(\mathbf{T})\) and \(b\in G(\breve{k})\) such that \(\kappa(\varpi^{\mu})=\kappa(b)\). Then there exists a unique vector \(\lambda\in X_{*}(\mathbf{T})_{\mathbb{Q}}\) modulo \(X_{*}(\mathbf{Z})_{\mathbb{Q}}^{\sigma}\) such that \(\mu-\nu(b)=\lambda-c\sigma(\lambda)\), where \(\mathbf{Z}\) is the center of \(\mathbf{G}\). Moreover,_
_(1) \(\langle\lambda,\gamma\rangle+\langle\mu,c\sigma(\gamma)\rangle-\langle\lambda, c\sigma(\gamma)\rangle=\langle\nu(b),c\sigma(\gamma)\rangle\) for \(\gamma\in\Phi\);_
_(2) \(\langle\lambda,\beta_{i}\rangle=\langle\mu-\nu(b),\omega_{i}\rangle\) for \(1\leqslant i\leqslant r\);_
_(3) \(\langle\mu,\alpha_{1}\rangle-\langle\lambda,\beta_{1}\rangle-\langle\lambda, \alpha_{1}\rangle=\langle\nu(b),\alpha_{1}\rangle\)._
Proof.: As \(\kappa(\varpi^{\mu})=\kappa(b)\) we have \(\mu-\nu(b)\in\mathbb{Q}\Phi^{\vee}\). As \(1-c\sigma\) restricts to an automorphism of \(\mathbb{Q}\Phi^{\vee}\), there exists a unique \(\lambda\in\mathbb{Q}\Phi^{\vee}\) such that
(a) \[\mu-\nu(b)=\lambda-c\sigma(\lambda).\]
The uniqueness of \(\lambda\) follows from the fact that \(1-c\sigma\) preserves each factor of the direct sum \(X_{*}(\mathbf{T})_{\mathbb{Q}}=\mathbb{Q}\Phi^{\vee}\oplus X_{*}(\mathbf{Z})_{\mathbb{Q}}\).
Now (1) follows directly from (a). Note that \(c\sigma(\lambda)=\sigma(\lambda)-\sum_{l=1}^{r}\langle\lambda,\beta_{l}\rangle\alpha_{l}^{\vee}\) and \(\sigma(\omega_{i})=\omega_{i}\). Thus, (2) follows from
\[\langle\mu-\nu(b),\omega_{i}\rangle=\langle\lambda-\sigma(\lambda),\omega_{i} \rangle+\sum_{l=1}^{r}\langle\lambda,\beta_{l}\rangle\langle\alpha_{l}^{\vee}, \omega_{i}\rangle=\langle\lambda,\beta_{i}\rangle.\]
Finally, (3) follows from (1) by noticing that \(-\alpha_{1}=c\sigma(\beta_{1})\).
**Definition 1.4**.: Let \(\mu,b,\lambda\) be as in Lemma 1.3. Set
\[J_{\nu(b)}=\{1\leqslant i\leqslant r;\langle\nu(b),\alpha_{i}\rangle\neq 0\}.\]
By [11, Lemma 3.5],
\[\langle\lambda,\beta_{i}\rangle=\langle\mu-\nu(b),\omega_{i}\rangle\in\mathbb{Z} \text{ for }i\in J_{\nu(b)}.\]
We define
\[H_{c,\mu,b}={}^{\sigma}H^{\lambda}_{c\sigma,J_{\nu(b)}}\subseteq\mathbf{U}_{c} (\breve{k}),\]
which is independent of the choice of \(\lambda\in X_{*}(\mathbf{T})\).
Now Theorem 0.1 amounts to the following result.
**Theorem 1.5**.: _Let \(\mu\in X_{*}(\mathbf{T})\) and \(b\in\mathbf{G}(\breve{k})\) such that \(\kappa(\varpi^{\mu})=\kappa(b)\). Let \(\dot{c}\in\varpi^{\mu}\mathbf{N}(\mathcal{O}_{\breve{k}})\) be a lift of \(c\). Then_
\[[b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})=\dot{c}H_{c,\mu,b}=\dot{c}\check{s}_{ \alpha_{1}}H_{\mu,b,\alpha_{1}}\cdots\check{s}_{\alpha_{r}}H_{\mu,b,\alpha_{r}},\]
_where_
\[H_{\mu,b,\alpha_{i}}=\begin{cases}\mathbf{U}_{\alpha_{i}}(\varpi^{\langle\mu-\nu(b),\omega_{i}\rangle}\mathcal{O}_{\breve{k}}^{\times}),&\text{ if }i\in J_{\nu(b)},\\ \mathbf{U}_{\alpha_{i}}(\varpi^{\lceil\langle\mu-\nu(b),\omega_{i}\rangle\rceil}\mathcal{O}_{\breve{k}}),&\text{ otherwise.}\end{cases}\]
The following result is a special case of Theorem 1.5.
**Theorem 1.6**.: _Let \(\mu,b,\lambda\) be as in Lemma 1.3. Let \(\dot{c}\in\varpi^{\mu}\mathbf{N}(\mathcal{O}_{\breve{k}})\) be a lift of \(c\). Suppose that \(\lambda\in X_{*}(\mathbf{T})\). Then_
\[[b]\cap\dot{c}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}_{c}(\breve{k})= \dot{c}\mathbf{T}(\mathcal{O}_{\breve{k}})H_{c,\mu,b},\]
_that is, \([b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})=\dot{c}H_{c,\mu,b}\)._
Now we prove Theorem 1.5 using Theorem 1.6.
Proof of Theorem 1.5.: Let \(\lambda\in X_{*}(\mathbf{T})_{\mathbb{Q}}\) such that \(\mu-\nu(b)=\lambda-c\sigma(\lambda)\). Choose \(N\in\mathbb{Z}_{\geqslant 1}\) such that \(\lambda\in\frac{1}{N}X_{*}(\mathbf{T})\). Let \(\breve{k}_{N}=\breve{k}[\varpi^{1/N}]\), and denote by \([b]_{N}\) the \(\mathbf{G}(\breve{k}_{N})\)-\(\sigma\)-conjugacy class of \(b\). Applying Theorem 1.6 to the field \(\breve{k}_{N}\) we have
\[[b]_{N}\cap\dot{c}\mathbf{U}_{c}(\breve{k}_{N})=\dot{c}H_{c,\mu,b,N},\]
where \(H_{c,\mu,b,N}\) is defined similarly as \(H_{c,\mu,b}\) by replacing \(\breve{k}\) with \(\breve{k}_{N}\). Thus
\[[b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})\subseteq[b]_{N}\cap\dot{c}\mathbf{U}_ {c}(\breve{k}_{N})\cap\dot{c}\mathbf{U}_{c}(\breve{k})=\dot{c}H_{c,\mu,b,N} \cap\dot{c}\mathbf{U}_{c}(\breve{k})=\dot{c}H_{c,\mu,b}.\]
On the other hand, for any \(x\in\dot{c}H_{c,\mu,b}\subseteq[b]_{N}\) we have \(\nu(x)=\nu(b)\) and \(\kappa(x)=\kappa(\dot{c})=\kappa(b)\). So \(x\in[b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})\), and the equality \([b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})=\dot{c}H_{c,\mu,b}\) follows.
## 2. The translation case
The aim of this section is to study the intersection \([b]\cap\dot{c}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}_{c}(\breve{k})\) when \(b=\varpi^{\chi}\) for some \(\chi\in X_{*}(\mathbf{T})\). Let \(w_{0}\in W\) be the longest element.
First we recall two statements which are essentially proved in Proposition (2.2) and Corollary (2.5) of [21] respectively.
**Proposition 2.1**.: _We have_
_(1) \({}^{w_{0}}\mathbf{U}(\breve{k})\cap\mathbf{B}(\breve{k})c\mathbf{B}(\breve{k })=\mathbf{U}_{-\alpha_{1}}(\breve{k}^{\times})\cdots\mathbf{U}_{-\alpha_{r} }(\breve{k}^{\times})\);_
_(2) if \(g\in\mathbf{G}(\breve{k})\) and \(b\in\mathbf{T}(\breve{k})\) satisfies that \(g^{-1}b\sigma(g)\in\mathbf{B}(\breve{k})c\mathbf{B}(\breve{k})\), then \(g\in\mathbf{B}(\breve{k})w_{0}\mathbf{B}(\breve{k})\)._
The next result is Proposition 8.9 of [26]. See [10] and [18] for generalizations.
**Theorem 2.2**.: _The map \((x,y)\mapsto x^{-1}y\sigma(x)\) induces a bijection_
\[\Psi_{\dot{c}}:\mathbf{U}(\breve{k})\times\dot{c}\mathbf{T}(\mathcal{O}_{ \breve{k}})\mathbf{U}_{c}(\breve{k})\stackrel{{\sim}}{{ \longrightarrow}}\mathbf{U}(\breve{k})\dot{c}\mathbf{T}(\mathcal{O}_{\breve{k} })\mathbf{U}(\breve{k}).\]
**Lemma 2.3**.: _Let \(\mu,\chi\in X_{*}({\bf T})\) such that \(\kappa(\varpi^{\mu})=\kappa(\varpi^{\chi})\). Then there exists a unique cocharacter \(\eta\in X_{*}({\bf T})\) such that \(\mu-\eta\in\sum_{i=1}^{r}\mathbb{Z}\alpha_{i}^{\vee}\) and \(\chi-\eta\in(1-\sigma)X_{*}({\bf T})\)._
Proof.: The statement follows from the natural isomorphism \(\sum_{i=1}^{r}\mathbb{Z}\alpha_{i}^{\vee}\cong\mathbb{Z}\Phi^{\vee}/((1-\sigma )\mathbb{Z}\Phi^{\vee})\).
**Proposition 2.4**.: _Let \(b=\varpi^{\chi}\) for some \(\chi\in X_{*}({\bf T})\) and let \(\dot{c}\in\varpi^{\mu}{\bf N}(\mathcal{O}_{\breve{k}})\) be a lift of \(c\) for some \(\mu\in X_{*}({\bf T})\). Then \([b]\cap\dot{c}{\bf U}_{c}(\breve{k})\neq\emptyset\) if and only if \(\kappa(\dot{c})=\kappa(b)\). In this case,_
(a) \[[b]\cap\dot{c}{\bf T}(\mathcal{O}_{\breve{k}}){\bf U}_{c}(\breve{k})={\rm pr} _{2}\Psi_{\dot{c}}^{-1}(\varpi^{\eta}{\bf T}(\mathcal{O}_{\breve{k}})K_{\mu- \eta}),\]
_where \({\rm pr}_{2}:{\bf U}(\breve{k})\times\dot{c}{\bf T}(\mathcal{O}_{\breve{k}}){ \bf U}_{c}(\breve{k})\to\dot{c}{\bf T}(\mathcal{O}_{\breve{k}}){\bf U}_{c}( \breve{k})\) is the natural projection and \(\eta\in X_{*}({\bf T})\) is the unique cocharacter such that \(\mu-\eta\in\sum_{i=1}^{r}\mathbb{Z}\alpha_{i}^{\vee}\) and \(\chi-\eta\in(1-\sigma)(X_{*}({\bf T}))\) as in Lemma 2.3._
Proof.: As \([b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})\neq\emptyset\) implies that \(\kappa(b)=\kappa(\dot{c})\), it suffices to deal with the case that \(\kappa(\varpi^{\chi})=\kappa(b)=\kappa(\dot{c})=\kappa(\varpi^{\mu})\). Let \(b^{\prime}=\varpi^{w_{0}(\chi)}\), where \(w_{0}=\sigma(w_{0})\) is the longest element of \(W\). Then \([b^{\prime}]=[b]\).
Let \(g\in{\bf G}(\breve{k})\) such that \(g^{-1}b^{\prime}\sigma(g)\in\dot{c}{\bf T}(\mathcal{O}_{\breve{k}}){\bf U}_{c} (\breve{k})\). By Proposition 2.1 (2), \(g\in{\bf B}(\breve{k})w_{0}{\bf B}(\breve{k})\). Write \(g=u\dot{w}_{0}u^{\prime}\) for some \(u,u^{\prime}\in{\bf U}(\breve{k})\) and some lift \(\dot{w}_{0}\in{\bf N}(\breve{k})\) of \(w_{0}\). Then
\[h:=\dot{w}_{0}^{-1}u^{-1}b^{\prime}\sigma(u)\sigma(\dot{w}_{0})\in{\bf U}( \breve{k})\dot{c}{\bf T}(\mathcal{O}_{\breve{k}}){\bf U}(\breve{k}).\]
As \(b^{\prime}=\varpi^{w_{0}(\chi)}\) and \(\sigma(w_{0})=w_{0}\), it follows that \(h\in\varpi^{\eta}{\bf T}(\mathcal{O}_{\breve{k}})^{w_{0}}{\bf U}(\breve{k})\) for some \(\eta\in\chi+(1-\sigma)X_{*}({\bf T})\). By Proposition 2.1 (1) we have
\[h\in\varpi^{\eta}{\bf T}(\mathcal{O}_{\breve{k}}){\bf U}_{-\alpha_{1}}(\breve{ k}^{\times})\cdots{\bf U}_{-\alpha_{r}}(\breve{k}^{\times}).\]
By Lemma 1.1 we have \(\mu-\eta\in\sum_{i=1}^{r}\mathbb{Z}\alpha_{i}^{\vee}\) and \(h\in\varpi^{\eta}{\bf T}(\mathcal{O}_{\breve{k}})K_{\mu-\eta}\). In particular, \(\eta\) is uniquely determined as in Lemma 2.3.
As \(u^{\prime}\in{\bf U}(\breve{k})\) and \(g^{-1}b^{\prime}\sigma(g)={u^{\prime}}^{-1}h\sigma(u^{\prime})\), it follows by definition that \({\rm pr}_{2}\Psi_{\dot{c}}^{-1}(h)=g^{-1}b^{\prime}\sigma(g)\in\dot{c}{\bf T}(\mathcal{O}_{\breve{k}}){\bf U}_{c}(\breve{k})\) and hence
\[[b]\cap\dot{c}{\bf T}(\mathcal{O}_{\breve{k}}){\bf U}_{c}(\breve{k})\subseteq{ \rm pr}_{2}\Psi_{\dot{c}}^{-1}(\varpi^{\eta}{\bf T}(\mathcal{O}_{\breve{k}})K_{ \mu-\eta}).\]
Conversely, by Lemma 1.1 and that \(\varpi^{\mu}{\bf T}(\mathcal{O}_{\breve{k}}){\bf U}(\breve{k})\subseteq[\varpi ^{\eta}]=[b]\) we have
\[\varpi^{\eta}{\bf T}(\mathcal{O}_{\breve{k}})K_{\mu-\eta}\subseteq[\varpi^{ \eta}]\cap{\bf U}(\breve{k})\dot{c}{\bf T}(\mathcal{O}_{\breve{k}}){\bf U}( \breve{k})=[b]\cap{\bf U}(\breve{k})\dot{c}{\bf T}(\mathcal{O}_{\breve{k}}){ \bf U}(\breve{k}).\]
So \([b]\cap\dot{c}{\bf T}(\mathcal{O}_{\breve{k}}){\bf U}_{c}(\breve{k})\supseteq{ \rm pr}_{2}\Psi_{\dot{c}}^{-1}(\varpi^{\eta}{\bf T}(\mathcal{O}_{\breve{k}})K_{ \mu-\eta})\) and (a) follows.
## 3. A digression
In this section, we digress to introduce a method to compute the right hand side of Proposition 2.4 (a). For simplicity we assume that \(k=\mathbb{F}_{q}((\varpi))\), where \(\mathbb{F}_{q}\) is a finite field with \(q\) elements.
Let \(\lambda\in X_{*}(\mathbf{T})\) and \(J\subseteq\{1,\ldots,r\}\). By definition, each point \(x=(x_{i}^{l})_{i,l}\) in \(H^{\lambda}_{c\sigma,J}\) can be uniquely written as
\[x=\prod_{i=1}^{r}\mathbf{U}_{\beta_{i}}(\varpi^{\langle\lambda,\beta_{i} \rangle}\sum_{l=0}^{\infty}x_{i}^{l}\varpi^{l}),\]
where \(x_{i}^{l}\in\overline{\mathbb{F}}_{q}^{\times}\) if \(l=0\) and \(i\in J\), and \(x_{i}^{l}\in\overline{\mathbb{F}}_{q}\) otherwise. Let \((X_{i}^{l})_{i,l}\) be the coordinates of the points \(x=(x_{i}^{l})_{i,l}\in H^{\lambda}_{c\sigma,J}\). Set \(\mathcal{R}=R((\varpi))\), where
\[R=\overline{\mathbb{F}}_{q}[X_{i}^{l};l\in\mathbb{Z}_{\geqslant 0},\ 1\leqslant i \leqslant r][(X_{j}^{0})^{-1};j\in J].\]
Let \(\epsilon\in\mathbb{R}\). Let \(\mathcal{R}^{\geqslant\epsilon}\) be the linear space of elements \(\sum_{l}f^{l}\varpi^{l}\in\mathcal{R}\) such that
* \(f^{l}=0\) if \(l<\epsilon\);
* \(f^{l}\in\overline{\mathbb{F}}_{q}[X_{i}^{l^{\prime}};0\leqslant l^{\prime} \leqslant l-\epsilon,1\leqslant i\leqslant r][(X_{j}^{0})^{-1};j\in J]\) for \(l\geqslant\epsilon\).
Similarly, let \(\mathcal{R}^{>\epsilon}\) be the linear space of elements \(\sum_{l}f^{l}\varpi^{l}\) such that
* \(f^{l}=0\) if \(l\leqslant\epsilon\);
* \(f^{l}\in\overline{\mathbb{F}}_{q}[X_{i}^{l^{\prime}};0\leqslant l^{\prime} <l-\epsilon,1\leqslant i\leqslant r][(X_{j}^{0})^{-1};j\in J]\) for \(l>\epsilon\).
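For instance (a simple illustration of the two gradings, in the case \(\epsilon=0\)): the element \(X_{1}^{0}+X_{2}^{1}\varpi\) belongs to \(\mathcal{R}^{\geqslant 0}\), since its \(\varpi^{0}\)-coefficient involves only the variables \(X_{i}^{0}\) and its \(\varpi^{1}\)-coefficient only variables \(X_{i}^{l^{\prime}}\) with \(l^{\prime}\leqslant 1\); it does not belong to \(\mathcal{R}^{>0}\), since membership there would require the \(\varpi^{0}\)-coefficient to vanish and the \(\varpi^{1}\)-coefficient to involve only the variables \(X_{i}^{0}\).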
The following results are immediate by definition.
**Lemma 3.1**.: _We have the following properties._
_(1) \(\mathcal{R}^{\geqslant\epsilon}\mathcal{R}^{\geqslant\epsilon^{\prime}}\subseteq \mathcal{R}^{\geqslant\epsilon+\epsilon^{\prime}}\) and \(\mathcal{R}^{\geqslant\epsilon}\mathcal{R}^{>\epsilon^{\prime}}\subseteq \mathcal{R}^{>\epsilon+\epsilon^{\prime}}\) for \(\epsilon,\epsilon^{\prime}\in\mathbb{R}\)._
_(2) \(d\mathcal{R}^{\geqslant\epsilon}=\mathcal{R}^{\geqslant\epsilon+m}\) and \(d\mathcal{R}^{>\epsilon}=\mathcal{R}^{>\epsilon+m}\) for \(d\in\varpi^{m}\mathcal{O}_{k}^{\times}\) with \(m\in\mathbb{Z}\)._
_(3) \(f_{j}^{-1}\in\mathcal{R}^{\geqslant-\langle\lambda,\beta_{j}\rangle}\) for \(j\in J\), where \(f_{j}=\varpi^{\langle\lambda,\beta_{j}\rangle}\sum_{l=0}^{\infty}X_{j}^{l} \varpi^{l}\in\mathcal{R}^{\geqslant\langle\lambda,\beta_{j}\rangle}\)._
We set \(\mathbf{U}_{\alpha}(\mathcal{R})^{\geqslant\lambda}=\mathbf{U}_{\alpha}( \mathcal{R}^{\geqslant\langle\lambda,\alpha\rangle})\) for \(\alpha\in\Phi\). It follows from Lemma 3.1 that
\[[\mathbf{U}_{\alpha}(\mathcal{R})^{\geqslant\lambda},\mathbf{U}_{\beta}( \mathcal{R})^{\geqslant\lambda}]\subseteq\prod_{i,j\geqslant 1}\mathbf{U}_{i \alpha+j\beta}(\mathcal{R})^{\geqslant\lambda}\ \ \text{for}\ -\alpha\neq\beta\in\Phi.\]
So for \(w\in W\) we can define the following subgroup
\[\mathbf{U}_{w\sigma}(\mathcal{R})^{\geqslant\lambda}=\prod_{\gamma\in\Phi^{ \pm}_{w\sigma}}\mathbf{U}_{\gamma}(\mathcal{R})^{\geqslant\lambda}\subseteq \mathbf{U}_{w\sigma}(\mathcal{R}).\]
Moreover, we can define \(\mathbf{U}_{\gamma}(\mathcal{R})^{>\lambda}\) and \(\mathbf{U}_{w\sigma}(\mathcal{R})^{>\lambda}\) in a similar way. Note that \(\mathbf{U}_{w\sigma}(\mathcal{R})^{>\lambda}\) is a normal subgroup of \(\mathbf{U}_{w\sigma}(\mathcal{R})^{\geqslant\lambda}\).
Note that each element of \(\mathbf{U}(\mathcal{R})\) defines a map from \(H^{\lambda}_{c\sigma,J}\) to \(\mathbf{U}(\breve{k})\) in a natural way.
**Lemma 3.2**.: _Let \(g\in h\mathbf{U}_{c\sigma}(\mathcal{R})^{>\lambda}\) with \(h=\prod_{i=1}^{r}\mathbf{U}_{\beta_{i}}(f_{i})\), where_
\[f_{i}=\varpi^{\langle\lambda,\beta_{i}\rangle}\sum_{l=0}^{\infty}X_{i}^{l} \varpi^{l}\text{ for }1\leqslant i\leqslant r.\]
_Then \(g\) induces a surjective endomorphism of \(H^{\lambda}_{c\sigma,J}\)._
Proof.: Write \(g=\prod_{i=1}^{r}{\bf U}_{\beta_{i}}(g_{i})\) with \(g_{i}=\varpi^{\langle\lambda,\beta_{i}\rangle}\sum_{l=0}^{\infty}g_{i}^{l}\varpi ^{l}\in\mathcal{R}\). By assumption and Lemma 1.2 we have
(a) \(g_{i}^{0}=X_{i}^{0}\);
(b) \(g_{i}^{l}=X_{i}^{l}+\delta_{i}^{l}\) with \(\delta_{i}^{l}\in\overline{\mathbb{F}}_{q}[X_{i^{\prime}}^{l^{\prime}};0\leqslant l^{\prime}<l,\ 1\leqslant i^{\prime}\leqslant r][(X_{j}^{0})^{-1};j\in J]\).

By (a) we have \(g(H^{\lambda}_{c\sigma,J})\subseteq H^{\lambda}_{c\sigma,J}\). It remains to show that \(g(H^{\lambda}_{c\sigma,J})=H^{\lambda}_{c\sigma,J}\).
Let \(y=(y_{i}^{l})\in H^{\lambda}_{c\sigma,J}\). We construct a point \(x=(x_{i}^{l})\in H^{\lambda}_{c\sigma,J}\) inductively as follows. If \(l=0\), we set \(x_{i}^{l}=y_{i}^{l}\). Suppose that the point \((x_{i}^{l^{\prime}})_{l^{\prime}<l}\) is already constructed. In view of (b) we set
\[x_{i}^{l}=y_{i}^{l}-\delta_{i}^{l}((x_{i}^{l^{\prime}})_{l^{\prime}<l})\]
and this finishes the construction of \(x\). It follows from (a) and (b) that \(y_{i}^{l}=g_{i}^{l}(x)\) and hence \(g(x)=y\). So \(g(H^{\lambda}_{c\sigma,J})=H^{\lambda}_{c\sigma,J}\) as desired.
Let \(\gamma\in\Phi^{+}\). Following [18] we define
\[\operatorname{Cross}_{c\sigma}(\gamma)=\Phi^{+}\cap c\sigma(\mathbb{Z}_{\geqslant 1 }\gamma+\mathbb{Z}_{\geqslant 0}\Phi^{+}_{c\sigma}).\]
For \(\Gamma\subseteq\Phi^{+}\) we set \(\operatorname{Cross}_{c\sigma}(\Gamma)=\cup_{\gamma\in\Gamma}\operatorname{ Cross}_{c\sigma}(\gamma)\).
The following result is proved in Theorem B and Lemma 2.37 of [18].
**Theorem 3.3**.: _We have \(\operatorname{Cross}^{d}_{c\sigma}(\Phi^{+})=\emptyset\) for \(d\gg 0\). Here \(\operatorname{Cross}^{d}_{c\sigma}\) denotes the \(d\)-fold composition of \(\operatorname{Cross}_{c\sigma}\)._
Let \(\mathcal{R}\) be the \(\breve{k}\)-algebra as in §3.1. For a lift \(\dot{c}\in{\bf N}(\breve{k})\) of \(c\) we define
\[\Xi_{\dot{c}\sigma}:{\bf U}(\mathcal{R})\times{\bf U}_{c\sigma}(\mathcal{R}) \longrightarrow{\bf U}(\mathcal{R})\times{\bf U}_{c\sigma}(\mathcal{R}),\ (x,y)\longmapsto(x^{\prime},y^{\prime})\]
such that \(x^{\prime}\dot{c}\sigma y^{\prime}=\dot{c}\sigma yx\).
**Corollary 3.4**.: _The image of \(\operatorname{pr}_{1}\circ\Xi_{\dot{c}\sigma}^{d}\) is trivial for \(d\gg 0\). Here \(\Xi_{\dot{c}\sigma}^{d}\) is the \(d\)-fold composition of \(\Xi_{\dot{c}\sigma}\), and \(\operatorname{pr}_{1}:{\bf U}(\mathcal{R})\times{\bf U}_{c\sigma}(\mathcal{R})\rightarrow{\bf U}(\mathcal{R})\) is the natural projection._
Proof.: For \(d\in\mathbb{Z}_{\geqslant 0}\) let \({\bf U}_{d}\subseteq{\bf U}\) be the subgroup generated by \({\bf U}_{\alpha}\) for \(\alpha\in\operatorname{Cross}^{d}_{c\sigma}(\Phi^{+})\). By Theorem 3.3, it suffices to show that
\[\Xi_{\dot{c}\sigma}({\bf U}_{d}(\mathcal{R})\times{\bf U}_{c\sigma}(\mathcal{ R}))\subseteq{\bf U}_{d+1}(\mathcal{R})\times{\bf U}_{c\sigma}(\mathcal{R}),\]
that is, \(\dot{c}\sigma{\bf U}_{c\sigma}(\mathcal{R}){\bf U}_{\alpha}(\mathcal{R}) \subseteq{\bf U}_{d+1}(\mathcal{R})\dot{c}\sigma{\bf U}_{c\sigma}(\mathcal{ R})\) for \(\alpha\in\operatorname{Cross}^{d}_{c\sigma}(\Phi^{+})\). Let
\[\Gamma=\mathbb{Z}_{\geqslant 1}\alpha+\mathbb{Z}_{\geqslant 0}\Phi^{+}_{c \sigma}\subseteq\Phi^{+}.\]
Note that there is a decomposition
\[\prod_{\gamma\in\Gamma}{\bf U}_{\gamma}=\prod_{\gamma\in\Gamma\cap\Phi^{+}_{w_ {0}c\sigma}}{\bf U}_{\gamma}\prod_{\gamma\in\Gamma\cap\Phi^{+}_{c\sigma}}{\bf U }_{\gamma}\]
of unipotent subgroups of \({\bf G}\). For \(x\in{\bf U}_{\alpha}(\mathcal{R})\) and \(y\in{\bf U}_{c\sigma}(\mathcal{R})\) we have
\[yxy^{-1}\in\prod_{\gamma\in\Gamma}{\bf U}_{\gamma}(\mathcal{R}).\]
Write \(yxy^{-1}=x^{\prime}y^{\prime}\), where \(x^{\prime}\in\prod_{\gamma\in\Gamma\cap\Phi^{+}_{w_{0}c\sigma}}\mathbf{U}_{ \gamma}(\mathcal{R})\) and \(y^{\prime}\in\prod_{\gamma\in\Gamma\cap\Phi^{+}_{c\sigma}}\mathbf{U}_{\gamma}( \mathcal{R})\). Thus
\[\dot{c}\sigma yx=\dot{c}\sigma(x^{\prime})\dot{c}\sigma y^{\prime}y,\]
where \(y^{\prime}y\in\mathbf{U}_{c\sigma}(\mathcal{R})\) and
\[\dot{c}\sigma(x^{\prime})\in\prod_{\gamma\in\Phi^{+}\cap^{c\sigma}\Gamma} \mathbf{U}_{\gamma}(\mathcal{R})=\prod_{\gamma\in\operatorname{Cross}_{c \sigma}(\alpha)}\mathbf{U}_{\gamma}(\mathcal{R})\subseteq\mathbf{U}_{d+1}( \mathcal{R})\]
as desired.
## 4. Proof of Theorem 1.6
First we show that the main result is independent of the choice of minimal \(\sigma\)-Coxeter elements.
Recall that two elements \(w,w^{\prime}\in W\) are equivalent by \(\sigma\)-cyclic shifts if there exists a sequence
\[w=w_{0},w_{1},\dots,w_{n}=w^{\prime}\]
such that for each \(1\leqslant i\leqslant n\) the elements \(w_{i-1},w_{i}\) have the same length, and are \(\sigma\)-conjugate by a simple reflection.
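For instance, in type \(A_{2}\) with \(\sigma=\mathrm{id}\) (a standard example included here only for illustration), the two Coxeter elements \(s_{1}s_{2}\) and \(s_{2}s_{1}\) are equivalent by a single \(\sigma\)-cyclic shift: both have length \(2\) and \(s_{1}(s_{1}s_{2})s_{1}=s_{2}s_{1}\).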
**Lemma 4.1**.: _Let \(c^{\prime}\) be another minimal \(\sigma\)-Coxeter element. Then Theorem 1.6 holds for \(c\) if and only if it holds for \(c^{\prime}\)._
Proof.: Note that all minimal \(\sigma\)-Coxeter elements are equivalent by \(\sigma\)-cyclic shifts. So we can assume that \(c^{\prime}=s_{\alpha_{1}}c\sigma(s_{\alpha_{1}})\). Suppose that \(\dot{c}\in\varpi^{\mu}\mathbf{N}(\mathcal{O}_{\breve{k}})\) and let \(\dot{c}^{\prime}=\tilde{s}_{\alpha_{1}}^{-1}\dot{c}\sigma(\tilde{s}_{\alpha_{1 }})\in\varpi^{s_{\alpha_{1}}(\mu)}\mathbf{N}(\mathcal{O}_{\breve{k}})\). Note that there are natural isomorphisms
\[\dot{c}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}_{c}(\breve {k}) \cong\varpi^{\mu}\mathbf{T}(\mathcal{O}_{\breve{k}})\times\tilde{s} _{\alpha_{1}}\mathbf{U}_{\alpha_{1}}(\breve{k})\times\dots\times\tilde{s}_{ \alpha_{r}}\mathbf{U}_{\alpha_{r}}(\breve{k})\] \[\cong\tilde{s}_{\alpha_{1}}\mathbf{U}_{\alpha_{1}}(\breve{k}) \times\varpi^{s_{\alpha_{1}}(\mu)}\mathbf{T}(\mathcal{O}_{\breve{k}})\times \tilde{s}_{\alpha_{2}}\mathbf{U}_{\alpha_{2}}(\breve{k})\times\dots\times \tilde{s}_{\alpha_{r}}\mathbf{U}_{\alpha_{r}}(\breve{k}).\]
For \(z\in\varpi^{s_{\alpha_{1}}(\mu)}\mathbf{T}(\mathcal{O}_{\breve{k}})\) and \(u_{i}\in\mathbf{U}_{\alpha_{i}}(\breve{k})\) the map
\[\tilde{s}_{\alpha_{1}}u_{1}z\tilde{s}_{\alpha_{2}}u_{2}\cdots\tilde{s}_{ \alpha_{r}}u_{r}\longmapsto z\tilde{s}_{\alpha_{2}}u_{2}\cdots\tilde{s}_{ \alpha_{r}}u_{r}{}^{\sigma}(\tilde{s}_{\alpha_{1}}u_{1})\]
induces a bijection
\[\psi:\dot{c}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}_{c}(\breve{k}) \cong\dot{c}^{\prime}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}_{c^{ \prime}}(\breve{k}).\]
Thus \(\psi([b]\cap\dot{c}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}_{c}(\breve{ k}))=[b]\cap\dot{c}^{\prime}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}_{c^{ \prime}}(\breve{k})\). Now the statement follows from the following equality
\[\psi(\dot{c}\mathbf{T}(\mathcal{O}_{\breve{k}})H_{c,\mu,b})\] \[=\psi(\varpi^{\mu}\mathbf{T}(\mathcal{O}_{\breve{k}})\tilde{s}_{ \alpha_{1}}H_{\mu,b,\alpha_{1}}\tilde{s}_{\alpha_{2}}H_{\mu,b,\alpha_{2}} \cdots\tilde{s}_{\alpha_{r}}H_{\mu,b,\alpha_{r}})\] \[=\varpi^{s_{\alpha_{1}}(\mu)}\mathbf{T}(\mathcal{O}_{\breve{k}} )\tilde{s}_{\alpha_{2}}H_{\mu,b,\alpha_{2}}\cdots\tilde{s}_{\alpha_{r}}H_{\mu,b,\alpha_{r}}\sigma(\tilde{\alpha_{1}})^{\sigma\varpi^{s_{\alpha_{1}}(\mu)}}H_ {\mu,b,\alpha_{1}}\] \[=\varpi^{s_{\alpha_{1}}(\mu)}\mathbf{T}(\mathcal{O}_{\breve{k}}) \tilde{s}_{\alpha_{2}}H_{s_{\alpha_{1}}(\mu),b,\alpha_{2}}\cdots\tilde{s}_{ \alpha_{r}}H_{s_{\alpha_{1}}(\mu),b,\alpha_{r}}\tilde{s}_{\sigma(\alpha_{1})}H _{s_{\alpha_{1}}(\mu),b,\sigma(\alpha_{1})}\] \[=\dot{c}^{\prime}\mathbf{T}(\mathcal{O}_{\breve{k}})H_{c^{\prime },s_{\alpha_{1}}(\mu),b},\]
where the third equality follows from the fact that
\[\langle s_{\alpha_{1}}(\mu)-\nu(b),\omega_{i}\rangle=\begin{cases}\langle\mu-\nu(b),\omega_{i}\rangle-\langle\mu,\alpha_{i}\rangle,&\text{ if }i=1;\\ \langle\mu-\nu(b),\omega_{i}\rangle,&\text{ otherwise}.\end{cases}\]
The proof is finished.
**Lemma 4.2**.: _Let \(\mu,b,\lambda\) be as in Theorem 1.6. Let \(\mathcal{R}\) be as in §3.1 associated to the pair \((\lambda,J_{\nu(b)})\). Then the map \(\Xi_{\dot{c}\sigma}\) in §3.2 induces endomorphisms of (1) \(\mathbf{U}(\mathcal{R})^{\geqslant\lambda}\times\mathbf{U}_{c\sigma}(\mathcal{R})^{\geqslant\lambda}\) and (2) \(\mathbf{U}(\mathcal{R})^{>\lambda}\times h\mathbf{U}_{c\sigma}(\mathcal{R})^{>\lambda}\) for \(h\in\mathbf{U}_{c\sigma}(\mathcal{R})^{\geqslant\lambda}\)._
Proof.: By the proof of Corollary 3.4 and that \(\mathbf{U}_{c\sigma}(\mathcal{R})^{\geqslant\lambda}\) normalizes \(\mathbf{U}_{c\sigma}(\mathcal{R})^{>\lambda}\), it suffices to show that
\[\mathring{c}\sigma\mathbf{U}_{w_{0}c\sigma}(\mathcal{R})^{>\lambda}\subseteq \mathbf{U}(\mathcal{R})^{>\lambda}\text{ and }\mathring{c}\sigma\mathbf{U}_{w_{0}c\sigma}( \mathcal{R})^{\geqslant\lambda}\subseteq\mathbf{U}(\mathcal{R})^{\geqslant \lambda},\]
where \(w_{0}\) is the longest element of \(W\). We only show the first inclusion. Indeed, for \(\gamma\in\Phi_{w_{0}c\sigma}^{+}\), that is, \(c\sigma(\gamma)\in\Phi^{+}\), we have
\[\mathring{c}\sigma\mathbf{U}_{\gamma}(\mathcal{R}^{>\langle\lambda,\gamma \rangle}) =\mathbf{U}_{c\sigma(\gamma)}(\mathcal{R}^{>\langle\mu,c\sigma( \gamma)\rangle+\langle\lambda,\gamma\rangle})=\mathbf{U}_{c\sigma(\gamma)}( \mathcal{R}^{>\langle\lambda,c\sigma(\gamma)\rangle+\langle\nu(b),c\sigma( \gamma)\rangle})\] \[\subseteq\mathbf{U}_{c\sigma(\gamma)}(\mathcal{R}^{>\langle \lambda,c\sigma(\gamma)\rangle}),\]
where the second equality follows from Lemma 1.3 (1), and the inclusion follows from the fact that \(\langle\nu(b),c\sigma(\gamma)\rangle\geqslant 0\), since \(c\sigma(\gamma)\in\Phi^{+}\). So the statement follows.
Let \(A,B\subseteq\mathbf{G}(\breve{k})\) and let \(C\) be a subgroup of \(\mathbf{G}(\breve{k})\). We write \(A\sigma\sim_{C}B\sigma\) if \(C\cdot_{\sigma}A=C\cdot_{\sigma}B\).
Proof of Theorem 1.6.: As \(\mu-\nu(b)=\lambda-c\sigma(\lambda)\) and \(\lambda\in X_{*}(\mathbf{T})\), it follows that \(\nu(b)\in X_{*}(\mathbf{T})\). Thus we may assume \(b=\varpi^{\eta}\) for some \(\eta\in X_{*}(\mathbf{T})\) (for example \(\eta=\nu(b)\)) such that the \(\sigma\)-average \(\eta^{\diamond}\) of \(\eta\) is dominant. Set \(\nu=\nu(b)=\eta^{\diamond}\) and \(J=J_{\nu(b)}\). By Lemma 4.1 we may assume \(\langle\nu,\alpha_{1}\rangle\neq 0\) if \(\nu\) is non-central.
By Lemma 2.3 we may assume further that \(\mu-\eta=\sum_{i=1}^{r}n_{i}\alpha_{i}^{\vee}\) with \(n_{i}\in\mathbb{Z}\) for \(1\leqslant i\leqslant r\). By Proposition 2.4, it is equivalent to show that
\[\varpi^{\eta}\mathbf{T}(\mathcal{O}_{\breve{k}})K_{\mu-\eta}\sigma\sim_{ \mathbf{U}(\breve{k})}\mathring{c}\sigma\mathbf{T}(\mathcal{O}_{\breve{k}})H _{c\sigma,J}^{\lambda}.\]
We argue by induction on \(r\). If \(r=0\), the statement is trivial. Suppose \(r\geqslant 1\). Let \(\mathbf{M}\) be the standard Levi subgroup generated by \(\mathbf{T}\) and the root subgroup \(\mathbf{U}_{\pm\alpha}\) for \(\alpha\in\Pi-C_{1}\), where \(C_{1}\subseteq\Pi\) is the \(\sigma\)-orbit of \(\alpha_{1}\). Let \(c_{\mathbf{M}}=s_{\alpha_{1}}c\) and let \(\dot{c}_{\mathbf{M}}\in\varpi^{\mu-n_{1}\alpha_{1}^{\vee}}\mathbf{N}(\mathcal{ O}_{\breve{k}})\) be a lift of \(c_{\mathbf{M}}\). By Lemma 1.3,
\[\langle\lambda,\beta_{i}\rangle=\langle\mu-\nu,\omega_{i}\rangle=\langle\mu- \eta^{\diamond},\omega_{i}\rangle=\langle\mu-\eta,\omega_{i}\rangle=n_{i} \text{ for }1\leqslant i\leqslant r.\]
Hence (using that \(\mu-\nu=\lambda-c\sigma(\lambda)\)) we have
\[\mu-n_{1}\alpha_{1}^{\vee}-\nu=\lambda-c_{\mathbf{M}}\sigma(\lambda).\]
By induction hypothesis and Proposition 2.4 (where we take \((\mathbf{G},\dot{c},b)=(\mathbf{M},\dot{c}_{\mathbf{M}},\varpi^{\eta})\), and note that \(\kappa_{\mathbf{M}}(b)=\kappa_{\mathbf{M}}(\dot{c}_{\mathbf{M}})\) and \(\nu_{\mathbf{M}}(b)=\nu\)),
(a) \[\varpi^{\eta}\mathbf{T}(\mathcal{O}_{\breve{k}})K_{\mu-n_{1}\alpha_{1}^{\vee}- \eta}^{\mathbf{M}}\sigma\sim_{\mathbf{U}_{\mathbf{M}}(\breve{k})}\dot{c}_{ \mathbf{M}}\sigma\mathbf{T}(\mathcal{O}_{\breve{k}})H_{c_{\mathbf{M}}\sigma,J ^{\mathbf{M}}}^{\lambda,\mathbf{M}},\]
where \(\mathbf{U}_{\mathbf{M}}=\mathbf{U}\cap\mathbf{M}\), \(J^{\mathbf{M}}=J-\{1\}\), and \(K^{\mathbf{M}}_{\mu-n_{1}\alpha_{1}^{\vee}-\eta}\), \(H^{\lambda,\mathbf{M}}_{c_{\mathbf{M}}\sigma,J^{\mathbf{M}}}\) are defined similarly for \(\mathbf{M}\). Note that
\[K_{\mu-\eta}=\mathbf{U}_{-\alpha_{1}}(\varpi^{m_{1}}\mathcal{O}_{\breve{k}}^{ \times})K^{\mathbf{M}}_{\mu-n_{1}\alpha_{1}^{\vee}-\eta},\]
where \(m_{1}=-n_{1}-\sum_{i=2}^{r}n_{i}\langle\alpha_{i}^{\vee},\alpha_{1}\rangle\). Thus
\[\varpi^{\eta}\mathbf{T}(\mathcal{O}_{\breve{k}})K_{\mu-\eta}\sigma\] \[=\varpi^{\eta}\mathbf{T}(\mathcal{O}_{\breve{k}})\mathbf{U}_{- \alpha_{1}}(\varpi^{m_{1}}\mathcal{O}_{\breve{k}}^{\times})K^{\mathbf{M}}_{ \mu-n_{1}\alpha_{1}^{\vee}-\eta}\sigma\] \[=\mathbf{U}_{-\alpha_{1}}(\varpi^{m_{1}-\langle\eta,\alpha_{1} \rangle}\mathcal{O}_{\breve{k}}^{\times})\varpi^{\eta}\mathbf{T}(\mathcal{O}_{ \breve{k}})K^{\mathbf{M}}_{\mu-n_{1}\alpha_{1}^{\vee}-\eta}\sigma\] \[\sim_{\mathbf{U}_{\mathbf{M}}(\breve{k})}\mathbf{U}_{-\alpha_{1} }(\varpi^{m_{1}-\langle\eta,\alpha_{1}\rangle}\mathcal{O}_{\breve{k}}^{\times })\dot{c}_{\mathbf{M}}\sigma\mathbf{T}(\mathcal{O}_{\breve{k}})H^{\lambda, \mathbf{M}}_{c_{\mathbf{M}}\sigma,J^{\mathbf{M}}}\] \[=\mathbf{U}_{-\alpha_{1}}(\varpi^{m_{1}-\langle\eta,\alpha_{1} \rangle}\mathcal{O}_{\breve{k}}^{\times})\varpi^{\mu-n_{1}\alpha_{1}^{\vee}} \mathbf{T}(\mathcal{O}_{\breve{k}})\tilde{s}_{\alpha_{2}}\cdots\tilde{s}_{ \alpha_{r}}\sigma H^{\lambda,\mathbf{M}}_{c_{\mathbf{M}}\sigma,J^{\mathbf{M}}}\] \[=\varpi^{\mu-n_{1}\alpha_{1}^{\vee}}\mathbf{T}(\mathcal{O}) \mathbf{U}_{-\alpha_{1}}(\varpi^{-n_{1}}\mathcal{O}_{\breve{k}}^{\times}) \tilde{s}_{\alpha_{2}}\cdots\tilde{s}_{\alpha_{r}}\sigma H^{\lambda,\mathbf{M} }_{c_{\mathbf{M}}\sigma,J^{\mathbf{M}}}\] \[=\cup_{y\in\mathcal{O}_{\breve{k}}^{\times}}\varpi^{\mu-n_{1} \alpha_{1}^{\vee}}\mathbf{T}(\mathcal{O})(\varpi^{-n_{1}}y^{-1})^{-\alpha_{1}^ {\vee}}\mathbf{U}_{\alpha_{1}}(\varpi^{-n_{1}}y^{-1})\tilde{s}_{\alpha_{1}} \mathbf{U}_{\alpha_{1}}(\varpi^{n_{1}}y)\tilde{s}_{\alpha_{2}}\cdots\tilde{s}_ {\alpha_{r}}\sigma H^{\lambda,\mathbf{M}}_{c_{\mathbf{M}}\sigma,J^{\mathbf{M} }}\] \[=\cup_{z\in\mathbf{T}(\mathcal{O}_{\breve{k}})}\cup_{y\in \mathcal{O}_{\breve{k}}^{\times}}\mathbf{U}_{\alpha_{1}}(d_{z}\varpi^{\langle \mu,\alpha_{1}\rangle-n_{1}}y^{-1})\varpi^{\mu}z\tilde{s}_{\alpha_{1}}\cdots \tilde{s}_{\alpha_{r}}\sigma\mathbf{U}_{\beta_{1}}(\varpi^{n_{1}}y)H^{\lambda, \mathbf{M}}_{c_{\mathbf{M}}\sigma,J^{\mathbf{M}}},\]
where the relation \(\sim_{\mathbf{U}_{\mathbf{M}}(\breve{k})}\) follows from (a) and the observation that \(\mathbf{U}_{\mathbf{M}}\) commutes with \(\mathbf{U}_{-\alpha_{1}}\), the fourth equality follows from the equality \(\mu-\eta=\sum_{i=1}^{r}n_{i}\alpha_{i}^{\vee}\), and \(d_{z}\in\mathcal{O}_{\breve{k}}^{\times}\) is a certain constant depending on \(z\in\mathbf{T}(\mathcal{O}_{\breve{k}})\). Therefore, it suffices to show
( \[*\] ) \[\dot{c}\sigma H^{\lambda}_{c\sigma,J}\sim_{\mathbf{U}(\breve{k})}\cup_{y\in \mathcal{O}_{\breve{k}}^{\times}}\mathbf{U}_{\alpha_{1}}(d\varpi^{\langle\mu, \alpha_{1}\rangle-n_{1}}y^{-1})\dot{c}\sigma\mathbf{U}_{\beta_{1}}(\varpi^{n_ {1}}y)H^{\lambda,\mathbf{M}}_{c_{\mathbf{M}}\sigma,J^{\mathbf{M}}},\]
where \(d\in\mathcal{O}_{\breve{k}}^{\times}\) and \(\dot{c}\in\varpi^{\mu}\mathbf{N}(\mathcal{O}_{\breve{k}})\) is a lift of \(c\).
Set
\[V^{\lambda}_{c\sigma,J}=\mathbf{U}_{\beta_{1}}(\varpi^{n_{1}}\mathcal{O}_{ \breve{k}}^{\times})H^{\lambda,\mathbf{M}}_{c_{\mathbf{M}}\sigma,J^{\mathbf{M} }}\subseteq H^{\lambda}_{c\sigma,J}.\]
Let \(\mathcal{R}\) be as in §3.1 associated to the pair \((J,\lambda)\). As \(n_{1}=\langle\lambda,\beta_{1}\rangle\), the right hand side of \((*)\) is the image of the map
\[h^{0}=(h_{1}^{0},h_{2}^{0}):V^{\lambda}_{c\sigma,J}\longrightarrow\mathbf{U}( \breve{k})\times\mathbf{U}_{c\sigma}(\breve{k}),\]
where \(h_{1}^{0}=\mathbf{U}_{\alpha_{1}}(d\varpi^{\langle\mu,\alpha_{1}\rangle}f_{1}^ {-1})\), \(h_{2}^{0}=\prod_{i=1}^{r}\mathbf{U}_{\beta_{i}}(f_{i})\), and
\[f_{i}=\varpi^{\langle\lambda,\beta_{i}\rangle}\sum_{l=0}^{\infty}X_{i}^{l}\varpi^{l}\in\mathcal{R}^{\geqslant\langle\lambda,\beta_{i}\rangle}.\]
Let \(h^{n}=(h_{1}^{n},h_{2}^{n})=\Xi^{n}_{\dot{c}\sigma}\circ h^{0}\) for \(n\in\mathbb{Z}\). By Corollary 3.4, there exists \(N\gg 0\) such that \(h_{1}^{N}\) is trivial. Hence \((*)\) is equivalent to the statement that the image \(\mathrm{Im}(h_{2}^{N})\) of the map \(h_{2}^{N}:V^{\lambda}_{c\sigma,J}\rightarrow\mathbf{U}_{c\sigma}(\breve{k})\) equals \(H^{\lambda}_{c\sigma,J}\).
Note that \(f_{1}^{-1}\in\mathcal{R}^{\geqslant-\langle\lambda,\beta_{1}\rangle}\) by Lemma 3.1 (3). Moreover, it follows from Lemma 1.3 (3) that
\[\langle\mu,\alpha_{1}\rangle-\langle\lambda,\beta_{1}\rangle-\langle\lambda, \alpha_{1}\rangle=\langle\nu,\alpha_{1}\rangle\geqslant 0.\]
So \(h_{1}^{0}\in\mathbf{U}(\mathcal{R})^{\geqslant\lambda}\), and \(h_{1}^{0}\in\mathbf{U}(\mathcal{R})^{>\lambda}\) if \(\langle\nu,\alpha_{1}\rangle>0\).
First we assume that \(\langle\nu,\alpha_{1}\rangle\neq 0\). In this case, \(h^{0}\in\mathbf{U}(\mathcal{R})^{>\lambda}\times h_{2}^{0}\mathbf{U}_{c\sigma }(\mathcal{R})^{>\lambda}\) and \(V_{c\sigma,J}^{\lambda}=H_{c\sigma,J}^{\lambda}\). It follows from Lemma 4.2 (2) that \(h_{2}^{N}\in h_{2}^{0}\mathbf{U}_{c\sigma}^{>\lambda}\). By Lemma 3.2 we have \(\mathrm{Im}(h_{2}^{N})=H_{c\sigma,J}^{\lambda}\) as desired.
Now we assume that \(\langle\nu,\alpha_{1}\rangle=0\). Then \(\nu\) is central by assumption. It follows from Lemma 4.2 (1) that \(h_{2}^{N}\in\mathbf{U}_{c\sigma}(\mathcal{R})^{\geqslant\lambda}\). Hence
\[\mathrm{Im}(h_{2}^{N})\subseteq{}^{\varpi^{\lambda}}\mathbf{U}_{c\sigma}( \mathcal{O}_{\breve{k}})=H_{c\sigma,J}^{\lambda}.\]
By Proposition 2.4 this means that
\[[b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})\subseteq\dot{c}H_{c,\mu,b}.\]
On the other hand, as \(\nu\) is central and \(\mu+c\sigma(\lambda)=\lambda+\nu\), we see that \(\dot{c}\sigma\) fixes the Moy-Prasad group \(H\) generated by \(\mathbf{T}(\mathcal{O}_{\breve{k}})\) and \(\mathbf{U}_{\alpha}(\varpi^{\langle\lambda,\alpha\rangle}\mathcal{O}_{\breve{k}})\) for \(\alpha\in\Phi\). Since \({}^{\sigma^{-1}}H_{c,\mu,b}=H_{c\sigma,J}^{\lambda}\subseteq H\), it follows from Lang's theorem that
\[\dot{c}H_{c,\mu,b}\subseteq[\dot{c}]\cap\dot{c}\mathbf{U}_{c}(\breve{k})=[b] \cap\dot{c}\mathbf{U}_{c}(\breve{k}).\]
So \([b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})=\dot{c}H_{c,\mu,b}\) and the theorem also holds.
## 5. Applications
Let \(\mu\in X_{*}(\mathbf{T})\) and let \(\dot{c}\in\varpi^{\mu}\mathbf{N}(\mathcal{O}_{\breve{k}})\) be a lift of \(c\).
First we recall the dominance order \(\leq\) on \(B(\mathbf{G})\). Namely, \([b^{\prime}]\leqslant[b]\in B(\mathbf{G})\) if \(\kappa(b)=\kappa(b^{\prime})\) and \(\nu(b)-\nu(b^{\prime})\in\sum_{\alpha\in\Pi}\mathbb{R}_{\geqslant 0}\alpha^{ \vee}\).
Now we prove Theorem 0.3. By Theorem 0.1 we have
(a) \[\overline{[b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})}=\varpi^{\mu}\prod_{i=1}^{r}\tilde{s}_{\alpha_{i}}\mathbf{U}_{\alpha_{i}}(\varpi^{\lceil\langle\mu-\nu(b),\omega_{i}\rangle\rceil}\mathcal{O}_{\breve{k}}).\]
Then it follows that \(\overline{[b]\cap\dot{c}\mathbf{U}(\breve{k})}-([b]\cap\dot{c}\mathbf{U}( \breve{k}))\) is pure of codimension one having \(\sharp J_{\nu(b)}\) irreducible components.
Let \([b^{\prime}]\leqslant[b]\). Then \(\langle\mu-\nu(b),\omega_{i}\rangle\leqslant\langle\mu-\nu(b^{\prime}), \omega_{i}\rangle\) for \(1\leqslant i\leqslant r\). By (a) we have
\[\overline{[b^{\prime}]\cap\dot{c}\mathbf{U}(\breve{k})}\subseteq\overline{[b ]\cap\dot{c}\mathbf{U}(\breve{k})}.\]
On the other hand, let \([b^{\prime\prime}]\in B(\mathbf{G})\) be an element which intersects \(\overline{[b]\cap\dot{c}\mathbf{U}(\breve{k})}\). Then \(\kappa(b^{\prime\prime})=\kappa(b)\). Moreover, it follows from Theorem 0.1 that
(a) \[\langle\mu-\nu(b),\omega_{i}\rangle\leqslant\langle\mu-\nu(b^{\prime\prime}), \omega_{i}\rangle\text{ for }i\in J_{\nu(b^{\prime\prime})}.\]
Let \(\Pi_{1},\Pi_{2}\subseteq\Pi\) be \(\sigma\)-stable subsets (which may be empty) such that \(\Pi=\Pi_{1}\sqcup\Pi_{2}\) and
\[\nu(b)-\nu(b^{\prime\prime})=v_{1}-v_{2},\]
where \(v_{1}\in\sum_{\alpha\in\Pi_{1}}\mathbb{R}_{\geqslant 0}\alpha^{\vee}\) and \(v_{2}\in\sum_{\alpha\in\Pi_{2}}\mathbb{R}_{>0}\alpha^{\vee}\). It follows from (a) that \(\langle\nu(b^{\prime\prime}),\alpha\rangle=0\) for \(\alpha\in\Pi_{2}\). Let \(\rho_{2}\) be the half sum of roots in \(\Phi^{+}\) spanned by \(\Pi_{2}\). Then \(\langle\nu(b^{\prime\prime}),\rho_{2}\rangle=0\) and \(\langle\alpha^{\vee},\rho_{2}\rangle>0\) for \(\alpha\in\Pi_{2}\). We have
\[0\leqslant\langle v_{2},\rho_{2}\rangle=\langle v_{1},\rho_{2}\rangle-\langle \nu(b),\rho_{2}\rangle+\langle\nu(b^{\prime\prime}),\rho_{2}\rangle\leqslant 0.\]
Thus \(v_{2}=0\) and hence \([b^{\prime\prime}]\leqslant[b]\) as desired.
Now we prove Theorem 0.6. Let
\[[b^{\prime}]=[b_{0}]<[b_{1}]<\cdots<[b_{n}]=[b]\]
be a maximal chain in the poset \((B(\mathbf{G}),\leq)\). We claim that
(a) \([b_{i-1}]\cap\dot{c}\mathbf{U}(\breve{k})\) is of codimension one in \(\overline{[b_{i}]\cap\dot{c}\mathbf{U}(\breve{k})}\) for \(1\leqslant i\leqslant n\).

By Theorem 0.3, there is \([b^{\prime}_{i}]<[b_{i}]\) such that \(\overline{[b^{\prime}_{i}]\cap\dot{c}\mathbf{U}(\breve{k})}\) is an irreducible component of \(\overline{[b_{i}]\cap\dot{c}\mathbf{U}(\breve{k})}-([b_{i}]\cap\dot{c}\mathbf{U}(\breve{k}))\) containing \([b_{i-1}]\cap\dot{c}\mathbf{U}(\breve{k})\). In particular, \([b^{\prime}_{i}]\cap\dot{c}\mathbf{U}(\breve{k})\) is of codimension one in \(\overline{[b_{i}]\cap\dot{c}\mathbf{U}(\breve{k})}\), and \([b_{i-1}]\leq[b^{\prime}_{i}]<[b_{i}]\). Hence \([b^{\prime}_{i}]=[b_{i-1}]\) since the chain \([b_{i-1}]\leq[b_{i}]\) is maximal. So (a) is proved.

By (a) it follows that the length of any maximal chain between \([b^{\prime}]\) and \([b]\) equals the codimension of \([b^{\prime}]\cap\dot{c}\mathbf{U}(\breve{k})\) in \(\overline{[b]\cap\dot{c}\mathbf{U}(\breve{k})}\). So the poset \((B(\mathbf{G}),\leq)\) is ranked, and in view of §5.1 the length function is given by

\[\operatorname{length}([b],[b^{\prime}]) =\operatorname{codim}([b^{\prime}]\cap\dot{c}\mathbf{U}(\breve{k}),\overline{[b]\cap\dot{c}\mathbf{U}(\breve{k})})\] \[=\sum_{i=1}^{r}\lceil\langle\mu-\nu(b^{\prime}),\omega_{i}\rangle\rceil-\lceil\langle\mu-\nu(b),\omega_{i}\rangle\rceil,\]
as desired.
For \(\mu\in X_{*}(\mathbf{T})\) recall that
\[B(\mathbf{G},\mu)_{\operatorname{irr}}=\{[b]\in B(\mathbf{G},\mu);\nu(\varpi^{\mu})-\nu(b)\in\sum_{\alpha\in\Pi}\mathbb{R}_{>0}\alpha^{\vee}\}.\]
First we show the following result.
**Lemma 5.1**.: _Let \(\mu\in X_{*}(\mathbf{T})\) and let \(\dot{c}\in\varpi^{\mu}\mathbf{N}(\mathcal{O}_{\ddot{k}})\) be a lift of \(c\). Suppose that \(B(\mathbf{G},\mu)_{\operatorname{irr}}\neq\emptyset\). Then_
\[\dot{c}\mathbf{U}_{c}(\varpi\mathcal{O}_{\breve{k}})=\bigsqcup_{[b]\in B(\mathbf{G},\mu)_{\operatorname{irr}}}[b]\cap\dot{c}\mathbf{U}_{c}(\breve{k}).\]
Proof.: We may assume \(\mu\) is dominant. Then \(\nu(\varpi^{\mu})\) equals the \(\sigma\)-average \(\mu^{\circ}\) of \(\mu\). Note that \(B(\mathbf{G},\mu)_{\operatorname{irr}}\neq\emptyset\) if and only if \(\mu^{\circ}\) is non-central on each \(\sigma\)-orbit of connected components of the Dynkin diagram of \(\Pi\).
If \([b]\in B(\mathbf{G},\mu)_{\operatorname{irr}}\), by definition
\[\langle\mu-\nu(b),\omega_{i}\rangle=\langle\mu^{\circ}-\nu(b),\omega_{i} \rangle>0\text{ for }1\leqslant i\leqslant r.\]
So it follows from Theorem 0.1 that \([b]\cap\dot{c}\mathbf{U}_{c}(\breve{k})\subseteq\dot{c}\mathbf{U}(\varpi\mathcal{O}_{\breve{k}})\).
On the other hand, let \([b^{\prime}]\in B(\mathbf{G},\mu)-B(\mathbf{G},\mu)_{\mathrm{irr}}\) be a class which intersects \(\dot{c}\mathbf{U}_{c}(\varpi\mathcal{O}_{\breve{k}})\). Then there exist a proper \(\sigma\)-stable subset \(\Pi_{1}\subsetneq\Pi\) and \(v\in\sum_{\alpha\in\Pi_{1}}\mathbb{R}_{>0}\alpha^{\vee}\) such that \(v=\mu^{\diamond}-\nu(b^{\prime})\). By Theorem 0.1 we have \(\langle\nu(b^{\prime}),\alpha\rangle=0\) for \(\alpha\in\Pi-\Pi_{1}\). Let \(\rho_{2}\) be the half sum of roots in \(\Phi^{+}\) spanned by \(\Pi-\Pi_{1}\). Then
\[0\leqslant\langle\mu^{\diamond},\rho_{2}\rangle=\langle\nu(b^{\prime}),\rho_{ 2}\rangle+\langle v,\rho_{2}\rangle=\langle v,\rho_{2}\rangle\leqslant 0.\]
Thus \(\langle\mu^{\diamond},\rho_{2}\rangle=\langle v,\rho_{2}\rangle=0\). This means that \(\mu^{\diamond}\) is central on \(\Pi-\Pi_{1}\), and \(\langle\alpha^{\vee},\beta\rangle=0\) for \(\alpha\in\Pi_{1}\) and \(\beta\in\Pi-\Pi_{1}\). Therefore, \(\Pi-\Pi_{1}\) is a union of \(\sigma\)-orbits of connected components of the Dynkin diagram of \(\Pi\). This contradicts the fact that \(\mu^{\diamond}\) is non-central on each \(\sigma\)-orbit of connected components of the Dynkin diagram of \(\Pi\). The proof is finished.
Now we are ready to prove Theorem 0.7. We can assume \(\mu\) is dominant. Let \(m\) be a sufficiently large integer such that \(m>\langle\mu,\omega_{i}\rangle\) for \(1\leqslant i\leqslant r\). Suppose that the residue field of \(k\) has \(q\) elements. Then for any \([b]\in B(\mathbf{G},\mu)\) it follows from Theorem 0.1 and Theorem 0.6 that
(a) \[\sharp(([b]\cap\dot{c}\mathbf{U}_{c}(k))/\mathbf{U}_{c}(\varpi^{m}\mathcal{O} _{k}))=(q-1)^{\sharp J_{\nu(b)}}q^{mr-\sharp J_{\nu(b)}-\mathrm{length}([b],[ \varpi^{\mu}])},\]
where \(\mathcal{O}_{k}\) is the integer ring of \(k\).
Suppose \(B(\mathbf{G},\mu)_{\mathrm{irr}}\neq\emptyset\). Combining (a) with Lemma 5.1 we have
\[q^{(m-1)r} =\sharp(\dot{c}\mathbf{U}_{c}(\varpi\mathcal{O}_{k})/\mathbf{U}_ {c}(\varpi^{m}\mathcal{O}_{k}))\] \[=\sum_{[b]\in B(\mathbf{G},\mu)_{\mathrm{irr}}}\sharp(([b]\cap \dot{c}\mathbf{U}_{c}(k))/\mathbf{U}_{c}(\varpi^{m}\mathcal{O}_{k}))\] \[=\sum_{[b]\in B(\mathbf{G},\mu)_{\mathrm{irr}}}(q-1)^{\sharp J_{ \nu(b)}}q^{mr-\sharp J_{\nu(b)}-\mathrm{length}([b],[\varpi^{\mu}])}.\]
Thus, \(\sum_{[b]\in B(\mathbf{G},\mu)_{\mathrm{irr}}}(q-1)^{\sharp J_{\nu(b)}}q^{r- \sharp J_{\nu(b)}-\mathrm{length}([b],[\varpi^{\mu}])}=1\) and the second identity follows.
Recall that \(\nu(\varpi^{\mu})\) equals the \(\sigma\)-average \(\mu^{\diamond}\) of \(\mu\). Then \(\langle\mu-\nu(\varpi^{\mu}),\omega_{i}\rangle=\langle\mu-\mu^{\diamond}, \omega_{i}\rangle=0\) for \(1\leqslant i\leqslant r\). Therefore,
\[\dot{c}\mathbf{U}_{c}(\mathcal{O}_{\breve{k}})=\overline{[\varpi^{\mu}]\cap \dot{c}\mathbf{U}_{c}(\breve{k})}=\bigsqcup_{[b]\in B(\mathbf{G},\mu)}[b] \cap\dot{c}\mathbf{U}_{c}(\breve{k}).\]
Now the first identity follows in a similar way as above.
|
2303.01914 | Highly nonperturbative nature of the Mott metal-insulator transition:
Two-particle vertex divergences in the coexistence region | We thoroughly analyze the divergences of the irreducible vertex functions
occurring in the charge channel of the half-filled Hubbard model in close
proximity to the Mott metal-insulator transition (MIT). In particular, by
systematically performing dynamical mean-field theory (DMFT) calculations on
the two-particle level, we determine the location and the number of the vertex
divergences across the whole coexistence region adjacent to the first-order
metal-to-insulator transition. We find that the lines in the parameter space,
along which the vertex divergences occur, display a qualitatively different
shape in the coexisting metallic and insulating phase, which is also associated
to an abrupt jump of the number of divergences across the MIT. Physically, the
systematically larger number of divergences on the insulating side of the
transition reflects the sudden suppression of local charge fluctuation at the
MIT. Further, a systematic analysis of the results demonstrates that the number
of divergence lines increases as a function of the inverse temperature
${\beta\!=\!(k_\mathrm{B} T)^{-1}}$ by approaching the Mott transition in the
zero temperature limit. This makes it possible to identify the zero-temperature
MIT as an accumulation point of an infinite number of vertex divergence lines,
unveiling the highly nonperturbative nature of the underlying transition. | Mathias Pelz, Severino Adler, Matthias Reitner, Alessandro Toschi | 2023-03-03T13:36:33Z | http://arxiv.org/abs/2303.01914v3 | # The highly nonperturbative nature of the Mott metal-insulator transition:
###### Abstract
We thoroughly analyze the divergences of the irreducible vertex functions occurring in the charge channel of the half-filled Hubbard model in close proximity to the Mott metal-insulator transition (MIT). In particular, by systematically performing dynamical mean-field theory (DMFT) calculations on the two-particle level, we determine the location and the number of the vertex divergences across the whole coexistence region adjacent to the first-order metal-to-insulator transition. We find that the lines in the parameter space, along which the vertex divergences occur, display a qualitatively different shape in the coexisting metallic and insulating phase, which is also associated to an abrupt jump of the number of divergences across the MIT. Physically, the systematically larger number of divergences on the insulating side of the transition reflects the sudden suppression of local charge fluctuation at the MIT. Further, a systematic analysis of the results demonstrates that the number of divergence lines increases as a function of the inverse temperature \(\beta\!=\!(k_{\rm B}T)^{-1}\) by approaching the Mott transition in the zero temperature limit. This makes it possible to identify the zero-temperature MIT as an _accumulation_ point of an _infinite_ number of vertex divergence lines, unveiling the highly nonperturbative nature of the underlying transition.
## I Introduction
The multifaceted manifestations [1; 2; 3; 4; 5; 6; 7; 8; 9; 10] of the breakdown of the self-consistent perturbation theory for the many-electron problem have recently been the focus of several studies [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. In particular, it has been demonstrated [10] how the breakdown of the perturbative expansion corresponds to the crossings of different solutions [3; 4] of the Luttinger-Ward functional or, equivalently, to multiple divergences [1; 9; 13] of the two-particle vertex functions irreducible in the charge channel, i.e. the kernel of the Bethe-Salpeter equation describing the charge response of the many-electron system under consideration.
From a less formal perspective, the crucial role [21; 23; 25; 27] of local-moment formation in triggering the perturbation-theory breakdown, as well as the counterintuitive physical consequences associated with nonperturbative scattering processes [20], have been extensively investigated in the most recent literature, e.g. for the Anderson impurity and the Hubbard model.
Hitherto, however, the analysis of an equally important aspect of this problem, namely the precise relation linking the above mentioned manifestations of the perturbative breakdown to the occurrence of Mott-Hubbard metal-to-insulator transitions (MITs) [29], has been put, to some extent, aside. In fact, in some of the earliest works [1; 9] on this subject, it was suggested that the divergences of the irreducible vertex could be viewed as "precursors" of the Mott-Hubbard MIT. In later studies, however, it was shown [13] that multiple irreducible vertex divergences were also occurring in cases, such as the Anderson Impurity model (AIM), where _no_ Mott-Hubbard transition takes place. The contradiction of this observation with the previously proposed interpretation has left the full understanding of this aspect of the perturbative breakdown unsolved.
In this work, we will hence address the still outstanding question of how the Mott MIT, which represents an intrinsically nonperturbative phenomenon, is actually related to the breakdown of the perturbation expansion by analyzing one of its characterizing manifestations, the divergences of the irreducible vertex functions.
To this aim, we will perform systematic dynamical mean-field theory (DMFT) [30] calculations of the two-particle Green's/vertex functions [31] of the Hubbard model in one of its most delicate parameter regimes, the coexistence region of the Mott MIT, which was not considered in preceding studies on this specific topic.
In particular, we will determine the location, the number, and the properties of the vertex divergences occurring in both the (coexisting) metallic and insulating DMFT solutions of the half-filled Hubbard model in the proximity of the Mott transition. Specifically, we focus on the changes taking place across the finite-\(T\) first-order transition and on the extrapolation of the corresponding results towards the \(T\!=\!0\) second-order (quantum) critical endpoint of the MIT. This procedure will eventually allow us to draw rigorous conclusions about the link between vertex divergences and the Mott-Hubbard MIT.
The plan of the paper is the following: In Sec. II, we introduce the model and the formalism necessary for our analysis and briefly recapitulate results of previous studies relevant to our scope; in Sec. III we illustrate our DMFT results for the divergences of the irreducible vertex functions, systematically obtained in the coexistence region of the Mott MIT, and the corresponding extrapolation performed down to zero temperature; then in Sec. IV we discuss the overall scenario emerging from our study, and, finally, in Sec. V we present the conclusions and the outlook of our work.
## II Model, formalism and methods
### Model
In this study we consider a single-band Hubbard model (HM) [32] on the Bethe lattice with infinite connectivity, whose density of states is semicircular with half-bandwidth \(D\!=\!1\), which serves as the unit of energy throughout the work.
The Hamiltonian reads
\[\mathcal{H}=-t\sum_{\left\langle i\,j\right\rangle\sigma}c^{\dagger}_{i\sigma}c _{j\sigma}+U\sum_{i}n_{i\uparrow}n_{i\downarrow}\,, \tag{1}\]
where \(t\!=\!\frac{D}{2}\) is the (spin-independent) nearest-neighbor hopping between neighboring lattice sites \(i\) and \(j\), \(c^{\dagger}_{i\sigma}\) (\(c_{i\sigma}\)) is the fermionic creation (annihilation) operator with spin \(\sigma\!=\!\uparrow,\downarrow\) at site \(i\), and \(U\) is the local (Hubbard) repulsive interaction between two electrons on the same lattice site (\(n_{i\sigma}\!=\!c^{\dagger}_{i\sigma}c_{i\sigma}\) denoting the particle-number operator at site \(i\) for spin \(\sigma\)).
We set the density to half-filling \(\left(\left\langle n_{\uparrow}\right\rangle\!=\!\left\langle n_{\downarrow}\right\rangle\!=\!\frac{1}{2}\right)\), where the HM we consider, which can be exactly solved by means of dynamical mean-field theory (DMFT), is known to feature a paradigmatic realization of the Mott-Hubbard MIT. Specifically, we briefly recall here [30] that, by neglecting possible SU(2) symmetry-broken phases, the DMFT solution of Eq. (1) yields a first-order transition between a paramagnetic metallic (PM) and a paramagnetic insulating (PI) phase along a \(U_{c}(T)\) transition line, ending with second-order critical points at finite \(T\!=\!T_{c}\!\approx\!\frac{1}{39}\) and at \(T\!=\!0\), as schematically shown in the leftmost panel of Fig. 1. The first-order nature of the transition is witnessed by the presence of a broad hysteresis region, delimited by the lines \(U_{c_{1}}(T)\) (\(<\!U_{c}(T)\) on the left side) and \(U_{c_{2}}(T)\) (\(>\!U_{c}(T)\) on the right side), where a PM and a PI numerical DMFT solution coexist [33], with distinct physical properties (as exemplified by the qualitatively different shape of the corresponding one-particle Green's functions, shown in the insets of the leftmost panel in Fig. 1).
### Formalism
As mentioned in the Introduction, an evident manifestation of the breakdown of the self-consistent perturbation expansion in many-electron problems is the divergence of the kernel of the Bethe-Salpeter equation (BSE) for the system response in the charge sector. Hence, the central quantity for our DMFT investigation will be the on-site generalized susceptibility, which, in the imaginary time-domain, is defined as:
\[\mathbf{\chi}_{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}}(\tau_{1},\tau_{2},\tau_{3},\tau_{4}):=\] \[\qquad\left[\left\langle\mathcal{T}_{\tau}c^{\dagger}_{\sigma_{1 }}(\tau_{1})c_{\sigma_{2}}(\tau_{2})c^{\dagger}_{\sigma_{3}}(\tau_{3})c_{ \sigma_{4}}(\tau_{4})\right\rangle-\right.\] \[\qquad\qquad\left.\left\langle\mathcal{T}_{\tau}c^{\dagger}_{ \sigma_{1}}(\tau_{1})c_{\sigma_{2}}(\tau_{2})\right\rangle\!\left\langle \mathcal{T}_{\tau}c^{\dagger}_{\sigma_{3}}(\tau_{3})c_{\sigma_{4}}(\tau_{4}) \right\rangle\right] \tag{2}\] \[:= G^{(2)}_{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}}(\tau_{1}, \tau_{2},\tau_{3},\tau_{4})-\] \[\qquad\qquad G_{\sigma_{1},\sigma_{2}}(\tau_{1},\tau_{2})\,G_{ \sigma_{3},\sigma_{4}}(\tau_{3},\tau_{4}), \tag{3}\]
in terms of the one \(G_{\sigma_{1},\sigma_{2}}(\tau_{1},\tau_{2})\) and two-particle \(G^{(2)}_{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}}(\tau_{1},\tau_{2},\tau_{ 3},\tau_{4})\) local Green's functions of the DMFT solution, where \(\sigma_{i}\) denotes the spins of the incoming/outgoing particles and \(\tau_{i}\) their imaginary time arguments [31]. By taking the Fourier transform into Matsubara frequencies, and exploiting the time-translational invariance as well as the SU(2) symmetry of the problem, one then obtains [31]
\[\mathbf{\chi}_{ph,\sigma\sigma^{\prime}}^{\nu\nu^{\prime}\Omega} = \int_{0}^{\beta}d\tau_{1}d\tau_{2}d\tau_{3}\,e^{-i\nu\tau_{1}}e^{ i(\nu+\Omega)\tau_{2}}e^{-i(\nu^{\prime}+\Omega)\tau_{3}}\cdot \tag{4}\] \[\mathbf{\chi}_{\sigma,\sigma,\sigma^{\prime},\sigma^{\prime}}(\tau_{ 1},\tau_{2},\tau_{3},0),\]
where, in the particle-hole convention we adopted, \(\nu\), \(\nu^{\prime}\) (\(\Omega\)) denote fermionic (bosonic) Matsubara frequencies, respectively. We recall that the expression in Eq. (4) can be directly connected to the physical charge response \(\chi_{c}(\Omega)\) of the system by performing a summation over both fermionic Matsubara frequencies \(\nu\), \(\nu^{\prime}\) and spin indices:
\[\chi_{c}(\Omega)=\frac{1}{\beta^{2}}\,\sum_{\nu,\nu^{\prime}}\left(\mathbf{\chi} _{ph,\uparrow\uparrow}^{\nu\nu^{\prime}\Omega}+\mathbf{\chi}_{ph,\uparrow\downarrow }^{\nu\nu^{\prime}\Omega}\right). \tag{5}\]
Consistent with the SU(2) symmetry of the problem, the BSE of the generalized response in the charge sector \(\mathbf{\chi}_{c}^{\nu\nu^{\prime}\Omega}\!=\!\mathbf{\chi}_{ph,\uparrow\uparrow}^{\nu\nu^{\prime}\Omega}\!+\!\mathbf{\chi}_{ph,\uparrow\downarrow}^{\nu\nu^{\prime}\Omega}\) can be recast, for each value of the bosonic frequency \(\Omega\), into a closed matrix equation in the fermionic Matsubara frequencies, as diagrammatically shown in Fig. 2.
The equation describes the infinitely repeated insertions of (two-particle) irreducible vertex corrections (\(\mathbf{\Gamma}_{c}\)) to the independent propagation of a particle and a hole (i.e., the so-called bubble term: \(\mathbf{\chi}_{c,0}^{\nu\nu^{\prime},\Omega}\!=\!-\beta\,G(\nu)\,G(\nu\!+\!\Omega) \,\delta_{\nu\nu^{\prime}}\), where \(G(\nu)\) denotes the one-particle local DMFT Green's function on the Matsubara axis). Hence, the irreducible vertex function, which represents the kernel of the BSE, is defined (and can be explicitly computed) as [9]
\[\mathbf{\Gamma}_{c}^{\nu\nu^{\prime}\Omega}=\beta^{2}\bigg{(}\big{[}\mathbf{\chi}_{c}^{\Omega}\big{]}_{\nu\nu^{\prime}}^{-1}-\big{[}\mathbf{\chi}_{c,0}^{\Omega}\big{]}_{\nu\nu^{\prime}}^{-1}\bigg{)}. \tag{6}\]
In this work, we consider the case of zero bosonic frequency, which is linked, according to Eq. (5), to the isothermal (or static) charge response [36; 37] of the system, as the corresponding vertex divergences are the first
to occur by increasing values of \(U\) and are those directly related [10] to the crossing of solutions of the Luttinger-Ward functional. Hence, for the sake of readability, we will denote \(\mathbf{\Gamma}_{c}^{\nu\nu^{\prime}\Omega=0}\equiv\mathbf{\Gamma}_{c}^{\nu\nu^{\prime}}\) and \(\mathbf{\chi}_{c}^{\nu\nu^{\prime}\Omega=0}\equiv\mathbf{\chi}_{c}^{\nu\nu^{\prime}}\).
By a quick inspection of Eq. (6), one understands that, at any finite temperature, no divergence of \(\mathbf{\Gamma}_{c}\) can be caused by the inversion of the (frequency-diagonal) bubble term, as \(G(\nu)\neq 0\) for all \(\nu\). As a consequence, all divergences of \(\mathbf{\Gamma}_{c}\) must originate from the non-invertibility of the generalized susceptibility matrix or, more formally, from the vanishing of one of its eigenvalues \(\lambda_{i}\)[1; 9]. Hence, all points \((T,U)\), where a specific eigenvalue \(\lambda_{i}\) of \(\mathbf{\chi}_{c}\) crosses zero, define a line in the phase-diagram of the half-filled HM, where the corresponding irreducible charge vertex diverges (\(\mathbf{\Gamma}_{c}^{\infty}\)-line). It is important to stress here that, since for small \(U\) one has \(\mathbf{\chi}_{c}\!\approx\!\mathbf{\chi}_{c,0}\) and the latter is a positive-definite (diagonal) matrix, the number of negative eigenvalues (\(N_{\lambda<0}\)) of the generalized charge susceptibility matrix at a specific parameter set corresponds to the number of \(\mathbf{\Gamma}_{c}^{\infty}\)-lines crossed coming from \(U=0\); it can also be used to approximate the shape of the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines in close proximity to the MIT (where they become particularly dense) and to analyze their behavior towards \(T\!\to\!0\).
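As an illustration of how Eq. (6) and the eigenvalue analysis translate into practice, the following minimal sketch (Python/NumPy; the array layout and the helper name are assumptions made here for illustration only, not the code used to produce our results) computes the irreducible charge vertex and \(N_{\lambda<0}\) from the generalized susceptibility sampled on a finite Matsubara grid:

```python
import numpy as np

def charge_vertex_analysis(chi_c, g_nu, beta):
    """Illustrative helper (not the production code used in this work).

    chi_c : (2N, 2N) array, generalized charge susceptibility chi_c^{nu nu'}
            at Omega = 0 on a symmetric fermionic Matsubara grid
    g_nu  : (2N,) array, local Green's function G(i nu) on the same grid
    beta  : inverse temperature
    """
    # Bubble term chi_{c,0}^{nu nu'} = -beta G(nu) G(nu+Omega) delta_{nu nu'},
    # evaluated here at Omega = 0.
    chi_0 = np.diag(-beta * g_nu * g_nu)

    # Eq. (6): Gamma_c = beta^2 ( chi_c^{-1} - chi_{c,0}^{-1} ).
    gamma_c = beta**2 * (np.linalg.inv(chi_c) - np.linalg.inv(chi_0))

    # At half filling and Omega = 0, chi_c can be represented by a real
    # symmetric matrix, so its eigenvalues lambda_i are real; the number of
    # negative ones counts the Gamma_c^infty-lines crossed coming from U = 0.
    lam = np.linalg.eigvalsh(chi_c)
    n_neg = int(np.sum(lam < 0))

    # Static physical charge response, Eq. (5).
    chi_phys = chi_c.sum() / beta**2

    return gamma_c, lam, n_neg, chi_phys
```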
### Limitations of existing results
The occurrence of divergences of the irreducible vertex functions in several fundamental many-electron problems has recently been investigated in different publications [16; 17; 9; 12; 13; 14]. In particular, one of the most systematic analyses, performed for the case of the half-filled HM on a square lattice solved in DMFT (essentially equivalent [38] to the case considered here), has been reported in [9], whose results are summarized in the central/rightmost panels of Fig. 1. Specifically, the right panel of the figure, a zoom of the parameter region marked by a black box in the DMFT phase-diagram of the central panel, shows the first few \(\mathbf{\Gamma}_{c}^{\infty}\)-lines of the DMFT solution of the HM, marked in red or orange, depending on whether the corresponding vertex divergence occurs in the charge sector only or, simultaneously, in the charge and in the particle-particle channels [39]. As already noted [9], all the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines tend to approach the corresponding vertex divergence lines of the Hubbard atom (straight dashed lines starting from \(U\!=\!0\)) for large values of \(T\) and \(U\), while displaying a clear bending around the Mott MIT.
Evidently, the clear bending of the first divergence lines, and their shape, somewhat similar to that of the rightmost border of the MIT hysteresis \(U_{c_{2}}(T)\), may suggest the originally proposed interpretation of the occurrence of the vertex divergences as precursors of the Mott MIT itself. However, as already mentioned in the Introduction, subsequent studies have demonstrated the occurrence of similar vertex divergences in models, such as the AIM [13], where no Mott MIT takes place, and ascribed [8; 21; 27; 13] them to suppressive effects of the on-site/impurity charge fluctuations triggered by the formation of a local magnetic moment.
Figure 1: Location of the irreducible vertex divergence lines in the phase-diagram of the Hubbard model on a square lattice solved by dynamical mean-field theory (DMFT). Left panel: Schematic illustration of the MIT and its coexistence region in DMFT. The insets show examples of the imaginary part of the one-particle Green’s function on the Matsubara frequency axis for the PM (left inset) and the PI (right inset) DMFT solution, featuring completely different low-energy behaviors. The arrows represent the scanning direction for the respective convergent solution of the two phases in the coexistence region. Middle panel: Reproduced from Ref. [9], solid red and orange lines [34] mark the first five \(\mathbf{\Gamma}_{c}^{\infty}\)-lines of the HM bending around the metal-insulator transition (blue solid line). Red and orange dashed lines show the first two \(\mathbf{\Gamma}_{c}^{\infty}\)-lines of the Hubbard atom (HA) for comparison. Right panel: Close-up of the region near the MIT marked in the middle panel. The blue shaded area indicates the coexistence region of the MIT. The thermodynamic phase transition (\(U_{c}\)) is marked in blue and the borders of the coexistence region (\(U_{c1}\) and \(U_{c2}\)) are displayed as dark blue lines. To the left, in red and orange, the closest vertex divergence lines from [9] are visible.
Hence, in order to clarify the nature of the relation linking the Mott MIT and the occurrence of divergences in the irreducible vertex functions of the charge sector, it is necessary to extend the DMFT studies performed hitherto to the most challenging parameter regime, namely, the coexistence region across the MIT, which represents the central goal of the present work.
### Methods
The DMFT calculations of the generalized local charge susceptibility have been performed by using a continuous-time quantum Monte Carlo (CT-QMC) solver [40] to sample the one- and two-particle Green's functions in Eq. (3) for the auxiliary AIM of the corresponding self-consistent DMFT solutions. Specifically, we used the CT-QMC solver of the _w2dynamics_ package [41; 42]. Further technical details about the numerical calculations are briefly reported in Appendix A. Here, we concisely recall how the PM and PI DMFT solutions in the coexistence region are obtained: Starting from outside of the coexistence region, the interaction \(U\) is changed step-by-step for a fixed temperature \(T\), whereby the previously converged DMFT calculations are used as a starting point of the new self-consistent DMFT cycle (as schematically illustrated by the two arrows in the leftmost panel of Fig. 1). Upon entering the coexistence/hysteresis region, the variation steps in \(U\) must be small (e.g. \(\mathcal{O}(0.1)\!-\!\mathcal{O}(0.01)\)), in order to allow for convergence to different meta-stable DMFT solutions, depending on the initial condition used. In this way two different solutions can be stabilized, at a given temperature \(T\), in the interval \(U_{c1}(T)\!<\!U\!<\!U_{c2}(T)\), the PM (PI) one being obtained along the path from left to right, \(U_{c1}(T)\!\rightarrow\!U_{c2}(T)\) (from right to left, \(U_{c2}(T)\!\rightarrow\!U_{c1}(T)\)). For each converged PM or PI DMFT solution obtained in the coexistence region, the corresponding on-site generalized charge susceptibility is then computed as explained above and Fourier transformed to Matsubara frequencies. The diagonalization of its matrix representation in the fermionic Matsubara frequencies allows us to determine the number of negative eigenvalues, \(N_{\lambda<0}\), which, as discussed in Sec. II B, corresponds to the number of crossed \(\mathbf{\Gamma}_{c}^{\infty}\)-lines and can then be used to approximate the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines (see Appendix B for further details) in the region of the phase-diagram close to the Mott MIT.
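Schematically, the hysteresis scan described above boils down to the following sketch (Python; the function and parameter names are purely illustrative assumptions, `solve_impurity` stands for the impurity solver, which in our case is the CT-QMC solver of _w2dynamics_, and convergence checks are kept deliberately minimal):

```python
import numpy as np

def hysteresis_scan(u_values, beta, solve_impurity, g_init,
                    d=1.0, n_iter=60, mixing=0.5):
    """Sketch of the U-scan used to map out the coexistence region.

    `solve_impurity(delta, u, beta)` is a user-supplied impurity solver
    returning G(i nu) for a given hybridization function `delta`.
    `g_init` is the starting Green's function. Scanning `u_values` in
    increasing (decreasing) order follows the metallic (insulating)
    branch, since each converged solution seeds the DMFT cycle of the
    next interaction value.
    """
    solutions = {}
    g_loc = np.asarray(g_init, dtype=complex)
    for u in u_values:
        # Fixed number of iterations for simplicity; in practice one
        # monitors the convergence of G between successive cycles.
        for _ in range(n_iter):
            # Bethe-lattice self-consistency: Delta(i nu) = (D/2)^2 G(i nu).
            delta = (d / 2.0) ** 2 * g_loc
            g_new = solve_impurity(delta, u, beta)
            g_loc = mixing * g_new + (1.0 - mixing) * g_loc
        solutions[u] = g_loc.copy()   # converged (meta)stable solution at this U
    return solutions
```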
## III Results
### Metallic coexistence region
Our results for the PM solutions of the MIT coexistence region are shown in the upper panel of Fig. 3. Here the coexistence region is indicated as a blue-framed and shaded area in the \(T\)-\(U\) phase diagram.
Figure 3: Phase diagrams of the MIT with PM solution in the coexistence region (blue-shaded area) for the Hubbard model (HM) on the Bethe lattice. \(U_{c}\) (blue), taken from Ref. [43], denotes the thermodynamic transition. Upper panel: Coexistence region with phase points of performed DMFT calculations, where green diamonds correspond to a metallic solution and red squares to an insulating one. The numbers next to markers are \(N_{\lambda<0}\) and the background of the points within the coexistence region shows an interpolating color scale of \(N_{\lambda<0}\) for the metallic solution. Lower panel: Same phase diagram as the upper panel, but showing the distinct \(\mathbf{\Gamma}_{c}^{\infty}\)-lines approximated from the data of the phase points (see Appendix B). Here, \(n_{HM}\) indicates the number of crossed \(\mathbf{\Gamma}_{c}^{\infty}\)-lines of the Hubbard model, coming from \(U\!=\!0\). Dashed red and orange lines (in both panels) mark the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines of the Hubbard atom (HA) according to [9] as reference, where \(n_{HA}\) is the number of crossed lines coming from \(U\!=\!0\).
The two-particle calculations performed by scanning the MIT starting from the PM side are marked, respectively, with green diamonds or red squares, depending on whether a PM or a PI solution is found (the stabilization of the first PI solutions evidently corresponds to the crossing of the \(U_{c2}(T)\) line). The calculated number \(N_{\lambda<0}\) of negative eigenvalues of \(\mathbf{\chi}_{c}^{\nu\nu^{\prime}}\) is shown right below the corresponding markers. In the background of the data points, a color scale, interpolating between the numerical results, indicates the change of \(N_{\lambda<0}\) within the coexistence region.
In the lower panel of Fig. 3, instead, we show the corresponding vertex divergence (\(\mathbf{\Gamma}_{c}^{\infty}\)) lines, starting with the \(\mathbf{\Gamma}_{c}^{\infty}\)-line associated with \(N_{\lambda<0}\!=\!n_{HM}\!=\!19\), and we adopt the same (red and orange) color-coding [34] as introduced in Sec. II C. As a reference for the Mott insulating phase, the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines of the Hubbard atom (HA) [9; 14] are shown as dashed red and orange lines on the right (PI) side of the coexistence region, starting with \(N_{\lambda<0}\!=\!n_{HA}\!=\!26\).
In both panels of Fig. 3 a clear bending of the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines towards the critical point of the Mott MIT at \(T\!=\!0\) (\(U_{c2}^{T=0}\)) is observed in the whole PM coexistence region, corresponding to an increase of \(N_{\lambda<0}\) upon increasing \(U\) and \(T\). One also notices that the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines become particularly dense for increasing \(U\) at low temperatures. More specifically, the shape of the bending is reminiscent of the first \(\mathbf{\Gamma}_{c}^{\infty}\)-lines encountered in the correlated metallic regime of the DMFT solution of the HM [9] (central panel in Fig. 1), as well as of the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines found in the low-\(T\) region of the phase-diagram of the AIM [13]. From a more physical perspective, the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines in the PM coexistence region display a qualitative similarity with the Mott transition line \(U_{c}\) (solid blue line in Fig. 3), especially for low temperatures, while no significant match with the effective [23] Kondo temperature associated with the auxiliary AIM of the DMFT solution (estimated through the so-called "fingerprints criterion" [21]) can be noticed. These observations suggest the existence of a direct connection between the vertex divergences and the Mott MIT itself.
### Insulating coexistence region
Our results for the PI solutions of the MIT coexistence region are shown in the upper panel of Fig. 4. The two-particle calculations performed by scanning the MIT starting from the PI side are marked, respectively, with red squares or green diamonds, depending on whether a PI or a PM solution is found (the stabilization of the first PM solutions evidently corresponds to the crossing of the \(U_{c1}(T)\) line).
As before, the color scale shows the interpolation of \(N_{\lambda<0}\) in the coexistence region. In the lower panel of Fig. 4, the distinct \(\mathbf{\Gamma}_{c}^{\infty}\)-lines (orange/red) are shown. The \(\mathbf{\Gamma}_{c}^{\infty}\)-lines are now straight lines, similar to the dashed (orange/red) lines of the HA (i.e. the extreme case of an HM with \(t\!=\!0\)), shown as a reference on the right side of the coexistence region. Due to the different shape of the divergence lines w.r.t. the PM solution analyzed before, \(N_{\lambda<0}\) still increases with increasing \(U\), but, differently from the PM case of Fig. 3, it increases for decreasing \(T\).
At the same time, the qualitative similarity between the \(\mathbf{\Gamma}_{c}^{\infty}\)-line shape in the HA and in the PI phase of the HM solved in DMFT cannot represent, evidently, an identity [44], reflecting the intrinsic difference between the "perfect" localization of the HA and the actual one of the Mott PI phase, where a finite (though quite small) double occupancy is found even in the ground state.
Figure 4: Phase diagrams of the MIT with PI solution in the coexistence region (blue-shaded area) for the Hubbard model (HM) on the Bethe lattice. The blue line \(U_{c}\), taken from Ref. [43], denotes the thermodynamic transition. Upper panel: The data points, red squares for an insulating phase and green diamonds for a metallic phase, mark the performed DMFT calculations. The number next to the points shows the corresponding \(N_{\lambda<0}\) and the background of the data points in the coexistence region shows an interpolating color scale of \(N_{\lambda<0}\). Lower panel: Same phase diagram as the upper panel with approximated \(\mathbf{\Gamma}_{c}^{\infty}\)-lines of the HM, \(n_{HM}\) is the number of crossed lines coming from \(U\!=\!0\). Dashed red and orange lines (in both panels) mark the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines of the Hubbard atom (HA) according to [9] as reference, where \(n_{HA}\) is the number of \(\mathbf{\Gamma}_{c}^{\infty}\)-lines of the HA.
In fact, by comparing the results for the HA and the PI phase in DMFT on a quantitative level (e.g., by considering the 28-th line of the HA, \(n_{HA}\!=\!28\), and the corresponding line of the HM, \(N_{\lambda<0}\!=\!n_{HM}\!=\!28\), in the lower panel of Fig. 4), a systematic shift of the HM-\(\mathbf{\Gamma}_{c}^{\infty}\)-lines w.r.t. the HA-\(\mathbf{\Gamma}_{c}^{\infty}\)-lines can be noted. To effectively account for this shift, one might rescale the interaction of the HA [45] by comparing \(n_{HA}\) to \(n_{HM}\) for all phase points within the coexistence region, which yields an average factor of \(\eta\!=\!\frac{n_{HA}}{n_{HM}}\!=\!1.19\pm 0.03\) (see Appendix C for further details). Hence, at least in terms of vertex divergences, it might be tempting to "approximate" the PI phase of the HM, in a similar spirit as in Ref. [44], as an HA with "reduced" effective interaction \(U_{eff}\):
\[\mathcal{H}_{PI}=U_{eff}\sum_{i}n_{i\uparrow}n_{i\downarrow}\ \ \text{with}\ \ U_{eff}=\frac{U}{\eta}. \tag{7}\]
Obviously, this analogy cannot be pushed too far: in the limit \(U\!\rightarrow\!\infty\), the HM-\(\mathbf{\Gamma}_{c}^{\infty}\)-lines will gradually approach the HA-\(\mathbf{\Gamma}_{c}^{\infty}\)-lines asymptotically, i.e., \(U_{eff}\!\rightarrow\!U\) and \(\eta\!\rightarrow\!1\). In practice, \(\eta\!\approx\!1.19\) can be used as a good approximation for the vertex divergences of the PI phase of the HM in the (physically relevant) regime of the MIT [46].
### Vertex divergence lines at the phase transition
After separately analyzing how vertex divergences occur in the two (PM and PI) sets of DMFT solutions in the coexistence region, the question of their behavior across the thermodynamic transition can now be readily addressed. We briefly recall that the actual thermodynamic phase transition takes place at \(U\!=\!U_{c}(T)\) (cf. Ref. [43]), where the free energies of the converged PM and PI DMFT solutions become equal. The corresponding transition line [43], marked as a white line in all three panels of Fig. 5, thus separates the thermodynamically stable PI and PM solutions (labeled as physical in Fig. 5) on the two sides of the coexistence region and is associated, except at its second-order (quantum) critical endpoints, with an abrupt (first-order) jump of the physical properties from a PM to a PI behavior.
After these premises, we summarize in Fig. 5 our results for the vertex divergences in the proximity of the Mott MIT. In the first two panels, the respective values of \(N_{\lambda<0}\) for the PM (left panel) and the PI (middle panel) solutions are shown as color-intensity plots, making the qualitatively different shape of the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines in the PM and the PI quite evident. From the direct comparison of the first two panels, the quantitative difference in \(N_{\lambda<0}\) also becomes clearly visible, as the values of \(N_{\lambda<0}\) appear to be systematically higher in the PI than in the PM realization of the coexistence region.
Hence, when directly considering the vertex divergence behavior at the thermodynamic transition, as we do in the rightmost panel of Fig. 5, \(N_{\lambda<0}\) displays an evident jump \(\Delta N_{\lambda<0}\) between the PM and the PI at the first-order transition \(U_{c}(T)\). On a more quantitative level, we note that \(\Delta N_{\lambda<0}\) between the PM and the PI along \(U_{c}\) increases with decreasing temperature. Only at the second-order critical point at \(T\!=\!T_{c}\!\approx\!\frac{1}{39}\) do we observe a continuous transition of \(N_{\lambda<0}\) with \(\Delta N_{\lambda<0}\!=\!0\), as expected. In this regard, \(N_{\lambda<0}\) well reflects the behavior of the order parameter characterizing the transition. It is therefore plausible to assume that \(N_{\lambda<0}\) will also be continuous (\(\Delta N_{\lambda<0}\!=\!0\)) at the second-order critical point at \(T\!=\!0\).
Figure 5: \(T\)-\(U\) diagram of the MIT coexistence regions for the Hubbard model on the Bethe lattice showing the number of negative eigenvalues of the generalized charge susceptibility \(N_{\lambda<0}\) as interpolating color scale for the metallic solution (left), the insulating solution (middle) and comparing \(N_{\lambda<0}\) at the thermodynamic stable (physical) transition \(U_{c}\) of Ref. [43] (right panel).
### Accumulation of vertex divergence lines
Our calculations for \(N_{\lambda<0}\) in the coexistence region at finite temperatures naturally raise the question of how \(N_{\lambda<0}\) will behave at temperatures lower than those accessible by our numerical calculations and, in particular, for \(T\!\rightarrow\!0\). To this aim, we performed numerical extrapolations of \(N_{\lambda<0}(T)\) from our data and tested their validity by comparing them to additional, numerically heavier calculations, performed at a "testbed" temperature (e.g., \(\beta=300\) for the PM phase) below the temperature interval used for the extrapolation.
Our results for the PM solution are shown in Fig. 6. Specifically, in the left panel, we show, as a guidance, the \(\beta\)-\(U\) paths in the coexistence region along which our extrapolations are performed, superimposed on the corresponding color-intensity plot for (the numerically interpolated) \(N_{\lambda<0}\). In the right panel, we then report our numerically calculated data \(N_{\lambda<0}\) as a function of the inverse temperature \(\beta\) (black markers) together with the corresponding fits along \(U_{c1}\), \(U_{c}\), \(U_{c2}\) (solid lines) as well as along intermediate (dash-dotted) paths in parameter space (see Appendix D for further details). The range of the \(N_{\lambda<0}(\beta)\) values in the coexistence region is indicated as a blue-shaded area. As mentioned before, the reliability of our extrapolation has been tested by comparing additional calculations at \(\beta\!=\!300\) (crosses) to the extrapolated values (empty markers), which yielded a good agreement.
On closer inspection of the results, we note the following differences in the \(T\!\rightarrow\!0\) behavior of \(N_{\lambda<0}\) along the distinct paths we selected. For instance, starting with the transition line \(U_{c1}(\beta)\) we find \(N_{\lambda<0}\!\cong\!a_{1}\!+\!b_{1}\beta^{-1}\)[47] for \(\beta\!\rightarrow\!\infty\) (\(a_{1}\!\approx\!6.25\)), which would be compatible with \(N_{\lambda<0}\!=\!6\) at \(U_{c1}(T\!=\!0)\). Along the intermediate lines at \(U_{c1}\!<\!U\!<\!U_{c}\) we find \(N_{\lambda<0}\!\cong\!a_{i}\!+\!b_{i}\beta^{-1}\) with finite \(a_{i}\!>\!a_{1}\). At the transition lines \(U_{c}(\beta)\) and \(U_{c2}(\beta)\), which both end at the second-order critical point at \(U_{c2}(T=0)\), \(N_{\lambda<0}(\beta)\) is observed to grow logarithmically and linearly in \(\beta\), respectively. Consistently, the intermediate lines in the interval \(U_{c}\!<\!U\!<\!U_{c2}\) feature a divergent behavior \(N_{\lambda<0}(\beta)\!\propto\!\beta^{x}\) with \(0\!<\!x\!<\!1\). This indicates that an infinite number of \(\Gamma_{c}^{\infty}\)-lines need to be crossed before reaching \(U_{c2}(T\!=\!0)\), i.e., the location of the Mott-Hubbard MIT in the ground state.
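For illustration, a minimal Python sketch of this type of least-squares fit (saturating, logarithmic and linear in \(\beta\)) reads as follows; the \((\beta, N_{\lambda<0})\) values and the variable names are placeholders of our own and do not correspond to the actual DMFT data.

```
import numpy as np
from scipy.optimize import curve_fit

# Placeholder (beta, N_{lambda<0}) data along one parameter path -- NOT the
# actual DMFT results of this work.
beta = np.array([40.0, 44.0, 50.0, 60.0, 75.0, 100.0, 133.0, 200.0])
n_neg = np.array([8.0, 7.8, 7.5, 7.2, 7.0, 6.8, 6.6, 6.5])

# The three asymptotic forms discussed in the text.
def form_inv(b, a, c):    # N(beta) ~ a + c/beta     (saturating, e.g. along U_c1)
    return a + c / b

def form_log(b, a, c):    # N(beta) ~ a + c*ln(beta) (e.g. along U_c)
    return a + c * np.log(b)

def form_lin(b, a, c):    # N(beta) ~ a + c*beta     (e.g. along U_c2)
    return a + c * b

for label, f in [("a + c/beta", form_inv), ("logarithmic", form_log), ("linear", form_lin)]:
    popt, _ = curve_fit(f, beta, n_neg)
    resid = np.sum((f(beta, *popt) - n_neg) ** 2)
    print(f"{label:12s} params = {popt}  sum of squared residuals = {resid:.3f}")
```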
Our extrapolations of \(N_{\lambda<0}(\beta)\) for the PI solution are displayed in Fig. 7. Again, in the left panel of the figure the respective locations of the paths considered in the \(\beta\)-\(U\) coexistence region are shown, superimposed on the corresponding values of \(N_{\lambda<0}(\beta)\) for the PI solutions. Due to the numerical hurdles of performing two-particle DMFT calculations at low \(T\) in the PI phase (see Appendix A.1), the extrapolations (solid lines) and the test calculations (empty markers) have been performed at higher temperatures than for the PM phase (PI test calculations at \(\beta\!=\!200\)), which likely explains the overall less satisfactory agreement between extrapolation and test calculations w.r.t. the PM case. The inset in the right panel presents a close-up of the corresponding deviations. Fits including the \(\beta=200\) calculations, which should be
Figure 6: Left panel: Phase diagram of the inverse temperature \(\beta\!=\!(k_{\rm B}T)^{-1}\) and interaction \(U\) showing the MIT and the coexistence region with an interpolating color scale of the number of negative eigenvalues of the generalized charge susceptibility \(N_{\lambda<0}\) for the metallic phase, where the thermodynamic transition line \(U_{c}\) is taken from Ref. [43]. Right panel: \(N_{\lambda<0}(\beta)\); black markers are calculated values of \(N_{\lambda<0}\) for \(U_{c1}\) (triangles), \(U_{c}\) (circles) and \(U_{c2}\) (squares) according to the left panel. The solid lines correspond to fits for these data points, yielding \(N_{\lambda<0}^{U_{c1}}(\beta)\!\cong\!a_{1}\!+\!b_{1}\beta^{-1}\)[47] for \(\beta\rightarrow\infty\) with the red dashed line as asymptotic value (the red dotted lines are a guide to the eye), \(N_{\lambda<0}^{U_{c}}(\beta)\!\propto\!\ln(\beta)\) and \(N_{\lambda<0}^{U_{c2}}(\beta)\!\propto\!\beta\). Black empty markers are estimated values from the fits for \(\beta\!=\!300\) and colored crosses are the results of corresponding DMFT test calculations. The dash-dotted lines are intermediate lines (\(U_{i}^{int}\)) interpolating between \(U_{c1}\), \(U_{c}\), and \(U_{c2}\).
regarded as the most precise results we could obtain in the PI phase, are displayed as dashed lines. As a comparison, dash-dotted lines present \(N_{\lambda<0}(\beta)\) of the HA along \(U_{c1}\), \(U_{c}\), and \(U_{c2}\), all showing higher values than the PI solution of the HM in the coexistence region.
The extrapolations of \(N_{\lambda<0}(\beta)\) in the PI (with and without test calculations) display a linear behavior in \(\beta\) for all three transition lines \(U_{c1}\), \(U_{c}\), \(U_{c2}\), corresponding to an infinite number of \(\mathbf{\Gamma}_{c}^{\infty}\)-lines to be crossed before reaching \(T\!=\!0\). On a more quantitative level, we note that a comparison between the slopes \(\alpha_{i}\) of the extrapolations of \(N_{\lambda<0}(\beta)\) and the corresponding results of the HA can also be used to obtain \(\eta\!=\!\frac{\alpha_{HA}}{\alpha_{HM}}\) for an effective HA-like description of the PI solution of the HM in Eq. (7) of Sec. III B, e.g. \(\eta\!=\!1.22219\) for \(U_{c1}(T)\) (for details see Appendix C).
From our extrapolations of \(N_{\lambda<0}(\beta)\) we can now determine the overall behavior of \(N_{\lambda<0}\) at \(T\!=\!0\) across the MIT. By crossing the MIT from PM\(\,\rightarrow\,\)PI at \(T\!=\!0\), the number of \(\mathbf{\Gamma}_{c}^{\infty}\)-lines increases to infinity at the critical endpoint \(U_{c2}(T\!=\!0)\). In this respect, at \(T\!>\!0\) the physical transition line \(U_{c}(T)\) can be seen as the first of the intermediate line-paths in \(\beta\),\(U\) of Fig. 6 along which \(N_{\lambda<0}(\beta)\) diverges for \(\beta\!\rightarrow\!\infty\), namely logarithmically. In the metastable metallic phase for \(U_{c}\!<\!U\!<\!U_{c2}\) - along the intermediate lines - \(N_{\lambda<0}(\beta)\) can diverge faster than \(\log\beta\), and finally \(N_{\lambda<0}(\beta)\!\propto\!\beta\) at \(U_{c2}\). The same \(\beta\)-dependence, namely \(N_{\lambda<0}(\beta)\!\propto\!\beta\), occurs in the PI coexistence region as well as in the HA. On the basis of this numerical evidence, we can then conclude that an accumulation point of an infinite number of vertex divergence lines occurs at the Mott-Hubbard MIT at \(T\!=\!0\), which establishes a clear link between the Mott MIT at \(T\!=\!0\) and the vertex divergence lines.
## IV Theoretical and algorithmic implications
Our analysis of the irreducible vertex divergences in the close proximity of the Mott MIT has several important implications, both of algorithmic and of conceptual nature. The existence of a large number of divergence lines in the coexistence region of the MIT, which we have demonstrated to increase up to infinity along all calculated parameter paths towards the \(T\!=\!0\) MIT at \(U_{c2}(T\!=\!0)\), evidently poses a huge challenge to the applicability of all algorithmic approaches using and/or explicitly manipulating irreducible vertices of DMFT as input, such as the full-fledged, parquet-equation based version of the dynamical vertex approximation [48, 49, 50, 51, 52] and the QUADRILEX scheme [53]. In fact, while one could realistically cope with a situation of a few well-separated divergence lines, it becomes hard to stabilize the numerical manipulation of irreducible vertices in parameter regimes where their frequency structure displays large oscillations, due to the proximity of several closely spaced divergence lines.
At the same time, we recall that the occurrence of vertex divergences has been recently proven to be pivotal [21, 27] to ensure the correct transfer of physical information among the spin/magnetic and the charge/density or the pairing channels in the nonperturbative regimes, e.g., to feature the proper freezing of the local charge (or pair
Figure 7: Left panel: Phase diagram of the MIT and its coexistence region with an interpolating color scale of the number of negative eigenvalues \(N_{\lambda<0}\) of the generalized charge susceptibility for the insulating results. The thermodynamic transition line \(U_{c}\) is taken from [43]. Right panel: \(N_{\lambda<0}(\beta)\); black markers are determined values of \(N_{\lambda<0}\) for \(U_{c1}\) (triangles), \(U_{c}\) (circles) and \(U_{c2}\) (squares) according to the left panel. The solid lines mark fits of these data, yielding linear behavior for the three transition lines. Black empty markers are estimated values from these fits for \(\beta\!=\!200\) and colored crosses are the results of corresponding test calculations (a close-up is shown in the inset on the right). The dashed lines are fits including the test calculations and the dashed dotted ones present the number of crossed \(\mathbf{\Gamma}_{c}^{\infty}\)-lines of the Hubbard atom along \(U_{c1}\), \(U_{c}\) and \(U_{c2}\)
ing) fluctuations associated to the pre-formation of local magnetic moments. Hence, it becomes clear why self-consistent diagrammatic approaches, whose irreducible vertex functions cannot diverge per construction [54], such as the truncated functional renormalization group (fRG) [55] or the parquet approximation [56], will not be able to yield a consistent picture of the Mott MIT and its related phenomena.
While it is beyond the scope of this work to address possible strategies for including this relevant piece of physical information in diagrammatic treatments of many-electron problems in nonperturbative regimes, e.g., by means of the merger of DMFT and fRG [57, 58, 59, 60, 61, 62] or by rewriting ladder diagrammatic resummations or parquet equations in terms of the full two-particle scattering amplitude of the local problem [63, 50, 31, 64], here we want to discuss some more fundamental theoretical implications of our results.
Indeed, the identification of the Mott-Hubbard MIT at zero temperature with the accumulation point of an infinite number of \(\mathbf{\Gamma}_{\mathrm{c}}^{\infty}\)-lines does not only clarify the precise relation linking the occurrence of vertex divergences in the charge channel and the MIT, highlighting the profoundly nonperturbative nature of the MIT itself, but also allows us to draw interesting, more general considerations about how the vertex divergences affect different many-electron problems. In particular, our results for the HM in DMFT (summarized in the central panel of Fig. 8) can be put into a broader perspective by directly comparing them to the corresponding ones for the Hubbard atom [9, 14] (HA, left panel) and the (metallic) Anderson impurity model of Ref. [13] (AIM, right panel) in Fig. 8, where we used the same color-coding for denoting the different vertex divergence lines.
Thereby, we note that for the HA, i.e. for a Hubbard model with \(t\!\equiv\!0\), which features a perfect local-moment/insulating behavior for all \(U\!>\!U_{MIT}\!=\!0\), an accumulation point of vertex divergence lines occurs exactly at the origin of the phase diagram, namely at \(U\!=\!T\!=\!0\). On the other hand, by looking at the phase diagram of the AIM, where no transition to an insulating ground state occurs for any value of \(U\), we do not observe an accumulation point of vertex divergence lines in the available data [13] and, in general, we do not expect one at \(T\!=\!0\). We can note, nonetheless, that their low-\(T\) distribution becomes denser with increasing interaction values. One can then regard the asymptotic value \(U_{MIT}\!=\!+\!\infty\) as a hypothetical value for the transition to an insulating state in the AIM. In this regime, the physics would eventually be dominated by \(U\), similarly to the insulating phases of the other two cases, and the progressively denser distribution of vertex divergence lines with increasing \(U\) could then lead asymptotically to an accumulation point.
Within this framework [65], the results we obtained for the HM in DMFT can be naturally interpreted as an "intermediate" case between the two extreme situations of the HA (with accumulation point at \(U_{MIT}\!=\!0\)) and the AIM (at \(U_{MIT}\!=\!+\!\infty\)), where \(U_{MIT}\) takes the finite value of \(U_{c2}(T\!=\!0)\), as schematically illustrated by the black arrow sketched below the three panels of Fig. 8. Heuristically, one could imagine starting from the AIM case (where \(U_{MIT}\!=\!+\!\infty\)). Then, by progressively moving the value of \(U_{MIT}\) first towards lower finite values and, subsequently, down to \(0\), one could qualitatively "reconstruct" the other two cases by squeezing the corresponding \(\mathbf{\Gamma}_{\mathrm{c}}^{\infty}\)-lines against the corresponding accumulation points.
On a more formal level, we note that the presence or absence of an accumulation point of vertex divergence lines, such as those discussed here, might play an im
Figure 8: Phase diagrams of the Hubbard atom (HA) (reconstructed from Ref. [9]), Hubbard model (HM) and Anderson impurity model (AIM) (reconstructed from Ref. [13]) with the corresponding \(\mathbf{\Gamma}_{\mathrm{c}}^{\infty}\)-lines and their accumulation point (blue dot) at \(T\!=\!0\) (\(U_{MIT}\!=\!0\) for the HA, \(U_{MIT}\!=\!U_{c2}^{T\!=\!0}\) for the HM and \(U_{MIT}\!\rightarrow\!\infty\) for the AIM). The black arrow below sketches the shift of the accumulation point of the \(\mathbf{\Gamma}_{\mathrm{c}}^{\infty}\)-lines between the different models. The accumulation point at the MIT of the DMFT solution of the Hubbard model at \(U_{c2}(T\!=\!0)\) can be seen as an intermediate case w.r.t. the _“extreme”_ cases of the purely insulating Hubbard atom and the purely metallic Anderson impurity model.
portant role in the way vertex divergences may affect the (\(T\!=\!0\))-physical properties of strongly correlated electron systems, such as, e.g. the validity (or the violation) of the Luttinger theorem [66].
## V Conclusion and outlook
In this work, we have investigated the relation between a characteristic aspect of the breakdown of the self-consistent perturbation expansion [10], namely the occurrence of divergences of irreducible vertex functions in the charge channel [9; 1], and the Mott-Hubbard metal-to-insulator transition. To this aim, by performing DMFT calculations on the two-particle level for the half-filled Hubbard model on a Bethe lattice, we have systematically studied the occurrence of irreducible vertex divergences in the coexistence region adjacent to the MIT. Our results demonstrate how the shape and the number of irreducible vertex divergence lines (\(\mathbf{\Gamma}_{c}^{\infty}\)-lines) in the coexistence region are significantly different in the PM and the PI solutions, with an abrupt jump across the thermodynamic first-order transition line, reflecting the different degree of suppression of on-site charge fluctuations [21; 27; 8; 10] in the two phases. Further, we could show that, in spite of the evident backward bending displayed by the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines in the correlated PM phase, the number of divergences crossed by approaching the \(T\!=\!0\) MIT at \(U\!=\!U_{c2}(T\!=\!0)\) diverges, making the location of the MIT itself an accumulation point of \(\mathbf{\Gamma}_{c}^{\infty}\)-lines. This finding establishes a clear connection between the irreducible-vertex divergences and the occurrence of the MIT, clarifying the difference with respect to systems where vertex divergences appear but no MIT takes place, and substantiating the interpretation of the Mott transition as a highly nonperturbative phenomenon in a more precise context.
Beyond the possible algorithmic implications of our results, especially relevant for Feynman diagrammatic approaches to intermediate-to-strong coupling regimes [67; 50], it would be interesting, in the future, to extend our study by including the effects of non-local correlations, e.g. by means of cluster extensions [68] of DMFT. In particular, one could investigate whether the shift towards lower \(U\) values of the Mott MIT, found by including spatial correlations of progressively larger size [68; 69] in the two-dimensional Hubbard model on a square lattice, would be accompanied by a corresponding shift of the accumulation point of the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines. This piece of information would be of particular importance for precisely characterizing the nonperturbative nature of electronic correlations in two-dimensional systems, for which the location of the MIT, in the absence of geometrical frustration, may be shifted [70] down to \(U\!=\!0^{+}\) in the low-temperature limit.
###### Acknowledgements.
We thank S. Andergassen, S. Ciuchi, P. Chalupa-Gantner, H. Efl, D. Fus, G. Rohringer, G. Sangiovanni and T. Schafer for very insightful discussions and P. Chalupa-Gantner also for carefully reading the manuscript. We acknowledge financial support from the Austrian Science Fund (FWF), within the project I-5487 (M.R. and A.T.) as well as project I 5868 (Project P01, part of the FOR 5249 [QUAST] of the German Science Foundation, DFG) (S.A.). Matthias Reither acknowledges support as a recipient of a DOC fellowship of the Austrian Academy of Sciences. Calculations have been performed on the Vienna Scientific Cluster (VSC-4).
## Appendix A DMFT-calculations of the two-particle Green's function \(G^{(2)}\)
As mentioned in Sec. II D, we use a continuous-time quantum Monte Carlo solver (CT-QMC) in the hybridization expansion (CT-HYB) [40] of the _w2dynamics_ package [41; 42] for the DMFT calculations of the two-particle Green's function \(G^{(2)}\) of the auxiliary impurity model; the calculations were performed on the Vienna Scientific Cluster (VSC-4).
The detailed calculation process for the two-particle Green's function for a specific phase point in \(U\), \(T\) is the following:
1. Calculations of the one-particle Green's function \(G^{(1)}\) with a comparably low number of measurements (e.g. \(\sim\!10^{5}\)) at each DMFT iteration, to converge the DMFT self-consistency cycle. The convergence is verified by tracking the behavior of \(G^{(1)}\) at the first Matsubara frequency and of the double occupancy.
2. The first computations are followed by a calculation with a high number of measurements (e.g. \(\sim\!10^{7}\)) and a small number of DMFT iterations to obtain \(G^{(1)}\) as a function of Matsubara frequencies with a satisfactory signal-to-noise ratio, serving as starting point for a two-particle calculation.
3. One DMFT iteration including a calculation of \(G^{(2)}\) with a high number of measurements (e.g. \(\sim\!10^{7}\)) using a frequency box of 100\(\times\)100 fermionic frequencies. Each individual computation required an average of 15 CPU hours.
### Sampling methods
Within the CT-HYB calculations of the _w2dynamics_ package, different sampling methods for the computation of the one- and two-particle Green's functions can be used. For most of the parameter regimes we have used partition function sampling with the superstate-sampling method [42] (segment sampling [71] yielded no computational advantage).
However, for the insulating solution in the coexistence region of our model and especially at low temperatures, partition function sampling runs into problems. This manifests itself in numerical artifacts, which appear as diagonal stripes in the \(\mathbf{\chi}_{c}\)-matrix and cause complex-conjugate pairs of eigenvalues (see Fig. 9). Such eigenvalues are, however, prohibited by the particle-hole and SU(2) symmetry of the model considered (\(\mathbf{\chi}_{c}\) must be a real bisymmetric matrix with only real eigenvalues [17]) and therefore render the data unusable. This is presumably caused by the suppressed hybridization function \(\Delta(\tau)\) in the insulating phase, resulting in a Monte Carlo estimator with high variance, stemming from functional derivatives of the partition function w.r.t. the hybridization function [72]. To overcome these numerical difficulties we used the worm-sampling method of _w2dynamics_ [41], which uses Monte Carlo sampling in both the partition function and the Green's function space. Since the worm-sampling method is numerically more expensive than superstate sampling, we used worm sampling only for lower temperatures in the insulating phase (\(\beta\!>\!60\)).
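As an illustration of the symmetry check used to flag such problematic data, a small Python sketch could look as follows; the matrix below is a random placeholder rather than an actual \(\mathbf{\chi}_{c}\) from our calculations.

```
import numpy as np

# For the particle-hole and SU(2) symmetric case, chi_c should be a real
# bisymmetric matrix, so all of its eigenvalues must be real; noisy sampling
# that violates this shows up as complex-conjugate eigenvalue pairs.
rng = np.random.default_rng(0)
n = 6
chi = rng.normal(size=(n, n))   # placeholder matrix

def is_bisymmetric(m, tol=1e-10):
    # symmetric (m = m^T) and centrosymmetric (invariant under reversing both
    # indices); together these imply bisymmetry
    return np.allclose(m, m.T, atol=tol) and np.allclose(m, np.flip(m), atol=tol)

eigvals = np.linalg.eigvals(chi)
n_complex_pairs = int(np.sum(np.abs(np.imag(eigvals)) > 1e-8)) // 2

print("bisymmetric:", is_bisymmetric(chi))
print("complex eigenvalue pairs:", n_complex_pairs)
```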
## Appendix B Approximation of vertex divergence lines
Our numerical two-particle calculations for the determination of \(N_{\lambda<0}\) have been performed for a finite set of \(T\), \(U\) parameters. To extract from our data the points in phase space where a vertex divergence occurs, i.e. where \(\lambda\!=\!0\), we compared different approaches, which are detailed in the following subsections. For our results in Sec. III we used the polynomial fits described in Appendix B.2. For both methods the symmetry of the eigenvectors was not taken into account. Hence, a possible crossing of divergence lines [13] was not investigated. Instead, the line ordering found in Refs. [9; 17] is assumed.
### Approximation via eigenvalues of \(\chi_{c}\)
For fixed-temperature \(T\) scans with two-particle calculations close to the divergence, one can use a straightforward approach to extract the approximate interaction value \(U_{\mathbf{\Gamma}_{c}^{\infty}}\) where \(\lambda\!=\!0\) and a vertex divergence occurs: the eigenvalue is extrapolated linearly from the first two data points at which \(\lambda\) is already negative, and \(U_{\mathbf{\Gamma}_{c}^{\infty}}\) is obtained as the point where the extrapolation equals zero. This is schematically presented in Fig. 10. The corresponding \(\mathbf{\Gamma}_{c}^{\infty}\)-line of \(\lambda_{l}\!=\!0\) in \(U,T\) is approximated by connecting the obtained results for different \(T\). The results for the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines are presented in Fig. 12, where the left panel corresponds to the extrapolation approach.
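A minimal Python transcription of this two-point extrapolation reads as follows; the two \((U,\lambda)\) pairs are placeholder numbers and not actual DMFT data.

```
# Two interaction values at fixed T where the relevant eigenvalue of chi_c has
# already turned negative (placeholder values).
U1, lam1 = 2.30, -0.004
U2, lam2 = 2.35, -0.012

slope = (lam2 - lam1) / (U2 - U1)
U_div = U1 - lam1 / slope   # U where the linear extrapolation of lambda(U) crosses zero

print(f"estimated vertex divergence at U = {U_div:.3f}")
```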
### Approximation via number \(N_{\lambda<0}\) of negative eigenvalues of \(\chi_{c}\)
Alternatively to the previously described extrapolation scheme for approximating the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines, \(N_{\lambda<0}\) may be fitted by polynomial functions \(p(U)\) for fixed \(T\) using the least-squares method. This approach, using the total number of negative eigenvalues, will in general yield less accurate positions of the divergence lines as compared to the approach of Appendix B.1. However, it takes the non-linear behavior of \(N_{\lambda<0}\) along the temperature scans into account. The fits of the metallic coexistence region are shown in Fig. 11. For those fits we used \(p(U)\) with different polynomial degrees. The polynomial degree thereby increases with decreasing \(T\) (increasing \(\beta\)), due to the rapid non-linear increase of \(N_{\lambda<0}\): \(p(U)\!\propto\!U\) for \(\beta\!\in\!\{40,44,50,60,75\}\), \(p(U)\!\propto\!U^{2}\) for
Figure 9: Left column: real part of the generalized charge susceptibility \(\mathbf{\chi}_{c}^{\nu\nu^{\prime}}\) for \(\beta\!=\!60\) and \(U\!=\!2.46\) for a PI solution. Right column: corresponding eigenvalues \(\lambda_{\chi_{c}}\) around zero. Upper row is calculated with state sampling, lower row with worm sampling.
\(\beta\!\in\!\{100,133\}\) and \(p(U)\!\propto\!U^{3}\) for \(\beta=200\). The \(\mathbf{\Gamma}_{c}^{\infty}\)-lines are approximated by connecting the points in \(T\), \(U\) with the same value of \(N_{\lambda<0}\). The results for these lines are shown in the right panel of Fig. 12.
Let us stress that, although the precise position of the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines in \(T\), \(U\) differs between the different approximation schemes, the overall behavior of \(N_{\lambda<0}(U,T)\) and its asymptotics for \(T\!\to\!0\) do not depend on them. Finally, we note that, to generate \(N_{\lambda<0}\) as a function of \(U\), \(T\) over the parameter space, which is shown as color-scale plots in Sec. III, an additional linear interpolation between different temperatures \(T\) has been used.
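For concreteness, a small Python sketch of such a fixed-\(T\) polynomial fit, and of reading off the crossing of a given divergence line from it, could look as follows; the data points, the polynomial degree and the chosen line index are placeholders.

```
import numpy as np

# Placeholder (U, N_{lambda<0}) points at one fixed temperature.
U = np.array([2.6, 2.8, 3.0, 3.2, 3.4])
N = np.array([4.0, 9.0, 15.0, 22.0, 30.0])

degree = 2                                 # increased towards lower T, cf. the text
p = np.poly1d(np.polyfit(U, N, degree))    # least-squares polynomial fit

# U value where the fit crosses the n-th line, i.e. where p(U) = n.
n = 10
roots = np.atleast_1d((p - n).roots)
real_roots = roots[np.abs(np.imag(roots)) < 1e-9].real
in_range = real_roots[(real_roots > U.min()) & (real_roots < U.max())]
print("estimated crossing of the n = 10 line at U ≈", in_range)
```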
## Appendix C Effective Hubbard atom like description of the insulating Hubbard model
To approximately describe the behavior of the divergence lines of the Mott insulating phase of the Hubbard model with Eq. (7), and to account for the correct limit of \(U_{eff}\) for \(U\!\to\!\infty\), we can construct a simple function
\[\eta(U)=\frac{U_{c1}^{T=0}}{U}(\eta_{0}-1)+1\, \tag{10}\]
where \(\eta_{0}\!=\!1.19\pm 0.03\!\approx\!U_{c1}^{T=0}/2\) is the mean shift between \(\mathbf{\Gamma}_{c}^{\infty}\)-lines of the HM and the HA within the coexistence region. In Eq. (10) we interpolate between \(\eta_{0}\) at the beginning of the Mott insulating phase and \(\eta\!\to\!1\) at \(U\!\to\!\infty\) to account for the asymptotic behavior of the \(\mathbf{\Gamma}_{c}^{\infty}\)-lines. Hence, for \(U_{eff}\) in \(\mathcal{H}_{PI}\) of Eq. (7) we get
\[U_{eff}=\frac{U}{\eta(U)}=\frac{U^{2}}{U+U_{c1}^{T=0}(\eta_{0}-1)}\stackrel{{2\eta_{0}\sim U_{c1}^{T=0}}}{{=}}\frac{U^{2}}{U+U_{c1}^{T=0}\left(\frac{1}{2}U_{c1}^{T=0}-1\right)} \tag{11}\]
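A direct Python transcription of Eqs. (10) and (11) is given below for illustration; the numerical value of \(U_{c1}^{T=0}\) is a placeholder fixed through the approximate relation \(\eta_{0}\approx U_{c1}^{T=0}/2\) quoted above.

```
ETA_0 = 1.19                 # mean shift between HM and HA divergence lines
U_C1_T0 = 2.0 * ETA_0        # placeholder, using eta_0 ~ U_c1(T=0)/2

def eta(U):
    # Eq. (10): interpolates from eta_0 at U = U_c1(T=0) to 1 for U -> infinity
    return (U_C1_T0 / U) * (ETA_0 - 1.0) + 1.0

def u_eff(U):
    # Eq. (11): effective interaction of the Hubbard-atom-like description
    return U / eta(U)

for U in (2.4, 3.1, 3.5, 10.0, 100.0):
    print(f"U = {U:6.1f}   eta = {eta(U):.4f}   U_eff = {u_eff(U):.3f}")
```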
The results for the temperature behavior of \(N_{\lambda<0}\) of the PI solution in Sec. III.4 suggest introducing another way to evaluate \(\eta_{0}\), namely by comparing the slopes \(\alpha_{i}\) of \(N_{\lambda<0}(\beta)\) in Fig. 7 of Sec. III.2 between the HA and the HM at the same \(U\). We can then estimate the parameter \(\eta_{0}\) by \(\eta_{0}^{\text{slope}}\!=\!\alpha_{HA}/\alpha_{HM}\). The resulting values are listed in Tab. 1.
In Tab. 2 we compare \(n_{HA}\!=\!N_{\lambda<0}\), calculated from \(U_{eff}(\eta_{0})\) for two different values of \(\eta_{0}\), with the exact \(n_{HM}\!=\!N_{\lambda<0}\) of the HM for two \(U\) values at \(\beta\!=\!40\): \(\eta_{0}^{\text{slope}}\!=\!1.22\), obtained at \(U_{c1}(T)\) (the first \(U\) value where a Mott insulating phase is possible), and \(\eta_{0}\!=\!1.19\), from our considerations in Sec. III.2. We see that \(n_{HA}\) for \(\eta_{0}^{slope}\) provides a particularly good approximation for the Mott insulating phase of the Hubbard model.
## Appendix D Calculation of intermediate lines
The dashed dotted lines in the left panel of Fig. 6 are intermediate lines interpolating between the \(U_{c1}(T)\), \(U_{c}(T)\), and \(U_{c2}(T)\) lines according to
\[U_{1}^{int}(T) =\frac{1}{2}\left[U_{c1}(T)+U_{c}(T)\right] \tag{12}\] \[U_{2}^{int}(T) =\frac{1}{6}\left[U_{c1}(T)+5\,U_{c}(T)\right]\] (13) \[U_{3}^{int}(T) =\frac{1}{4}\left[3\,U_{c}(T)+U_{c2}(T)\right]\] (14) \[U_{4}^{int}(T) =\frac{1}{2}\left[U_{c}(T)+U_{c2}(T)\right]\] (15) \[U_{5}^{int}(T) =\frac{1}{4}\left[U_{c}(T)+3\,U_{c2}(T)\right]\!. \tag{16}\]
We used the polynomial fits (Fig. 11) to evaluate the corresponding \(N_{\lambda<0}\) along those lines at inverse temperatures \(\beta\!=\!\{40,44,50,60,75,100,133,200\}\) and applied the same fitting routine as along \(U_{c1}(T)\), \(U_{c}(T)\), and \(U_{c2}(T)\) to generate the corresponding dashed dotted lines in the right panel.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(\beta=40\) & \(n_{HM}\) & \(n_{HA}\) at \(U_{eff}(\eta_{0})\) & \(n_{HA}\) at \(U_{eff}(\eta_{0}^{\text{slope}})\) \\ \hline \(U=3.1\) & 31 & 30 & 30 \\ \hline \(U=3.5\) & 34 & 36 & 34 \\ \hline \end{tabular}
\end{table}
Table 2: Test calculation results for the number of crossed vertex divergence lines and corresponding values according to the Hubbard atom with effective interaction \(U_{eff}\) for \(\eta_{0}\!=\!1.19\) and \(\eta_{0}^{\text{slope}}\!=\!1.22\).
Figure 11: Polynomial fits of \(N_{\lambda<0}(U)\) for several temperatures. Black markers indicate \(U_{c1}(T)\) (triangles), \(U_{c}(T)\) from [43] (circles), and \(U_{c2}(T)\) (squares) for the temperatures of the fits. The inset shows a zoom for low \(\beta\) and \(U\). |
2306.15167 | An Efficient Global Algorithm for One-Bit Maximum-Likelihood MIMO
Detection | There has been growing interest in implementing massive MIMO systems by
one-bit analog-to-digital converters (ADCs), which have the benefit of reducing
the power consumption and hardware complexity. One-bit MIMO detection arises in
such a scenario. It aims to detect the multiuser signals from the one-bit
quantized received signals in an uplink channel. In this paper, we consider
one-bit maximum-likelihood (ML) MIMO detection in massive MIMO systems, which
amounts to solving a large-scale nonlinear integer programming problem. We
propose an efficient global algorithm for solving the one-bit ML MIMO detection
problem. We first reformulate the problem as a mixed integer linear programming
(MILP) problem that has a massive number of linear constraints. The massive
number of linear constraints raises computational challenges. To solve the MILP
problem efficiently, we custom build a light-weight branch-and-bound tree
search algorithm, where the linear constraints are incrementally added during
the tree search procedure and only small-size linear programming subproblems
need to be solved at each iteration. We provide simulation results to
demonstrate the efficiency of the proposed method. | Cheng-Yang Yu, Mingjie Shao, Wei-Kun Chen, Ya-Feng Liu, Wing-Kin Ma | 2023-06-27T02:50:59Z | http://arxiv.org/abs/2306.15167v2 | # An Efficient Global Algorithm for One-Bit Maximum-Likelihood MIMO Detection
###### Abstract
There has been growing interest in implementing massive MIMO systems by one-bit analog-to-digital converters (ADCs), which have the benefit of reducing the power consumption and hardware complexity. One-bit MIMO detection arises in such a scenario. It aims to detect the multiuser signals from the one-bit quantized received signals in an uplink channel. In this paper, we consider one-bit maximum-likelihood (ML) MIMO detection in massive MIMO systems, which amounts to solving a large-scale nonlinear integer programming problem. We propose an efficient _global_ algorithm for solving the one-bit ML MIMO detection problem. We first reformulate the problem as a mixed integer linear programming (MILP) problem that has a massive number of linear constraints. The massive number of linear constraints raises computational challenges. To solve the MILP problem efficiently, we custom build a light-weight branch-and-bound tree search algorithm, where the linear constraints are incrementally added during the tree search procedure and only small-size linear programming subproblems need to be solved at each iteration. We provide simulation results to demonstrate the efficiency of the proposed method.
One-bit MIMO detection, maximum-likelihood, mixed integer linear programming
## I Introduction
When the massive multiple-input multiple-output (MIMO) system is realized by employing dedicated radio-frequency (RF) chains, power consumption and hardware complexity can be prohibitively high. This has become an impediment in practical implementations for 5G systems and beyond. To resolve the above issue, low-resolution, particularly, one-bit analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) can be employed to cut down the power consumption and hardware complexity, because the power consumption of ADCs and DACs increases exponentially with the resolution [1]. Unfortunately, the use of one-bit ADCs/DACs leads to severe quantization distortion on the signals, and this calls for customized quantized signal processing methods.
In this paper, we study the uplink multiuser signal detection problem in a massive MIMO system with one-bit ADCs. Researchers have proposed different detection methods, including linear receivers [2, 3, 4], maximum-likelihood (ML) detection [5, 6, 7] and maximum a posteriori (MAP) detection [8, 9, 10, 11]. Among the existing methods, maximum-likelihood (ML) detection is an important formulation that tries to address the quantization effect [5, 6, 7, 12, 13, 14, 15, 16, 17, 18]. However, the ML formulation involves a large-scale nonlinear integer programming problem, and solving it by brute-force exhaustive search can be computationally too demanding. Researchers have proposed a variety of approximate algorithms to strike a balance between detection performance and computational complexity, by means of relaxation and optimization [5, 6, 7, 12], deep learning [7, 13, 14, 15], statistical inference [15, 16, 17] and coding theory [18]. Unfortunately, there is _no_ efficient global algorithm for one-bit ML MIMO detection in the literature.
In this paper, we propose an efficient global algorithm for one-bit ML MIMO detection. We first transform the one-bit ML MIMO detection problem into a mixed integer linear programming (MILP) problem. The crux is that the MILP problem has exponentially many inequality constraints (with respect to the number of users), which results in high computational complexity if it is directly solved by an MILP solver like CPLEX [19]. We propose an incremental optimization strategy to alleviate this high computational burden. It starts with a relaxed MILP problem containing only a selected small subset of the inequality constraints. Then, we iteratively and incrementally add inequality constraints to the relaxed MILP problem. In order to develop a light-weight global algorithm, we solve each relaxed MILP problem inexactly, which is achieved by embedding the incremental optimization strategy into one branch-and-bound procedure. In this way, the algorithm only needs to solve linear programming (LP) subproblems with significantly smaller problem sizes (compared with the MILP reformulation of the original problem), and is computationally efficient as demonstrated by simulations. The proposed global algorithm offers an important benchmark for performance evaluation of existing approximate algorithms for solving the same ML problem, showing how well they perform compared to the global ML solutions.
## II Signal Model
Consider a multiuser multiple-input single-output (MISO) uplink transmission, where \(\tilde{K}\) single-antenna users concurrently send their signals to a base station (BS) having \(\tilde{N}\) antennas. At the BS, the received signal can be modeled by
\[\tilde{\mathbf{y}}=\ \tilde{\mathbf{H}}\tilde{\mathbf{x}}+\tilde{\mathbf{v}},\ \ \tilde{\mathbf{r}}=\ \mathcal{Q}(\tilde{\mathbf{y}}), \tag{1}\]
where \(\tilde{\mathbf{x}}\in\mathbb{C}^{\tilde{K}}\) is the multiuser transmit signal vector, drawn from the Quadrature Phase Shift Keying (QPSK) constellation \(\{\pm 1\pm j\}\); \(\tilde{\mathbf{H}}\in\mathbb{C}^{\tilde{N}\times\tilde{K}}\) is the multiuser channel matrix; \(\tilde{\mathbf{v}}\in\mathbb{C}^{\tilde{N}}\) is additive complex Gaussian noise with mean \(\mathbf{0}\) and covariance matrix \(\tilde{\sigma}^{2}\mathbf{I}\); \(\mathcal{Q}(x):=\text{sgn}(\Re(x))+\text{j}\cdot\text{sgn}(\Im(x))\) is the one-bit quantizer for both the real and imaginary parts of \(x\), and
\[\text{sgn}(x)=\begin{cases}1,&\text{if }x\geq 0;\\ -1,&\text{otherwise},\end{cases}\]
is the one-bit quantization function that operates on each element of its argument; \(\tilde{\mathbf{r}}\in\mathbb{C}^{\tilde{N}}\) is the one-bit received signal. The one-bit MIMO detection problem is to detect \(\tilde{\mathbf{x}}\) from the one-bit received signal \(\tilde{\mathbf{r}}\), with given \(\tilde{\mathbf{H}}\).
For convenience of presentation, we consider the following equivalent real-valued model. Define
\[\mathbf{y}=\begin{bmatrix}\Re(\tilde{\mathbf{y}})\\ \Im(\tilde{\mathbf{y}})\end{bmatrix}\in\mathbb{R}^{N},\mathbf{H}=\begin{bmatrix}\Re( \tilde{\mathbf{H}})&-\Im(\tilde{\mathbf{H}})\\ \Im(\tilde{\mathbf{H}})&\Re(\tilde{\mathbf{H}})\end{bmatrix}\in\mathbb{R}^{N\times K},\]
\[\mathbf{x}=\begin{bmatrix}\Re(\tilde{\mathbf{x}})\\ \Im(\tilde{\mathbf{x}})\end{bmatrix}\in\mathbb{R}^{K},\mathbf{r}=\begin{bmatrix}\Re( \tilde{\mathbf{r}})\\ \Im(\tilde{\mathbf{r}})\end{bmatrix}\in\mathbb{R}^{N},\mathbf{v}=\begin{bmatrix}\Re( \tilde{\mathbf{v}})\\ \Im(\tilde{\mathbf{v}})\end{bmatrix}\in\mathbb{R}^{N},\]
where \(N=2\tilde{N}\), \(K=2\tilde{K}\), and \(\mathbf{v}\) follows a Gaussian distribution with mean \(\mathbf{0}\) and covariance \(\sigma^{2}\mathbf{I}\). We convert (1) to a real-valued form
\[\mathbf{y}=\mathbf{H}\mathbf{x}+\mathbf{v},\ \mathbf{r}=\text{sgn}(\mathbf{y}), \tag{2}\]
where \(\mathbf{x}\in\{-1,1\}^{K}\).
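As a minimal numerical illustration of the complex model (1) and its real-valued counterpart (2), the following Python sketch generates a random channel, QPSK symbols and one-bit observations, and stacks them into the real-valued form; the dimensions and the noise level are arbitrary placeholders.

```
import numpy as np

rng = np.random.default_rng(1)
K_t, N_t, sigma_t = 4, 16, 0.5   # users, BS antennas, noise std (placeholders)

# Complex model (1): QPSK symbols, i.i.d. Gaussian channel, one-bit quantization.
x_c = rng.choice([1.0, -1.0], size=K_t) + 1j * rng.choice([1.0, -1.0], size=K_t)
H_c = (rng.normal(size=(N_t, K_t)) + 1j * rng.normal(size=(N_t, K_t))) / np.sqrt(2)
v_c = sigma_t * (rng.normal(size=N_t) + 1j * rng.normal(size=N_t)) / np.sqrt(2)
y_c = H_c @ x_c + v_c
r_c = np.sign(y_c.real) + 1j * np.sign(y_c.imag)

# Real-valued model (2): stack real and imaginary parts.
H = np.block([[H_c.real, -H_c.imag], [H_c.imag, H_c.real]])
x = np.concatenate([x_c.real, x_c.imag])
r = np.concatenate([r_c.real, r_c.imag])
print(H.shape, x.shape, r.shape)   # (2*N_t, 2*K_t), (2*K_t,), (2*N_t,)
```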
We consider maximum-likelihood (ML) detection [6], which can be expressed as
\[\min_{\mathbf{x}\in\{-1,1\}^{K}}f(\mathbf{x}):=-\sum_{i=1}^{N}\log\Phi\left(\frac{r_{ i}\mathbf{h}_{i}^{\top}\mathbf{x}}{\sigma}\right), \tag{3}\]
where \(f(\mathbf{x})\) is the negative log-likelihood function and \(\Phi(z)=\int_{-\infty}^{z}\frac{1}{\sqrt{2\pi}}e^{-t^{2}/2}\ dt\) is the cumulative distribution function of the standard Gaussian distribution. Problem (3) is a nonlinear integer programming problem. To globally solve (3), exhaustive search can be applied, which examines all feasible solutions with a complexity order of \(\mathcal{O}(2^{K})\). To the best of our knowledge, off-the-shelf efficient mixed integer programming solvers such as CPLEX cannot handle \(\Phi(\cdot)\), which is an integral. In the literature, there are many approximate algorithms for solving problem (3) that seek to strike a balance between detection performance and computational complexity [5, 6, 7, 12, 13, 14, 15, 16, 17]. In this paper, we aim to propose an efficient algorithm for globally solving problem (3), which can serve as a benchmark to evaluate the performance of the existing approximate algorithms.
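For illustration, the negative log-likelihood \(f(\mathbf{x})\) can be evaluated numerically as in the following Python sketch, which uses a log-CDF routine for numerical stability; the test data are placeholders.

```
import numpy as np
from scipy.stats import norm

def neg_log_likelihood(x, H, r, sigma):
    # f(x) = -sum_i log Phi(r_i h_i^T x / sigma); logcdf avoids underflow when
    # the argument is very negative
    z = r * (H @ x) / sigma
    return -np.sum(norm.logcdf(z))

# Placeholder usage example.
rng = np.random.default_rng(0)
K, N, sigma = 4, 8, 1.0
H = rng.normal(size=(N, K))
x_true = rng.choice([-1.0, 1.0], size=K)
r = np.sign(H @ x_true + sigma * rng.normal(size=N))
print(neg_log_likelihood(x_true, H, r, sigma))
```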
## III An Efficient Global Algorithm
In this section, we propose an efficient _global_ algorithm for solving problem (3).
### _An MILP Reformulation_
We first equivalently reformulate problem (3) as
\[\min_{\mathbf{x},\mathbf{w}} \sum_{i=1}^{N}w_{i}\] (4) s.t. \[w_{i}\geq g_{i}(\mathbf{x}),\ i=1,2,\ldots,N,\] \[\mathbf{x}\in\{-1,1\}^{K},\]
where \(\mathbf{w}=(w_{1},w_{2},\ldots,w_{N})\) and
\[g_{i}(\mathbf{x}):=-\log\Phi\left(\frac{r_{i}\mathbf{h}_{i}^{\top}\mathbf{x}}{\sigma} \right).\]
Note that \(g_{i}\) is a convex function with respect to \(\mathbf{x}\). The following linear inequality
\[g_{i}(\mathbf{x})\geq g_{i}(\hat{\mathbf{x}})+\langle\nabla g_{i}(\hat{\mathbf{x}}),\mathbf{x} -\hat{\mathbf{x}}\rangle,\ \forall\hat{\mathbf{x}}\in\{-1,1\}^{K} \tag{5}\]
is valid, where
\[\nabla g_{i}(\mathbf{x})=-\frac{\phi(r_{i}\mathbf{h}_{i}^{\top}\mathbf{x}/\sigma)}{\Phi(r _{i}\mathbf{h}_{i}^{\top}\mathbf{x}/\sigma)}\frac{r_{i}\mathbf{h}_{i}}{\sigma}\]
is the gradient of \(g_{i}\) at \(\mathbf{x}\), and \(\phi(t)=\frac{1}{\sqrt{2\pi}}e^{-t^{2}}\) is the probability distribution function of the standard Gaussian distribution. Then, with (5), we reformulate problem (4) as
\[(\mathbf{x}^{\star},\mathbf{w}^{\star})=\arg\min_{\mathbf{x},\mathbf{w}} \sum_{i=1}^{N}w_{i}\] s.t. \[w_{i}\geq g_{i}(\hat{\mathbf{x}})+\langle\nabla g_{i}(\hat{\mathbf{x}}), \mathbf{x}-\hat{\mathbf{x}}\rangle, \tag{6a}\] \[i=1,2,\ldots,N,\ \forall\hat{\mathbf{x}}\in\{-1,1\}^{K},\] \[\mathbf{x}\in\{-1,1\}^{K}. \tag{6b}\]
**Fact 1**: _Problems_ (3) _and_ (6) _are equivalent, in the sense that they have the same optimal solution for \(\mathbf{x}\)._
Fact 1 can be obtained by noting that inequality (5) is tight when \(\hat{\mathbf{x}}=\mathbf{x}\), which establishes the equivalence between problems (4) and (6). This, together with the equivalence between (3) and (4), leads to the desired result.
The upshot of problem (6) is that the inequalities (6a) are _linear_ in both \(\mathbf{x}\) and \(\mathbf{w}\). As a result, problem (6) is an MILP problem. In principle, problem (6) can be solved by off-the-shelf MILP solvers such as CPLEX [19]. However, the number of inequality constraints in (6a) is \(N\cdot 2^{K}\), where both \(N\) and \(K\) can be large in massive MIMO systems, which can lead to prohibitively high computational complexity.
### _An Incremental Algorithmic Framework_
To tackle the computational issue, we propose to solve problem (6) through an incremental optimization strategy. We define
\[\mathcal{C}=\{(i,\hat{\mathbf{x}})\ |\ i=1,2,\ldots,N,\ \hat{\mathbf{x}}\in\{-1,1\}^{K}\}\]
and select \(\mathcal{S}\subseteq\mathcal{C}\) as a subset of \(\mathcal{C}\). We consider the following relaxation of problem (6):
\[(\bar{\mathbf{x}},\bar{\mathbf{w}})\in \arg\min_{\mathbf{x},\mathbf{w}}\sum_{i=1}^{N}w_{i}\] (7) s.t. \[w_{i}\geq g_{i}(\hat{\mathbf{x}})+\langle\nabla g_{i}(\hat{\mathbf{x}}),\mathbf{x}-\hat{\mathbf{x}}\rangle,\ (i,\hat{\mathbf{x}})\in\mathcal{S},\] \[\mathbf{x}\in\{-1,1\}^{K}.\]
We have the following result.
**Fact 2**: _Consider problems (6) and (7). The following hold._
1. \(\sum_{i=1}^{N}\bar{w}_{i}\leq\sum_{i=1}^{N}w_{i}^{\star}\)_._
2. _if_ \(\bar{w}_{i}\geq g_{i}(\bar{\mathbf{x}})\) _holds for all_ \(i\)_, then_ \((\bar{\mathbf{x}},\bar{\mathbf{w}})\) _is also optimal to problem (_6_)._
Proof:: Since problem (7) is a relaxed version of problem (6), it holds that \(\sum_{i=1}^{N}\bar{w}_{i}\leq\sum_{i=1}^{N}w_{i}^{\star}\). This proves a).
If \(\bar{w}_{i}\geq g_{i}(\bar{\mathbf{x}})\) for all \(i\), then \((\bar{\mathbf{x}},\bar{\mathbf{w}})\) is a feasible solution to problem (4). Thus, \(\sum_{i=1}^{N}\bar{w}_{i}\geq\sum_{i=1}^{N}w_{i}^{\star}\). This, together with a), implies \(\sum_{i=1}^{N}\bar{w}_{i}=\sum_{i=1}^{N}w_{i}^{\star}\). Thus, \((\bar{\mathbf{x}},\bar{\mathbf{w}})\) is an optimal solution to problem (4), and also problem (6).
Fact 2 offers a hint for the algorithmic design. Specifically, we start by solving problem (7) with some \(\mathcal{S}\subseteq\mathcal{C}\). If \(\bar{w}_{i}\geq g_{i}(\bar{\mathbf{x}})\) holds for all \(i\), then \((\bar{\mathbf{x}},\bar{\mathbf{w}})\) is already optimal to problem (6). Otherwise, if \(\bar{w}_{i}<g_{i}(\bar{\mathbf{x}})\) for some \(i\), then the constraint
\[w_{i}\geq g_{i}(\bar{\mathbf{x}})+\langle\nabla g_{i}(\bar{\mathbf{x}}),\mathbf{x}-\bar{ \mathbf{x}}\rangle \tag{8}\]
is added into problem (7), i.e., adding \((i,\bar{\mathbf{x}})\) into \(\mathcal{S}\). Then, we solve problem (7) again with the new \(\mathcal{S}\). This process is repeated until \(\bar{w}_{i}\geq g_{i}(\bar{\mathbf{x}})\) holds for all \(i\). This incremental optimization framework is described in Algorithm 1.
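To make the framework concrete, the following Python sketch implements the incremental (cutting-plane) loop just described; for simplicity it solves the relaxed problem (7) by brute force over \(\{-1,1\}^{K}\), which is only viable for very small \(K\), whereas the paper solves it as an MILP. All helper names and tolerances are our own.

```
import itertools
import numpy as np
from scipy.stats import norm

def incremental_ml(H, r, sigma, max_iter=50):
    N, K = H.shape
    grid = np.array(list(itertools.product([-1.0, 1.0], repeat=K)))

    def g(x):      # g_i(x) for all i
        return -norm.logcdf(r * (H @ x) / sigma)

    def grad(x):   # rows are the gradients of g_i at x
        z = r * (H @ x) / sigma
        return -(np.exp(norm.logpdf(z) - norm.logcdf(z)) * r / sigma)[:, None] * H

    # each cut is (i, slope, intercept) and encodes w_i >= intercept + slope @ x
    def make_cuts(x_hat, idx):
        gv, gr = g(x_hat), grad(x_hat)
        return [(i, gr[i], gv[i] - gr[i] @ x_hat) for i in idx]

    cuts = make_cuts(np.ones(K), range(N))        # initial subset S
    best_x = np.ones(K)
    for _ in range(max_iter):
        # solve the relaxed problem (7) by enumeration: for each candidate x,
        # the optimal w_i equals the largest active cut for that i
        best_val = np.inf
        for x in grid:
            w = np.full(N, -np.inf)
            for i, slope, icpt in cuts:
                w[i] = max(w[i], icpt + slope @ x)
            if w.sum() < best_val:
                best_val, best_x, best_w = w.sum(), x, w
        viol = g(best_x) - best_w                 # violation of w_i >= g_i(x)
        if np.all(viol <= 1e-9):
            return best_x                         # optimal for problem (6), cf. Fact 2
        cuts += make_cuts(best_x, np.where(viol > 1e-9)[0])
    return best_x
```

When the loop terminates through the early return, the output coincides with the exhaustive-search ML solution; if the iteration budget is exhausted first, more cuts (iterations) are needed.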
### _An Efficient Branch-and-Bound Algorithm_
Algorithm 1 requires (possibly) solving multiple MILP problems in the form of (7) (e.g., by branch-and-bound algorithms [20]), and solving each MILP problem can be time-consuming. Therefore, the complexity of Algorithm 1 can still be high (especially when the number of iterations is large). To further reduce the computational complexity of Algorithm 1, we propose to solve each MILP problem (7) inexactly, which is done by embedding the incremental optimization iterations (cf. lines 3-8 in Algorithm 1) into one branch-and-bound algorithm. Branch-and-bound algorithms are tree search methods that recursively partition the feasible region (i.e., a rooted tree) into small subregions (i.e., branches). In particular, our proposed branch-and-bound algorithm solves an LP relaxation in the form of (10) at each iteration and gradually tightens the relaxation by adding appropriate \((i,\hat{\mathbf{x}})\) into the set \(\mathcal{S}\) and fixing more elements of \(\mathbf{x}\) to \(-1\) or \(1\). The resulting algorithm is still a _global_ algorithm for problem (6). Note that the proposed algorithm only needs to solve an LP problem at each iteration, which is in sharp contrast to solving the MILP problem (7) in Algorithm 1. Below, we present the proposed algorithm in more detail.
#### Iv-C1 Subproblems and Their LP Relaxations
Denote \(\mathcal{F}_{+}\) and \(\mathcal{F}_{-}\) as some subsets of \(\{1,2,\ldots,K\}\) such that \(x_{j}=1\) for \(j\in\mathcal{F}_{+}\) and \(x_{j}=-1\) for \(j\in\mathcal{F}_{-}\), and \(\mathcal{F}_{+}\cap\mathcal{F}_{-}=\varnothing\). The subproblem to explore at the branch defined by \(\mathcal{F}_{+}\) and \(\mathcal{F}_{-}\) is given by
\[\min_{\mathbf{x},\mathbf{w}} \sum_{i=1}^{N}w_{i}\] s.t. \[w_{i}\geq g_{i}(\hat{\mathbf{x}})+\langle\nabla g_{i}(\hat{\mathbf{x}} ),\mathbf{x}-\hat{\mathbf{x}}\rangle,\ (i,\hat{\mathbf{x}})\in\mathcal{C}, \tag{9a}\] \[x_{j}=1,\ j\in\mathcal{F}_{+},\ x_{j}=-1,\ j\in\mathcal{F}_{-},\] (9b) \[\mathbf{x}\in\{-1,1\}^{K}. \tag{9c}\]
Also, consider the following LP relaxation of problem (9):
\[\min_{\mathbf{x},\mathbf{w}} \sum_{i=1}^{N}w_{i}\] s.t. \[w_{i}\geq g_{i}(\hat{\mathbf{x}})+\langle\nabla g_{i}(\hat{\mathbf{x}} ),\mathbf{x}-\hat{\mathbf{x}}\rangle,\ (i,\hat{\mathbf{x}})\in\mathcal{S}, \tag{10a}\] \[x_{j}=1,\ j\in\mathcal{F}_{+},\ x_{j}=-1,\ j\in\mathcal{F}_{-},\] (10b) \[\mathbf{x}\in[-1,1]^{K}, \tag{10c}\]
where \(\mathcal{S}\subseteq\mathcal{C}\). Problem (10) is a relaxation of problem (9) obtained by replacing \(\mathcal{C}\) with \(\mathcal{S}\) and by relaxing the binary variables \(x_{j}\) with \(j\notin\mathcal{F}_{+}\cup\mathcal{F}_{-}\) to \([-1,1]\). Therefore, solving the LP problem (10) provides a lower bound for the MILP problem (9).
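As an illustration of how the LP relaxation (10) can be set up with an off-the-shelf LP solver, a Python sketch based on SciPy's `linprog` is given below. The variable stacking \([\mathbf{x};\mathbf{w}]\), the function name and the assumption that every index \(i\) has at least one cut in \(\mathcal{S}\) (otherwise the LP is unbounded) are our own choices, not prescriptions of the paper.

```
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

def lp_relaxation(H, r, sigma, cuts, fix_plus=(), fix_minus=()):
    # Solve problem (10).  `cuts` is a list of (i, x_hat) pairs playing the role
    # of S (at least one pair per i is assumed); fix_plus / fix_minus are the
    # index sets F_+ / F_-.  Decision vector: [x (K entries), w (N entries)].
    N, K = H.shape
    c = np.concatenate([np.zeros(K), np.ones(N)])       # minimize sum_i w_i

    A_ub, b_ub = [], []
    for i, xh in cuts:
        z = r[i] * (H[i] @ xh) / sigma
        gi = -norm.logcdf(z)
        gradi = -np.exp(norm.logpdf(z) - norm.logcdf(z)) * r[i] * H[i] / sigma
        # w_i >= gi + gradi^T (x - xh)  <=>  gradi^T x - w_i <= gradi^T xh - gi
        row = np.zeros(K + N)
        row[:K] = gradi
        row[K + i] = -1.0
        A_ub.append(row)
        b_ub.append(gradi @ xh - gi)

    bounds = [(-1.0, 1.0)] * K + [(None, None)] * N     # x in [-1,1], w free
    for j in fix_plus:
        bounds[j] = (1.0, 1.0)
    for j in fix_minus:
        bounds[j] = (-1.0, -1.0)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x[:K], res.x[K:], res.fun                # x_LP, w_LP, lower bound
```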
#### Iv-C2 Proposed Algorithm
Now, we present the main steps of the proposed branch-and-bound algorithm based on the LP relaxation in (10). We use \((\hat{\mathbf{x}},\hat{\mathbf{w}})\) to denote the best known feasible solution that provides the smallest objective value at the current iteration and use \(U\) to denote its objective value (called the _upper bound_ of problem (6)). In addition, we use \((\mathcal{F}_{+},\mathcal{F}_{-},\mathcal{S})\) to denote subproblem (9) where \(\mathcal{S}\subseteq\mathcal{C}\) relates to its current LP relaxation (10), and \(\mathcal{P}\) to denote the problem set of the current unprocessed subproblems. At the beginning, we initialize \(\mathcal{P}\leftarrow\{(\varnothing,\varnothing,\mathcal{S})\}\) for some \(\mathcal{S}\subseteq\mathcal{C}\). At each iteration, we pick a subproblem \((\mathcal{F}_{+},\mathcal{F}_{-},\mathcal{S})\) from \(\mathcal{P}\), and solve problem (10) to obtain its solution \((\mathbf{x}_{\mathsf{LP}},\mathbf{w}_{\mathsf{LP}})\) and objective value \(f_{\mathsf{LP}}=\sum_{i=1}^{N}[\mathbf{w}_{\mathsf{LP}}]_{i}\). Then, we have the following cases:
1. If \(f_{\mathsf{LP}}\geq U\), then problem (9) cannot contain a feasible solution that provides an objective value better than \(U\) (and this subproblem does not need to be explored).
2. If \(f_{\mathsf{LP}}<U\) and \(\mathbf{x}_{\mathsf{LP}}\in\{-1,1\}^{K}\), there are two subcases. (2.1) If \([\mathbf{w}_{\mathsf{LP}}]_{i}\geq g_{i}(\mathbf{x}_{\mathsf{LP}})\) for all \(i=1,2,\ldots,N\), then \((\mathbf{x}_{\mathsf{LP}},\mathbf{w}_{\mathsf{LP}})\) must be an optimal solution to problem (9). We update \((\hat{\mathbf{x}},\hat{\mathbf{w}})\leftarrow(\mathbf{x}_{\mathsf{LP}},\mathbf{w}_{\mathsf{LP}})\) and \(U\leftarrow f_{\mathsf{LP}}\).
(2.2) Otherwise, we apply the incremental optimization strategy by adding \((i,\mathbf{x}_{\text{LP}})\) for all \(i\) with \([\mathbf{w}_{\text{LP}}]_{i}<g_{i}(\mathbf{x}_{\text{LP}})\) into \(\mathcal{S}\) to obtain a tightened problem (10).
3. If \(f_{\text{LP}}<U\) and \(\mathbf{x}_{\text{LP}}\notin\{-1,1\}^{K}\), then we choose an index \(j\) with \(-1<[\mathbf{x}_{\text{LP}}]_{j}<1\) and branch on variable \(x_{j}\) by partitioning problem \((\mathcal{F}_{+},\mathcal{F}_{-},\mathcal{S})\) into two new subproblems \((\mathcal{F}_{+}\cup\{j\},\mathcal{F}_{-},\mathcal{S})\) and \((\mathcal{F}_{+},\mathcal{F}_{-}\cup\{j\},\mathcal{S})\). We add the two subproblems into the problem set \(\mathcal{P}\).
The above process is repeated until \(\mathcal{P}=\varnothing\). The whole procedure is summarized as Algorithm 2.
```
1: input: Initialize \(\mathcal{P}=\{(\varnothing,\varnothing,\mathcal{S})\}\) for some \(\mathcal{S}\subseteq\mathcal{C}\) and \(U\leftarrow+\infty\).
2: while \(\mathcal{P}\neq\varnothing\) do
3:   Choose a subproblem \((\mathcal{F}_{+},\mathcal{F}_{-},\mathcal{S})\in\mathcal{P}\) and set \(\mathcal{P}\leftarrow\mathcal{P}\backslash\{(\mathcal{F}_{+},\mathcal{F}_{-},\mathcal{S})\}\);
4:   loop
5:     Solve the LP problem (10) to obtain its optimal solution \((\mathbf{x}_{\text{LP}},\mathbf{w}_{\text{LP}})\) and objective value \(f_{\text{LP}}=\sum_{i=1}^{N}[\mathbf{w}_{\text{LP}}]_{i}\);
6:     if \(f_{\text{LP}}\geq U\) then
7:       break;  // case (1)
8:     else if \(\mathbf{x}_{\text{LP}}\in\{-1,1\}^{K}\) then
9:       if \([\mathbf{w}_{\text{LP}}]_{i}\geq g_{i}(\mathbf{x}_{\text{LP}})\) for all \(i=1,2,\ldots,N\) then
10:        Update \((\tilde{\mathbf{x}},\tilde{\mathbf{w}})\leftarrow(\mathbf{x}_{\text{LP}},\mathbf{w}_{\text{LP}})\) and \(U\leftarrow f_{\text{LP}}\);
11:        break;  // case (2.1)
12:      else
13:        \(\mathcal{S}\leftarrow\mathcal{S}\cup\{(i,\mathbf{x}_{\text{LP}})\mid[\mathbf{w}_{\text{LP}}]_{i}<g_{i}(\mathbf{x}_{\text{LP}}),\ i=1,2,\ldots,N\}\);  // case (2.2)
14:      end if
15:    else
16:      Choose an index \(j\) such that \(-1<[\mathbf{x}_{\text{LP}}]_{j}<1\);
17:      Add two new subproblems \((\mathcal{F}_{+}\cup\{j\},\mathcal{F}_{-},\mathcal{S})\) and \((\mathcal{F}_{+},\mathcal{F}_{-}\cup\{j\},\mathcal{S})\) into \(\mathcal{P}\);  // case (3)
18:      break;
19:    end if
20:  end loop
21: end while
22: output: \((\mathbf{x}^{*},\mathbf{w}^{*})\leftarrow(\tilde{\mathbf{x}},\tilde{\mathbf{w}})\).
```
**Algorithm 2** A Global Algorithm for Solving Problem (6)
In lines 3 and 16 of Algorithm 2, there exist different strategies to choose a subproblem \((\mathcal{F}_{+},\mathcal{F}_{-},\mathcal{S})\) from the set \(\mathcal{P}\) and to choose a branching variable index \(j\) [20]. It is worthwhile to remark that Algorithm 2 can be embedded into state-of-the-art MILP solvers like CPLEX through the so-called _callback routine_ [19], which allows one to use the (default) fine-tuned subproblem selection and branching strategies of MILP solvers.
## IV Simulation Results
In this section, we provide simulation results to illustrate the efficiency of the proposed global algorithm for solving the one-bit MIMO detection problem. The simulation settings are described as follows. The channel \(\tilde{\mathbf{H}}\) is generated with element-wise i.i.d. circular Gaussian entries with mean zero and unit variance. The symbols \(\tilde{\mathbf{x}}\) are independently and identically drawn from the QPSK constellation \(\{\pm 1\pm\text{j}\}\). The signal-to-noise ratio (SNR) is defined as \(\text{SNR}=\frac{\|\tilde{\mathbf{H}}\tilde{\mathbf{x}}\|_{2}^{2}}{\|\tilde{\mathbf{v}}\|_{2}^{2}}\). In Algorithm 2, we initialize \(\mathcal{S}\) by setting \(\hat{\mathbf{x}}\) as a zero-forcing (ZF) solution, i.e., \(\hat{\mathbf{x}}=\text{sgn}(\mathbf{H}^{\dagger}\mathbf{r})\) with \({}^{\dagger}\) denoting the matrix pseudo-inverse. A number of 1,000 Monte-Carlo trials were run to obtain the bit-error rates of our algorithm and the benchmarked algorithms.
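For reference, the zero-forcing initialization used for \(\mathcal{S}\) amounts to the following short Python sketch (with placeholder dimensions and noise level):

```
import numpy as np

rng = np.random.default_rng(2)
N, K = 16, 8                                    # placeholder dimensions
H = rng.normal(size=(N, K))
x_true = rng.choice([-1.0, 1.0], size=K)
r = np.sign(H @ x_true + 0.3 * rng.normal(size=N))

x_zf = np.sign(np.linalg.pinv(H) @ r)           # ZF solution sgn(H^+ r)
print("ZF detection errors:", int(np.sum(x_zf != x_true)))
```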
We first demonstrate the bit-error rate (BER) performance. We name Algorithm 2 Global One-Bit MIMO Detection (GOBMD). We also show state-of-the-art algorithms that are designed to handle problem (3), including the nML and two-stage nML in [6] and HOTML in [7]. Fig. 1 shows the BER performance under different MIMO sizes. It is seen from Fig. 1(a) that GOBMD achieves the same BER performance as exhaustive search, as both of them globally solve the ML problem (3); in Fig. 1(b), exhaustive search is computationally too demanding to complete the job. GOBMD provides an ML BER benchmark for the other algorithms for solving the same problem.
Fig. 2 shows the runtime comparison between GOBMD and exhaustive search. When the problem size is small, GOBMD and exhaustive search are computationally comparable. However, the computational complexity of exhaustive search grows rapidly with the problem size, while that of GOBMD increases with a much slower rate.
Fig. 3 shows the average ratio \(|\mathcal{S}|/|\mathcal{C}|\) when Algorithm 2 converges under fixed \(N=36\) and SNR =10 dB. It is seen that \(|\mathcal{S}|/|\mathcal{C}|\) is lower than 1%. In other words, Algorithm 2 only needs to solve LP problems that have 99% less inequality
Fig. 1: BER performance under different problem sizes.
constraints than problem (6). Also, we see that the ratio \(|\mathcal{S}|/|\mathcal{C}|\) decreases when \(K\) increases, which indicates that GOBMD has good scalability for massive systems with many users.
Finally, as a fundamental investigation and also future work, we are interested in whether and when the ML formulation (3) can exactly recover the user transmitted signals. Intuitively, when the ratio between the numbers of antennas and users \(N/K\) is large, and when the noise power \(\sigma^{2}\) is small, solving the ML detection problem will recover the user transmitted symbols (with a high probability). The simulation result in Fig. 4 supports this intuition. The colorbar illustrates the BER level: the darker the color, the higher the BER. We see a clear phase transition from the left-bottom region with high BER to the right-top region with low BER. In the future, we will quantitatively analyze the conditions under which the ML solution can exactly identify the user signals. We will also extend the study to account for multi-bit quantization and higher-order modulations.
|
2302.07094 | Andreev reflection in Euler materials | Many previous studies of Andreev reflection have demonstrated that unusual
effects can occur in media which have a nontrivial bulk topology. Following
this line of investigation, we study Andreev reflection in topological Euler
materials by analysing a simple model of a bulk node with a generic winding
number $n\geq 0$. We find that the magnitudes of the resultant reflection
coefficients depend strongly on whether the winding is even or odd. Moreover
this parity dependence is reflected in the differential conductance curves,
which are highly suppressed for $n$ even but not $n$ odd. This gives a possible
route through which the recently discovered Euler topology could be probed
experimentally. | Arthur S. Morris, Adrien Bouhon, Robert-Jan Slager | 2023-02-14T14:47:55Z | http://arxiv.org/abs/2302.07094v1 | # Andreev reflection in Euler materials
###### Abstract
Many previous studies of Andreev reflection have demonstrated that unusual effects can occur in media which have a nontrivial bulk topology. Following this line of investigation, we study Andreev reflection in topological Euler materials by analysing a simple model of a bulk node with a generic winding number \(n\geq 0\). We find that the magnitudes of the resultant reflection coefficients depend strongly on whether the winding is even or odd. Moreover this parity dependence is reflected in the differential conductance curves, which are highly suppressed for \(n\) even but not \(n\) odd. This gives a possible route through which the recently discovered Euler topology could be probed experimentally.
## I Introduction
The study of topological materials is currently a very active field of research. Investigations in this area range from broad theoretical analyses to direct experimental studies of the properties of particular materials. Recent advancements in the characterization of topological band structures using symmetry eigenvalue methods have been significant [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. However, there is growing interest in multi-gap topological phases [11], since they generically cannot be explained within this paradigm.
A prominent example entails Euler topology [12; 13; 14; 15; 7], which depends on the interplay between multiple gaps in the band structure of a \(\mathcal{C}_{2}\mathcal{T}\)- or \(\mathcal{PT}\)-symmetric material. Under these symmetries, band nodes in two-dimensional materials carry non-Abelian 'frame charges' in momentum space [16; 17; 12], akin to \(\pi\) disclination defects in bi-axial nematics [18; 19; 6]; phases with a non-trivial Euler topology can be formed by braiding such degeneracies between successive bands [16; 13; 12; 15]. The ability of such nodes lying in a patch of the Brillouin zone, \(\mathcal{D}\subseteq\mathrm{BZ}\), to annihilate is encoded in the \(\mathbb{Z}\)-valued Euler class invariant \(\chi\), which is the real counterpart of the Chern number of a complex vector bundle [11; 12]. Since a region containing no nodes has \(\chi=0\), a non-zero value of \(\chi\) indicates a topological obstruction to the possibility of the nodes gapping out in this region. Around each node, a non-zero Euler class \(\chi\) manifests itself as a winding \(w=2\chi\) in the two-band subspace carrying the node. The braiding of nodes in reciprocal space therefore provides a means through which nodes carrying higher winding numbers could be realised in real materials. Importantly, such multi-gap phases are increasingly being related to novel physical effects. Examples include monopole-anti-monopole generation [14], which has been observed in trapped-ion experiments [20], and novel anomalous phases [21]; these phenomena are gaining attention in contexts that range from phononic systems and cold-atom simulators to acoustic and photonic metamaterials [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35].
One way in which the unusual properties of topological materials can become manifest is in their Andreev reflection characteristics. Andreev reflection is the process through which an electronic excitation in a normal metal can scatter into a hole and/or Cooper pair when incident on the boundary between the normal metal and a superconductor. In the historically well-understood case of a quadratic band dispersion, an incident electron which scatters into a hole is retro-reflected, and travels away from the boundary along the same direction from which it arrived [36]. In contrast, in graphene an electron incident on a normal-superconductor boundary can in addition undergo specular Andreev reflection. Such unusual scattering characteristics of the single-particle excitations in graphene can ultimately be attributed to the properties of the nodes (Dirac cones) within the band structure of monolayer graphene. The nearest-neighbour tight-binding model of this material famously exhibits degeneracies at two points \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\) in the Brillouin zone; low-energy excitations about these points behave like relativistic particles with an approximately linear dispersion relation and, due to the non-zero winding numbers carried by the nodes, these excitations are chiral [37; 38]. These properties enable a condensed matter realisation of the Klein paradox, which in some cases can lead to perfect Andreev reflection [39; 40]. Studies have also shown that Rashba spin-orbit interactions can play an important role in Andreev reflection in graphene [41; 42].
In some cases it has been found that the Andreev scattering matrix can be directly related to the topological invariants in the bulk of a material. For example, in topological superconductors with chiral symmetry, the quantum number \(Q\in\mathbb{Z}\) of the BDI class is equal to the trace of the Andreev S-matrix \(r_{\mathrm{he}}\)[43; 44; 45]. Moreover, in certain topological superconductors the interaction of Majorana bound states with electronic excitations can have a significant impact on the Andreev reflection characteristics [46; 47; 48; 49; 50]. In such materials the differential conductance curves depend on the number of vortices present, and differ in the cases that this number is even and odd [51; 52; 53]. The investigation of Andreev reflection in Weyl semimetals has also shown that the (pseudo-)spin structure of the Hamiltonian in the vicinity of a band node can strongly influence the directional dependence of Andreev reflection [54; 55; 56].
Motivated by the unusual Andreev reflection properties of topological materials as described above, in this work we explore low-energy Andreev reflection in Euler materials. To do so, we make use of a simple model of a node which carries a generic integer winding number \(n\geq 0\) since, as previously mentioned, this is precisely the property exhibited by Euler materials around a band node. (Note that graphene has band nodes with winding numbers \(w=\pm 1\); our results agree with previous findings in this limit.) We find that, when \(n\) is even, the ability of an electron to scatter into a hole is strongly suppressed at all angles, while for \(n\) odd the probability of Andreev reflection at normal incidence is always exactly unity. Moreover, for \(n\) even the degree of suppression increases with the strength of an externally applied gate potential \(U_{0}\) (though this effect is less pronounced for large \(n\)). The differential conductance curves resulting from this behaviour are distinctive and could be measured experimentally and used as an indicator of the presence of Euler topology.
## II Model
We now describe and make use of a model to investigate the properties of Andreev reflection in topological Euler materials. Although the braiding of band nodes carrying non-Abelian frame charges requires the Bloch Hamiltonian to have three or more bands, we can obtain an effective description of the low energy physics around any given node by projecting the Hamiltonian to the two-band subspace containing the degeneracy. In this space, a non-zero Euler class is manifested as a winding of the Bloch Hamiltonian around the node. More precisely, the Euler class of a generic real two-band Bloch Hamiltonian in two dimensions
\[H(\mathbf{k})=a(\mathbf{k})\mathbb{I}+r(\mathbf{k})\cos(\theta(\mathbf{k})) \sigma_{x}+r(\mathbf{k})\sin(\theta(\mathbf{k}))\sigma_{z}, \tag{1}\]
where \(\mathbf{k}=(k_{x},k_{y})\in\mathrm{B.Z.}\), is given by \(\chi=-w/2\), where the total winding number
\[w=\frac{1}{2\pi}\int_{\cup_{i}\partial\mathcal{D}_{i}}\mathrm{d}\mathbf{k} \cdot\mathbf{\nabla}\theta(\mathbf{k})\in\mathbb{Z} \tag{2}\]
and the region \(\mathcal{D}_{i}\subset\mathrm{B.Z.}\) contains the \(i^{\mathrm{th}}\) node of \(H\). More generally, \(\chi=\int_{\mathcal{D}}\mathrm{Eu}-\oint_{\partial\mathcal{D}}a\), where the Euler form \(\mathrm{Eu}\) and the connection \(a\) may be computed from the Berry-Wilczek-Zee connection [11; 13; 29].
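As a quick numerical illustration of Eq. (2) (our own check, not part of the original analysis), the winding of \(\theta(\mathbf{k})\) around a node can be accumulated from wrapped angle differences along a small closed loop:

```python
# Illustrative numerical evaluation of the winding number in Eq. (2).
import numpy as np

def winding_number(theta, radius=0.1, samples=2000):
    """theta: callable theta(kx, ky); returns w for a loop of given radius about k = 0."""
    phi = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    angles = theta(radius * np.cos(phi), radius * np.sin(phi))
    dtheta = np.diff(np.append(angles, angles[0]))
    dtheta = (dtheta + np.pi) % (2.0 * np.pi) - np.pi   # wrap each step to (-pi, pi]
    return int(round(dtheta.sum() / (2.0 * np.pi)))

# Example: theta(k) = arg((kx + i ky)^n) yields w = n (here n = 3).
print(winding_number(lambda kx, ky: np.angle((kx + 1j * ky) ** 3)))   # -> 3
```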
With regard to the above Hamiltonian, we note previous insightful reports that considered it in the context of stable winding and band degeneracies [57], as well as its use as the simplest ansatz supporting a finite Euler class [13]. The point from a multi-gap perspective [11; 12; 14; 29] is, however, that the nodes formed by a two-band subspace on an isolated patch of the Brillouin zone are stable as long as these bands remain disconnected from the other bands of the many-band context. Indeed, to induce a finite Euler class in a lattice model one necessarily needs a multi-gap structure and must perform a braid between the nodes of adjacent energy gaps, as can be analyzed directly [12], in agreement with general homotopical arguments [11; 15]. We stress that this general multi-gap perspective sheds light on physical features that have recently been discovered [24; 29; 35; 14].
A minimal model of a single node with winding number \(w=n\), which for simplicity we choose to be at the origin \(\mathbf{k}=\mathbf{0}\), is given by setting \(\theta(\mathbf{k})=n\arg(z)=\arg(z^{n})\) and \(r(\mathbf{k})=|z|^{n}=|\mathbf{k}|^{n}\), where \(z=k_{x}+\mathrm{i}k_{y}\). This makes the components of \(H\) homogeneous polynomials in \(k_{x},k_{y}\), since it then follows that
\[r\cos(\theta)= \mathrm{Re}\{(k_{x}+\mathrm{i}k_{y})^{n}\}=:P_{n}^{+}(k_{x},k_{y }), \tag{3a}\] \[r\sin(\theta)= \mathrm{Im}\{(k_{x}+\mathrm{i}k_{y})^{n}\}=:P_{n}^{-}(k_{x},k_{y }). \tag{3b}\]
The first few polynomials are \(P_{1}^{+}=k_{x},P_{2}^{+}=k_{x}^{2}-k_{y}^{2},P_{3}^{+}=k_{x}^{3}-3k_{x}k_{y}^{2}\) and \(P_{1}^{-}=k_{y},P_{2}^{-}=2k_{x}k_{y},P_{3}^{-}=3k_{x}^{2}k_{y}-k_{y}^{3}\). In particular, when \(n=1\) the Hamiltonian is \(H_{1}=k_{x}\sigma_{x}+k_{y}\sigma_{z}\), which is related to the low-energy graphene Hamiltonians \(H_{\pm\mathbf{K}}=k_{x}\sigma_{x}\pm k_{y}\sigma_{y}\) via a unitary transformation.
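A minimal sketch of this construction, assuming only the definitions in Eqs. (1) and (3), builds \(H_{n}(\mathbf{k})\) from \((k_{x}+\mathrm{i}k_{y})^{n}\) and checks that its eigenvalues are \(\pm|\mathbf{k}|^{n}\):

```python
# Illustrative construction of the minimal node Hamiltonian
# H_n(k) = P_n^+(k) sigma_x + P_n^-(k) sigma_z.
import numpy as np

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])

def h_node(kx, ky, n):
    z_n = (kx + 1j * ky) ** n
    return np.real(z_n) * sigma_x + np.imag(z_n) * sigma_z   # P_n^+ and P_n^-

# The two bands disperse as +/- |k|^n and touch only at the node k = 0.
kx, ky, n = 0.3, -0.2, 2
evals = np.linalg.eigvalsh(h_node(kx, ky, n))
assert np.allclose(np.abs(evals), (kx**2 + ky**2) ** (n / 2))
```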
Although the Euler class of the entire Brillouin zone must be quantised to an integer, it is possible to have isolated nodes with arbitrary winding numbers. For this reason, in the following we will allow \(n\) to be any integer rather than restricting it to be even.
We now suppose that we have a material which contains a node with a generic winding number \(n\) within its bulk band structure; from now on, we will refer to this as an Euler material. If, in addition, a position dependent superconducting pairing potential \(\Delta(\mathbf{x})\) is induced in the material, for example via the proximity effect, then the quasiparticle excitations in the system can be described by a Bogoliubov-de Gennes (BdG) Hamiltonian of the form
\[H_{\mathrm{BdG},n}(\mathbf{k})=\begin{pmatrix}H_{n}(\mathbf{k})+U(\mathbf{x})- E_{\mathrm{F}}&\Delta(\mathbf{x})\\ \Delta(\mathbf{x})^{\dagger}&E_{\mathrm{F}}-H_{n}(\mathbf{k})-U(\mathbf{x}) \end{pmatrix}, \tag{4}\]
Figure 1: Low energy excitation spectrum of the Hamiltonian \(H_{n}(\mathbf{k})\) of Eq. 1 for \(n=1\) (left) and \(n=2\) (right). The field in the \(k_{x}\)-\(k_{y}\) plane displays the vector \((\cos\theta(\mathbf{k}),\sin\theta(\mathbf{k}))\), which winds \(n\) times around the origin.
where \(H_{n}(\mathbf{k})=P_{n}^{+}(\mathbf{k})\sigma_{x}+P_{n}^{-}(\mathbf{k})\sigma_{z}\) is the two-band Euler Hamiltonian with winding number \(n\) described above. In Eq. (4) we have also allowed for the possibility of an externally applied electrostatic potential \(U(\mathbf{x})\). When the pairing and electrostatic potentials vary as a function of position, the momentum \(\mathbf{k}\) should be interpreted as a derivative in real space, and the excitations of the system may be determined by solving the PDE
\[H_{\mathrm{BdG},n}(-\mathrm{i}\partial_{\mathbf{x}})\psi(\mathbf{x})=\varepsilon \psi(\mathbf{x}) \tag{5}\]
for positive eigenvalues \(\varepsilon\geq 0\), subject to appropriate boundary conditions.
Suppose now that a normal Euler material fills the semi-infinite plane \(x<0\), while in the region \(x>0\) both the pairing and electrostatic potentials are non-zero and the material is superconducting. In particular, suppose that \(\Delta\) and \(U\) are uniform in each of these regions,
\[\Delta(x,y) =\begin{cases}\Delta_{0}\mathrm{e}^{\mathrm{i}\phi}&x>0\\ 0&x<0\end{cases} \tag{6}\] \[U(x,y) =\begin{cases}-U_{0}&x>0\\ 0&x<0\end{cases}. \tag{7}\]
When an electron propagating in the normal region is incident on the boundary \(x=0\), it may scatter into an electron or a hole in the normal region, or a Cooper pair in the superconducting region, each with a certain probability. To determine the amplitudes for these various processes, it is necessary to solve the real-space BdG equation in the normal and superconducting regions. By finding the eigenstates of this equation in the normal and superconducting regions, and then matching these solutions at the boundary \(x=0\), we determine the reflection and transmission coefficients. The results of this calculation are shown in the Appendix.
In Fig. 2 the probability \(R=|r|^{2}\) for an electron to reflect off the boundary is shown as a function of the incident angle \(\alpha\), the winding number \(n\), and the strength \(U_{0}\) of the applied electrostatic potential. The energy \(\varepsilon\) of the incoming electron is here less than the superconducting gap, \(\varepsilon<\Delta_{0}\), so the sum of the reflection and Andreev reflection coefficients \(R\) and \(R_{\mathrm{A}}\) is exactly equal to \(1\). This is because the excitation energy \(\varepsilon\) is less than the energy \(\Delta_{0}\) required to create a Cooper pair out of the vacuum, so the electron can scatter only into states on the normal side. Above the critical angle
\[\alpha_{\mathrm{c}}=\arcsin\Biggl{(}\left[\frac{|\varepsilon-E_{\mathrm{F}}|} {\varepsilon+E_{\mathrm{F}}}\right]^{1/n}\Biggr{)} \tag{8}\]
there are no hole states available for the electron to scatter into, and the probability of Andreev reflection drops to zero (equivalently, \(R=1\)). We note that, for fixed \(\varepsilon\) and \(E_{\mathrm{F}}\), the angle \(\alpha_{\mathrm{c}}\) increases monotonically with \(n\), so that the range of angles over which Andreev reflection can take place is larger for greater values of \(n\).
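Eq. (8) is straightforward to evaluate; the short sketch below (with the same \(\varepsilon=0.5\), \(E_{\mathrm{F}}=1.5\) used in Fig. 2) illustrates the monotonic growth of \(\alpha_{\mathrm{c}}\) with \(n\):

```python
# Direct evaluation of the critical angle in Eq. (8).
import numpy as np

def critical_angle(eps, e_f, n):
    return np.arcsin((abs(eps - e_f) / (eps + e_f)) ** (1.0 / n))

eps, e_f = 0.5, 1.5
for n in range(1, 7):
    print(n, np.degrees(critical_angle(eps, e_f, n)))   # alpha_c grows with n
```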
Interestingly, the form of the angular dependence of the reflection probability is qualitatively different when the winding number \(n\) is even or odd. For \(n\) odd, the probability for Andreev reflection is always exactly \(1\) at normal incidence (\(\alpha=0\)), independent of the strength of the potential \(U_{0}\). As \(U_{0}\) is varied, the value of \(R\) increases uniformly across \(\alpha\). However, even for large changes in
Figure 3: In addition to the propagating wave solutions that describe electron- and hole- like quasiparticle excitations inside the bulk of the normal and superconducting Euler materials, when the winding number is \(n\) there exist \(2(n-1)\) modes localised at the boundary.
Figure 2: Variation of the reflection and Andreev reflection coefficients \(R=|r|^{2}\) and \(R_{\mathrm{A}}=|r_{\mathrm{A}}|^{2}\) with incident electron angle and applied potential \(U\), with \(\varepsilon=0.5\), \(\Delta_{0}=1\), and \(E_{\mathrm{F}}=1.5\). The energy \(\varepsilon\) of the incoming electron is here less than the superconducting gap, \(\varepsilon<\Delta_{0}\), so the sum of the reflection and Andreev reflection coefficients \(R\) and \(R_{\mathrm{A}}\) is exactly equal to \(1\).
\(U_{0}\), the change effected in \(R\) is small. On the other hand, for \(n\) even the value of \(R\) is not fixed to zero at \(\alpha=0\), i.e. there is a non-zero probability for the electron to reflect as an electron even at normal incidence. Moreover, \(R\) shows a significant variation with the applied potential: as \(U_{0}\) is increased, the value of \(R\) increases and tends towards the value \(R(\alpha)=1\) as \(U_{0}\rightarrow\infty\). The speed at which \(R\) increases depends on the magnitude of \(n\): for larger \(n\), the suppression of \(R_{\mathrm{A}}\) is smaller and it requires a stronger potential to send \(R\to 1\). These unusual features are qualitatively reproduced in the super-gap region \(\varepsilon>\Delta_{0}\), though in this latter case it no longer holds that \(R+R_{\mathrm{A}}=1\).
Another interesting feature of the problem is that, in addition to the propagating wave states that, far into the bulk, represent the usual electron- and hole- excitations, for \(n>1\) there are \(2(n-1)\) evanescent solutions that are localised at the boundary \(x=0\). Though they do not influence the properties of the propagating wave states away from the boundary, these modes nonetheless make a significant contribution to the dynamics of scattering. Moreover, since these boundary modes exist only for \(n>1\), they are a signature of the higher bulk winding number of the Euler material.
The differential conductance of the interface may be calculated using the Blonder-Tinkham-Klapwijk formalism as described in [36]:
\[\frac{1}{g_{0}(V)}\frac{\partial I}{\partial V}=\int_{0}^{\pi/2}\mathrm{d} \alpha\left[1-R(\alpha)+R_{\mathrm{A}}(\alpha)\right]\cos\alpha. \tag{9}\]
The results of this computation for \(n=1,\ldots,6\) are shown for a range of Fermi and excitation energies in Fig. 4. Note that, as it must, the case \(n=1\) agrees with Fig. 4 of [37]. Most notable however is the result for \(n=2\), which shows highly suppressed differential conductance over a wide range of excitation energies; indeed, it is only notably different from zero when the Fermi energy is very large compared to the superconducting gap and the excitation energy is slightly above this same energy \(\Delta_{0}\).
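A sketch of how Eq. (9) can be evaluated numerically is given below; the reflection coefficients \(R(\alpha)\) and \(R_{\mathrm{A}}(\alpha)\) are passed in as placeholder callables, since in practice they come from the boundary-matching solution described in the Appendix.

```python
# Illustrative numerical evaluation of the normalised differential conductance, Eq. (9).
import numpy as np

def differential_conductance(R, R_A, samples=400):
    alpha = np.linspace(0.0, np.pi / 2, samples)
    integrand = (1.0 - R(alpha) + R_A(alpha)) * np.cos(alpha)
    return np.trapz(integrand, alpha)

# Example with perfect Andreev reflection at all angles (R = 0, R_A = 1),
# which gives the maximal value 2.
print(differential_conductance(lambda a: np.zeros_like(a), lambda a: np.ones_like(a)))
```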
It is qualitatively clear from Fig. 4 that the parity dependence displayed in the Andreev reflection in an Euler material (Fig. 2) is also manifest in the physically measurable quantity \(\partial I/\partial V\). For example, on the \(n=3\) graph in Fig. 4 it can be seen that the magnitude of the differential conductance tends towards approximately the same value in the limits \(\varepsilon\to 0\) and \(\varepsilon\gg\Delta_{0}\). In contrast, the differential conductance in the case \(n=6\) tends towards quite different values on either side of the line \(\varepsilon=\Delta_{0}\).
The plots shown in Fig. 4 are for a very large applied gate potential \(U_{0}=10^{8}\). For \(U_{0}\lesssim 10^{4}\) the corresponding plots are qualitatively similar, but the suppression in the case of \(n\) even is less severe.
## III Conclusion
We have demonstrated that the parity of the winding number of a band node has a strong influence on the scattering properties of the quasiparticle excitations at the boundary between a normal and a superconducting region. The suppression of the Andreev reflection coefficients in Euler materials with nodes carrying even winding numbers leads to a clear signature in the differential conductance curves that could in principle be probed experimentally.
## IV Acknowledgements
A. S. M. is funded by an EPSRC PhD studentship (Project reference 2606546). A. B. was funded by a Marie-Curie fellowship, grant no. 101025315. R. J. S acknowledges funding from a New Investigator Award, EPSRC grant EP/W00187X/1, as well as Trinity college, Cambridge. The authors would like to thank X. Feng for helpful discussions on Andreev reflection, and for reading and providing feedback on a draft of the manuscript.
Figure 4: Variation of the differential conductance \(\partial I/\partial V\) with the Fermi and excitation energies \(E_{\mathrm{F}}\) and \(\varepsilon\) with a large external gate potential \(U_{0}=10^{8}\). |
2302.00923 | Multimodal Chain-of-Thought Reasoning in Language Models | Large language models (LLMs) have shown impressive performance on complex
reasoning by leveraging chain-of-thought (CoT) prompting to generate
intermediate reasoning chains as the rationale to infer the answer. However,
existing CoT studies have primarily focused on the language modality. We
propose Multimodal-CoT that incorporates language (text) and vision (images)
modalities into a two-stage framework that separates rationale generation and
answer inference. In this way, answer inference can leverage better generated
rationales that are based on multimodal information. Experimental results on
ScienceQA and A-OKVQA benchmark datasets show the effectiveness of our proposed
approach. With Multimodal-CoT, our model under 1 billion parameters achieves
state-of-the-art performance on the ScienceQA benchmark. Our analysis indicates
that Multimodal-CoT offers the advantages of mitigating hallucination and
enhancing convergence speed. Code is publicly available at
https://github.com/amazon-science/mm-cot. | Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola | 2023-02-02T07:51:19Z | http://arxiv.org/abs/2302.00923v5 | # Multimodal Chain-of-Thought Reasoning in Language Models
###### Abstract
Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer. However, existing CoT studies have focused on the language modality. We propose Multimodal-CoT that incorporates language (text) and vision (images) modalities into a two-stage framework that separates rationale generation and answer inference. In this way, answer inference can leverage better generated rationales that are based on multimodal information. With Multimodal-CoT, our model under 1 billion parameters outperforms the previous state-of-the-art LLM (GPT-3.5) by 16 percentage points (75.17%\(\rightarrow\)91.68% accuracy) and even surpasses human performance on the ScienceQA benchmark. Code is publicly available.1
Footnote 1: Shanghai Jiao Tong University \({}^{2}\)Amazon Web Services. Correspondence to: Zhuosheng Zhang (work done at Amazon Web Services) \(<\)[email protected]\(>\), Aston Zhang \(<\)[email protected]\(>\).
## 1 Introduction
Imagine reading a textbook with no figures or tables. Our ability to acquire knowledge is greatly strengthened by jointly modeling diverse data modalities, such as vision, language, and audio. Recently, large language models (LLMs) (Brown et al., 2020; Thoppilan et al., 2022; Rae et al., 2021; Chowdhery et al., 2022) have shown impressive performance in complex reasoning by generating intermediate reasoning steps before inferring the answer. This intriguing technique is called chain-of-thought (CoT) reasoning (Wei et al., 2022; Kojima et al., 2022; Zhang et al., 2022).
However, existing studies related to CoT reasoning are largely isolated in the language modality (Wang et al., 2022; Zhou et al., 2022; Lu et al., 2022; Fu et al., 2022), with little consideration of multimodal scenarios. To elicit CoT reasoning in multimodality, we advocate a Multimodal-CoT paradigm. Given the inputs in different modalities, Multimodal-CoT decomposes multi-step problems into intermediate reasoning steps (rationale) and then infers the answer. Since vision and language are the most popular modalities, we focus on those two modalities in this work. An example is shown in Figure 1. In general, there are two ways to elicit Multimodal-CoT reasoning as follows: (i) prompting LLMs and (ii) fine-tuning small models.2
Footnote 2: In this work, we refer to small models as models with less than 1 billion parameters (hereinafter dubbed as 1B-models).
The most immediate way to perform Multimodal-CoT is to transform the input of different modalities into one modality and prompt LLMs to perform CoT. For example, it is possible to extract the caption of an image by a captioning model and then concatenate the caption with the original language input to be fed into LLMs (Lu et al., 2022). However, there is severe information loss in the captioning process; thus, using the captions (as opposed to vision features) may suffer from a lack of mutual synergy in the representation space of different modalities.
To facilitate the interaction between modalities, another potential solution is to fine-tune smaller language models (LMs) by fusing multimodal features (Zhang et al., 2023). As this approach allows the flexibility of adjusting model architectures to incorporate multimodal features, we study fine-tuning models in this work instead of prompting LLMs. The key challenge is that language models under 100 billion parameters tend to generate hallucinated rationales that mislead the answer inference (Ho et al., 2022; Magister et al.,
Figure 1: Example of the multimodal CoT task.
2022; Ji et al., 2022).
To mitigate the challenge of hallucination, we propose Multimodal-CoT that incorporates language (text) and vision (images) modalities into a two-stage framework that separates rationale generation and answer inference. In this way, answer inference can leverage better generated rationales that are based on multimodal information. Our experiments are conducted on the ScienceQA benchmark (Lu et al., 2022), which is the latest multimodal reasoning benchmark with annotated reasoning chains. Experimental results show that our method surpasses the previous state-of-the-art GPT-3.5 model by +16% (75.17%\(\rightarrow\)91.68%) on the benchmark. Our contributions are summarized as follows:
(i) To the best of our knowledge, this work is the first to study CoT reasoning in different modalities.
(ii) We propose a two-stage framework by fine-tuning language models to fuse vision and language representations to perform Multimodal-CoT. The model is able to generate informative rationales to facilitate inferring final answers.
(iii) Our method achieves new state-of-the-art performance on the ScienceQA benchmark, outperforming GPT-3.5 in accuracy by 16% and even surpassing human performance.
## 2 Background
This section reviews recent progress of eliciting CoT reasoning by prompting and fine-tuning language models.
### CoT Reasoning with LLMs
Recently, CoT has been widely used to elicit the multi-step reasoning abilities of LLMs (Wei et al., 2022). Concretely, CoT techniques encourage the LLM to generate intermediate reasoning chains for solving a problem. Studies have shown that LLMs can perform CoT reasoning with two major paradigms of techniques: Zero-Shot-CoT (Kojima et al., 2022) and Few-Shot-CoT (Wei et al., 2022; Zhang et al., 2022). For Zero-Shot-CoT, Kojima et al. (2022) showed that LLMs are decent zero-shot reasoners by adding a prompt like "Let's think step by step" after the test question to invoke CoT reasoning. For Few-Shot-CoT, a few step-by-step reasoning demonstrations are used as conditions for inference. Each demonstration has a question and a reasoning chain that leads to the final answer. The demonstrations are commonly obtained by hand-crafting or automatic generation. The corresponding techniques are thus referred to as Manual-CoT (Wei et al., 2022) and Auto-CoT (Zhang et al., 2022).
With effective demonstrations, Few-Shot-CoT often achieves stronger performance than Zero-Shot-CoT and has attracted more research interest. Therefore, most recent studies focused on how to improve Few-Shot-CoT. Those studies are categorized into two major research lines: (i) optimizing the demonstrations; (ii) optimizing the reasoning chains. Table 1 compares typical CoT techniques.
Optimizing DemonstrationsThe performance of Few-Shot-CoT relies on the quality of demonstrations. As reported in Wei et al. (2022), using demonstrations written by different annotators results in dramatic accuracy disparity in a symbolic reasoning task. Beyond hand-crafting the demonstrations, recent studies have investigated ways to optimize the demonstration selection process. Notably, Rubin et al. (2022) retrieved the semantically similar demonstrations with the test instance. However, this approach shows a degraded performance when there are mistakes in the reasoning chains (Zhang et al., 2022). To address the limitation, Zhang et al. (2022) found that the key is the diversity of demonstration questions and proposed Auto-CoT: (i) partition questions of a given dataset into a few clusters; (ii) sample a representative question from each cluster and generate its reasoning chain using Zero-Shot-CoT with simple heuristics. In addition, reinforcement learning (RL) and
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Models** & **Multimodal** & **w/o LLM** & **Model / Engine** & **Training** & **CoT Role** & **CoT Source** \\ \hline Zero-Shot-CoT (Kojima et al., 2022) & ✗ & ✗ & GPT-3.5 (175B) & ICL & Reasoning & Template \\ Few-Shot-CoT (Wei et al., 2022) & ✗ & ✗ & PaLM (540B) & ICL & Reasoning & Hand-crafted \\ Self-Consistency-CoT (Wang et al., 2022) & ✗ & ✗ & Codex (175B) & ICL & Reasoning & Hand-crafted \\ Least-to-Most Prompting (Zhou et al., 2022) & ✗ & ✗ & Codex (175B) & ICL & Reasoning & Hand-crafted \\ Retrieval-CoT (Zhang et al., 2022) & ✗ & ✗ & GPT-3.5 (175B) & ICL & Reasoning & Auto-generated \\ PromptPG-CoT (Lu et al., 2022) & ✗ & ✗ & GPT-3.5 (175B) & ICL & Reasoning & Hand-crafted \\ Auto-CoT (Zhang et al., 2022) & ✗ & ✗ & Codex (175B) & ICL & Reasoning & Auto-generated \\ Complexity-CoT (Fu et al., 2022) & ✗ & ✗ & GPT-3.5 (175B) & ICL & Reasoning & Hand-crafted \\ Few-Shot-PoT (Chen et al., 2022) & ✗ & ✗ & GPT-3.5 (175B) & ICL & Reasoning & Hand-crafted \\ \hline UnifiedQA (Lu et al., 2022) & ✗ & ✓ & T5 (770M) & FT & Explanation & Crawled \\ Fine-Tuned T5 XXL (Magister et al., 2022) & ✗ & ✗ & T5 (11B) & KD & Reasoning & LLM-generated \\ Fine-Tune-CoT (Ho et al., 2022) & ✗ & ✗ & GPT-3 (6.7B) & KD & Reasoning & LLM-generated \\ Multimodal-CoT (our work) & ✓ & ✓ & T5 (770M) & FT & Reasoning & Crawled \\ \hline \hline \end{tabular}
\end{table}
Table 1: Typical CoT techniques (FT: fine-tuning; KD: knowledge distillation). Segment 1: in-context learning techniques; Segment 2: fine-tuning techniques. To the best of our knowledge, our work is the first to study CoT reasoning in different modalities. Besides, we focus on 1B-models, without relying on the outputs of LLMs.
complexity-based selection strategies were also proposed to obtain effective demonstrations. Fu et al. (2022) chose examples with complex reasoning chains (i.e., with more reasoning steps) as the demonstrations. Lu et al. (2022b) trained an agent to find optimal in-context examples from a candidate pool and maximize the prediction rewards on given training examples when interacting with GPT-3.5.
Optimizing Reasoning ChainsA notable way to optimize reasoning chains is problem decomposition. Zhou et al. (2022) proposed least-to-most prompting to decompose complex problems into sub-problems and then solve these sub-problems sequentially. As a result, solving a given sub-problem is facilitated by the answers to previously solved sub-problems. Similarly, Khot et al. (2022) used diverse decomposition structures and designed different prompts to answer each sub-question. In addition to prompting the reasoning chains as natural language texts, Chen et al. (2022) proposed program-of-thoughts (PoT), which modeled the reasoning process as a program and prompted LLMs to derive the answer by executing the generated programs. Another trend is to vote over multiple reasoning paths for a test question. Wang et al. (2022a) introduced a self-consistency decoding strategy to sample multiple outputs of LLMs and then took a majority over the final answers. Wang et al. (2022b) and Li et al. (2022b) introduced randomness in the input space to produce more diverse outputs for voting.
### Eliciting CoT Reasoning by Fine-Tuning Models
A recent interest is eliciting CoT reasoning by fine-tuning language models. Lu et al. (2022a) fine-tuned the encoder-decoder T5 model on a large-scale dataset with CoT annotations. However, a dramatic performance decline is observed when using CoT to infer the answer, i.e., generating the reasoning chain before the answer (reasoning). Instead, CoT is only used as an explanation after the answer. Magister et al. (2022) and Ho et al. (2022) employed knowledge distillation by fine-tuning a student model on the chain-of-thought outputs generated by a larger teacher model. The proposed methods showed performance gains in arithmetic, commonsense, and symbolic reasoning tasks.
There is a key challenge in training 1B-models to be CoT reasoners. As observed by Wei et al. (2022b), models under 100 billion parameters tend to produce illogical CoT that leads to wrong answers. In other words, it might be harder for 1B-models to generate effective CoT than directly generating the answer. It becomes even more challenging in a multimodal setting where answering the question also requires understanding the multimodal inputs. In the following part, we will explore the challenge of Multimodal-CoT and investigate how to perform effective multi-step reasoning.
## 3 Challenge of Multimodal-CoT
Existing studies have suggested that the CoT reasoning ability may emerge in language models at a certain scale, e.g., over 100 billion parameters (Wei et al., 2022a). However, it remains an unresolved challenge to elicit such reasoning abilities in 1B-models, let alone in the multimodal scenario. This work focuses on 1B-models as they can be fine-tuned and deployed with consumer-grade GPUs (e.g., 32G memory). In this section, we will investigate why 1B-models fail at CoT reasoning and study how to design an effective approach to overcome the challenge.
### Towards the Role of CoT
To begin with, we fine-tune a text-only baseline for CoT reasoning on the ScienceQA benchmark (Lu et al., 2022a). Following Lu et al. (2022a), we adopt UnifiedQA\({}_{\texttt{Base}}\)(Khashabi et al., 2020) as the backbone language model.3 Our task is modeled as a text generation problem, where the model takes the textual information as the input and generates the output sequence that consists of the rationale and the answer. As an example shown in Figure 1, the model takes the concatenation of tokens of the question text (Q), the context text (C), and multiple options (M) as the input. To study the effect of CoT, we compare the performance with three variants: (i) No-CoT which predicts the answer directly (QCM\(\rightarrow\)A); (ii) Reasoning where answer inference is conditioned to the rationale (QCM\(\rightarrow\)RA); (iii) Explanation where the rationale is used for explaining the answer inference (QCM\(\rightarrow\)AR).
Footnote 3: UnifiedQA (Khashabi et al., 2020) is adopted as it is the best fine-tuning model in Lu et al. (2022a). Model information and implementation details are presented in Appendix B.1.
Surprisingly, we observe a \(\downarrow\)12.54% accuracy decrease (80.40%\(\rightarrow\)67.86%) if the model predicts rationales before answers (QCM\(\rightarrow\)RA). The results imply that the rationales might not necessarily contribute to predicting the right answer. A similar phenomenon was observed in Lu et al. (2022a), where the plausible reason might be that the model exceeds the maximum token limits before obtaining the required answer or stops generating the prediction early. However, we find that the maximum length of the generated outputs (RA) is always less than 400 tokens, which is below the length limit of language models (i.e., 512 in UnifiedQA\({}_{\texttt{Base}}\)). Therefore, it deserves a more in-depth investigation into why the rationales harm answer inference.
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & Format & Accuracy \\ \hline No-CoT & QCM\(\rightarrow\)A & 80.40 \\ \hline Reasoning & QCM\(\rightarrow\)RA & 67.86 \\ Explanation & QCM\(\rightarrow\)AR & 69.77 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Effects of CoT in the one-stage setting.
### Misleading by Hallucinated Rationales
To dive into how the rationales affect the answer prediction, we separate the CoT problem into two stages, _rationale generation_ and _answer inference_. We report the RougeL score and accuracy for the rationale generation and answer inference, respectively. Table 3 shows the results based on the two-stage framework. Although the two-stage baseline model achieves a 91.76 RougeL score of the rationale generation, the answer inference accuracy is only 70.53%. Compared with the QCM\(\rightarrow\)A variant (80.40%) in Table 2, the result shows that the generated rationale in the two-stage framework does not improve answer accuracy.
Then, we randomly sample 50 error cases and find that the model tends to generate hallucinated rationales that mislead the answer inference. As an example shown in Figure 2, the model (left part) hallucinates that, "_The south pole of one magnet is closest to the south pole of the other magnet_", due to the lack of reference to the vision content. We find that such mistakes occur at a ratio of 64% among the error cases (Figure 3(a)).
### Multimodality Contributes to Effective Rationales
We speculate that such a phenomenon of hallucination is due to a lack of necessary vision contexts for performing effective Multimodal-CoT. To inject vision information, a simple way is to transform the paired image into a caption (Lu et al., 2022) and then append the caption in the input of both stages. However, as shown in Table 3, using captions only yields marginal performance gains (\(\uparrow\)0.59%). Then, we explore an advanced technique by incorporating vision features into the language model. Concretely, we feed the paired image to the DETR model (Carion et al., 2020) to extract vision features. Then we fuse the vision features
Figure 3: The ratio of hallucination mistakes (a) and correction rate w/ vision features (b).
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & (i) QCM\(\rightarrow\) R & (ii) QCM\(\rightarrow\) A \\ \hline Two-Stage Framework & 91.76 & 70.53 \\ \hline w/ Captions & 91.85 & 71.12 \\ w/ Vision Features & 96.97 & 84.91 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Two-stage setting of (i) rationale generation (RougeL) and (ii) answer inference (Accuracy).
Figure 2: Example of the two-stage framework without vision features (baseline) and with vision features (ours) for generating rationales and predicting answers. The upper part presents the problem details with a gold rationale, and the lower part shows the outputs of the baseline and our method incorporated with vision features. We observe that the baseline fails to predict the right answer due to the misleading by hallucinated rationales. More examples are shown in Appendix A.1.
with the encoded language representations before feeding to the decoder (more details will be presented in Section 4). Interestingly, with vision features, the RougeL score of the rationale generation has boosted to 96.97% (QCM\(\rightarrow\)R), which correspondingly contributes to better answer accuracy of 84.91% (QCMR\(\rightarrow\)A). With those effective rationales, the phenomenon of hallucination is mitigated -- 62.5% hallucination mistakes in Section 3.2 have been corrected (Figure 3(b)), as an example shown in Figure 2 (right part).4 The analysis so far compellingly shows that vision features are indeed beneficial for generating effective rationales and contributing to accurate answer inference. As the two-stage method (QCMR\(\rightarrow\)A) in Table 3 achieves better performance than all the one-stage method in Table 2, we choose the two-stage method in our Multimodal-CoT framework.
Footnote 4: The left mistakes are mainly about map understanding, which requires more advanced vision features. We will discuss them in Section 6.4.
## 4 Multimodal-CoT
Based on the observations and discussions in Section 3, we propose Multimodal-CoT to incorporate language (text) and vision (images) modalities into a two-stage framework. In this section, we will first overview the procedure of the framework and then elaborate on the technical design of the model architecture.
### Framework Overview
Multimodal-CoT consists of two training stages: (i) rationale generation and (ii) answer inference. Both stages share the same model architecture but differ in the input \(X\) and output \(Y\). The overall procedure is illustrated in Figure 4. We will take vision-language as an example to show how Multimodal-CoT works.
In the rationale generation stage, we feed the model with \(X=\{X^{1}_{\text{language}},X_{\text{vision}}\}\) where \(X^{1}_{\text{language}}\) represents the language input in the first stage and \(X_{\text{vision}}\) represents the vision input, i.e., the image. For example, \(X\) can be instantiated as a concatenation of question, context, and options of a multiple choice reasoning problem (Lu et al., 2022) as shown in Figure 4. The goal is to learn a rationale generation model \(R=F(X)\) where \(R\) is the rationale.
In the answer inference stage, the rationale \(R\) is appended to the original language input \(X^{1}_{\text{language}}\) to construct the language input in the second stage, \(X^{2}_{\text{language}}=X^{1}_{\text{language}}\circ R\) where \(\circ\) denotes concatenation. Then, we feed the updated input \(X^{\prime}=\{X^{2}_{\text{language}},X_{\text{vision}}\}\) to the answer inference model to infer the final answer \(A=F(X^{\prime})\).
In both stages, we train two models with the same architecture independently. They take the annotated elements (e.g., \(X\to R\), \(XR\to A\), respectively) from the training set for supervised learning. During inference, given \(X\), the rationales for the test sets are generated using the model trained in the first stage; they are used in the second stage for answer inference.
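A minimal sketch of this two-stage procedure is given below; `generate_rationale` and `infer_answer` are placeholders for the two independently trained models \(F\), and the input template is illustrative rather than the exact prompt format used in our experiments.

```python
# Illustrative two-stage Multimodal-CoT inference pipeline.
def build_stage1_input(question, context, options):
    return f"Question: {question}\nContext: {context}\nOptions: {options}"

def multimodal_cot(question, context, options, image, generate_rationale, infer_answer):
    x_lang_1 = build_stage1_input(question, context, options)
    rationale = generate_rationale(x_lang_1, image)   # stage (i): X -> R
    x_lang_2 = x_lang_1 + "\n" + rationale            # stage (ii) input: X^1_language concatenated with R
    return infer_answer(x_lang_2, image)              # stage (ii): X' -> A
```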
### Model Architecture
Given the language input \(X_{\text{language}}\in\{X^{1}_{\text{language}},X^{2}_{\text{language}}\}\) and the vision input \(X_{\text{vision}}\), we compute the probability of generating target text \(Y\) (either the rationale or the answer in Figure 4) of length \(N\) by
\[p(Y|X_{\text{language}},X_{\text{vision}})=\prod_{i=1}^{N}p_{\theta}\left(Y_ {i}\mid X_{\text{language}},X_{\text{vision}},Y_{<i}\right), \tag{1}\]
where \(p_{\theta}\left(Y_{i}\mid X_{\text{language}},X_{\text{vision}},Y_{<i}\right)\) is implemented with a Transformer-based network (Vaswani et al., 2017). The network has three major procedures: encoding, interaction,
Figure 4: Overview of our Multimodal-CoT framework. Multimodal-CoT consists of two stages: (i) rationale generation and (ii) answer inference. Both stages share the same model architecture but differ in the input and output. In the first stage, we feed the model with language and vision inputs to generate rationales. In the second stage, we append the original language input with the rationale generated from the first stage. Then, we feed the updated language input with the original vision input to the model to infer the answer.
and decoding. Specifically, we feed the language text into a Transformer encoder to obtain a textual representation, which is then interacted and fused with the vision representation before being fed into the Transformer decoder.
EncodingThe model \(F(X)\) takes both the language and vision inputs and obtains the text representation \(H_{\text{language}}\) and the image feature \(H_{\text{vision}}\) by the following functions:
\[H_{\text{language}} = \text{LanguageEncoder}(X_{\text{language}}), \tag{2}\] \[H_{\text{vision}} = W_{h}\cdot\text{VisionExtractor}(X_{\text{vision}}), \tag{3}\]
where LanguageEncoder(\(\cdot\)) is implemented as a Transformer model. We use the hidden states of the last layer in the Transformer encoder as the language representation \(H_{\text{language}}\in\mathbb{R}^{n\times d}\) where \(n\) denotes the length of the language input, and \(d\) is the hidden dimension. Meanwhile, VisionExtractor(\(\cdot\)) is used to vectorize the input image into vision features. Inspired by the recent success of Vision Transformers (Dosovitskiy et al., 2021), we fetch the patch-level features by off-the-shelf vision extraction models,5 such as DETR (Carion et al., 2020). After obtaining the patch-level vision features, we apply a learnable projection matrix \(W_{h}\) to convert the shape of \(\text{VisionExtractor}(X_{\text{vision}})\) into that of \(H_{\text{language}}\); thus we have \(H_{\text{vision}}\in\mathbb{R}^{m\times d}\) where \(m\) is the number of patches.
Footnote 5: The parameters of the vision extraction are frozen.
InteractionAfter obtaining language and vision representations, we use a single-head attention network to correlate text tokens with image patches, where the query (\(\mathcal{Q}\)), key (\(\mathcal{K}\)) and value (\(\mathcal{V}\)) are \(H_{\text{language}}\), \(H_{\text{vision}}\) and \(H_{\text{vision}}\), respectively. The attention output \(H_{\text{vision}}^{\text{attn}}\in\mathbb{R}^{n\times d}\) is defined as:
\[H_{\text{vision}}^{\text{attn}} = \text{Softmax}(\frac{\mathcal{Q}\mathcal{K}^{\top}}{\sqrt{d_{k}} })\mathcal{V}, \tag{4}\]
where \(d_{k}\) is the same as the dimension of \(H_{\text{language}}\) because a single head is used.
Then, we apply the gated fusion mechanism (Zhang et al., 2020; Wu et al., 2021; Li et al., 2022) to fuse \(H_{\text{language}}\) and \(H_{\text{vision}}\). The fused output \(H_{\text{fuse}}\in\mathbb{R}^{n\times d}\) is obtained by:
\[\lambda = \text{Sigmoid}(W_{l}H_{\text{language}}+W_{v}H_{\text{vision} }^{\text{attn}}), \tag{5}\] \[H_{\text{fuse}} = (1-\lambda)\cdot H_{\text{language}}+\lambda\cdot H_{\text{ vision}}^{\text{attn}}, \tag{6}\]
where \(W_{l}\) and \(W_{v}\) are learnable parameters.
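A PyTorch sketch of the interaction module (Eqs. 4–6) is shown below; whether \(W_{l}\) and \(W_{v}\) carry bias terms, and similar minor details, are our assumptions.

```python
# Single-head cross attention (Eq. 4) followed by gated fusion (Eqs. 5-6).
import torch
import torch.nn as nn

class GatedCrossModalFusion(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.w_l = nn.Linear(d, d, bias=False)
        self.w_v = nn.Linear(d, d, bias=False)
        self.scale = d ** 0.5          # d_k = d for a single head

    def forward(self, h_language, h_vision):
        # h_language: (batch, n, d); h_vision: (batch, m, d)
        attn = torch.softmax(h_language @ h_vision.transpose(1, 2) / self.scale, dim=-1)
        h_vision_attn = attn @ h_vision                                        # Eq. (4)
        lam = torch.sigmoid(self.w_l(h_language) + self.w_v(h_vision_attn))    # Eq. (5)
        return (1 - lam) * h_language + lam * h_vision_attn                    # Eq. (6)
```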
DecodingFinally, the fused output \(H_{\text{fuse}}\) is fed into the Transformer decoder to predict the target \(Y\). The complete procedure of Multimodal-CoT is shown in Algorithm 1.
## 5 Experiments
This section will present the benchmark dataset, the implementation of our technique, and the baselines for comparisons. Then, we will report our main results and findings.
### Dataset
Our method is evaluated on the ScienceQA benchmark (Lu et al., 2022). ScienceQA is the first large-scale multimodal science question dataset that annotates the answers with detailed lectures and explanations. It contains 21k multimodal multiple choice questions with rich domain diversity across 3 subjects, 26 topics, 127 categories, and 379 skills. The benchmark dataset is split into training, validation, and test splits with 12726, 4241, and 4241 examples, respectively.
### Implementation
The following part presents the experimental settings of Multimodal-CoT and the baseline methods.
Experimental SettingsAs the Multimodal-CoT task requires generating the reasoning chains and leveraging the vision features, we use the T5 encoder-decoder architecture (Raffel et al., 2020). Specifically, we adopt UnifiedQA (Khashabi et al., 2020) to initialize our models in the two stages because it achieves the best fine-tuning results in Lu et al. (2022). To verify the generality of our approach across different LMs, we also employ FLAN-T5 (Chung et al., 2022) as the backbone in Section 6.3. As using image captions does not yield significant performance gains in Section 3.3, we did not use the captions. We fine-tune the models up to 20 epochs, with a learning rate of 5e-5. The
maximum input sequence length is 512. The batch sizes for the base and large models are 16 and 8, respectively. Our experiments are run on 4 NVIDIA Tesla V100 32G GPUs.
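For concreteness, a hedged sketch of the text-side fine-tuning step with these hyperparameters is given below; the vision extraction and fusion layers of Section 4.2 are omitted here, and the checkpoint name refers to the publicly released UnifiedQA model rather than our exact initialization.

```python
# Minimal text-only seq2seq fine-tuning step (sketch, not the full Multimodal-CoT model).
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allenai/unifiedqa-t5-base")
model = T5ForConditionalGeneration.from_pretrained("allenai/unifiedqa-t5-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def training_step(input_texts, target_texts):
    enc = tokenizer(input_texts, max_length=512, truncation=True,
                    padding=True, return_tensors="pt")
    labels = tokenizer(target_texts, max_length=512, truncation=True,
                       padding=True, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100   # ignore padding in the loss
    loss = model(**enc, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```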
Baseline ModelsFollowing Lu et al. (2022), our baselines include (i) Visual question answering (VQA) models (Anderson et al., 2018; Kim et al., 2018; Yu et al., 2019; Gao et al., 2019; Kim et al., 2021; Lu et al., 2021; Li et al., 2019); (ii) Text-to-text LM models. (Khashabi et al., 2020); (iii) GPT-3.5 models (Chen et al., 2020). More details are presented in Appendix B.1.
### Main Results
Table 4 shows the main results. Multimodal-CoT\({}_{\text{Large}}\) outperforms GPT-3.5 by 16.51% (75.17%\(\rightarrow\)91.68%) and surpasses human performance. Specifically, among the 8 question classes, Multimodal-CoT\({}_{\text{Large}}\) achieves a 21.37% (67.43%\(\rightarrow\)88.80%) performance gain for the questions with paired images (IMG). Compared with existing UnifiedQA and GPT-3.5 methods that leverage image captions in the context to provide vision semantics, the results indicate that using image features is more effective. In addition, our two-stage framework contributes to the superior results according to our ablation study results in Table 5. Overall, the results verify the effectiveness of multimodality and the potential of achieving CoT reasoning with 1B-models via our two-stage framework.
## 6 Analysis
The following analysis will investigate how Multimodal-CoT works and discuss contribution factors and limitations. We use models under the base size for analysis unless otherwise stated.
### Multimodality Boosts Convergence
Figure 5 shows the evaluation accuracy curve of the baseline and Multimodal-CoT in different training epochs. "One-stage" is based on the QCM\(\rightarrow\)A input-output format as it
\begin{table}
\begin{tabular}{l|r|r r r r r r r r|r} \hline \hline Model & Size & NAT & SOC & LAN & TXT & IMG & NO & G1-6 & G7-12 & Avg \\ \hline Human & - & 90.23 & 84.97 & 87.48 & 89.60 & 87.50 & 88.10 & 91.59 & 82.42 & 88.40 \\ \hline MCAN (Yu et al., 2019) & 95M & 56.08 & 46.23 & 58.09 & 59.43 & 51.17 & 55.40 & 51.65 & 59.72 & 54.54 \\ Top-Down (Anderson et al., 2018) & 70M & 59.50 & 54.33 & 61.82 & 62.90 & 54.88 & 59.79 & 57.27 & 62.16 & 59.02 \\ BAN (Kim et al., 2018) & 112M & 60.88 & 46.57 & 66.64 & 62.61 & 52.60 & 65.51 & 56.83 & 63.94 & 59.37 \\ DFAF (Gao et al., 2019) & 74M & 64.03 & 48.82 & 63.55 & 65.88 & 54.49 & 64.11 & 57.12 & 67.17 & 60.72 \\ ViLT (Kim et al., 2021) & 113M & 60.48 & 63.89 & 60.27 & 63.20 & 61.38 & 57.00 & 60.72 & 61.90 & 61.14 \\ Patch-TRM (Lu et al., 2021) & 90M & 65.19 & 46.79 & 65.55 & 66.96 & 55.28 & 64.95 & 58.04 & 67.50 & 61.42 \\ VisualBERT (Li et al., 2019) & 111M & 59.33 & 69.18 & 61.18 & 62.71 & 62.17 & 58.54 & 62.96 & 59.92 & 61.87 \\ \hline UnifiedQA\({}_{\text{Base}}\)(Khashabi et al., 2020) & 223M & 68.16 & 69.18 & 74.91 & 63.78 & 61.38 & 77.84 & 72.98 & 65.00 & 70.12 \\ UnifiedQA\({}_{\text{Base}}\) w/ CoT (Lu et al., 2022) & 223M & 71.00 & 76.04 & 78.91 & 66.42 & 66.53 & 81.81 & 77.06 & 68.82 & 74.11 \\ \hline GPT-3.5 (Chen et al., 2020) & 175B & 74.64 & 69.74 & 76.00 & 74.44 & 67.28 & 77.42 & 76.80 & 68.89 & 73.97 \\ GPT-3.5 w/ CoT (Lu et al., 2022) & 175B & 75.44 & 70.87 & 78.09 & 74.68 & 67.43 & 79.93 & 78.23 & 69.68 & 75.17 \\ \hline Mutimodal-CoT\({}_{\text{Base}}\) & 223M & 87.52 & 77.17 & 85.82 & 87.88 & 82.90 & 86.83 & 84.65 & 85.37 & 84.91 \\ Multimodal-CoT\({}_{\text{Large}}\) & 738M & **95.91** & **82.00** & **90.82** & **95.26** & **88.80** & **92.89** & **92.44** & **90.31** & **91.68** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Main results (%). Size = backbone model size. Question classes: NAT = natural science, SOC = social science, LAN = language science, TXT = text context, IMG = image context, NO = no context, G1-6 = grades 1-6, G7-12 = grades 7-12. Results except ours are taken from Lu et al. (2022). Segment 1: Human performance; Segment 2: VQA baselines; Segment 3: UnifiedQA baselines; Segment 4: GPT-3.5 baselines; Segment 5: Our Multimodal-CoT results. Results in **bold** are the best performance.
Figure 5: Accuracy curve of the No-CoT baseline and Multimodal-CoT variants across epochs.
\begin{table}
\begin{tabular}{l|r r r r r r r|r} \hline \hline Model & NAT & SOC & LAN & TXT & IMG & NO & G1-6 & G7-12 & Avg \\ \hline Multimodal-CoT & 87.52 & 77.17 & 85.82 & 87.88 & 82.90 & 86.83 & 84.65 & 85.37 & 84.91 \\ w/o Two-Stage Framework & 80.99 & 87.40 & 81.91 & 80.25 & 78.83 & 83.62 & 82.78 & 82.20 & 82.57 \\ w/o Vision Features & 71.09 & 70.75 & 69.18 & 71.16 & 65.84 & 71.57 & 71.00 & 69.68 & 70.53 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation results of Multimodal-CoT.
achieves the best performance in Table 2 and "Two-stage" is our two-stage framework. We find that the two-stage methods achieve relatively higher accuracy at the beginning than the one-stage baselines that generate the answer directly without CoT. However, without the vision features, the two-stage baseline could not yield better results as the training goes on due to the low-quality rationales (as observed in Section 3). In contrast, using vision features helps generate more effective rationales that contribute to better answer accuracy in our two-stage multimodal variant.
### Using Different Vision Features
Different vision features may affect the model performance. We compare three widely-used types of vision features, CLIP (Radford et al., 2021), DETR (Carion et al., 2020), and ResNet (He et al., 2016). CLIP and DETR are patch-like features, where DETR is based on object detection. For the ResNet features, we repeat the pooled features of ResNet-50 to the same length as the text sequence to imitate patch-like features, where each patch is identical to the pooled image features. More details of the vision features are presented in Appendix B.2.
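A small sketch of this imitation step is shown below (dimensions and names are illustrative): the pooled ResNet-50 vector is repeated along the text-sequence axis and then projected by \(W_{h}\) to the hidden size \(d\).

```python
# Repeating pooled ResNet-50 features to imitate patch-like vision features.
import torch
import torch.nn as nn

def resnet_pseudo_patches(pooled, w_h, seq_len):
    """pooled: (batch, 2048) average-pooled ResNet-50 features; w_h: nn.Linear(2048, d)."""
    repeated = pooled.unsqueeze(1).expand(-1, seq_len, -1)   # (batch, seq_len, 2048)
    return w_h(repeated)                                     # (batch, seq_len, d)

# Usage sketch: project a dummy pooled feature to d = 768 over a 512-token sequence.
w_h = nn.Linear(2048, 768)
out = resnet_pseudo_patches(torch.randn(2, 2048), w_h, seq_len=512)
```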
Table 6 shows the comparative results of vision features. We observe that using vision features generally achieves better performance than the language-only baseline. Specifically, DETR achieves the best performance in general. Therefore, we use DETR by default in Multimodal-CoT.
### General Effectiveness Across Backbone Models
To test the generality of the benefits of our approach to other backbone models, we alter the underlying LMs to other variants in different sizes or types. As shown in Table 7, our approach is generally effective for the widely-used backbone models.
### Error Analysis
To better understand the behavior of Multimodal-CoT and facilitate future studies, we manually investigate randomly selected examples generated by our approach. Table 8 summarizes the categorization results generated by Multimodal-CoT. We randomly picked 50 samples whose answers were correct and 50 samples whose answers were incorrect. The corresponding examples from each category are presented in Appendix C.
We find that the correct samples (i.e., whose answers are correct) contain a certain amount of incorrect chain-of-thought (10%). The results indicate that CoT may not always benefit the answer inference, and the model is robust to some extent -- it can predict the correct answer by ignoring incorrect rationales. For incorrect samples (i.e., whose answers are incorrect), the commonsense mistake in the CoT is the most frequent error type (82%). The model often makes commonsense mistakes when answering the questions requires commonsense knowledge, e.g., understanding maps and counting numbers in the images (Figure 9), and utilizing the alphabet (Figure 10). The other type of mistake is a logical mistake (12%), with contradictions in the reasoning chains (Figure 11). In addition, there are cases with incorrect answers while their CoT are correct (6%) but might not be necessarily related to answer options (Figure 12).
The analysis indicates that there are prospective directions for future studies. It is possible to improve Multimodal-CoT by (i) incorporating more informative vision features and improving language-vision interaction to be capable of understanding maps and counting numbers; (ii) injecting commonsense knowledge; (iii) applying a filtering mechanism, e.g., using only the effective CoT to infer the answer and get rid of irrelevant CoT.
## 7 Conclusion
We formally study the problem of multimodal CoT. We propose Multimodal-CoT, which incorporates language and vision modalities into a two-stage framework that separates rationale generation and answer inference, so that answer inference can leverage better generated rationales based on multimodal information. With Multimodal-CoT, our method surpasses GPT-3.5 by 16 percentage points in accuracy on the ScienceQA benchmark. Our error analysis points to the potential of leveraging more effective vision features, injecting commonsense knowledge, and applying filtering mechanisms to improve CoT reasoning in future studies.
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & One-stage & Two-Stage \\ \hline w/ CLIP & 81.21 & 84.81 \\ w/ DETR & 82.57 & 84.91 \\ w/ ResNet & 80.97 & 84.77 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Accuracy (%) of using different vision features.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Size & Language Only & Multimodal-CoT \\ \hline UnifiedQA\({}_{\texttt{Base}}\) & 223M & 80.40 & 84.91 \\ UnifiedQA\({}_{\texttt{Large}}\) & 738M & 83.60 & 91.68 \\ \hline FLAN-T5\({}_{\texttt{Base}}\) & 248M & 83.42 & 85.85 \\ FLAN-T5\({}_{\texttt{Large}}\) & 783M & 85.19 & 93.02 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Accuracy (%) with different backbone language models.
\begin{table}
\begin{tabular}{l l c} \hline \hline Answer & CoT Category & Percentage (\%) \\ \hline \multirow{2}{*}{Correct} & CoT is correct & 90 \\ & CoT is incorrect & 10 \\ \hline \multirow{3}{*}{Incorrect} & Commonsense Mistake & 82 \\ & Logical Mistake & 12 \\ \cline{1-1} & CoT is correct & 6 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Categorization analysis of Multimodal-CoT. |
2305.15437 | Reconstruction schemes of scalar field models for the Power Law Entropy
Corrected Holographic Dark Energy model with Ricci scalar cut-off | In this work, we examine the cosmological characteristics of the Power Law
Entropy Corrected Holographic Dark Energy (PLECHDE) model with infrared (IR)
cut-off, which is determined by the curvature parameter $k$, the time
derivative of $H$, and the average radius of the Ricci scalar curvature $R$,
which varies with the Hubble parameter $H$ squared. We obtain the deceleration
parameter $q$ and the Equation of State (EoS) parameter of Dark Energy (DE)
$\omega_D$. Additionally, we derive the Hubble parameter $H$ and the scale
factor $a$ expressions as functions of the cosmic time $t$. Additionally, we
examine the limiting scenario that pertains to a flat Dark Dominated Universe.
Furthermore, we establish a correspondence between the DE model considered and
some scalar fields, in particular the Generalized Chaplygin Gas, the Modified
Chaplygin Gas, the Modified Variable Chaplygin Gas, the New Modified Chaplygin
Gas, the Viscous Generalized Chaplygin Gas, the Dirac-Born-Infeld, the
Yang-Mills, and the Non Linear Electrodynamics scalar field models. | Antonio Pasqua, Surajit Chattopadhyay, Irina Radinschi, Azzah Aziz Alshehri, Abdel Nasser Tawfik | 2023-05-23T15:08:55Z | http://arxiv.org/abs/2305.15437v2 | # Reconstruction of scalar field models for the PLECHDE model with Ricci scalar cut-off
###### Abstract
In this paper, we study the cosmological properties of the Power Law Entropy Corrected Holographic Dark Energy (PLECHDE) model with infrared (IR) cut-off given by the average radius of the Ricci scalar curvature \(R\) (varying with the Hubble parameter \(H\) squared), the time derivative of \(H\), and the curvature parameter \(k\). We derive the Equation of State (EoS) parameter of Dark Energy (DE) \(\omega_{D}\) and the deceleration parameter \(q\). We also obtain the expressions of the scale factor \(a\) and the Hubble parameter \(H\) as functions of the cosmic time \(t\). Moreover, we study the limiting case corresponding to a flat Dark Dominated Universe. Furthermore, we establish a correspondence between the DE model considered and some scalar fields, in particular the Generalized Chaplygin Gas, the Modified Chaplygin Gas, the Modified Variable Chaplygin Gas, the New Modified Chaplygin Gas, the Viscous Generalized Chaplygin Gas, the Dirac-Born-Infeld, the Yang-Mills, and the Non Linear Electrodynamics scalar field models.
## I Introduction
The cosmological constant \(\Lambda\) is the simplest candidate of DE, having constant Equation of State (EoS) parameter \(\omega=-1\). For the sake of completeness, we note that the equation of state of ordinary matter has recently been reported in [15; 16; 17]. The \(\Lambda\) term is the earliest and simplest candidate proposed to explain the observational evidence of the accelerated expansion of the Universe. Although its EoS parameter \(\omega=p/\rho=-1\) is consistent with observations, it cannot describe any time evolution of the EoS parameter. Moreover, it suffers from two main difficulties [18]
1. the fine-tuning problem: it asks why the observed vacuum energy density is so small (about \(10^{123}\) times smaller than the value expected from quantum field theory) and
2. the cosmic coincidence problem: it asks why the vacuum energy and DM densities are nearly equal today.
It is conjectured that vacuum energy and dark matter (DM) have evolved independently from different mass scales [26; 27]. To date, many attempts have been made to find a solution to the coincidence problem [19; 20; 21; 22; 23; 24; 25].
The second class of models proposed to describe the accelerated expansion of the Universe are DE models. DE is an exotic component characterized by negative pressure and is thought to be responsible for driving this acceleration. The evidence of cosmic acceleration suggests that, if Einstein's theory of General Relativity is valid on cosmological scales, the Universe must be dominated by a mysterious and unknown missing component with some peculiar features: for example, it must not cluster on large length scales and its pressure \(p\) must be negative enough to drive the accelerated expansion. Cosmic acceleration can be described by a perfect fluid with pressure \(p\) and energy density \(\rho\) satisfying the relation \(\rho+3p<0\). This kind of fluid with negative pressure is dubbed DE. Its EoS parameter \(\omega\) (defined as \(p/\rho\)) must obey the condition \(\omega<-1/3\). Unlike the EoS parameter of \(\Lambda\), it is a difficult task to constrain its exact value from an observational point of view. The DE model has quintessence behaviour if \(\omega>-1\) and phantom behaviour if \(\omega<-1\); if \(\omega\) crosses from the quintessence to the phantom regime, the model is called quintom. The largest amount of the total cosmic energy density \(\rho_{tot}\) is contained in the dark sectors, i.e. Dark Energy (DE) and Dark Matter (DM). According to [28; 29], DE currently accounts for \(68.3\%\) of \(\rho_{tot}\), while DM contributes about \(26.8\%\) of the \(\rho_{tot}\) of the present Universe. Ordinary baryonic matter, which can be observed with scientific instruments, contributes only \(4.9\%\) of \(\rho_{tot}\). Moreover, radiation contributes to the total cosmic energy density in a practically negligible way. In the literature, other candidates of DE with time-varying EoS parameters have been proposed. Some of them include tachyon, quintessence, k-essence, quintom, Chaplygin gas, Agegraphic DE (ADE) and phantom [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71].
In the literature, one of the most studied DE candidates is the Holographic DE (HDE) [72; 73; 74; 75; 76; 77; 78]. HDE is based on the Holographic Principle (HP) [79; 80; 81; 82; 83; 84]. The HDE model fits the cosmological data obtained from CMB radiation anisotropies and SNeIa [85; 86; 87; 88; 89]. It was shown by Cohen _et al._ [90] that in Quantum Field Theory (QFT) the ultraviolet (UV) cut-off, indicated with \(\Lambda_{UV}\), should be related to the IR cut-off \(L\) due to the limit set by the formation of a black hole. If the vacuum energy density caused by the UV cut-off is given by \(\rho_{D}=\Lambda_{UV}^{4}\), then the total energy of a system of size \(L\) should not be greater than the mass of a black hole of the same size, i.e.
\[E_{D}\leq E_{BH}, \tag{1}\]
which yields
\[L^{3}\rho_{D}\leq M_{p}^{2}L, \tag{2}\]
where the quantity \(M_{p}=(8\pi G)^{-1/2}\approx 10^{18}\) GeV represents the reduced Planck mass (with \(G\) being the Newton's gravitational constant). If the largest possible cut-off \(L\) is the one which saturates the inequality given in Eq. (2), we obtain the energy density of HDE \(\rho_{D}\) as follows.
\[\rho_{D}=3c^{2}M_{p}^{2}L^{-2}, \tag{3}\]
where \(c\) represents a dimensionless numerical constant whose value can be inferred from observational data. For a flat Universe \(c=0.818^{+0.113}_{-0.097}\), while in the case of a non-flat Universe we have \(c=0.815^{+0.179}_{-0.139}\) [92; 91]. According to recent work by Guberina _et al._ [93], the HDE model based on the entropy bound can be derived in an alternative way. In black hole thermodynamics [94; 95], a maximum entropy in a box of size \(L\), known as the Bekenstein-Hawking entropy bound, exists and is given by \(S_{BH}\approx M_{p}^{2}L^{2}\); it scales as the area of the box (i.e. \(A\approx L^{2}\)) rather than its volume (i.e. \(V\approx L^{3}\)). Moreover, for a macroscopic system in which self-gravitation effects cannot be ignored, the Bekenstein entropy bound \(S_{B}\) is obtained by multiplying the energy \(E\approx\rho_{D}L^{3}\) of the system by its linear size \(L\). If we require the Bekenstein entropy bound to be smaller than the Bekenstein-Hawking entropy (i.e. \(S_{B}\leq S_{BH}\), which implies \(EL\leq M_{p}^{2}L^{2}\)), it is possible to obtain the same result as from the energy bound argument, i.e.
\[\rho_{D}\leq M_{p}^{2}L^{-2}. \tag{4}\]
The HDE model has been widely studied and investigated in the literature. It has been shown that the EoS parameter of the HDE model can be reconstructed in the low-redshift limit. Chen _et al._ [96] used the HDE model in order to drive inflation in the early evolutionary phases of the Universe. Jamil _et al._ [97] studied the EoS parameter \(\omega_{D}\) of the HDE model considering a time-varying Newton's gravitational constant, i.e., \(G\equiv G\left(t\right)\).
HDE models have also been studied with modified and different choices of the infrared (IR) cut-off [98; 99; 100; 101; 102; 103; 104; 105], for example by identifying the IR cut-off with the particle horizon, the future event horizon or the Hubble horizon. Recently, the correspondence between HDE models and other scalar field models has been proposed [106; 107; 108]. The late time acceleration of the Universe has also been investigated through the concept of modified gravity [109; 110]. Modified gravity is predicted by string/M theory and provides a natural gravitational alternative to the introduction of exotic components. The phantom, non-phantom and quintom phases of the Universe can be well described using modified gravity without the need to introduce a negative kinetic term in DE models.
Many dynamical features of the HDE model have been studied in the flat/non-flat Friedmann-Lemaitre-Robertson-Walker (FLRW) background, for example the cosmic coincidence problem, the quintom behaviour, the phantom crossing at the present time, the EoS parameter \(\omega_{D}\) and the deceleration parameter \(q\).
The definition of HDE can be modified due to the power-law corrections to the entropy which appear when dealing with the entanglement of quantum fields inside and outside the horizon [111]. The entanglement entropy of the ground state obeys the "Hawking area law". Only the excited states contribute to the correction, and more excitation produces larger deviations from the area law [112; 113]. The power-law corrected entropy has the following form [111; 114]:
\[S\left(A\right)_{pl}=c_{0}\left(\frac{A}{a_{1}^{2}}\right)\left[1+c_{1}f\left( A\right)\right], \tag{5}\]
where the term \(f(A)\) is given by
\[f\left(A\right)=\left(\frac{A}{a_{1}^{2}}\right)^{-\nu}, \tag{6}\]
i.e., \(f(A)\) has a power-law dependence from the area \(A\). The two quantities \(c_{0}\) and \(c_{1}\) are two constant parameters of the order of unity. The UV cut-off at the horizon is denoted by \(a_{1}\) and \(\nu\) indicates an exponent which depends on the amount of mixing of ground and excited states. \(A=4\pi R_{h}^{2}\) indicates the area of the horizon, where \(R_{h}\) is the black-hole event horizon. The contribution of the term \(f\left(A\right)\) is negligible and the mixed state entanglement entropy asymptotically approaches the ground state (Bekenstein-Hawking) entropy for large horizon area (i.e., \(A>>a_{1}^{2}\)) [115; 116; 117; 118].
Another useful form of the power-law corrected expression of the entropy is given by the following
relation:
\[S\left(A\right)_{pl}=\frac{A}{4G}\left(1-K_{\alpha}A^{1-\alpha/2}\right), \tag{7}\]
where \(\alpha\) is a dimensionless constant and the term \(K_{\alpha}\) is a constant parameter whose expression is given by
\[K_{\alpha}=\frac{\alpha\left(4\pi\right)^{\alpha/2-1}}{\left(4-\alpha\right)r_ {c}^{2-\alpha}}. \tag{8}\]
The quantity \(r_{c}\) represents the crossover scale. When the wave function of the field is chosen to be a superposition of ground and excited states [112], the second term in Eq. (7) can be regarded as a power-law correction to the area law resulting from entanglement [112]. The correction term is also more significant for higher excitation [112]. It is important to notice that the correction term falls off rapidly with the area \(A\) and hence in the semi-classical limit (which means large values of \(A\)) the area law is recovered. Then, for large black holes the correction term falls off rapidly and the area law is recovered, whereas for small black holes the correction becomes more significant. This can be interpreted as follows. For a large area, i.e., at low energies, it is difficult to excite the modes and hence the ground state modes contribute to most of the entanglement entropy. However, for a small horizon area, a large number of field modes can be excited and contribute significantly to the correction, causing a large deviation from the area law.
Inspired by the power-law corrected entropy relation given in Eq. (7), the energy density of the so-called Power-Law Entropy Corrected Holographic Dark Energy (PLECHDE) model can be easily obtained as follows.
\[\rho_{Dpl} = 3c^{2}M_{p}^{2}L^{-2}-\delta M_{p}^{2}L^{-\gamma} \tag{9}\] \[= 3M_{p}^{2}L^{-2}\left[c^{2}-\left(\frac{\delta}{3}\right)L^{- \gamma+2}\right].\]
In the limit case corresponding to \(\delta=0\), the expression of the energy density of DE \(\rho_{D}\) of the PLECHDE given in Eq. (9) yields the well-known holographic energy density, i.e., \(\rho_{D}=3c^{2}M_{p}^{2}L^{-2}\). From a mathematical point of view, the HDE model can be also recovered in the limiting case of \(\gamma\rightarrow\infty\).
The importance of the corrected term in various regions depends on the value assumed by \(\gamma\). The two terms of Eq. (9) can be combined when \(\gamma=2\) and can recover the ordinary HDE density. So, it is possible to study the cases with \(\gamma>2\) and \(\gamma<2\) separately. In the first case, i.e., \(\gamma>2\), the corrected term can be comparable to the first term only when \(L\) is very small. Indeed, it was argued that \(\gamma\) should be within the range \(2<\gamma<4\)[112]. However, the satisfaction of the
generalized second law of thermodynamics for the Universe with the power-law corrected entropy given in Eq. (7) implies that the case \(\gamma<2\) should be rejected [114].
In this paper, we consider the IR cut-off \(L\) of the system proportional to the average radius of the Ricci scalar curvature \(R\), i.e., \(L\propto R^{-1/2}\), so that we have \(\rho_{D}\propto R\). We recall that the Ricci scalar curvature \(R\) can be written as follows
\[R=6\left[\dot{H}+2H^{2}+\frac{k}{a\left(t\right)^{2}}\right], \tag{10}\]
where \(H=\dot{a}\left(t\right)/a\left(t\right)\) is the Hubble parameter, \(\dot{H}\) is the first derivative of \(H\) with respect to the cosmic time \(t\), \(a\left(t\right)\) is a dimensionless scale factor, which describes how the Universe evolves, and \(k\) is a constant with dimension of \(length^{-2}\) known as curvature parameter which contains the information about the curvature of the spatial part of the metric describing the Universe, which will be introduced later on. \(k\) takes the values \(-1\), \(0\), \(+1\) which yield, respectively, an open, a flat or a closed FRW Universe.
Ricci DE (RDE) models have been widely studied in recent years [119; 120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140]. Substituting \(L\) with \(R^{-1/2}\) in Eq. (9) and considering \(M_{p}^{2}=1\), we can write the energy density of the Ricci PLECHDE (R-PLECHDE) model \(\rho_{Dpl}\) as follows.
\[\rho_{Dpl}=3c^{2}R-\delta R^{\gamma/2}. \tag{11}\]
In the following Sections we will derive and study some important cosmological quantities considering the energy density of DE defined in Eq. (11).
This paper is organized as follows. In Section 2, we describe the physical context we are working in and we derive the EoS parameter \(\omega_{D}\), the deceleration parameter \(q\) and \(\Omega_{D}^{\prime}\) for our models in a non-flat Universe. In Section 3, we establish a correspondence between our model and some scalar fields, in particular the Generalized Chaplygin Gas (GCG), the Modified Chaplygin Gas (MCG), the Modified Variable Chaplygin Gas (MVCG), the New Modified Chaplygin Gas (NMCG), the Viscous Generalized Chaplygin Gas (VGCG), the Dirac-Born-Infeld (DBI), the Yang-Mills (YM) and the Non Linear Electro-Dynamics (NLED) scalar field models. Section 4 is devoted to the Conclusions of this paper.
## II Non interacting model in a non-flat universe
Observational and cosmological evidence suggests that our Universe is not perfectly flat but has a small positive curvature, which implies a closed Universe. The tendency towards a closed Universe is shown in cosmological (in particular CMB) experiments [141; 142; 143]. Moreover, measurements of the cubic correction to the luminosity-distance relation of Supernovae reveal a closed Universe [144; 145]. For the above reasons, we prefer to consider a non-flat Universe.
Within the framework of the standard Friedmann-Robertson-Walker (FRW) cosmology, the line element for a non-flat Universe is given by the following relation:
\[ds^{2}=-dt^{2}+a^{2}\left(t\right)\left[\frac{dr^{2}}{1-kr^{2}}+r^{2}\left(d \theta^{2}+\sin^{2}\theta d\varphi^{2}\right)\right], \tag{12}\]
where \(t\) represents the cosmic time, \(r\) is the radial coordinate of the metric, and \(\theta\) and \(\varphi\) are the usual polar and azimuthal angular coordinates, with the constraints \(0\leq\theta\leq\pi\) and \(0\leq\varphi\leq 2\pi\). The corresponding Friedmann equation for a non-flat Universe has the following expression:
\[H^{2}+\frac{k}{a^{2}}=\frac{1}{3}\left(\rho_{D}+\rho_{m}\right), \tag{13}\]
where the terms \(\rho_{D}\) and \(\rho_{m}\) are, respectively, the energy densities of DE and DM. We also define the fractional energy densities for matter, curvature and DE, respectively, as
\[\Omega_{m} = \frac{\rho_{m}}{\rho_{cr}}=\frac{\rho_{m}}{3H^{2}}, \tag{14}\] \[\Omega_{k} = \frac{\rho_{k}}{\rho_{cr}}=\frac{k}{H^{2}a^{2}},\] (15) \[\Omega_{Dpl} = \frac{\rho_{Dpl}}{\rho_{cr}}=\frac{\rho_{Dpl}}{3H^{2}}, \tag{16}\]
where \(\rho_{cr}=3H^{2}\) is the critical energy density, i.e., the energy density required for flatness. The fractional energy density of curvature \(\Omega_{k}\) gives information about the contribution of the spatial curvature to the total density. Recent observations support a closed Universe with a small positive curvature, \(\Omega_{k}\cong 0.02\) [146].
Dividing Eq. (13) by \(H^{2}\) and using Eqs. (14), (15) and (16), it is possible to write the Friedmann equation given in Eq. (13) as
\[1+\Omega_{k}=\Omega_{m}+\Omega_{Dpl}. \tag{17}\]
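As a purely illustrative check (not part of the derivation), the step from Eq. (13) and the definitions (14)-(16) to Eq. (17) can be verified symbolically with sympy:

```python
import sympy as sp

H, a, k, rho_m, rho_D = sp.symbols('H a k rho_m rho_D', positive=True)

# Friedmann equation, Eq. (13)
friedmann_lhs = H**2 + k / a**2
friedmann_rhs = (rho_D + rho_m) / 3

# Fractional densities, Eqs. (14)-(16)
Omega_m = rho_m / (3 * H**2)
Omega_D = rho_D / (3 * H**2)
Omega_k = k / (H**2 * a**2)

# Dividing Eq. (13) by H^2 must reproduce Eq. (17): 1 + Omega_k = Omega_m + Omega_D
diff = (friedmann_lhs - friedmann_rhs) / H**2 - ((1 + Omega_k) - (Omega_m + Omega_D))
assert sp.simplify(diff) == 0
```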
Eq. (17) has the main property that it relates all the fractional energy densities considered in this paper. In order to respect the Bianchi identity, or local energy-momentum conservation law \(\nabla_{\mu}T^{\mu\nu}=0\), the total energy density \(\rho_{tot}\) must satisfy the following continuity equation:
\[\dot{\rho}_{tot}+3H\left(p_{tot}+\rho_{tot}\right)=0, \tag{18}\]
where the terms \(\rho_{tot}\) and \(p_{tot}\) are the total energy density and the total pressure, respectively, and they are defined as follows.
\[\rho_{tot} = \rho_{m}+\rho_{Dpl}, \tag{19}\] \[p_{tot} = p_{Dpl}. \tag{20}\]
We must emphasize here that we are considering pressureless DM, i.e., \(p_{m}=0\); therefore the total pressure coincides with the DE pressure.
We have that Eq. (18) can be also rewritten as follows.
\[\dot{\rho}_{tot}+3H\left(1+\omega_{tot}\right)\rho_{tot}=0, \tag{21}\]
where \(\omega_{tot}=p_{tot}/\rho_{tot}\) represents the Equation of State (EoS) parameter.
Since the two energy densities \(\rho_{D}\) and \(\rho_{m}\) are conserved separately, the conservation equations for DM and DE take the forms
\[\dot{\rho}_{m}+3H\rho_{m} = 0, \tag{22}\] \[\dot{\rho}_{Dpl}+3H\rho_{Dpl}\left(1+\omega_{Dpl}\right) = 0. \tag{23}\]
We now consider the presence of interaction between the two dark sectors.
By assuming an interaction between DM and DE, the conservation equations for DM and DE take the following forms
\[\dot{\rho}_{m}+3H\rho_{m} = Q, \tag{24}\] \[\dot{\rho}_{Dpl}+3H\rho_{Dpl}\left(1+\omega_{Dpl}\right) = -Q, \tag{25}\]
where the quantity \(Q\) indicates an interaction term which can be, in general, a function of cosmological quantities like the Hubble parameter \(H\), energy densities for DE and DM \(\rho_{D}\) and \(\rho_{m}\) and the deceleration parameter \(q\), i.e., we have that \(Q\left(H,\rho_{m},\rho_{D},q\right)\). In this paper, we have chosen to consider three different expressions for \(Q\), i.e.,
\[Q_{1} = 3b^{2}H\left(\rho_{m}+\rho_{D}\right), \tag{26}\] \[Q_{2} = 3b^{2}H\rho_{m}, \tag{27}\] \[Q_{3} = 3b^{2}H\rho_{D}, \tag{28}\]
with \(b^{2}\) being a coupling parameter between DM and DE [147; 148; 149; 150; 151; 152; 153; 154]. We must anyway underline that more general interaction terms can be used [155]. Since the nature of DM and DE remains
unknown, different Lagrangian equations have been proposed to generate this interaction term. Positive values of \(b^{2}\) indicate a transition from DE to DM, and vice versa for negative values of \(b^{2}\). Sometimes \(b^{2}\) is taken in the range [0,1] [156]. The case with \(b^{2}=0\) represents the non-interacting FRW model, while \(b^{2}=1\) yields the complete transfer of energy from DE to matter. Recently, it has been reported that this interaction is observed in the Abell cluster A586, showing a transition of DE into DM and vice versa [157; 158]. However, we must underline here that the strength of this interaction is not clearly identified [159]. Observations of both CMB radiation and clusters of galaxies clearly show that the coupling parameter must be in the range \(0<b^{2}<0.025\) [160; 161]. A negative value of \(b^{2}\) is avoided since it leads to a violation of thermodynamic laws. Furthermore, high-resolution N-body simulations have demonstrated that the structural properties of highly nonlinear cosmic structures, such as their average concentration at a given mass, could be significantly modified in the presence of an interaction between DE and DM [162]. The strength of the coupling parameter can, in fact, significantly modify the cosmic history by modifying the clustering properties of matter, since the growth of DM density perturbations is much more sensitive to the interaction [163; 164]. The best way to motivate a suitable form of \(Q\) would be a consistent theory of quantum gravity or a reconstruction scheme using the SNeIa data [165; 166].
We now want to derive the expression of the EoS parameter of DE \(\omega_{D}\) for the models we are studying in the case the IR cut-off of the system is given by the Ricci scalar curvature.
Using the Friedmann equation given in Eq. (13) in the general expression of the Ricci scalar \(R\) given in Eq. (10), we obtain that the Ricci scalar \(R\) can be also written as
\[R=6\left(\dot{H}+H^{2}+\frac{\rho_{m}+\rho_{D}}{3}\right). \tag{29}\]
We now want to obtain an expression for the term \(\dot{H}+H^{2}\) as a function of the EoS parameter of DE \(\omega_{D}\), so that we can relate \(\omega_{D}\) to \(R\).
Differentiating the Friedmann equation given in Eq. (13) with respect to the cosmic time \(t\), we obtain
\[2H\dot{H}-2H\left(\frac{k}{a^{2}}\right)=\frac{1}{3}\left(\dot{\rho}_{D}+\dot {\rho}_{m}\right). \tag{30}\]
When dividing Eq. (30) by \(2H\), we get
\[\dot{H}=\frac{k}{a^{2}}+\frac{1}{6H}\left(\dot{\rho}_{D}+\dot{\rho}_{m}\right). \tag{31}\]
By inserting in Eq. (31) the expressions of the time derivatives of the energy densities of DM and DE \(\dot{\rho}_{m}\) and \(\dot{\rho}_{Dpl}\) obtained from the continuity equations for DM and DE given in Eqs. (22) and
(23), we can rewrite \(\dot{H}\) as
\[\dot{H}=\frac{k}{a^{2}}-\frac{1}{2}\left[\rho_{m}+\left(1+\omega_{D}\right)\rho_{ D}\right]. \tag{32}\]
Adding Eqs. (13) and (32), we obtain the following relation for \(\dot{H}+H^{2}\)
\[\dot{H}+H^{2}=-\frac{1}{6}\left(\rho_{m}+\rho_{D}\right)-\frac{\omega_{D}\rho_ {D}}{2}. \tag{33}\]
Therefore, inserting the result of Eq. (33) into Eq. (29), after some algebraic calculations, we obtain that the Ricci scalar \(R\) can be written as follows.
\[R = \rho_{m}+\rho_{D}-3\rho_{D}\omega_{D} \tag{34}\] \[= \rho_{m}+\rho_{D}\left(1-3\omega_{D}\right).\]
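The algebra leading from Eqs. (29) and (33) to Eq. (34) can also be checked symbolically; the following sympy sketch is only an illustrative verification and not part of the derivation.

```python
import sympy as sp

rho_m, rho_D, w_D = sp.symbols('rho_m rho_D omega_D')

# Eq. (33): H_dot + H^2 in terms of the energy densities and omega_D
Hdot_plus_H2 = -sp.Rational(1, 6) * (rho_m + rho_D) - w_D * rho_D / 2

# Eq. (29): R = 6 (H_dot + H^2 + (rho_m + rho_D)/3)
R = 6 * (Hdot_plus_H2 + (rho_m + rho_D) / 3)

# Eq. (34): R = rho_m + rho_D (1 - 3 omega_D)
assert sp.simplify(R - (rho_m + rho_D * (1 - 3 * w_D))) == 0
```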
From the result of Eq. (34), we can easily derive that the expression of the EoS parameter of the PLECHDE model \(\omega_{Dpl}\) is given by the following relation:
\[\omega_{Dpl} = -\frac{R}{3\rho_{Dpl}}+\frac{\Omega_{Dpl}+\Omega_{m}}{3\Omega_{ Dpl}}=-\frac{R}{3\rho_{Dpl}}+\frac{1+\Omega_{k}}{3\Omega_{Dpl}} \tag{35}\] \[= -\frac{R}{3\rho_{Dpl}}+\frac{1+u_{pl}}{3}=\frac{1}{3}\left(1+u_{ pl}-\frac{R}{\rho_{Dpl}}\right),\]
where we used the fact that \(\frac{\rho_{D}+\rho_{m}}{3\rho_{D}}=\frac{\Omega_{D}+\Omega_{m}}{3\Omega_{D}}\) while the parameter \(u_{pl}\) is defined as
\[u_{pl} = \frac{\rho_{m}}{\rho_{Dpl}}, \tag{36}\]
which is equivalent to the following expression:
\[u_{pl} = \frac{\Omega_{m}}{\Omega_{Dpl}}, \tag{37}\]
Moreover, using the expressions of the fractional energy density given in Eq. (17), we obtain
\[u_{pl} = \frac{1+\Omega_{k}}{\Omega_{Dpl}}-1, \tag{38}\]
Substituting into Eq. (35) the expression of the energy density \(\rho_{Dpl}\) given in Eq. (11) and using the relation between the fractional energy densities given in Eq. (17) along with the definition of \(u_{pl}\), we obtain the following expression of the EoS parameter for the R-PLECHDE model \(\omega_{Dpl}\)
\[\omega_{Dpl} = -\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-\frac{ 1+\Omega_{k}}{\Omega_{Dpl}}\right) \tag{39}\] \[= -\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-(1+u_{ pl})\right].\]
For completeness, we also derive the expression of the deceleration parameter \(q\). The deceleration parameter \(q\) is generally defined as
\[q = -\frac{\ddot{a}a}{\dot{a}^{2}}=-1-\frac{\dot{H}}{H^{2}}. \tag{40}\]
\(q\), combined with the Hubble parameter \(H\) and the dimensionless density parameters, forms a set of very useful parameters for the description of astrophysical observations. The expansion of the Universe is accelerated if \(\ddot{a}\) has a positive value, as recent cosmological observations suggest; in this case, \(q\) assumes a negative value. Differentiating the Friedmann equation given in Eq. (13) with respect to the cosmic time \(t\) and using the result of Eq. (17) along with the continuity equations for DM and DE in Eq. (40), it is possible to write the expression of the deceleration parameter \(q\) as
\[q=\frac{1}{2}\left(1+\Omega_{k}+3\Omega_{D}\omega_{D}\right). \tag{41}\]
Substituting in Eq. (41) the expression of the EoS parameter of DE \(\omega_{Dpl}\) of the R-PLECHDE model given in Eq. (39), we obtain the following expression of \(q_{pl}\) for the R-PLECHDE model
\[q_{pl}=1-\frac{1}{2}\left(\frac{\Omega_{D}}{3c^{2}-\delta R^{ \gamma/2-1}}\right)+\Omega_{k}. \tag{42}\]
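For readers who wish to evaluate Eqs. (39), (41) and (42) numerically, a minimal Python sketch is given below; the parameter values are placeholders chosen only to illustrate that the two expressions for the deceleration parameter agree, and are not fitted values.

```python
import math

def eos_plechde(R, Omega_D, Omega_k, c2, delta, gamma):
    """EoS parameter of the R-PLECHDE model, Eq. (39); c2 stands for c^2."""
    return -(1.0 / 3.0) * (1.0 / (3.0 * c2 - delta * R**(gamma / 2.0 - 1.0))
                           - (1.0 + Omega_k) / Omega_D)

def deceleration_plechde(R, Omega_D, Omega_k, c2, delta, gamma):
    """Deceleration parameter of the R-PLECHDE model, Eq. (42)."""
    return 1.0 + Omega_k - 0.5 * Omega_D / (3.0 * c2 - delta * R**(gamma / 2.0 - 1.0))

# Placeholder parameter values, for illustration only
R, Omega_D, Omega_k, c2, delta, gamma = 1.0, 0.7, 0.02, 0.81, 0.1, 3.0

w = eos_plechde(R, Omega_D, Omega_k, c2, delta, gamma)
q = deceleration_plechde(R, Omega_D, Omega_k, c2, delta, gamma)

# Consistency with the general relation q = (1 + Omega_k + 3 Omega_D omega_D)/2, Eq. (41)
assert math.isclose(q, 0.5 * (1.0 + Omega_k + 3.0 * Omega_D * w), rel_tol=1e-12)
```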
We can now derive some important quantities of the R-PLECHDE model in the limiting case of a flat Dark Dominated Universe, which is recovered when \(\delta=0\), \(\Omega_{Dpl}=1\), \(\Omega_{k}=\Omega_{m}=0\) and \(u_{pl}=0\).
The expression of the energy density of DE model reduces to
\[\rho_{D}=3c^{2}R, \tag{43}\]
where \(R\), in the case corresponding to a flat Dark Dominated Universe, is given by
\[R=6\left(\dot{H}+2H^{2}\right), \tag{44}\]
since we have that \(k=0\).
Therefore, the expression of \(\rho_{D}\) given in Eq. (43), using the expression of the Ricci scalar \(R\) given in Eq. (44), can be written as
\[\rho_{D}=18c^{2}\left(\dot{H}+2H^{2}\right). \tag{45}\]
Inserting the expression of \(\rho_{D}\) given in Eq. (45) into the Friedmann equation given in Eq. (13), we obtain the following first order differential equation for the Hubble parameter \(H\)
\[\dot{H}+\left(\frac{12c^{2}-1}{6c^{2}}\right)H^{2}=0, \tag{46}\]
whose solution is given by
\[H\left(t\right)=\left(\frac{6c^{2}}{12c^{2}-1}\right)\left(\frac{1}{t}\right). \tag{47}\]
In order to have a well defined expression of the Hubble parameter \(H\), we must have that \(c^{2}>1/12\).
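One can also verify symbolically that Eq. (47) solves Eq. (46); the following sympy sketch is an illustrative check only.

```python
import sympy as sp

t, c = sp.symbols('t c', positive=True)
H = sp.Function('H')

# Eq. (46): dH/dt + (12 c^2 - 1)/(6 c^2) H^2 = 0
lhs = H(t).diff(t) + (12 * c**2 - 1) / (6 * c**2) * H(t)**2

# Candidate solution, Eq. (47)
H_sol = 6 * c**2 / ((12 * c**2 - 1) * t)

# Substituting Eq. (47) into the left-hand side of Eq. (46) gives zero identically
residual = lhs.subs(H(t), H_sol).doit()
print(sp.simplify(residual))  # 0
```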
Inserting the expression of \(H\) derived in Eq. (47) into the expression of the Ricci scalar curvature for a flat Dark Dominated Universe obtained in Eq. (44), we obtain the following expression for the Ricci scalar \(R\) as a function of the cosmic time \(t\)
\[R\left(t\right)=\frac{36c^{2}}{\left(12c^{2}-1\right)^{2}}\left(\frac{1}{t^{2 }}\right). \tag{48}\]
Using the result of Eq. (48) in Eq. (43), we obtain the following result for the energy density of DE \(\rho_{D}\) as a function of the cosmic time \(t\)
\[\rho_{D}\left(t\right) = \frac{108c^{4}}{\left(12c^{2}-1\right)^{2}}\left(\frac{1}{t^{2}}\right) \tag{49}\] \[= 3\left(\frac{6c^{2}}{12c^{2}-1}\right)^{2}\left(\frac{1}{t^{2}} \right).\]
At last, the EoS parameter of DE \(\omega_{D}\) and the deceleration parameter \(q\) reduce, respectively, to
\[\omega_{D} = \frac{1}{3}-\frac{1}{9c^{2}}, \tag{50}\] \[q = 1-\frac{1}{6c^{2}}. \tag{51}\]
From Eq. (50), we obtain that, in the limiting case of a flat Dark Dominated Universe, the EoS parameter of DE assumes a constant value and, for \(c^{2}<1/12\), we obtain \(\omega_{D}<-1\), i.e., the phantom divide line can be crossed. Since the Ricci scalar diverges at \(c^{2}=1/12\), this value of \(c^{2}\) cannot be taken into account. Furthermore, from Eq. (51), we obtain that the accelerated phase of the Universe (which is recovered for \(q<0\)) starts for \(c^{2}\leq 1/6\), which corresponds to the onset of the quintessence regime (\(\omega_{D}\leq-1/3\)).
We can also observe that Eqs. (50) and (51) are related by the following relation:
\[\omega_{D}=\frac{2q}{3}-\frac{1}{3}. \tag{52}\]
We must also emphasize here that the case we obtained is similar to power-law expansion of scale factor obtained in [167], in which \(a(t)=t^{6c^{2}/(12c^{2}-1)}\).
Considering the value of \(c^{2}\) obtained in the paper of Gao [168], i.e., \(c^{2}\approx 0.46\), we obtain the
following values for the cosmological quantities we are dealing with
\[H(t) \approx \frac{0.610}{t}, \tag{53}\] \[R(t) \approx \frac{0.811}{t^{2}},\] (54) \[\rho_{D}(t) \approx \frac{1.119}{t^{2}},\] (55) \[\omega_{D} \approx 0.092,\] (56) \[q \approx 0.638,\] (57) \[a(t) \approx t^{0.610}. \tag{58}\]
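The numbers in Eqs. (53)-(58) follow directly from Eqs. (47)-(51) once \(c^{2}\approx 0.46\) is inserted; the short script below (plain Python, for illustration only) reproduces them.

```python
c2 = 0.46  # value of c^2 taken from ref. [168]

A    = 6 * c2 / (12 * c2 - 1)       # H(t) = A/t and a(t) = t^A, Eqs. (47) and (58)
R0   = 36 * c2 / (12 * c2 - 1)**2   # R(t) = R0/t^2, Eq. (48)
rho0 = 3 * A**2                     # rho_D(t) = rho0/t^2, Eq. (49)
w_D  = 1 / 3 - 1 / (9 * c2)         # Eq. (50)
q    = 1 - 1 / (6 * c2)             # Eq. (51)

for name, val in [("A", A), ("R0", R0), ("rho0", rho0), ("w_D", w_D), ("q", q)]:
    print(f"{name} = {val:.3f}")
# Output is close to 0.610, 0.811, 1.119, 0.092 and 0.638, i.e. Eqs. (53)-(58).
```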
Therefore, with the value of \(c^{2}\) derived in ref. [168], we obtain a decelerating Universe, since the deceleration parameter is positive. We now want to derive the evolutionary form of the fractional energy density of DE. Differentiating the expression of \(\Omega_{Dpl}\) given in Eq. (16) with respect to the variable \(x=\ln a\) (so that primes denote \(d/dx=H^{-1}d/dt\)), we obtain the following expression for \(\Omega^{\prime}_{Dpl}\)
\[\Omega^{\prime}_{Dpl} = \Omega_{Dpl}\left[\frac{\rho^{\prime}_{Dpl}}{\rho_{Dpl}}-2\left( \frac{\dot{H}}{H^{2}}\right)_{pl}\right], \tag{59}\]
We now derive the expression of \(\rho^{\prime}_{Dpl}\) from the continuity equation for DE given in Eq. (23)
\[\rho^{\prime}_{Dpl} = -3\rho_{Dpl}\left(1+\omega_{Dpl}\right). \tag{60}\]
We can find the expression of \(\left(\frac{\dot{H}}{H^{2}}\right)_{pl}\) from Eq. (33)
\[\left(\frac{\dot{H}}{H^{2}}\right)_{pl} = -1-\frac{1}{6H^{2}}\left(\rho_{m}+\rho_{Dpl}\right)-\frac{\omega _{Dpl}\rho_{Dpl}}{2H^{2}} \tag{61}\] \[= -1-\frac{\rho_{Dpl}}{6H^{2}}\left(1+u_{pl}\right)-\frac{\omega_{ Dpl}\rho_{Dpl}}{2H^{2}}.\]
Using the definition of the fractional energy density of DE defined in Eq. (16), we can write Eq. (61) as follows
\[\left(\frac{\dot{H}}{H^{2}}\right)_{pl} = -5-3\omega_{Dpl}\left(1+\Omega_{Dpl}\right)-\Omega_{Dpl}\left(1+u _{pl}\right). \tag{62}\]
Therefore, inserting into Eq. (59) the expression of \(\rho^{\prime}_{Dpl}\) obtained in Eq. (60) along with the result of Eq. (62), we obtain the following final expression for \(\Omega^{\prime}_{Dpl}\):
\[\Omega^{\prime}_{Dpl} = \Omega_{Dpl}\left[7+3\omega_{Dpl}\left(1+2\Omega_{Dpl}\right)+2 \Omega_{Dpl}\left(1+u_{pl}\right)\right]. \tag{63}\]
We now derive the evolutionary form of the parameter \(u_{pl}\). Differentiating the expression of \(u_{pl}\) given in Eq. (36) with respect to the cosmic time \(t\), we derive that
\[\dot{u}_{pl} = \frac{\dot{\rho}_{m}}{\rho_{Dpl}}-\frac{\rho_{m}\dot{\rho}_{Dpl}}{ \rho_{Dpl}^{2}}=\frac{\dot{\rho}_{m}}{\rho_{Dpl}}-u_{pl}\left(\frac{\dot{\rho}_{ Dpl}}{\rho_{Dpl}}\right). \tag{64}\]
By using the continuity equations for DE and DM given in Eqs. (22) and (23), we derive, after some calculations, that the time evolution of \(u_{pl}\) is governed by
\[\dot{u}_{pl} = 3Hu_{pl}\omega_{Dpl}, \tag{65}\]
which leads to the following expression for \(u^{\prime}_{pl}\):
\[u^{\prime}_{pl} = 3u_{pl}\omega_{Dpl}. \tag{66}\]
We now consider the interacting case. We still consider the general expression given by
\[\Omega^{\prime}_{Dpl} = \Omega_{Dpl}\left[\frac{\rho^{\prime}_{Dpl}}{\rho_{Dpl}}-2\left( \frac{\dot{H}}{H^{2}}\right)_{pl}\right]. \tag{67}\]
From the continuity equation for DE given in Eq. (25), we obtain the following expressions for \(\rho^{\prime}_{Dpl}\)
\[\rho^{\prime}_{Dpl} = -3\rho_{Dpl}\left(1+\omega_{Dpl}\right)-\frac{Q}{H}. \tag{68}\]
Inserting in Eq. (67) the expression of \(\rho^{\prime}_{Dpl}\) obtained in Eq. (68) along with the expression of \(\left(\frac{\dot{H}}{H^{2}}\right)_{pl}\), we can write \(\Omega^{\prime}_{Dpl}\)
\[\Omega^{\prime}_{Dpl} = \Omega_{Dpl}\left[7+3\omega_{Dpl}\left(1+2\Omega_{Dpl}\right)+2 \Omega_{Dpl}\left(1+u_{pl}\right)-\frac{Q}{H\rho_{Dpl}}\right]. \tag{69}\]
Using the expressions of \(Q_{1}\), \(Q_{2}\) and \(Q_{3}\) we have defined in Eqs. (26), (27) and (28), we finally obtain the following relations
\[\Omega^{\prime}_{Dpl1} = \Omega_{Dpl}\left[7+3\omega_{Dpl}\left(1+2\Omega_{Dpl}\right)+2 \Omega_{Dpl}\left(1+u_{pl}\right)-3b^{2}\left(1+u_{pl}\right)\right], \tag{70}\] \[\Omega^{\prime}_{Dpl2} = \Omega_{Dpl}\left[7+3\omega_{Dpl}\left(1+2\Omega_{Dpl}\right)+2 \Omega_{Dpl}\left(1+u_{pl}\right)-3b^{2}u_{pl}\right],\] (71) \[\Omega^{\prime}_{Dpl3} = \Omega_{Dpl}\left[7+3\omega_{Dpl}\left(1+2\Omega_{Dpl}\right)+2 \Omega_{Dpl}\left(1+u_{pl}\right)-3b^{2}\right]. \tag{72}\]
When \(b^{2}=0\), we recover the same results of the non-interacting case.
Following the same procedure of the non-interacting case, we can also derive that, for the interacting case, the evolutionary form of \(u_{pl}\) is governed by the following law:
\[u^{\prime}_{pl} = 3u_{pl}\omega_{Dpl}+\frac{Q\left(1+u_{pl}\right)}{H\rho_{Dpl}}. \tag{73}\]
Using the definitions of \(Q_{1}\), \(Q_{2}\) and \(Q_{3}\) we have chosen in Eqs. (26), (27) and (28), we derive that \(u^{\prime}_{pl}\) is given by the following relations:
\[u^{\prime}_{pl1} = 3u_{pl}\omega_{Dpl}+3b^{2}\left(1+u_{pl}\right)^{2}, \tag{74}\] \[u^{\prime}_{pl2} = 3u_{pl}\omega_{Dpl}+3b^{2}u_{pl}\left(1+u_{pl}\right),\] (75) \[u^{\prime}_{pl3} = 3u_{pl}\omega_{Dpl}+3b^{2}\left(1+u_{pl}\right). \tag{76}\]
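The step from the interacting continuity equations (24)-(25) to Eq. (73) can be checked symbolically; the following sympy sketch is only an illustrative verification and not part of the original text.

```python
import sympy as sp

t, H, Q, w = sp.symbols('t H Q omega_D')
rho_D = sp.Function('rho_D')(t)
rho_m = sp.Function('rho_m')(t)

# Interacting continuity equations, Eqs. (24)-(25)
drho_m = -3 * H * rho_m + Q
drho_D = -3 * H * rho_D * (1 + w) - Q

# u = rho_m / rho_D and its time derivative, cf. Eq. (64)
u = rho_m / rho_D
du_dt = sp.diff(u, t).subs({sp.Derivative(rho_m, t): drho_m,
                            sp.Derivative(rho_D, t): drho_D})

# Eq. (73) states u' = u_dot/H = 3 u omega_D + Q (1 + u)/(H rho_D),
# i.e. u_dot = 3 H u omega_D + Q (1 + u)/rho_D
expected_udot = 3 * H * u * w + Q * (1 + u) / rho_D
assert sp.simplify(du_dt - expected_udot) == 0
```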
We have now obtained all the relevant cosmological quantities of the R-PLECHDE model for both the interacting and non-interacting Dark Sectors. Therefore, we can now study the correspondence between the R-PLECHDE model and the scalar field models we are considering.
## III Correspondence between the R-PLECHDE model and scalar fields
In this Section, we want to establish a correspondence between the R-PLECHDE model and the following scalar field models: the Generalized Chaplygin Gas (GCG), the Modified Chaplygin Gas (MCG), the Modified Variable Chaplygin Gas (MVCG), the New Modified Chaplygin Gas (NMCG), the Viscous Generalized Chaplygin Gas (VGCG), the Dirac-Born-Infeld (DBI), the Yang-Mills (YM) and the Non Linear Electro-Dynamics (NLED) models [14; 169]. We make this choice since it is widely accepted that scalar field models provide an effective description of DE. For this reason, we first compare the energy density of the DE model considered in this paper with the energy density of the corresponding scalar field model, and we then equate the EoS parameter of each scalar field model with the EoS parameter of our DE model.
### Generalized Chaplygin Gas (GCG) Model
We start by making a correspondence between the R-PLECHDE model and the first scalar field model considered, i.e., the Generalized Chaplygin Gas (GCG) model. In a recent work, Kamenshchik _et al._ [170] considered a homogeneous model known as the Chaplygin Gas (CG) model, which is based on a single fluid obeying the EoS \(p=-\frac{A_{0}}{\rho}\), where \(p\) and \(\rho\) represent, respectively, the pressure and the energy density of the fluid, while the quantity \(A_{0}\) is a positive constant parameter. The authors then proposed a generalization known as the Generalized Chaplygin Gas (GCG) model.
The GCG model has the property that it can interpolate the evolution of the Universe from the dust-dominated to the accelerated phase; therefore, it can fit the observational cosmological data [171].
The GCG EoS is defined as follows [172; 173; 174; 175].
\[p_{D}=-\frac{D}{\rho_{D}^{\theta}}, \tag{77}\]
where \(D\) and \(\theta\) are two free constant parameters, with \(D\) positive and \(\theta\) in the range \(0<\theta<1\). The CG model is recovered at \(\theta=1\). The EoS given in Eq. (77) with \(\theta=1\) was studied for the first time in 1904 by Chaplygin in order to describe adiabatic processes [170]. The general case with \(\theta\neq 1\) has been studied in [176]. The idea that a cosmological model based on the Chaplygin gas could lead to the unification of DE and DM was first proposed for \(\theta=1\) in the papers [177; 178] and then generalized to \(\theta\neq 1\) in [176; 14].
It was derived by Gorini _et al._[179] that the matter power spectrum is compatible with the observed one only if we have \(\theta<10^{-5}\), which means that the GCG is practically indistinguishable from the standard cosmological model with Cosmological Constant \(\Lambda\). In [180], the Chaplygin inflation has been studied in the framework of the Loop Quantum Cosmology. Moreover, it was obtained that the parameters of the Chaplygin inflation model are consistent with the results of 5-year WMAP data [14].
The evolution of the energy density \(\rho_{D}\) of the GCG model is given by
\[\rho_{D}=\left[D+\frac{B}{a^{3(\theta+1)}}\right]^{\frac{1}{\theta+1}}, \tag{78}\]
where \(B\) represents a constant of integration.
In principle, Eq. (78) admits a wide range of positive values of the parameter \(\theta\); however, we must remember that the sound velocity (given by the relation \(c_{s}^{2}=\frac{D\theta}{\rho^{\theta+1}}\)) must not exceed the speed of light \(c\). Furthermore, as pointed out in Bento _et al._ [176], only in the range \(0<\theta<1\) does the analysis of the evolution of energy density fluctuations have a physical meaning.
We now want to reconstruct the potential and dynamics of this scalar field model in the framework of the DE model we are studying. The energy density \(\rho_{D}\) and the pressure \(p_{D}\) of the homogeneous and time-dependent scalar field \(\phi\) are given, respectively, by the following relations:
\[\rho_{D} = \frac{1}{2}\dot{\phi}^{2}+V\left(\phi\right), \tag{79}\] \[p_{D} = \frac{1}{2}\dot{\phi}^{2}-V\left(\phi\right). \tag{80}\]
Using the results of Eqs. (79) and (80), we derive the EoS parameter \(\omega_{D}\) of the GCG model
\[\omega_{D} = \frac{\frac{1}{2}\dot{\phi}^{2}-V\left(\phi\right)}{\frac{1}{2} \dot{\phi}^{2}+V\left(\phi\right)}=\frac{\dot{\phi}^{2}-2V\left(\phi\right)}{ \dot{\phi}^{2}+2V\left(\phi\right)}. \tag{81}\]
Adding Eqs. (79) and (80), we can easily derive that the kinetic energy \(\dot{\phi}^{2}\) term is given by the following relation:
\[\dot{\phi}^{2} = \rho_{D}+p_{D}. \tag{82}\]
Using in Eq. (82) the definition of \(p_{D}\) given in Eq. (77), we obtain the following relation:
\[\dot{\phi}^{2} = \rho_{D}-\frac{D}{\rho_{D}^{\theta}}=\rho_{D}\left(1-\frac{D}{ \rho_{D}^{\theta+1}}\right). \tag{83}\]
Considering the relation \(\dot{\phi}=H\phi^{\prime}\) and using the expression of \(\rho_{D}\) given in Eq. (78), we can write the evolutionary form of \(\phi\)
\[\phi = \int_{a_{0}}^{a}\sqrt{3\Omega_{D}}\left(\sqrt{1-\frac{D}{D+\frac{ B}{a^{3\left(\theta+1\right)}}}}\right)\frac{da}{a}, \tag{84}\]
whose solution, for a flat Dark Dominated Universe, is given by
\[\phi\left(a\right) = \frac{2\sqrt{3}}{3\left(1+\theta\right)}\times \tag{85}\] \[\left[\log\left(a^{\frac{1}{2}\left(3+3\theta\right)}\right)- \log\left(B+\sqrt{B\left(B+a^{3+3\theta}D\right)}\right)\right].\]
Subtracting Eqs. (79) and (80), we derive the scalar potential \(V\left(\phi\right)\) term
\[V\left(\phi\right) = \frac{1}{2}\left(\rho_{D}-p_{D}\right)=\frac{\rho_{D}}{2}\left( 1+\frac{D}{\rho_{D}^{\theta+1}}\right), \tag{86}\]
where we used the definition of \(p_{D}\) given in Eq. (77).
Considering the general expressions of \(\rho_{D}\) given in Eq. (78), we obtain the following expression for \(V\left(\phi\right)\)
\[V\left(\phi\right) = \frac{1}{2}\left[D+\frac{B}{a^{3\left(\theta+1\right)}}\right]^{ \frac{1}{\theta+1}}+\frac{1}{2}\frac{D}{\left[D+\frac{B}{a^{3\left(\theta+1 \right)}}\right]^{\frac{\theta}{\theta+1}}}. \tag{87}\]
We now want to derive the general expressions of the parameters \(D\) and \(B\) as functions of the other cosmological parameters. Dividing Eq. (77) by \(\rho_{D}\), the EoS parameter \(\omega_{D}\) reads
\[\omega_{D}=-\frac{D}{\rho_{D}^{\theta+1}}, \tag{88}\]
which allows to derive the following expression for the parameter \(D\)
\[D=-\omega_{D}\rho_{D}^{\theta+1}. \tag{89}\]
We also derive from Eq. (78) that the parameter \(B\) can be written as
\[B=a^{3(\theta+1)}\left(\rho_{D}^{\theta+1}-D\right), \tag{90}\]
which can be rewritten, substituting the expression of \(D\) given in Eq. (89) as
\[B=\left(a^{3}\rho_{D}\right)^{\theta+1}\left(1+\omega_{D}\right). \tag{91}\]
Using in Eqs. (89) and (91) the expression of the EoS parameter \(\omega_{Dpl}\) of the R-PLECHDE model given in Eq. (39), we can write \(D_{pl}\) and \(B_{pl}\) as follows.
\[D_{pl} = \frac{\rho_{Dpl}^{\theta+1}}{3}\left(\frac{1}{3c^{2}-\delta R^{ \gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right) \tag{92}\] \[= \frac{\rho_{Dpl}^{\theta+1}}{3}\left[\frac{1}{3c^{2}-\delta R^{ \gamma/2-1}}-\left(1+u_{pl}\right)\right],\] \[B_{pl} = \left(a^{3}\rho_{Dpl}\right)^{\theta+1}\times\] (93) \[\left[1-\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}- \frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]\] \[= \left(a^{3}\rho_{Dpl}\right)^{\theta+1}\times\] \[\left\{1-\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}} -\left(1+u_{pl}\right)\right]\right\}.\]
We now want to obtain the expressions of \(D\) and \(B\) for the limiting case of a flat Dark Dominated Universe. Considering the expressions of \(\rho_{D}\) and \(\omega_{D}\) for the flat Dark Dominated case obtained, respectively, in Eqs. (49) and (50), we obtain that \(D_{Dark}\) and \(B_{Dark}\) are given by the following expressions:
\[D_{Dark} = \left[3\left(\frac{6c^{2}}{12c^{2}-1}\right)^{2}\left(\frac{1}{t^ {2}}\right)\right]^{\theta+1}\left(\frac{1-3c^{2}}{9c^{2}}\right), \tag{94}\] \[B_{Dark} = \left\{\left[3\left(\frac{6c^{2}}{12c^{2}-1}\right)^{2}\right]t^ {-2\left(3c^{2}-1\right)/\left(12c^{2}-1\right)}\right\}^{\theta+1}\left( \frac{12c^{2}-1}{9c^{2}}\right), \tag{95}\]
where we used the expression of the scale factor given by the relation \(a\left(t\right)=t^{6c^{2}/\left(12c^{2}-1\right)}\).
Using the value of \(c^{2}\) found in the work of Gao, i.e., \(c^{2}\approx 0.46\), we can write
\[D_{Dark} \approx -0.092\left(\frac{1.119}{t^{2}}\right)^{\theta+1}, \tag{96}\] \[B_{Dark} \approx 1.092\left[\left(1.119\right)t^{-0.168}\right]^{\theta+1}. \tag{97}\]
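For completeness, the coefficients entering Eqs. (96) and (97) can be reproduced from Eqs. (49), (50), (89) and (91); the following Python snippet is an illustrative check only.

```python
c2 = 0.46

rho0 = 3 * (6 * c2 / (12 * c2 - 1))**2      # rho_D(t) = rho0/t^2, Eq. (49)
w_D  = 1 / 3 - 1 / (9 * c2)                 # Eq. (50)

D_coeff   = -w_D                            # prefactor of rho_D^(theta+1) in D_Dark, Eq. (89)
B_coeff   = 1 + w_D                         # prefactor of (a^3 rho_D)^(theta+1) in B_Dark, Eq. (91)
a3rho_exp = 18 * c2 / (12 * c2 - 1) - 2     # time exponent of a^3 rho_D

print(f"{rho0:.3f} {D_coeff:.3f} {B_coeff:.3f} {a3rho_exp:.3f}")
# 1.119 -0.092 1.092 -0.168, i.e. the numbers entering Eqs. (96) and (97)
```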
Furthermore, using the general definition of EoS parameter of DE \(\omega_{D}\), we can rewrite Eqs. (82) and (86) as
\[\dot{\phi}^{2} = \left(1+\omega_{D}\right)\rho_{D}, \tag{98}\] \[V\left(\phi\right) = \frac{1}{2}\left(1-\omega_{D}\right)\rho_{D}. \tag{99}\]
Using in Eqs. (98) and (99) the expression of the EoS parameter \(\omega_{Dpl}\) of the R-PLECHDE model given in Eq. (39), we can derive the kinetic and the potential terms for the R-PLECHDE model
\[\dot{\phi}_{pl}^{2} = \rho_{Dpl}\left[1-\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right] \tag{100}\] \[= \rho_{Dpl}\left\{1-\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-\left(1+u_{pl}\right)\right]\right\},\] \[V\left(\phi\right)_{pl} = \frac{\rho_{Dpl}}{2}\left[1+\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right] \tag{101}\] \[= \frac{\rho_{Dpl}}{2}\left\{1+\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-\left(1+u_{pl}\right)\right]\right\}.\]
We can obtain the evolutionary form of the GCG model integrating Eq. (100) with respect to the scale factor \(a\left(t\right)\)
\[\phi\left(a\right)_{pl}-\phi\left(a_{0}\right)_{pl} = \int_{a_{0}}^{a}\sqrt{3\Omega_{Dpl}}\times \tag{102}\] \[\left[1-\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}- \frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]^{1/2}\frac{da}{a}\] \[= \int_{a_{0}}^{a}\sqrt{3\Omega_{Dpl}}\times\] \[\left\{1-\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}} -\left(1+u_{pl}\right)\right]\right\}^{1/2}\frac{da}{a},\]
where we used the relation \(\dot{\phi}=\phi^{\prime}H\).
In the limiting case for a flat Dark Dominated Universe, i.e., when \(\Omega_{Dpl}=1\), \(\Omega_{k}=\Omega_{m}=0\), and \(\delta=0\) for the R-PLECHDE model, the scalar field and the potential of the GCG reduce, respectively, to
\[\phi\left(t\right) = \left[\frac{6c^{2}}{\sqrt{3c^{2}\left(12c^{2}-1\right)}}\right] \ln\left(t\right), \tag{103}\] \[V\left(t\right) = \frac{6c^{2}\left(6c^{2}+1\right)}{\left(12c^{2}-1\right)^{2}} \left(\frac{1}{t^{2}}\right). \tag{104}\]
Using the value of \(c^{2}\) found in the work of Gao, i.e., \(c^{2}\approx 0.46\), we can write
\[\phi\left(t\right) \approx 1.105\ln\left(t\right), \tag{105}\] \[V\left(t\right) \approx \left(\frac{0.508}{t^{2}}\right). \tag{106}\]
Moreover, after some algebraic calculations, we derive that the potential \(V\) can be written as function of the scalar field \(\phi\) as
\[V(\phi) = \left[\frac{6c^{2}(6c^{2}+1)}{(12c^{2}-1)^{2}}\right]e^{-\frac{ \sqrt{3c^{2}(12c^{2}-1)}}{3c^{2}}\phi}. \tag{107}\]
Using the value of \(c^{2}\) found in the work of Gao, i.e., \(c^{2}\approx 0.46\), we can write
\[V(\phi) \approx 0.508e^{-1.810\phi}. \tag{108}\]
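Again for \(c^{2}\approx 0.46\), the coefficients quoted in Eqs. (105), (106) and (108) can be reproduced numerically; the following snippet is an illustrative check only.

```python
import math

c2 = 0.46

phi_coeff = 6 * c2 / math.sqrt(3 * c2 * (12 * c2 - 1))    # coefficient of ln(t) in Eq. (103)
V_coeff   = 6 * c2 * (6 * c2 + 1) / (12 * c2 - 1)**2      # coefficient of 1/t^2 in Eq. (104)
exp_coeff = math.sqrt(3 * c2 * (12 * c2 - 1)) / (3 * c2)  # exponent coefficient in Eq. (107)

print(f"{phi_coeff:.3f} {V_coeff:.3f} {exp_coeff:.3f}")
# 1.105 0.508 1.810, matching Eqs. (105), (106) and (108)
```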
### Modified Chaplygin Gas (MCG) Model
We now consider the second scalar field model studied in this paper, the Modified Chaplygin Gas (MCG) model. This model represents a generalization of the GCG model [181; 182; 183; 184] with the addition of a barotropic term, and it is consistent with the 5-year WMAP data.
The MCG EoS is defined as follows [181; 185].
\[p_{D}=A\rho_{D}-\frac{D}{\rho_{D}^{\theta}}, \tag{109}\]
where \(A\) and \(D\) are two positive constant parameters and \(\theta\) is constrained in the range \(0\leq\theta\leq 1\). An interesting characteristic of the MCG EoS is that it reproduces a radiation era in the early phases of the Universe, while at late times it behaves like a \(\Lambda\)CDM model.
The energy density \(\rho_{D}\) of the MCG model is given by
\[\rho_{D}=\left[\frac{D}{A+1}+\frac{B}{a^{3(\theta+1)(A+1)}}\right]^{\frac{1}{ \theta+1}}, \tag{110}\]
where \(B\) represents a constant of integration.
We now want to reconstruct the potential and dynamics of the scalar field \(\phi\). The energy density \(\rho_{D}\) and pressure \(p_{D}\) of the scalar field model are given, respectively, by
\[\rho_{D} = \frac{1}{2}\dot{\phi}^{2}+V(\phi), \tag{111}\] \[p_{D} = \frac{1}{2}\dot{\phi}^{2}-V(\phi). \tag{112}\]
Using the expressions of \(\rho_{D}\) and \(p_{D}\) given in Eqs. (111) and (112), we derive the EoS parameter \(\omega_{D}\) for the MCG model
\[\omega_{D} = \frac{\frac{1}{2}\dot{\phi}^{2}-V(\phi)}{\frac{1}{2}\dot{\phi}^{2 }+V(\phi)}=\frac{\dot{\phi}^{2}-2V(\phi)}{\dot{\phi}^{2}+2V(\phi)}. \tag{113}\]
Adding the expressions of Eqs. (111) and (112), we derive the kinetic energy \(\dot{\phi}^{2}\) term
\[\dot{\phi}^{2} = \rho_{D}+p_{D}. \tag{114}\]
Using the definition of \(p_{D}\) given in Eq. (112), we obtain the following relation for \(\dot{\phi}^{2}\)
\[\dot{\phi}^{2}\;=\;\rho_{D}+A\rho_{D}-\frac{D}{\rho_{D}^{\theta}}=\rho_{D}\left(1 +A-\frac{D}{\rho^{\theta+1}}\right). \tag{115}\]
Inserting in Eq. (115) the expression of \(\rho_{D}\) given in Eq. (110), we can write
\[\dot{\phi}^{2}\;=\;\rho_{D}\left\{1+A-\frac{D}{\left[\frac{D}{A+1}+\frac{B}{a^ {3(\theta+1)(A+1)}}\right]}\right\}. \tag{116}\]
Using the relation \(\dot{\phi}=H\phi^{\prime}\), we can write
\[\phi\;=\;\int_{a_{0}}^{a}\sqrt{3\Omega_{D}}\sqrt{\left(1+A\right)-\frac{D}{ \left[\frac{D}{A+1}+\frac{B}{a^{3(\theta+1)(A+1)}}\right]}}\frac{da}{a}, \tag{117}\]
whose solution, for a flat Dark Dominated Universe, is given by
\[\phi\left(a\right)\;=\;\frac{2}{\sqrt{3}\left(1+A\right)^{1/2}\left(1+\theta \right)}\cdot\mathrm{ArcTanh}\left[\frac{\sqrt{(1+A)B+a^{3(1+A)(1+\theta)}D}}{ \sqrt{B\left(1+A\right)}}\right]. \tag{118}\]
Subtracting Eqs. (111) and (112), we can easily derive the scalar potential \(V\left(\phi\right)\) term as
\[V\left(\phi\right)\;=\;\frac{1}{2}\left(\rho_{D}-p_{D}\right). \tag{119}\]
Considering in Eq. (119) the general expressions of \(p_{D}\) and \(\rho_{D}\) given, respectively, in Eqs. (109) and (110), we can write \(V\left(\phi\right)\) as
\[V\left(\phi\right)\;=\;\frac{\left(1-A\right)}{2}\left[\frac{D}{A+1}+\frac{B} {a^{3(\theta+1)(A+1)}}\right]^{\frac{1}{\theta+1}}+\frac{D}{2\left[\frac{D}{A +1}+\frac{B}{a^{3(\theta+1)(A+1)}}\right]^{\frac{\theta}{\theta+1}}}. \tag{120}\]
We now want to find the expressions of the parameters \(D\) and \(B\). Dividing the expression of \(p_{D}\) given in Eq. (109) by \(\rho_{D}\), we derive that the EoS parameter \(\omega_{D}\) can be expressed as follows.
\[\omega_{D}=A-\frac{D}{\rho_{D}^{\theta+1}}. \tag{121}\]
Therefore, from Eq. (121), we can easily obtain the following expression for \(D\)
\[D=\rho_{D}^{\theta+1}\left(A-\omega_{D}\right). \tag{122}\]
Moreover, from Eq. (121), we can also derive the following expression for the parameter \(A\):
\[A=\omega_{D}+\frac{D}{\rho_{D}^{\theta+1}}. \tag{123}\]
Instead, from Eq. (110), we derive that \(B\) can be rewritten as follows.
\[B=a^{3(\theta+1)(A+1)}\left(\rho_{D}^{\theta+1}-\frac{D}{A+1}\right). \tag{124}\]
Substituting in Eq. (124) the expression of \(D\) given in Eq (122), we obtain the following relation for \(B\)
\[B=\left[a^{3(A+1)}\rho_{D}\right]^{1+\theta}\left(\frac{1+\omega_{D}}{1+A} \right). \tag{125}\]
Inserting in Eqs. (122) and (125) the expression of the EoS parameter \(\omega_{Dpl}\) of the R-PLECHDE model given in Eq. (39), we derive the following expressions for \(D_{pl}\) and \(B_{pl}\)
\[D_{pl} = \left(\rho_{Dpl}\right)^{\theta+1}\left[A+\frac{1}{3}\left(\frac{ 1}{3c^{2}-\delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right] \tag{126}\] \[= \left(\rho_{Dpl}\right)^{\theta+1}\left\{A+\frac{1}{3}\left[ \frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-(1+u_{pl})\right]\right\},\] \[B_{pl} = \frac{\left[a^{3(A+1)}\rho_{Dpl}\right]^{\theta+1}}{1+A}\left[1- \frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{ \Omega_{Dpl}}\right)\right]\] (127) \[= \frac{\left[a^{3(A+1)}\rho_{Dpl}\right]^{\theta+1}}{1+A}\left\{1 -\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-(1+u_{pl})\right] \right\}.\]
In the limiting case of a flat Dark Dominated Universe, we obtain that the expressions of \(D_{Dark}\) and \(B_{Dark}\) are given, respectively, by
\[D_{Dark} = \left[3\left(\frac{6c^{2}}{12c^{2}-1}\right)^{2}\left(\frac{1}{t^{2}}\right)\right]^{\theta+1}\left(A-\frac{1}{3}+\frac{1}{9c^{2}}\right), \tag{128}\] \[B_{Dark} = \frac{\left[3\left(\frac{6c^{2}}{12c^{2}-1}\right)^{2}t^{\frac{2\left(9c^{2}A-3c^{2}+1\right)}{12c^{2}-1}}\right]^{\theta+1}}{1+A}\left(\frac{12c^{2}-1}{9c^{2}}\right), \tag{129}\]
where the results of Eqs. (49) and (50) for \(\rho_{D}\) and \(\omega_{D}\) are utilized.
Using the value \(c^{2}\approx 0.46\), we can write
\[D_{Dark} \approx \left[\left(\frac{1.119}{t^{2}}\right)\right]^{\theta+1}\left(A-0.092\right), \tag{130}\] \[B_{Dark} \approx \frac{1.092}{1+A}\left[1.119\,t^{0.442\left(4.14A-0.38\right)}\right]^{\theta+1}. \tag{131}\]
Furthermore, using the general definition of the EoS parameter \(\omega_{D}\), we can rewrite Eqs. (82) and (86) as follows.
\[\dot{\phi}^{2} = \left(1+\omega_{D}\right)\rho_{D}, \tag{132}\] \[V\left(\phi\right) = \frac{1}{2}\left(1-\omega_{D}\right)\rho_{D}. \tag{133}\]
Using in Eqs. (132) and (133) the expression of the EoS parameter \(\omega_{Dpl}\) of the R-PLECHDE model
given in Eq. (39), we can derive the kinetic and the potential terms for the R-PLECHDE model
\[\dot{\phi}_{pl}^{2} = \rho_{Dpl}\left[1-\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/ 2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right] \tag{134}\] \[\rho_{Dpl}\left\{1-\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{ \gamma/2-1}}-(1+u_{pl})\right]\right\},\] \[V\left(\phi\right)_{pl} = \frac{\rho_{Dpl}}{2}\left[1+\frac{1}{3}\left(\frac{1}{3c^{2}- \delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]\] (135) \[\frac{\rho_{Dpl}}{2}\left\{1+\frac{1}{3}\left[\frac{1}{3c^{2}- \delta R^{\gamma/2-1}}-(1+u_{pl})\right]\right\}. \tag{136}\]
We can obtain the evolutionary form of the MCG by integrating Eq. (134) with respect to the scale factor \(a\left(t\right)\)
\[\phi\left(a\right)_{pl}-\phi\left(a_{0}\right)_{pl} = \int_{a_{0}}^{a}\sqrt{3\Omega_{Dpl}}\times \tag{137}\] \[\left[1-\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}- \frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]^{1/2}\frac{da}{a}\] \[= \int_{a_{0}}^{a}\sqrt{3\Omega_{Dpl}}\times\] \[\left\{1-\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}- (1+u_{pl})\right]\right\}^{1/2}\frac{da}{a},\]
where \(\dot{\phi}=\phi^{\prime}H\).
In the limiting case for a flat Dark Dominated Universe, i.e., when \(\Omega_{Dpl}=1\), \(\Omega_{k}=\Omega_{m}=0\) and \(\delta=0\) for the R-PLECHDE model, the scalar field and the potential of the MCG reduce, respectively, to
\[\phi\left(t\right) = \frac{6c^{2}}{\sqrt{3c^{2}\left(12c^{2}-1\right)}}\ln\left(t \right), \tag{138}\] \[V\left(t\right) = \frac{6c^{2}\left(6c^{2}+1\right)}{\left(12c^{2}-1\right)^{2}} \left(\frac{1}{t^{2}}\right). \tag{139}\]
Using the value of \(c^{2}\) found in the work of Gao, i.e., \(c^{2}\approx 0.46\), we can write
\[\phi\left(t\right) \approx 1.105\ln\left(t\right), \tag{140}\] \[V\left(t\right) \approx \left(\frac{0.508}{t^{2}}\right). \tag{141}\]
Moreover, after some algebraic calculations, we derive that the potential \(V\) can be written as function of the scalar field \(\phi\) as follows.
\[V(\phi) = \left[\frac{6c^{2}(6c^{2}+1)}{(12c^{2}-1)^{2}}\right]e^{-\frac{ \sqrt{3c^{2}\left(12c^{2}-1\right)}}{3c^{2}}\phi}. \tag{142}\]
Using the value \(c^{2}\approx 0.46\), we can write
\[V(\phi) \approx 0.508e^{-1.810\phi}. \tag{143}\]
### Modified Variable Chaplygin Gas (MVCG)
We now consider the Modified Variable Chaplygin Gas (MVCG) model. Guo and Zhang [186] recently proposed a model known as the Variable Chaplygin Gas (VCG) with the following EoS
\[p_{D}=-\frac{B}{\rho_{D}}, \tag{144}\]
where \(B\) indicates a function of the scale factor \(a\left(t\right)\), i.e., \(B=B\left(a\left(t\right)\right)\). This particular assumption seems reasonable, since \(B\) is related to the scalar potential when the CG is interpreted via a Born-Infeld scalar field [187]. In the following part of this Section, we will omit, for simplicity, the temporal dependence of the scale factor. The VCG model has been studied in the recent papers [188; 189]. Debnath [190] proposed the EoS of the Modified Variable Chaplygin Gas (MVCG) model in the following form.
\[p_{D}=A\rho_{D}-\frac{B\left(a\right)}{\rho_{D}^{\theta}}. \tag{145}\]
In this paper, we choose \(B(a)=B_{0}a^{-\delta_{1}}\). Therefore, we can write the pressure \(p_{D}\) of the MVCG model as follows.
\[p_{D}=A\rho_{D}-\frac{B_{0}a^{-\delta_{1}}}{\rho_{D}^{\theta}}. \tag{146}\]
where \(A\), \(B_{0}\) and \(\delta_{1}\) indicate three positive constant parameters, with \(B_{0}\) being the present day value of \(B\) and \(\delta_{1}\) being the exponent of the scale factor. Moreover, \(\theta\) is usually taken in the range of values \(0\leq\theta\leq 1\).
In the limiting case of \(B_{0}=0\), Eq. (146) leads to a barotropic EoS (or, equivalently, to a barotropic fluid). In general, the barotropic EoS \(p=A\rho\) is able to describe different kinds of media. For example, the limiting case with \(A=-1\) (i.e., \(p=-\rho\)) leads to the Cosmological Constant case; the limiting case with \(A=-2/3\) leads to domain walls; the limiting case with \(A=-1/3\) produces cosmic strings; the limiting case with \(A=0\) corresponds to dust or matter; when \(A=1/3\), we obtain the EoS of a relativistic gas; the limiting case with \(A=2/3\) gives the perfect gas; finally, the limiting case with \(A=1\) represents ultra-stiff matter. If we consider a constant expression of \(B\), i.e., \(B=B_{0}\), in Eq. (146) (which is recovered in the limiting case of \(\delta_{1}=0\)), we recover the EoS of the original modified CG model. Eq. (146) shows that the MVCG scenario interpolates between a radiation dominated phase \(\left(A=\frac{1}{3}\right)\) and a quintessence-dominated phase described by a constant EoS. In the limiting case corresponding to \(A=0\) and \(\theta=1\), we obtain the usual CG. Recently, it was derived, using the latest Supernovae data, that models with \(\theta>1\) are also possible [191]. We must also underline that, in the limiting case corresponding to \(A=0\), Eq. (146) yields a fluid with negative pressure, which generally characterizes the quintessence regime.
This modified form of the CG also has a phenomenological motivation, since it can explain the flat rotational curves of galaxies [192]. The galactic rotational velocity \(V_{c}\) is related to the MVCG parameter \(A\) through the relation \(V_{c}=\sqrt{2A}\), while the energy density \(\rho\) is related to the radial size of the galaxy \(r\) by the relation \(\rho=\frac{A}{2\pi Gr^{2}}\). At high densities, the first term of the MVCG model is the dominant one and it produces the flat rotational curve that is consistent with present observations. The parameter \(A\) varies from galaxy to galaxy due to the variations of \(V_{c}\).
The energy density \(\rho_{D}\) of the MVCG model is given by the following relation:
\[\rho_{D}=\left\{\frac{3\left(\theta+1\right)B_{0}}{\left[3\left(\theta+1 \right)\left(A+1\right)-\delta_{1}\right]}\left(\frac{1}{a^{\delta_{1}}} \right)-\frac{C}{a^{3\left(\theta+1\right)\left(A+1\right)}}\right\}^{\frac{ 1}{1+\theta}}, \tag{147}\]
where \(C\) is a positive constant of integration and \(3\left(\theta+1\right)\left(A+1\right)>\delta_{1}\), so that the first term is positive. Moreover, \(\delta_{1}\) must be positive: otherwise, as the scale factor tends to infinity, the energy density \(\rho_{D}\) would tend to infinity too (which is not the case for the expanding Universe).
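As an illustration, Eq. (147) can be coded directly; the parameter values below are purely illustrative (they are not fitted values from the text) and are chosen so that the density stays positive over the sampled range of the scale factor.

```python
def rho_mvcg(a, A=0.2, B0=1.0, delta1=1.0, theta=0.5, C=0.01):
    """Energy density of the MVCG model, Eq. (147); illustrative parameters."""
    n = 3.0 * (theta + 1.0) * (A + 1.0)            # exponent 3(theta+1)(A+1)
    if n <= delta1:
        raise ValueError("need 3(theta+1)(A+1) > delta1 for a positive first term")
    first  = 3.0 * (theta + 1.0) * B0 / (n - delta1) * a**(-delta1)
    second = C * a**(-n)
    return (first - second)**(1.0 / (1.0 + theta))

# rho_D decreases as the Universe expands, as expected:
print(rho_mvcg(0.5), rho_mvcg(1.0), rho_mvcg(2.0))
```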
We now reconstruct the expressions of the potential and the dynamics of the scalar field. For this purpose, we consider a time dependent scalar field \(\phi\left(t\right)\) with potential \(V\left(\phi\right)\), which are directly related with the energy density and pressure of MVCG as follows.
\[\rho_{D} = \frac{1}{2}\dot{\phi}^{2}+V\left(\phi\right), \tag{148}\] \[p_{D} = \frac{1}{2}\dot{\phi}^{2}-V\left(\phi\right). \tag{149}\]
Since the kinetic term is positive, we have that the MVCG is of quintessence type.
We know that the deceleration parameter \(q\) can be expressed through the following relation:
\[q=-\frac{\ddot{a}}{aH^{2}}. \tag{150}\]
In order to have an accelerating Universe, \(q\) must be negative, i.e., we must have \(\ddot{a}>0\) since the scale factor \(a\) is positive and \(H^{2}\) is always positive. \(\ddot{a}>0\) implies the following relation:
\[\left[\frac{2\left(1+\theta\right)-\delta_{1}}{3\left(1+\theta\right)\left(1+ A\right)-\delta_{1}}\right]a^{3\left(1+\theta\right)\left(1+A\right)-\delta_{1}}> \frac{C\left(1+3A\right)}{3B_{0}}. \tag{151}\]
The result of Eq. (151) requires \(\delta_{1}<2\left(1+\theta\right)\). Since we also have that \(0\leq\theta\leq 1\), we derive from the condition \(\delta_{1}<2\left(1+\theta\right)\) that the value of \(\delta_{1}\) must be in the range \(0<\delta_{1}<4\).
This expression shows that, for small values of the scale factor, we have a decelerating Universe, while for large values of the scale factor we have an accelerating Universe; the transition occurs when the scale factor assumes the value
\[a=\left\{\frac{C(1+3A)\left[3(1+\theta)(1+A)-\delta_{1}\right]}{3B_{0}\left[2(1+ \theta)-\delta_{1}\right]}\right\}^{\frac{1}{3(1+\theta)(1+A)-\delta_{1}}}. \tag{152}\]
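A small consistency check (a sketch with illustrative parameter values) shows that the scale factor of Eq. (152) indeed saturates the acceleration condition of Eq. (151): at that value the two sides of Eq. (151) coincide.

```python
A, theta, delta1, B0, C = 0.2, 0.5, 1.0, 1.0, 0.3   # illustrative values

n = 3.0 * (1.0 + theta) * (1.0 + A) - delta1        # exponent appearing in Eq. (151)
m = 2.0 * (1.0 + theta) - delta1                    # requires delta1 < 2(1+theta)

a_tr = (C * (1.0 + 3.0 * A) * n / (3.0 * B0 * m))**(1.0 / n)   # Eq. (152)

lhs = (m / n) * a_tr**n                              # left-hand side of Eq. (151)
rhs = C * (1.0 + 3.0 * A) / (3.0 * B0)               # right-hand side of Eq. (151)
print(a_tr, lhs, rhs)
assert abs(lhs - rhs) < 1e-12                        # equality at the transition
```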
For small values of the scale factor \(a\left(t\right)\), we have the following relation between the energy density \(\rho\) and the scale factor \(a\)
\[\rho\cong\frac{C^{\frac{1}{1+\theta}}}{a^{3(1+A)}}, \tag{153}\]
which corresponds to a Universe dominated by an EoS of the type \(p=A\rho\).
Instead, for large values of the scale factor, we have the following relation between \(\rho\) and the scale factor
\[\rho\cong\left[\frac{3(1+\theta)B_{0}}{3(1+\theta)(1+A)-\delta_{1}}\right]^{ \frac{1}{(1+\theta)}}a^{-\frac{\theta}{1+\theta}}, \tag{154}\]
which corresponds to the following EoS:
\[p=\left[\frac{\delta_{1}}{3(1+\theta)}-1\right]\rho, \tag{155}\]
which describes a quintessence model [193].
We have that, in the limiting case corresponding to \(\delta_{1}=0\), Eq. (155) leads to the original modified Chaplygin gas scenario [194]. However, Eq. (155) shows that the variable modified Chaplygin gas scenario interpolates between a radiation dominated phase (which corresponds to the case with \(A=1/3\)) and a quintessence-dominated phase described by the constant EoS \(p=\gamma\rho\), where \(\gamma=-1+\frac{\delta_{1}}{3(1+\theta)}<-\frac{1}{3}\).
Moreover, the energy density given in Eq. (147) must be positive, so that the scale factor \(a\left(t\right)\) must obey the following condition:
\[a\left(t\right)>\left\{-\frac{C\left[3\left(\theta+1\right)\left(A+1\right)-\delta_{1}\right]}{3\left(\theta+1\right)B_{0}}\right\}^{\frac{1}{3\left(\theta+1\right)\left(A+1\right)-\delta_{1}}}. \tag{156}\]
Therefore, we have that the minimum value of the scale factor \(a\left(t\right)\) is given by
\[a_{min}\left(t\right)=\left\{-\frac{C\left[3\left(\theta+1\right)\left(A+1 \right)-\delta_{1}\right]}{3\left(\theta+1\right)B_{0}}\right\}^{\frac{1}{3 \left(\theta+1\right)\left(A+1\right)-\delta_{1}}}. \tag{157}\]
Adding Eqs. (148) and (149), we can easily derive the kinetic energy \(\dot{\phi}^{2}\) term as follows.
\[\dot{\phi}^{2} = \rho_{D}+p_{D}. \tag{158}\]
Instead, subtracting Eqs. (148) and (149), we can easily derive the scalar potential \(V\left(\phi\right)\) term as follows.
\[V\left(\phi\right) = \frac{\left(\rho_{D}-p_{D}\right)}{2}. \tag{159}\]
Inserting in Eq. (158) the expression of \(p_{D}\) and \(\rho_{D}\) given, respectively, in Eqs. (146) and (147), we obtain the following expression for \(\dot{\phi}^{2}\)
\[\dot{\phi}^{2} = \left(1+A\right)\left\{\frac{3\left(\theta+1\right)B_{0}}{\left[ 3\left(\theta+1\right)\left(A+1\right)-\delta_{1}\right]}\left(\frac{1}{a^{ \delta_{1}}}\right)-\frac{C}{a^{3\left(\theta+1\right)\left(A+1\right)}} \right\}^{\frac{1}{1+\theta}} \tag{160}\] \[- \frac{B_{0}a^{-\delta_{1}}}{\left\{\frac{3\left(\theta+1\right) B_{0}}{\left[3\left(\theta+1\right)\left(A+1\right)-\delta_{1}\right]}\left( \frac{1}{a^{\delta_{1}}}\right)-\frac{C}{a^{3\left(\theta+1\right)\left(A+1 \right)}}\right\}^{\frac{\theta}{1+\theta}}}.\]
Using the relation \(\dot{\phi}=H\phi^{\prime}\), we can write
\[\phi = \int_{t_{0}}^{t}\left(1+A\right)^{1/2}\left\{\frac{3\left(\theta +1\right)B_{0}}{\left[3\left(\theta+1\right)\left(A+1\right)-\delta_{1}\right] }\left(\frac{1}{a^{\delta_{1}}}\right)-\frac{C}{a^{3\left(\theta+1\right)\left( A+1\right)}}\right\}^{\frac{1}{2\left(1+\theta\right)}}dt \tag{161}\] \[- \int_{t_{0}}^{t}\frac{\left(B_{0}a^{-\delta_{1}}\right)^{1/2}}{ \left\{\frac{3\left(\theta+1\right)B_{0}}{\left[3\left(\theta+1\right)\left(A +1\right)-\delta_{1}\right]}\left(\frac{1}{a^{\delta_{1}}}\right)-\frac{C}{a^{ 3\left(\theta+1\right)\left(A+1\right)}}\right\}^{\frac{\theta}{2\left(1+ \theta\right)}}}dt.\]
From the Friedmann equation given in Eq. (13), for \(k=0\) (i.e., for a flat Universe), we get the explicit form of \(t\) as a function of the scale factor \(a\left(t\right)\) as follows.
\[t=Ka^{\frac{\delta_{1}}{2\left(1+\theta\right)}}\,{}_{2}F_{1}\left[\frac{1}{2 \left(1+\theta\right)},-z,1-z,-\left(\frac{C}{K}\right)a^{-\frac{\delta_{1}}{2 \left(1+\theta\right)z}}\right], \tag{162}\]
where the quantities \(K\) and \(z\) are defined as follows.
\[K = \frac{2}{\delta_{1}}\left[\left(1+\theta\right)^{\theta}\sqrt{ \frac{\delta_{1}}{6B_{0}z}}\right]^{\frac{1}{1+\theta}}, \tag{163}\] \[z = \frac{\delta_{1}}{2\left(1+\theta\right)\left[3\left(1+A\right) \left(1+\theta\right)-\delta_{1}\right]}. \tag{164}\]
Moreover, \({}_{2}F_{1}\) represents the Gauss hypergeometric function.
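Eq. (162) can be evaluated numerically; the following sketch uses `scipy.special.hyp2f1` and purely illustrative parameter values (not taken from the text), simply to show that \(t\) increases with \(a\) for a sensible choice of constants.

```python
from scipy.special import hyp2f1    # Gauss hypergeometric function 2F1

A, theta, delta1, B0, C = 0.2, 0.5, 1.0, 1.0, 0.3   # illustrative values

z = delta1 / (2.0 * (1.0 + theta) * (3.0 * (1.0 + A) * (1.0 + theta) - delta1))                      # Eq. (164)
K = (2.0 / delta1) * ((1.0 + theta)**theta * (delta1 / (6.0 * B0 * z))**0.5)**(1.0 / (1.0 + theta))  # Eq. (163)

def t_of_a(a):
    """Cosmic time as a function of the scale factor, Eq. (162)."""
    p = delta1 / (2.0 * (1.0 + theta))
    arg = -(C / K) * a**(-delta1 / (2.0 * (1.0 + theta) * z))
    return K * a**p * hyp2f1(1.0 / (2.0 * (1.0 + theta)), -z, 1.0 - z, arg)

for a in (0.5, 1.0, 2.0):
    print(a, t_of_a(a))    # t increases with a for these parameter values
```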
For a flat Universe, considering the expressions of \(t\) given in Eq. (162), we derive the following expression of \(\phi\):
\[\phi = \frac{\sqrt{1+A}}{3\left(1+A\right)\left(1+\theta\right)-\delta _{1}}\times \tag{165}\] \[\left\{2\log\left(\sqrt{u+x}+\sqrt{u+y}\right)-\sqrt{\frac{y}{x} }\log\left[\frac{\left(\sqrt{x\left(u+x\right)}+\sqrt{y\left(u+y\right)} \right)^{2}}{x^{3/2}\sqrt{y}u}\right]\right\},\]
where the parameters \(x\), \(y\) and \(u\) are defined, respectively, as follows.
\[x = \frac{\delta_{1}}{1+A}, \tag{166}\] \[y = 3\left(1+\theta\right),\] (167) \[u = \left(\frac{\delta_{1}C}{B_{0}}\right)a^{\delta_{1}\left(1-\frac {y}{x}\right)}. \tag{168}\]
Moreover, inserting in Eq. (159) the expression of \(p_{D}\) and \(\rho_{D}\) given, respectively, in Eqs. (146) and (147), we obtain the following expression for \(V\left(\phi\right)\):
\[V\left(\phi\right) = \frac{\left(1-A\right)}{2}\left\{\frac{3\left(\theta+1\right)B_ {0}}{\left[3\left(\theta+1\right)\left(A+1\right)-\delta_{1}\right]}\left( \frac{1}{a^{\delta_{1}}}\right)-\frac{C}{a^{3\left(\theta+1\right)\left(A+1 \right)}}\right\}^{\frac{1}{1+\theta}} \tag{169}\] \[+\frac{B_{0}a^{-\delta_{1}}}{2\left\{\frac{3\left(\theta+1 \right)B_{0}}{\left[3\left(\theta+1\right)\left(A+1\right)-\delta_{1}\right] }\left(\frac{1}{a^{\delta_{1}}}\right)-\frac{C}{a^{3\left(\theta+1\right) \left(A+1\right)}}\right\}^{\frac{\theta}{1+\theta}}}.\]
We now want to derive the general expressions of the parameters \(B_{0}\) and \(C\).
Dividing the expression of \(p_{D}\) given in Eq. (146) by \(\rho_{D}\), we obtain the following relation for the EoS parameter \(\omega_{D}\)
\[\omega_{D}=A-\frac{B_{0}a^{-\delta_{1}}}{\rho_{D}^{\theta+1}}, \tag{170}\]
which leads to the following expression of \(B_{0}\)
\[B_{0}=a^{\delta_{1}}\left(A-\omega_{D}\right)\rho_{D}^{\theta+1}. \tag{171}\]
Instead, from the expression of \(\rho_{D}\) given in Eq. (147), we find that
\[C=\left\{\frac{3\left(\theta+1\right)B_{0}}{\left[3\left(\theta+1\right) \left(A+1\right)-\delta_{1}\right]}\left(\frac{1}{a^{\delta_{1}}}\right)-\rho _{D}^{1+\theta}\right\}a^{-3\left(\theta+1\right)\left(A+1\right)}. \tag{172}\]
Using in Eq. (172) the expression of \(B_{0}\) obtained in Eq. (171), we obtain the following expression for \(C\)
\[C=\left[\rho_{D}a^{-3\left(A+1\right)}\right]^{\theta+1}\left[\frac{3\left( \theta+1\right)\left(A-\omega_{D}\right)}{3\left(\theta+1\right)\left(A+1 \right)-\delta_{1}}-1\right]. \tag{173}\]
Substituting in Eqs. (171) and (173) the expression of the EoS parameter for the R-PLECHDE
model given in Eq. (39), we find that \(B_{0,pl}\) and \(C_{pl}\) can be written as follows.
\[B_{0,pl} = a^{\delta_{1}}\left[A+\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{ \gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]\rho_{Dpl}^{\theta+1} \tag{174}\] \[= a^{\delta_{1}}\left\{A+\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R ^{\gamma/2-1}}-(1+u_{pl})\right]\right\}\rho_{Dpl}^{\theta+1},\] \[C_{pl} = \left[\rho_{Dpl}a^{-3(A+1)}\right]^{\theta+1}\left\{\frac{3\left( \theta+1\right)\left[A+\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1} }-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]}{3\left(\theta+1\right)(A+1 )-\delta_{1}}-1\right\}\] (175) \[= \left[\rho_{Dpl}a^{-3(A+1)}\right]^{\theta+1}\times\] \[\left\{\frac{3\left(\theta+1\right)\left[A+\frac{1}{3}\left( \frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-(1+u_{pl})\right)\right]}{3\left( \theta+1\right)(A+1)-\delta_{1}}-1\right\}.\]
In the limiting case of a flat Dark Dominated Universe, we have that \(B_{0,Dark}\) and \(C_{Dark}\) can be written as follows.
\[B_{0,Dark} = \left[3\left(\frac{6c^{2}}{12c^{2}-1}\right)^{2}\left(\frac{1}{t^ {2}}\right)\right]^{\theta+1}t^{6\delta_{1}c^{2}/(12c^{2}-1)}\left(A-\frac{1}{ 3}+\frac{1}{9c^{2}}\right), \tag{176}\] \[C_{Dark} = \left\{\left[3\left(\frac{6c^{2}}{12c^{2}-1}\right)^{2}\right]t^ {-18(A+1)c^{2}/(12c^{2}-1)+2}\right\}^{\theta+1}\times\] (177) \[\left[\frac{3\left(\theta+1\right)\left(A-\frac{1}{3}+\frac{1}{9c ^{2}}\right)}{3\left(\theta+1\right)(A+1)-\delta_{1}}-1\right],\]
where the results of Eqs. (49) and (50) for \(\rho_{D}\) and \(\omega_{D}\) are utilized.
Using the value \(c^{2}\approx 0.46\), we can write
\[B_{0,Dark} \approx \left[\left(\frac{1.118568}{t^{2}}\right)\right]^{\theta+1}t^{0.611\delta_{1}}\left(A-0.092\right), \tag{178}\] \[C_{Dark} \approx \left[\left(1.118568\right)t^{-1.831867A+0.16813}\right]^{\theta+1}\times\] (179) \[\left[\frac{3\left(\theta+1\right)\left(A-0.092\right)}{3\left(\theta+1\right)\left(A+1\right)-\delta_{1}}-1\right].\]
Furthermore, using the general definition of EoS parameter, we can rewrite Eqs. (82) and (86) as follows.
\[\dot{\phi}^{2} = \left(1+\omega_{D}\right)\rho_{D}, \tag{180}\] \[V\left(\phi\right) = \frac{1}{2}\left(1-\omega_{D}\right)\rho_{D}. \tag{181}\]
Using in Eqs. (180) and (181) the expression of the EoS parameter \(\omega_{Dpl}\) of the R-PLECHDE model
given in Eq. (39), we can derive the kinetic and the potential terms for the R-PLECHDE model
\[\dot{\phi}_{pl}^{2} = \rho_{Dpl}\left[1-\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/ 2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right] \tag{182}\] \[\rho_{Dpl}\left\{1-\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{ \gamma/2-1}}-(1+u_{pl})\right]\right\},\] \[V\left(\phi\right)_{pl} = \frac{\rho_{Dpl}}{2}\left[1+\frac{1}{3}\left(\frac{1}{3c^{2}- \delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]\] (183) \[\frac{\rho_{Dpl}}{2}\left\{1+\frac{1}{3}\left[\frac{1}{3c^{2}- \delta R^{\gamma/2-1}}-(1+u_{pl})\right]\right\}.\]
We can obtain the evolutionary form of the GCG model by integrating Eq. (182) with respect to the scale factor \(a\left(t\right)\)
\[\phi\left(a\right)_{pl}-\phi\left(a_{0}\right)_{pl} = \int_{a_{0}}^{a}\sqrt{3\Omega_{Dpl}}\times \tag{184}\] \[\left[1-\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}- \frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]^{1/2}\frac{da}{a}\] \[= \int_{a_{0}}^{a}\sqrt{3\Omega_{Dpl}}\times\] \[\left\{1-\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}} -(1+u_{pl})\right]\right\}^{1/2}\frac{da}{a},\]
where \(\dot{\phi}=\phi^{\prime}H\).
In the limiting case for a flat Dark Dominated Universe, i.e., when \(\Omega_{Dpl}=\Omega_{Dlog}=1\), \(\Omega_{k}=\Omega_{m}=0\) and \(\delta=0\) for the R-PLECHDE model, the scalar field \(\phi\) and the potential \(V\) of the GCG model reduce, respectively, to
\[\phi\left(t\right) = \left[\frac{6c^{2}}{\sqrt{3c^{2}\left(12c^{2}-1\right)}}\right] \ln\left(t\right), \tag{185}\] \[V\left(t\right) = \frac{6c^{2}\left(6c^{2}+1\right)}{\left(12c^{2}-1\right)^{2}} \left(\frac{1}{t^{2}}\right). \tag{186}\]
Using the value \(c^{2}\approx 0.46\), we can write
\[\phi\left(t\right) \approx 1.105\ln\left(t\right), \tag{187}\] \[V\left(t\right) \approx \left(\frac{0.508}{t^{2}}\right). \tag{188}\]
Moreover, after some algebraic calculations, we derive that the potential \(V\) can be written as a function of the scalar field \(\phi\) as follows.
\[V(\phi) = \left[\frac{6c^{2}(6c^{2}+1)}{(12c^{2}-1)^{2}}\right]e^{-\frac{ \sqrt{3c^{2}\left(12c^{2}-1\right)}}{3c^{2}}\phi}. \tag{189}\]
Using the value \(c^{2}\approx 0.46\), we obtain
\[V(\phi) \approx 0.508e^{-1.810\phi}. \tag{190}\]
### New Modified Chaplygin Gas (NMCG) Model
We now consider, as a model representing the DE, the New Modified Chaplygin Gas (NMCG), whose EoS is given by [195]
\[p_{D}=B\rho_{D}-\frac{K\left(a\right)}{\rho_{D}^{\theta}}, \tag{191}\]
where \(K\left(a\right)\) is a function of the scale factor \(a\), \(B\) is a positive constant and \(\theta\) assumes values in the range \(0\leq\theta\leq 1\). If we take \(K\left(a\right)\) in the form \(K\left(a\right)=-\omega_{D}A_{1}a^{-3\left(\omega_{D}+1\right)\left(\theta+1 \right)}\) as introduced by [196], Eq. (191) can be written as follows.
\[p_{D}=B\rho_{D}+\left(\frac{\omega_{D}A_{1}}{\rho_{D}^{\theta}}\right)a^{-3 \left(\omega_{D}+1\right)\left(\theta+1\right)}. \tag{192}\]
The energy density \(\rho_{D}\) of the NMCG model is given by
\[\rho_{D}=\left[\left(\frac{\omega_{D}A_{1}}{\omega_{D}-B}\right)a^{-3\left( \omega_{D}+1\right)\left(\theta+1\right)}+B_{1}a^{-3\left(B+1\right)\left( \theta+1\right)}\right]^{\frac{1}{1+\theta}}, \tag{193}\]
where \(B_{1}\) is a constant of integration.
We now want to derive the expressions of the parameters \(B_{1}\) and \(A_{1}\). From Eq. (193), we can easily derive the following expression for \(B_{1}\):
\[B_{1}=a^{3\left(B+1\right)\left(\theta+1\right)}\left[\rho_{D}^{\theta+1}- \left(\frac{\omega_{D}}{\omega_{D}-B}\right)A_{1}a^{-3\left(\omega_{D}+1 \right)\left(\theta+1\right)}\right]. \tag{194}\]
Moreover, dividing by \(\rho_{D}\) the expression of \(p_{D}\) given in Eq. (192) and using the definition of EoS parameter \(\omega_{D}\), we obtain
\[A_{1}=\left(\frac{\omega_{D}-B}{\omega_{D}}\right)\rho_{D}^{\theta+1}a^{3 \left(\omega_{D}+1\right)\left(\theta+1\right)}. \tag{195}\]
We can obtain the final expressions of \(A_{1}\) and \(B_{1}\) for the R-PLECHDE model by inserting in Eqs. (194) and (195) the expression of the EoS parameter given in Eq. (39)
\[B_{1,pl} = a^{3\left(B+1\right)\left(\theta+1\right)}\left[\rho_{Dpl}^{ \theta+1}-\left(\frac{\omega_{Dpl}}{\omega_{Dpl}-B}\right)A_{1}a^{-3\left( \omega_{Dpl}+1\right)\left(\theta+1\right)}\right], \tag{196}\] \[A_{1,pl} = \left(\frac{\omega_{Dpl}-B}{\omega_{Dpl}}\right)\rho_{Dpl}^{ \theta+1}a^{3\left(\omega_{Dpl}+1\right)\left(\theta+1\right)}. \tag{197}\]
In the limiting case of a flat Dark Dominated Universe, we obtain
\[B_{1,Dark} = t^{2\left(\theta+1\right)}\left\{\left[3\left(\frac{6c^{2}}{12c^ {2}-1}\right)^{2}\right]^{\theta+1}-\left[\frac{3c^{2}-1}{3c^{2}\left(1-3B \right)-1}\right]A_{1}\right\}, \tag{198}\] \[A_{1,Dark} = \frac{3c^{2}\left(1-3B\right)-1}{3c^{2}-1}\left[3\left(\frac{6c^ {2}}{12c^{2}-1}\right)^{2}\right]^{\theta+1}, \tag{199}\]
where the results of Eqs. (49) and (50) for \(\rho_{D}\) and \(\omega_{D}\) are utilized.
For \(c^{2}\approx 0.46\), we obtain
\[B_{1,Dark} \approx t^{2(\theta+1)}\left[\left(1.119\right)^{\theta+1}+\frac{0.38A_{1} }{4.14B-0.38}\right], \tag{200}\] \[A_{1,Dark} \approx -\left(\frac{4.14B-0.38}{0.38}\right)\left(1.119\right)^{\theta+ 1}. \tag{201}\]
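As a quick arithmetic check (a sketch; \(B\) and \(\theta\) below are illustrative, not fitted values), the exact expression of Eq. (199) and the approximate form of Eq. (201) agree to the quoted accuracy for \(c^{2}\approx 0.46\).

```python
c2, B, theta = 0.46, 0.1, 0.5        # c^2 from Gao; B and theta illustrative

base = 3.0 * (6.0 * c2 / (12.0 * c2 - 1.0))**2                                            # ~1.119
A1_exact  = (3.0 * c2 * (1.0 - 3.0 * B) - 1.0) / (3.0 * c2 - 1.0) * base**(theta + 1.0)   # Eq. (199)
A1_approx = -((4.14 * B - 0.38) / 0.38) * 1.119**(theta + 1.0)                            # Eq. (201)

print(A1_exact, A1_approx)           # agree to about three significant figures
```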
Furthermore, using the general definition of EoS parameter, we can rewrite Eqs. (82) and (86) as follows.
\[\dot{\phi}^{2} = \left(1+\omega_{D}\right)\rho_{D}, \tag{202}\] \[V\left(\phi\right) = \frac{1}{2}\left(1-\omega_{D}\right)\rho_{D}. \tag{203}\]
Using in Eqs. (202) and (203) the expression of the EoS parameter \(\omega_{Dpl}\) of the R-PLECHDE model given in Eq. (39), we can derive the kinetic and the potential terms for the R-PLECHDE model
\[\dot{\phi}_{pl}^{2} = \rho_{Dpl}\left[1-\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{ \gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right] \tag{204}\] \[\rho_{Dpl}\left\{1-\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{ \gamma/2-1}}-\left(1+u_{pl}\right)\right]\right\},\] \[V\left(\phi\right)_{pl} = \frac{\rho_{Dpl}}{2}\left[1+\frac{1}{3}\left(\frac{1}{3c^{2}- \delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]\] (205) \[\frac{\rho_{Dpl}}{2}\left\{1+\frac{1}{3}\left[\frac{1}{3c^{2}- \delta R^{\gamma/2-1}}-\left(1+u_{pl}\right)\right]\right\}.\]
We can obtain the evolutionary form of the GCG by integrating Eq. (204) with respect to the scale factor \(a\left(t\right)\)
\[\phi\left(a\right)_{pl}-\phi\left(a_{0}\right)_{pl} = \int_{a_{0}}^{a}\sqrt{3\Omega_{Dpl}}\times \tag{206}\] \[\left[1-\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}- \frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]^{1/2}\frac{da}{a}\] \[= \int_{a_{0}}^{a}\sqrt{3\Omega_{Dpl}}\times\] \[\left\{1-\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}} -\left(1+u_{pl}\right)\right]\right\}^{1/2}\frac{da}{a},\]
where \(\dot{\phi}=\phi^{\prime}H\).
In the limiting case for a flat Dark Dominated Universe, i.e., when \(\Omega_{Dpl}=\Omega_{Dlog}=1\), \(\Omega_{k}=\Omega_{m}=0\), \(\delta=0\) for the R-PLECHDE model, the scalar field and the potential of the GCG reduce,
respectively, to
\[\phi\left(t\right) = \frac{6c^{2}}{\sqrt{3c^{2}\left(12c^{2}-1\right)}}\ln\left(t\right), \tag{207}\] \[V\left(t\right) = \frac{6c^{2}\left(6c^{2}+1\right)}{\left(12c^{2}-1\right)^{2}} \left(\frac{1}{t^{2}}\right). \tag{208}\]
For \(c^{2}\approx 0.46\), we obtain
\[\phi\left(t\right) \approx 1.105\ln\left(t\right), \tag{209}\] \[V\left(t\right) \approx \left(\frac{0.508}{t^{2}}\right). \tag{210}\]
Moreover, after some algebraic calculations, we derive that the potential \(V\) can be written as a function of the scalar field \(\phi\) as follows.
\[V(\phi) = \left[\frac{6c^{2}(6c^{2}+1)}{(12c^{2}-1)^{2}}\right]e^{-\frac{ \sqrt{3c^{2}\left(12c^{2}-1\right)}}{3c^{2}}\phi}. \tag{211}\]
For \(c^{2}\approx 0.46\), we have
\[V(\phi) \approx 0.508e^{-1.810\phi}. \tag{212}\]
### Viscous Generalized Chaplygin Gas (VGCG) Model
We now consider the Viscous Generalized Chaplygin Gas (VGCG) Model. In order to be as general as possible with respect to the previous Sections, we now consider a viscous DE interacting with DM. In an isotropic and homogeneous FRW Universe, the dissipative effects are generated by the presence of bulk viscosity in the cosmic fluids [197; 198; 199; 200; 201; 202]. Some dissipative processes in the Universe (including bulk viscosity, shear viscosity and heat transport) are believed to be present in any realistic theory of the evolution of the Universe and they have been widely studied in [203; 204; 205; 206]. The role of viscosity has been widely discussed and it is considered a promising candidate which can explain several cosmological problems like DE. The viscous DE model can provide an explanation of the high photon-to-baryon ratio [207] and it leads to an inflationary scenario in the early phase of the evolution of the Universe [208]. The coefficient of viscosity should decrease as the Universe expands; moreover, its presence can explain the current accelerated expansion of the Universe [209; 210; 211]. This model is also consistent with astrophysical observations at lower redshifts, and a viscous cosmic model favors a standard cold DM model with Cosmological Constant (\(\Lambda\)CDM) in the later cosmic evolution [212]. The model also presents the scenario of phantom crossing [213].
The general theory of dissipation in a relativistic imperfect fluid was developed by Eckart [214] and, in a different formulation, by Landau and Lifshitz [215]. This is only the first order deviation from equilibrium and it may have a causality problem [197; 198; 199; 200; 201; 202]. The full causal theory was developed by Israel & Stewart [216] and it has also been studied in the evolution of the early Universe [217]. However, the character of the evolution equation is very complicated in the full causal theory [197; 198; 199; 200; 201; 202]. Fortunately, since the phenomena are quasi-stationary, i.e., they vary slowly on space and time scales characterized by the mean free path and the mean collision time of the fluid particles, the conventional theory can still be considered valid. In the case of an isotropic and homogeneous FRW Universe, the dissipative process can be modeled as a bulk viscosity within a thermodynamical approach, while the shear viscosity can be safely ignored [218]. For other works on viscous DE models, see [219; 220; 221; 222; 223; 224; 225; 226].
DE with bulk viscosity has the peculiar property of causing an accelerated expansion of phantom type in the late evolution stages of the Universe [227; 228; 229] and it can also alleviate several cosmological problems, like the coincidence and age problems, as well as the phantom crossing.
The observations also indicate that the Universe medium is not a perfect fluid and that viscosity is involved in the evolution of the Universe (for more details see [230] and references therein). In the framework of the FRW metric, the shear viscosity has no contribution in the energy-momentum tensor \(T^{\mu\nu}\) and the bulk viscosity behaves like an effective pressure [231].
The bulk viscosity introduces dissipation by only redefining the effective pressure \(p_{eff}\), given by the relation \(p_{eff}=p-3\nu H\), where \(\nu\) indicates the bulk viscosity coefficient. The condition \(\nu>0\) guarantees a positive entropy production and, consequently, no violation of the second law of thermodynamics [232]. The case \(\nu=\tau H\), implying that the bulk viscosity is proportional to the fluid velocity vector, is physically natural and has been considered earlier in an astrophysical context, see [233].
The energy conservation equation yields the following expression of the energy density of the VGCG model \(\rho_{D}\)[234]
\[\rho_{D}=\left[\frac{Da^{-3(\theta+1)(1-\nu\varrho)}-\chi}{1-\nu\varrho}\right] ^{\frac{1}{\theta+1}}, \tag{213}\]
where \(\varrho=m_{p}^{-1}\sqrt{1-r_{m}}\) (with \(r_{m}=\frac{\rho_{m}}{\rho_{D}}=\frac{\Omega_{m}}{\Omega_{D}}\)) and \(D\) is a constant of integration.
The energy-momentum tensor corresponding to the bulk viscous fluid is given by
\[T_{\mu\nu}=(\rho+\bar{p})\,u_{\mu}u_{\nu}-\bar{p}g_{\mu\nu}, \tag{214}\]
where
\[\bar{p}=p_{D}-3\varepsilon H \tag{215}\]
represents the total pressure, which involves the proper pressure \(p_{D}\), the bulk viscosity coefficient \(\varepsilon\) and the Hubble parameter \(H\).
We have that, in this case, \(p_{D}=\frac{\chi}{\rho_{D}^{\theta}}\), with \(\chi>0\). We notice that the first term on the right hand side of Eq. (215) mimics the GCG model and the parameter \(\theta\) varies in the range \(0<\theta<1\). If \(\theta=1\), it leads to the Chaplygin gas model; otherwise, if \(\theta<0\), it corresponds to a polytropic gas.
We choose an expression of \(\varepsilon\) which depends on the energy density \(\rho_{D}\) in the following way: \(\varepsilon=\nu\rho_{D}^{1/2}\) (with \(\nu\) being a constant parameter). Therefore, using in Eq. (215) the expression of \(\varepsilon\) we have chosen, we can rewrite \(\bar{p}\) as follows.
\[\bar{p}=\frac{\chi}{\rho_{D}^{\theta}}-3\nu H\rho_{D}^{1/2}. \tag{216}\]
The energy density \(\rho_{D}\) and pressure \(p_{D}\) of the viscous dark energy model are given by the following expressions:
\[\rho_{D} = \left[\frac{Da^{-3(\theta+1)(1-\nu\varrho)}-\chi}{1-\nu\varrho} \right]^{\frac{1}{\theta+1}}, \tag{217}\] \[p_{D} = \chi\left[\frac{1-\nu\varrho}{Da^{-3(\theta+1)(1-\nu\varrho)}- \chi}\right]^{\frac{\theta}{\theta+1}}-3\nu H\left[\frac{Da^{-3(\theta+1)(1- \nu\varrho)}-\chi}{1-\nu\varrho}\right]^{1/2}. \tag{218}\]
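The background quantities of the VGCG model can be sketched as follows; all parameter values and the Hubble rate \(H\) are illustrative (the point is only to show how Eqs. (216)-(218) fit together, since Eq. (218) is Eq. (216) evaluated on the density of Eq. (217)).

```python
def rho_vgcg(a, D=2.0, chi=0.5, nu=0.05, varrho=1.0, theta=0.5):
    """Energy density of the VGCG model, Eq. (217); illustrative parameters."""
    s = 1.0 - nu * varrho
    return ((D * a**(-3.0 * (theta + 1.0) * s) - chi) / s)**(1.0 / (theta + 1.0))

def p_bar(a, H, D=2.0, chi=0.5, nu=0.05, varrho=1.0, theta=0.5):
    """Total (viscous) pressure of Eq. (216), equivalent to Eq. (218)."""
    rho = rho_vgcg(a, D, chi, nu, varrho, theta)
    return chi / rho**theta - 3.0 * nu * H * rho**0.5

print(rho_vgcg(1.0), p_bar(1.0, H=1.0))
```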
We now want to derive the expressions of the parameters \(\chi\) and \(D\). The expression of \(\chi\) as a function of the EoS parameter \(\omega_{D}\) can be easily derived from Eq. (216). Dividing the result of Eq. (216) by \(\rho_{D}\) and using the general definition of \(\omega_{D}\), we obtain
\[\chi=\rho_{D}^{\theta+1}\left(3\nu H\rho_{D}^{-1/2}+\omega_{D}\right). \tag{219}\]
We can determine the expressions for \(D\) from Eq. (217), obtaining
\[D=\left[\rho_{D}^{\theta+1}\left(1-\nu\varrho\right)+\chi\right]a^{3(\theta+ 1)(1-\nu\varrho)}. \tag{220}\]
Inserting the expression of \(\chi\) derived in Eq. (219) into the expression of \(D\) obtained in Eq. (220), we have
\[D=a^{3(\theta+1)(1-\nu\varrho)}\rho_{D}^{\theta+1}\left(1-\nu\varrho+3\nu H \rho_{D}^{-1/2}+\omega_{D}\right). \tag{221}\]
Using in Eqs. (219) and (221) the expression of the EoS parameter for the R-PLECHDE model
obtained in Eq. (39), we obtain
\[\chi_{pl} = \rho_{Dpl}^{\theta+1}\left[3\nu H\rho_{Dpl}^{-1/2}-\frac{1}{3}\left( \frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right] \tag{222}\] \[= \rho_{Dpl}^{\theta+1}\left\{3\nu H\rho_{Dpl}^{-1/2}-\frac{1}{3} \left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-(1+u_{pl})\right]\right\},\] \[D_{pl} = a^{3(\theta+1)(1-\nu\varrho)}\rho_{Dpl}^{\theta+1}\times\] (223) \[\left[1-\nu\varrho+3\nu H\rho_{D}^{-1/2}-\frac{1}{3}\left(\frac{1 }{3c^{2}-\delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]\] \[= a^{3(\theta+1)(1-\nu\varrho)}\rho_{Dpl}^{\theta+1}\times\] \[\left\{1-\nu\varrho+3\nu H\rho_{D}^{-1/2}-\frac{1}{3}\left[\frac {1}{3c^{2}-\delta R^{\gamma/2-1}}-(1+u_{pl})\right]\right\}.\]
In the limiting case of a flat Dark Dominated Universe, we obtain the following expressions for \(\chi_{Dark}\) and \(D_{Dark}\)
\[\chi_{Dark} = \left[3\left(\frac{6c^{2}}{12c^{2}-1}\right)^{2}\left(\frac{1}{t^ {2}}\right)\right]^{\theta+1}\times \tag{224}\] \[\left\{\sqrt{3}\nu+\frac{1}{3}-\frac{1}{9c^{2}}\right\},\] \[D_{Dark} = t^{18(\theta+1)\left(1-\nu m_{p}^{-1}\right)c^{2}/(12c^{2}-1)} \left[3\left(\frac{6c^{2}}{12c^{2}-1}\right)^{2}\left(\frac{1}{t^{2}}\right) \right]^{\theta+1}\times\] (225) \[\left\{\frac{4}{3}-\nu m_{p}^{-1}+\sqrt{3}\nu-\frac{1}{9c^{2}} \right\},\]
where the results of Eqs. (49) and (50) for \(\rho_{D}\) and \(\omega_{D}\) are utilized.
Using the value of \(c^{2}\) found in the work of Gao, i.e., \(c^{2}\approx 0.46\), we can write
\[\chi_{Dark} \approx \left(\frac{1.119}{t^{2}}\right)^{\theta+1}\times \tag{226}\] \[\left(1.732\nu+0.092\right),\] \[D_{Dark} \approx \left(\frac{1.119}{t^{2}}\right)^{\theta+1}t^{1.832(\theta+1)\left(1-\nu m_{p}^{-1}\right)}\times\] (227) \[\left(1.092-\nu m_{p}^{-1}+1.732\nu\right).\]
We must emphasize that, for the flat Dark Dominated case, we used the fact that \(r_{m}=0\) and, therefore, \(\varrho=1\).
Furthermore, using the general definition of EoS parameter, we can rewrite Eqs. (82) and (86) as follows.
\[\dot{\phi}^{2} = \left(1+\omega_{D}\right)\rho_{D}, \tag{228}\] \[V\left(\phi\right) = \frac{1}{2}\left(1-\omega_{D}\right)\rho_{D}. \tag{229}\]
Using in Eqs. (228) and (229) the expression of the EoS parameter \(\omega_{D}\) of the R-PLECHDE model given in Eq. (39), we can derive the kinetic and the potential terms for the R-PLECHDE model
\[\dot{\phi}_{pl}^{2} = \rho_{Dpl}\left[1-\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{ \gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right] \tag{230}\] \[\rho_{Dpl}\left\{1-\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{ \gamma/2-1}}-(1+u_{pl})\right]\right\},\] \[V\left(\phi\right)_{pl} = \frac{\rho_{Dpl}}{2}\left[1+\frac{1}{3}\left(\frac{1}{3c^{2}- \delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]\] (231) \[\frac{\rho_{Dpl}}{2}\left\{1+\frac{1}{3}\left[\frac{1}{3c^{2}- \delta R^{\gamma/2-1}}-(1+u_{pl})\right]\right\}.\]
We can obtain the evolutionary form of the GCG by integrating Eq. (230) with respect to the scale factor \(a\left(t\right)\)
\[\phi\left(a\right)_{pl}-\phi\left(a_{0}\right)_{pl} = \int_{a_{0}}^{a}\sqrt{3\Omega_{Dpl}}\times \tag{232}\] \[\left[1-\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}} -\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]^{1/2}\frac{da}{a}\] \[= \int_{a_{0}}^{a}\sqrt{3\Omega_{Dpl}}\times\] \[\left\{1-\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}} -(1+u_{pl})\right]\right\}^{1/2}\frac{da}{a},\]
where \(\dot{\phi}=\phi^{\prime}H\).
In the limiting case for a flat Dark Dominated Universe, i.e., when \(\Omega_{Dpl}=\Omega_{Dlog}=1\), \(\Omega_{k}=\Omega_{m}=0\) and \(\delta=0\) for the R-PLECHDE model, the scalar field and the potential of the GCG read, respectively,
\[\phi\left(t\right) = \frac{6c^{2}}{\sqrt{3c^{2}\left(12c^{2}-1\right)}}\ln\left(t \right), \tag{233}\] \[V\left(t\right) = \frac{6c^{2}\left(6c^{2}+1\right)}{\left(12c^{2}-1\right)^{2}} \left(\frac{1}{t^{2}}\right). \tag{234}\]
For \(c^{2}\approx 0.46\), we obtain
\[\phi\left(t\right) \approx 1.105\ln\left(t\right), \tag{235}\] \[V\left(t\right) \approx \left(\frac{0.508}{t^{2}}\right). \tag{236}\]
Moreover, after some algebraic calculations, we derive that the potential \(V\) can be written as a function of the scalar field \(\phi\) as follows.
\[V(\phi) = \left[\frac{6c^{2}(6c^{2}+1)}{(12c^{2}-1)^{2}}\right]e^{-\frac{\sqrt{3c^{2}\left(12c^{2}-1\right)}}{3c^{2}}\phi}. \tag{237}\]
For \(c^{2}\approx 0.46\), we have
\[V(\phi) \approx 0.508e^{-1.810\phi}. \tag{238}\]
### Dirac-Born-Infeld (DBI) Model
We now consider the Dirac-Born-Infeld (DBI) Model. Many works aiming at a connection between string theory and inflation have been recently considered. While doing so, various ideas of string theory based on the concept of branes have proved themselves fruitful. One area which has been well explored in recent years is inflation driven by the open string sector through dynamical Dp-branes. This is the so-called Dirac-Born-Infeld (DBI) inflation, which lies in a special class of K-inflation models. Considering that the DE scalar field is a Dirac-Born-Infeld (DBI) scalar field, the action \(ds_{dbi}\) of the field can be written as follows [235; 236; 237; 238; 239; 240; 241; 242; 243; 244; 245].
\[ds_{dbi}=\int d^{4}x\,a^{3}\left(t\right)\left[F\left(\phi\right)\sqrt{1-\frac {\dot{\phi}^{2}}{F\left(\phi\right)}}+V\left(\phi\right)-F\left(\phi\right) \right], \tag{239}\]
where the quantity \(F\left(\phi\right)\) is the tension and the quantity \(V\left(\phi\right)\) represents the potential. From the action of the DBI model given in Eq. (239), the corresponding pressure \(p_{dbi}\) and energy density \(\rho_{dbi}\) of the scalar field model can be written as follows.
\[p_{dbi} = \left(\frac{\gamma-1}{\gamma}\right)F\left(\phi\right)-V\left( \phi\right), \tag{240}\] \[\rho_{dbi} = \left(\gamma-1\right)F\left(\phi\right)+V\left(\phi\right), \tag{241}\]
where the quantity \(\gamma\) is reminiscent of the usual relativistic Lorentz factor and it is given by the following relation:
\[\gamma=\left[1-\frac{\dot{\phi}^{2}}{F\left(\phi\right)}\right]^{-1/2}. \tag{242}\]
Using the expressions of \(p_{dbi}\) and \(\rho_{dbi}\) given in Eqs. (240) and (241), we have that the EoS parameter \(\omega_{dbi}\) is given by
\[\omega_{dbi} = \frac{\left(\frac{\gamma-1}{\gamma}\right)F\left(\phi\right)-V \left(\phi\right)}{\left(\gamma-1\right)F\left(\phi\right)+V\left(\phi\right) }=\frac{\left(\gamma-1\right)F\left(\phi\right)-\gamma V\left(\phi\right)}{ \gamma\left[\left(\gamma-1\right)F\left(\phi\right)+V\left(\phi\right)\right]}. \tag{243}\]
We now want to obtain the relations for \(F\), \(\dot{\phi}^{2}\) and \(V\). Adding Eqs. (240) and (241) and using the general definition of \(\omega_{D}\), after some algebraic calculations, we derive that
\[F = \rho_{D}\left(\frac{\gamma}{\gamma^{2}-1}\right)\left(\omega_{D}+1 \right). \tag{244}\]
From the definition of \(\gamma\) given in Eq. (242) and using the result of Eq. (244), we obtain
\[\dot{\phi} = \sqrt{\frac{\rho_{D}\left(\omega_{D}+1\right)}{\gamma}}. \tag{245}\]
Subtracting Eqs. (240) and (241) and using the general definition of \(\omega_{D}\), after some algebraic calculations, we can derive that
\[V = -\left(\frac{\rho_{D}}{\gamma+1}\right)\left(\gamma\omega_{D}-1 \right). \tag{246}\]
Therefore, we obtain the following set of expressions for \(F\), \(\dot{\phi}\) and \(V\)
\[F = \rho_{D}\left(\frac{\gamma}{\gamma^{2}-1}\right)\left(\omega_{D} +1\right), \tag{247}\] \[\dot{\phi} = \sqrt{\frac{\rho_{D}\left(\omega_{D}+1\right)}{\gamma}},\] (248) \[V = -\frac{\rho_{D}}{\gamma+1}\left(\gamma\omega_{D}-1\right). \tag{249}\]
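A simple round-trip check (a sketch with illustrative values of \(\dot{\phi}\), \(F\) and \(V\)) confirms that Eqs. (247)-(249) invert Eqs. (240)-(242): starting from the DBI quantities, computing \(\rho_{D}\), \(\omega_{D}\) and \(\gamma\), and then applying Eqs. (247)-(249) recovers the original \(F\), \(\dot{\phi}\) and \(V\).

```python
import math

phi_dot, F, V = 0.3, 1.0, 2.0                                    # illustrative values

gamma   = (1.0 - phi_dot**2 / F)**(-0.5)                         # Eq. (242)
p_dbi   = (gamma - 1.0) / gamma * F - V                          # Eq. (240)
rho_dbi = (gamma - 1.0) * F + V                                  # Eq. (241)
w_dbi   = p_dbi / rho_dbi                                        # Eq. (243)

F_rec   = rho_dbi * gamma / (gamma**2 - 1.0) * (w_dbi + 1.0)     # Eq. (247)
phi_rec = math.sqrt(rho_dbi * (w_dbi + 1.0) / gamma)             # Eq. (248)
V_rec   = -rho_dbi / (gamma + 1.0) * (gamma * w_dbi - 1.0)       # Eq. (249)

print(F_rec, phi_rec, V_rec)                                     # recovers 1.0, 0.3, 2.0
assert max(abs(F_rec - F), abs(phi_rec - phi_dot), abs(V_rec - V)) < 1e-12
```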
Using in Eqs. (247), (248) and (249) the expression of the EoS parameter for the R-PLECHDE model given in Eq. (39), we obtain the following expressions for \(F_{pl}\), \(\dot{\phi}_{pl}\) and \(V_{pl}\)
\[F_{pl} = \rho_{Dpl}\left(\frac{\gamma}{\gamma^{2}-1}\right)\left[1-\frac{ 1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{ Dpl}}\right)\right] \tag{250}\] \[= \rho_{Dpl}\left(\frac{\gamma}{\gamma^{2}-1}\right)\left\{1- \frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-(1+u_{pl})\right] \right\},\] \[\dot{\phi}_{pl} = \sqrt{\frac{\rho_{Dpl}\left[1-\frac{1}{3}\left(\frac{1}{3c^{2}- \delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]}{ \gamma}}\] (251) \[= \sqrt{\frac{\rho_{Dpl}\left\{1-\frac{1}{3}\left[\frac{1}{3c^{2}- \delta R^{\gamma/2-1}}-(1+u_{pl})\right]\right\}}{\gamma}},\] \[V_{pl} = \frac{\rho_{Dpl}}{\gamma+1}\left[\frac{\gamma}{3}\left(\frac{1}{ 3c^{2}-\delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)+1\right]\] (252) \[= \frac{\rho_{Dpl}}{\gamma+1}\left\{\frac{\gamma}{3}\left[\frac{1} {3c^{2}-\delta R^{\gamma/2-1}}-(1+u_{pl})\right]+1\right\}.\]
In the limiting case of a flat Dark Dominated Universe, using in Eqs. (247), (248) and (249) the expressions of \(\omega_{D}\) and \(\rho_{D}\) given in Eqs. (50) and (49), we find the following expressions for \(F_{Dark}\), \(\dot{\phi}_{Dark}\) and \(V_{Dark}\)
\[F_{Dark} = \left[3\left(\frac{6c^{2}}{12c^{2}-1}\right)^{2}\left(\frac{1}{t^ {2}}\right)\right]\left(\frac{\gamma}{\gamma^{2}-1}\right)\left(\frac{4}{3}- \frac{1}{9c^{2}}\right), \tag{253}\] \[\dot{\phi}_{Dark} = \left(\frac{6c^{2}}{12c^{2}-1}\right)\sqrt{\frac{4-\frac{1}{3c^{ 2}}}{\gamma}}\left(\frac{1}{t}\right)\] (254) \[= \sqrt{\frac{12c^{2}}{\left(12c^{2}-1\right)\gamma}}\left(\frac{1 }{t}\right),\] \[V_{Dark} = -\frac{3\left(\frac{6c^{2}}{12c^{2}-1}\right)^{2}}{\gamma+1} \left[\gamma\left(\frac{1}{3}-\frac{1}{9c^{2}}\right)-1\right]\left(\frac{1}{t ^{2}}\right). \tag{255}\]
For \(c^{2}\approx 0.46\), we have
\[F_{Dark} \approx \left(\frac{1.222}{t^{2}}\right)\left(\frac{\gamma}{\gamma^{2}-1} \right), \tag{256}\] \[\dot{\phi}_{Dark} \approx \sqrt{\frac{1}{\gamma}}\left(\frac{1.106}{t}\right),\] (257) \[V_{Dark} \approx -\frac{1.119}{\gamma+1}\left(\frac{0.092\gamma-1}{t^{2}}\right). \tag{258}\]
We now consider the particular case corresponding to
\[F\left(\phi\right)=F_{0}\dot{\phi}^{2}, \tag{259}\]
with \(F_{0}>0\) being a constant parameter. In this case, we have that
\[\gamma=\sqrt{\frac{F_{0}}{F_{0}-1}}. \tag{260}\]
Eq. (260) implies that \(F_{0}>1\) in order to have a real value of \(\gamma\). Negative values of \(F_{0}\) also lead to positive values of \(\gamma\) but they cannot be considered since we previously imposed that \(F_{0}>0\).
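A one-line numerical check (a sketch; \(F_{0}\) and \(\dot{\phi}\) are illustrative) confirms that, with the choice of Eq. (259), the definition of Eq. (242) collapses to the constant value of Eq. (260).

```python
import math

F0, phi_dot = 3.0, 0.7                        # illustrative values, F0 > 1
F = F0 * phi_dot**2                           # Eq. (259)

gamma_def = (1.0 - phi_dot**2 / F)**(-0.5)    # Eq. (242)
gamma_F0  = math.sqrt(F0 / (F0 - 1.0))        # Eq. (260)

print(gamma_def, gamma_F0)
assert abs(gamma_def - gamma_F0) < 1e-12      # gamma depends only on F0
```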
Using the result of Eq. (260), we can rewrite Eqs. (247), (248) and (249) in the following forms:
\[F = \rho_{D}\sqrt{F_{0}\left(F_{0}-1\right)}\left(\omega_{D}+1 \right), \tag{261}\] \[\dot{\phi} = \left(\sqrt{\frac{F_{0}}{F_{0}-1}}\right)^{-1/4}\sqrt{\rho_{D} \left(\omega_{D}+1\right)},\] (262) \[V = -\frac{\rho_{D}}{\sqrt{\frac{F_{0}}{F_{0}-1}}+1}\left[\left( \sqrt{\frac{F_{0}}{F_{0}-1}}\right)\omega_{D}-1\right]. \tag{263}\]
Using in Eqs. (261), (262) and (263) the expression of the EoS parameter for the R-PLECHDE model given in Eq. (39), we obtain the following expressions for \(F_{pl}\), \(\dot{\phi}_{pl}\) and \(V_{pl}\)
\[F_{pl} = \sqrt{F_{0}\left(F_{0}-1\right)}\rho_{Dpl}\left[1-\frac{1}{3} \left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl} }\right)\right] \tag{264}\] \[= \sqrt{F_{0}\left(F_{0}-1\right)}\rho_{Dpl}\left\{1-\frac{1}{3} \left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-\left(1+u_{pl}\right)\right] \right\},\] \[\dot{\phi}_{pl} = \left(\sqrt{\frac{F_{0}}{F_{0}-1}}\right)^{-1/4}\sqrt{\rho_{Dpl} \left[1-\frac{1}{3}\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-\frac{1+ \Omega_{k}}{\Omega_{Dpl}}\right)\right]}\] (265) \[= \left(\sqrt{\frac{F_{0}}{F_{0}-1}}\right)^{-1/4}\sqrt{\rho_{Dpl} \left\{1-\frac{1}{3}\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-\left(1+u_{ pl}\right)\right]\right\}},\] \[V_{pl} = \frac{\rho_{Dpl}}{\sqrt{\frac{F_{0}}{F_{0}-1}}+1}\left\{\left( \sqrt{\frac{F_{0}}{F_{0}-1}}\right)\left[\frac{1}{3}\left(\frac{1}{3c^{2}- \delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]+1\right\}\] (266) \[= \frac{\rho_{Dpl}}{\sqrt{\frac{F_{0}}{F_{0}-1}}+1}\left\{\left( \sqrt{\frac{F_{0}}{F_{0}-1}}\right)\left[\frac{1}{3}\left(\frac{1}{3c^{2}- \delta R^{\gamma/2-1}}-\left(1+u_{pl}\right)\right)\right]+1\right\}.\]
In the limiting case of a flat Dark Dominated Universe, using the expression of \(\rho_{D}\) and \(\omega_{D}\) given in Eqs. (49) and (50), we find the following expressions for \(F_{Dark}\), \(\dot{\phi}_{Dark}\) and \(V_{Dark}\)
\[F_{Dark} = \sqrt{F_{0}\left(F_{0}-1\right)}\left[3\left(\frac{6c^{2}}{12c^{2} -1}\right)^{2}\left(\frac{1}{t^{2}}\right)\right]\left(\frac{4}{3}-\frac{1}{9c ^{2}}\right), \tag{267}\] \[\dot{\phi}_{Dark} = \left(\sqrt{\frac{F_{0}}{F_{0}-1}}\right)^{-1/4}\left(\frac{c^{2 }}{12c^{2}-1}\right)^{1/2}\left(\frac{2\sqrt{3}}{t}\right),\] (268) \[V_{Dark} = -\frac{3\left(\frac{6c^{2}}{12c^{2}-1}\right)^{2}}{\sqrt{\frac{F_ {0}}{F_{0}-1}}+1}\left[\left(\sqrt{\frac{F_{0}}{F_{0}-1}}\right)\left(\frac{1 }{3}-\frac{1}{9c^{2}}\right)-1\right]\left(\frac{1}{t^{2}}\right). \tag{269}\]
From Eq. (268), we can easily obtain the following relation for \(\phi\)
\[\phi_{Dark}=\left(\sqrt{\frac{F_{0}}{F_{0}-1}}\right)^{-1/4}\left(\frac{c^{2}} {12c^{2}-1}\right)^{1/2}2\sqrt{3}\ln t \tag{270}\]
For \(c^{2}\approx 0.46\), we obtain
\[F_{Dark} \approx 1.092\sqrt{F_{0}\left(F_{0}-1\right)}\left[\left(\frac{1.119}{t ^{2}}\right)\right], \tag{271}\] \[\phi_{Dark} \approx 1.105\left(\sqrt{\frac{F_{0}}{F_{0}-1}}\right)^{-1/4}\ln t\] (272) \[V_{Dark} \approx -\frac{1.119}{\sqrt{\frac{F_{0}}{F_{0}-1}}+1}\left[0.092\left( \sqrt{\frac{F_{0}}{F_{0}-1}}\right)-1\right]\left(\frac{1}{t^{2}}\right). \tag{273}\]
### Yang-Mills (YM) Model
We now consider the Yang-Mills (YM) model. Recent studies suggest that the Yang-Mills field [246; 247; 248; 249; 250; 251; 252; 253; 254; 255] can be considered a useful candidate to describe the nature of DE. There are two main reasons which can be taken into account in order to consider the YM field as a source of DE. First of all, for the normal scalar field models, the connection of the field to particle physics models has not been clear until now. The second reason is that the weak energy condition cannot be violated by the field. The YM field we consider has some interesting features: it is an important cornerstone for any particle physics model with interactions mediated by gauge bosons, so it can be incorporated into a sensible unified theory of particle physics. Moreover, the EoS of matter for the effective YM Condensate (YMC) is different from that of ordinary matter as well as that of the scalar fields, and the states with \(-1<\omega<0\) and \(\omega<-1\) can also be naturally realized.
In the effective YMC DE model, the effective Yang-Mills field Lagrangian \(L_{YMC}\) is given by
\[L_{YMC}=\frac{bF}{2}\left(\ln\left|\frac{F}{\kappa^{2}}\right|-1\right), \tag{274}\]
where the quantity \(\kappa\) represents the renormalization scale, with the dimension of mass squared, and \(F\) plays the role of the order parameter of the YMC; it is given by
\[F=-\frac{1}{2}F_{\mu\nu}^{\alpha}F^{\alpha\mu\nu}=E^{2}-B^{2}. \tag{275}\]
For the pure electric case we have \(B=0\), which implies that \(F=E^{2}\).
Furthermore, we have that \(b\) is the Callan-Symanzik coefficient [256; 257] and it is given, for \(SU\left(N\right)\), by the following relation:
\[b=\frac{11N-2N_{f}}{24\pi^{2}}, \tag{276}\]
where \(N_{f}\) represents the number of quark flavors.
For the gauge group \(SU(2)\), we have \(b=2\cdot\frac{11}{24\pi^{2}}\) when the fermion's contribution is neglected, and \(b=2\cdot\frac{5}{24\pi^{2}}\) when the number of quark flavors is taken to be \(N_{f}=6\). For the case of \(SU(3)\), the effective Lagrangian in Eq. (274) leads to a phenomenological description of the asymptotic freedom of the quarks inside hadrons [258; 259; 260]. It should be noticed that the \(SU(2)\) YM field is introduced here as a model for the cosmic dark energy; it may not be directly identified with the QCD gluon fields, nor with the weak-electromagnetic unification gauge fields, such as \(Z^{0}\) and \(W^{\pm}\). The YMC has an energy scale characterized by the parameter \(\kappa^{1/2}\approx 10^{-3}eV\), much smaller than that of QCD and of the weak-electromagnetic unification. An explanation can be given for the form in Eq. (274) as an effective Lagrangian up to 1-loop quantum correction [258; 259; 260]. A classical \(SU(N)\) YM field Lagrangian is given by
\[L=\frac{1}{2g_{0}^{2}}F, \tag{277}\]
where \(g_{0}\) indicates the bare coupling constant. When the 1-loop quantum corrections are also included, the bare coupling constant \(g_{0}\) will be replaced by the running coupling \(g\) as follows.
\[g_{0}^{2}\to g^{2}=\frac{4\cdot 12\pi^{2}}{11N\ln\left(\frac{k^{2}}{k_{0}^{2}}\right)}=\frac{2}{b\ln\left(\frac{k^{2}}{k_{0}^{2}}\right)}, \tag{278}\]
where \(k\) represents the momentum transfer and \(k_{0}\) indicates the energy scale. In order to have an effective theory, we can just replace the momentum transfer \(k^{2}\) by the field strength \(F\) in the following way:
\[\ln\left(\frac{k^{2}}{k_{0}^{2}}\right)\to 2\ln\left|\frac{F}{\kappa^{2}e}\right|=2\left(\ln\left|\frac{F}{\kappa^{2}}\right|-1\right). \tag{279}\]
We have to notice that inserting the replacement of Eq. (279) into Eq. (277) yields the effective Lagrangian of Eq. (274).
Some of the interesting characteristics of this effective YMC action include the Lorentz invariance [261], the gauge invariance, the asymptotic freedom and the correct trace anomaly [100]. Having a logarithmic dependence on the field strength, the Lagrangian of the YMC model has a form similar to the Coleman-Weinberg scalar effective potential [263] and to the Parker-Raval effective gravity Lagrangian [264].
We also want to emphasize that the renormalization scale \(\kappa\) is the only parameter of this effective YMC model. In contrast to the scalar field DE models, this YMC Lagrangian is completely fixed by quantum corrections up to 1-loop order, and there is no freedom to adjust its functional form.
From the Lagrangian given in Eq. (274), we can derive the expressions of the energy density \(\rho_{y}\) and of the pressure \(p_{y}\) of the YMC as follows.
\[\rho_{y} = \frac{\epsilon E^{2}}{2}+\frac{bE^{2}}{2}, \tag{280}\] \[p_{y} = \frac{\epsilon E^{2}}{6}-\frac{bE^{2}}{2}, \tag{281}\]
where \(\epsilon\) represents the dielectric constant of the YMC, which is given by the following relation:
\[\epsilon=2\frac{\partial L_{eff}}{\partial F}=b\ln\left|\frac{F}{\kappa^{2}} \right|. \tag{282}\]
Eqs. (280) and (281) can be also rewritten in the following alternative way:
\[\rho_{y} = \frac{1}{2}b\kappa^{2}\left(y+1\right)e^{y}, \tag{283}\] \[p_{y} = \frac{1}{6}b\kappa^{2}\left(y-3\right)e^{y}, \tag{284}\]
or, equivalently, as follows.
\[\rho_{y} = \frac{1}{2}\left(y+1\right)bE^{2}, \tag{285}\] \[p_{y} = \frac{1}{6}\left(y-3\right)bE^{2}, \tag{286}\]
where the parameter \(y\) is defined as follows.
\[y=\frac{\epsilon}{b}=\ln\left|\frac{F}{\kappa^{2}}\right|=\ln\left|\frac{E^{2 }}{\kappa^{2}}\right|. \tag{287}\]
Therefore, considering the expressions of \(\rho_{y}\) and \(p_{y}\) given in Eqs. (283) and (284) or, equivalently, in Eqs. (285) and (286), we obtain that the EoS parameter \(\omega_{y}\) of the YMC model is given by
\[\omega_{y}=\frac{p_{y}}{\rho_{y}}=\frac{y-3}{3\left(y+1\right)}. \tag{288}\]
Using the definition of \(y\) given in Eq. (287), we can easily derive that the critical point \(\varepsilon=0\) leads to \(\omega_{y}=-1\); there, the Universe has exactly a de Sitter expansion. Near this point, if \(\varepsilon<0\) we have \(\omega_{y}<-1\), while \(\varepsilon>0\) implies \(\omega_{y}>-1\). So, as stated before, the EoS ranges \(0>\omega_{y}>-1\) and \(\omega_{y}<-1\) can both be naturally realized.
The expression of \(\omega_{y}\) given in Eq. (288) leads to the following expression for \(y\)
\[y=-\frac{3\left(\omega_{y}+1\right)}{3\omega_{y}-1}. \tag{289}\]
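Eqs. (288) and (289) are mutual inverses, and the limiting behaviours of \(\omega_{y}\) are easy to verify numerically (a sketch; the sampled values of \(y\) are arbitrary).

```python
def w_y(y):
    """YMC equation of state parameter, Eq. (288)."""
    return (y - 3.0) / (3.0 * (y + 1.0))

def y_of_w(w):
    """Inverse relation, Eq. (289)."""
    return -3.0 * (w + 1.0) / (3.0 * w - 1.0)

print(w_y(0.0))       # -1.0   : critical point F = kappa^2 (de Sitter)
print(w_y(1e6))       # ~1/3   : radiation-like limit F >> kappa^2

for y in (0.5, 2.0, 10.0):
    assert abs(y_of_w(w_y(y)) - y) < 1e-12    # Eqs. (288)-(289) are inverses
```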
We have that, in order to ensure that the energy density \(\rho_{y}\) is positive in any physically viable model, the quantity \(y\) should be greater than \(-1\), which leads to \(F>\kappa^{2}/e\approx 0.368\kappa^{2}\). Before considering a particular cosmological model, it is interesting to study \(\omega_{y}\) as a function of \(F\). The YMC model exhibits an EoS of radiation, with \(p_{y}=\frac{1}{3}\rho_{y}\) and EoS parameter \(\omega_{y}=\frac{1}{3}\), for large values of the dielectric constant, i.e., \(\epsilon\gg b\) (which implies \(F\gg\kappa^{2}\)). On the other hand, for \(\epsilon=0\) (i.e., for \(F=\kappa^{2}\)), which is called the critical point, the YMC has the EoS of the cosmological constant, i.e., \(\omega_{y}=-1\), with \(p_{y}=-\rho_{y}\). The latter case occurs when the YMC energy density takes on the value of the critical energy density \(\rho_{y}=\frac{1}{2}b\kappa^{2}\) [246; 248]. It is this interesting property of the EoS of the YMC, going from \(\omega_{y}=\frac{1}{3}\) at higher energies (\(F\gg\kappa^{2}\)) to \(\omega_{y}=-1\) at low energies (\(F=\kappa^{2}\)), that makes it possible for the scaling solution [265; 266] for the DE component to exist in our model. More interestingly, this transition is smooth, since \(\omega_{y}\) is a smooth function of \(y\) in the range \((-1,\infty)\). We now need to determine if the EoS parameter \(\omega_{y}\) can cross over \(-1\). By looking at Eq. (288) for \(\omega_{y}\), we see that \(\omega_{y}\) only depends on the value of the condensate strength \(F\). In principle, \(\omega_{y}<-1\) can be achieved as soon as \(F<\kappa^{2}\). Moreover, with regard to the behavior of \(\omega_{y}\) as a function of \(F\), this crossing is also smooth. However, when the YMC is put into a cosmological model as the DE component, together with the other components, to drive the expansion of the Universe, the value of \(F\) cannot be arbitrary: it comes out as a function of time \(t\) and has to be determined by the dynamic evolution. Specifically, when the YMC does not decay into matter and radiation, \(\omega_{y}\) can only approach \(-1\) asymptotically, but will not cross over \(-1\). On the other hand, when the YMC decays into matter and/or radiation, \(\omega_{y}\) does cross over \(-1\), and, depending on the strength of the coupling, \(\omega_{y}\) will settle down to an asymptotic value of about \(-1.17\). As a merit, in this lower region of \(\omega_{y}<-1\), all the physical quantities \(\rho_{y}\), \(p_{y}\) and \(\omega_{y}\) behave smoothly, and there are no finite-time singularities of the kind suffered by a class of scalar models.
If we now equate the EoS of the YMC \(\omega_{y}\) with the EoS parameters of the model we are studying, we can write \(y\) as follows.
\[y=-\frac{3\left(\omega_{D}+1\right)}{3\omega_{D}-1}. \tag{290}\]
Using in Eq. (290) the expression of the EoS parameter for the R-PLECHDE model given in Eq. (39), we obtain
\[y_{pl} = -\frac{\left[-\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}+\left(\frac{1+ \Omega_{b}}{\Omega_{Dpl}}\right)+3\right]}{\left[-\frac{1}{3c^{2}-\delta R^{ \gamma/2-1}}+\left(\frac{1+\Omega_{b}}{\Omega_{Dpl}}\right)\right]-1} \tag{291}\] \[= -\frac{\left[-\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}+(1+u_{pl})+3 \right]}{\left[-\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}+(1+u_{pl})\right]-1}.\]
In the limiting case of a flat Dark Dominated Universe, using the expression of \(\omega_{D}\) given in Eq. (50), we get
\[y_{Dark}=12c^{2}-1. \tag{292}\]
In order to have \(y>1\), we derive from Eq. (292) that \(c^{2}>1/6\). The value of \(c^{2}\) derived in the original work of RDE of Gao is \(c^{2}\approx 0.46\), which is in agreement with the constrain we obtained. In particular, we obtain, for \(c^{2}\approx 0.46\), that \(y_{Dark}\approx 4.52\).
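As a consistency check (a sketch; assuming \(c^{2}=0.46\) and the flat Dark Dominated EoS parameter, which from the expressions used throughout this Section equals \(1/3-1/(9c^{2})\)), inserting this value into Eq. (290) reproduces Eq. (292) exactly.

```python
c2 = 0.46
w_dark = 1.0 / 3.0 - 1.0 / (9.0 * c2)                    # flat Dark Dominated EoS, ~0.092
y_dark = -3.0 * (w_dark + 1.0) / (3.0 * w_dark - 1.0)    # Eq. (290)

print(w_dark, y_dark, 12.0 * c2 - 1.0)                   # y_dark = 12 c^2 - 1 ~ 4.52
assert abs(y_dark - (12.0 * c2 - 1.0)) < 1e-12
assert y_dark > -1.0                                     # consistent with rho_y > 0
```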
### Non-Linear Electro-Dynamics (NLED) Model
We now consider the last model we have chosen to study, i.e., the Non-Linear Electro-Dynamics (NLED) Model. Recently, a new approach has been considered in order to avoid the cosmic singularity through a nonlinear extension of the Maxwell electromagnetic theory. Exact solutions of Einstein's field equations coupled with Non-Linear Electro-Dynamics (NLED) reveal an acceptable nonlinear effect in strong gravitational and magnetic fields. Also, General Relativity (GR) coupled with NLED effects can explain primordial inflation.
The Lagrangian density \(L_{M}\) for free fields in the Maxwell electrodynamics may be written as follows [267; 268; 269].
\[L_{M}=-\frac{F^{\mu\nu}F_{\mu\nu}}{4\mu}, \tag{293}\]
where \(F^{\mu\nu}\) is the electromagnetic field strength tensor and \(\mu\) is the magnetic permeability. We now consider the generalization of Maxwell's electro-magnetic Lagrangian up to the second-order terms of the fields as follows.
\[L=-\frac{F}{4\mu_{0}}+\omega F^{2}+\eta{F^{*}}^{2}, \tag{294}\]
where \(\omega\) and \(\eta\) are two arbitrary constants and \(F^{*}\) is defined as follows.
\[F^{*}=F^{*}_{\mu\nu}F_{\mu\nu}, \tag{295}\]
where \(F^{*}_{\mu\nu}\) is the dual of \(F_{\mu\nu}\).
We now consider the particular case when the homogeneous electric field \(E\) in plasma gives rise to an electric current of charged particles and then rapidly decays. So, the squared magnetic field \(B^{2}\) dominates over \(E^{2}\), i.e., \(E^{2}\approx 0\) and hence \(F=2B^{2}\). So \(F\) is now only a function of the magnetic field \(B\). The pressure \(p_{NLED}\) and the energy density \(\rho_{NLED}\) of the Non-linear Electrodynamics Field are given, respectively, by
\[p_{NLED} = \frac{B^{2}}{6\mu}\left(1-40\mu\omega B^{2}\right), \tag{296}\] \[\rho_{NLED} = \frac{B^{2}}{2\mu}\left(1-8\mu\omega B^{2}\right). \tag{297}\]
The weak energy condition (given by \(\rho_{NLED}>0\)) is obeyed for \(B<\frac{1}{2\sqrt{2\mu\omega}}\), and the pressure \(p_{NLED}\) will be negative when \(B>\frac{1}{2\sqrt{10\mu\omega}}\). The magnetic field generates DE if the strong energy condition is violated, i.e., if \(\rho_{NLED}+3p_{NLED}<0\), which happens for \(B>\frac{1}{2\sqrt{6\mu\omega}}\).
The EoS parameter \(\omega_{NLED}\) of the Nonlinear Electrodynamics Field is given as follows.
\[\omega_{NLED}=\frac{p_{NLED}}{\rho_{NLED}}=\frac{1-40\mu\omega B^{2}}{3\left( 1-8\mu\omega B^{2}\right)}, \tag{298}\]
which leads to the following expression for \(B^{2}\)
\[B^{2} = \frac{1-3\omega_{NLED}}{8\mu\omega\left(5-3\omega_{NLED}\right)}. \tag{299}\]
Making a correspondence between the EoS parameter of the NLED model and the EoS of the DE model we are studying, we obtain
\[B^{2} = \frac{1-3\omega_{D}}{8\mu\omega\left(5-3\omega_{D}\right)}. \tag{300}\]
Using in Eq. (300) the expression of the EoS parameter for the R-PLECHDE model given in Eq. (39), we obtain the following expressions for \(B^{2}_{pl}\):
\[B^{2}_{pl} = \frac{1+\left(\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-\frac{1+ \Omega_{k}}{\Omega_{Dpl}}\right)}{8\mu\omega\left[5+\left(\frac{1}{3c^{2}- \delta R^{\gamma/2-1}}-\frac{1+\Omega_{k}}{\Omega_{Dpl}}\right)\right]} \tag{301}\] \[= \frac{1+\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-(1+u_{pl}) \right]}{8\mu\omega\left\{5+\left[\frac{1}{3c^{2}-\delta R^{\gamma/2-1}}-(1+u _{pl})\right]\right\}}.\]
In the limiting case of a flat Dark Dominated Universe, using the expression of \(\omega_{D}\) given in Eq. (50), we get
\[B^{2}_{Dark}=\frac{1}{8\mu\omega\left(12c^{2}+1\right)}. \tag{302}\]
Using in Eq. (302) \(c^{2}\approx 0.46\), we obtain
\[B_{Dark,Gao}^{2}\approx\frac{1}{52.16\mu\omega}, \tag{303}\]
which implies that \(B_{Dark,Gao}\approx\frac{1}{2\sqrt{13.04\mu\omega}}\), i.e., a value lower than \(\frac{1}{2\sqrt{2\mu\omega}}\), as required in order to produce DE.
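Finally, the same arithmetic (a sketch; `mu_w` stands for the product \(\mu\omega\) appearing in the NLED Lagrangian, not the EoS parameter, and is set to an illustrative value) shows that inserting the flat Dark Dominated EoS into Eq. (300) reproduces Eq. (302) and the value \(\approx 1/(52.16\,\mu\omega)\).

```python
c2, mu_w = 0.46, 1.0                                     # mu_w = mu*omega, illustrative
w_dark = 1.0 / 3.0 - 1.0 / (9.0 * c2)                    # flat Dark Dominated EoS, ~0.092

B2 = (1.0 - 3.0 * w_dark) / (8.0 * mu_w * (5.0 - 3.0 * w_dark))   # Eq. (300)
B2_dark = 1.0 / (8.0 * mu_w * (12.0 * c2 + 1.0))                  # Eq. (302)

print(B2, B2_dark)                                       # both ~ 1/52.16
assert abs(B2 - B2_dark) < 1e-12
```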
## IV Conclusions
In this paper, we considered the entropy-corrected version of the HDE model, which interacts with DM in the non-flat FRW Universe (and with IR cut-off equivalent to the Ricci scalar \(R\)). The HDE model is an attempt to probe the nature of DE within the framework of quantum gravity. We considered the logarithmic correction term to the energy density of the HDE model. The addition of correction terms to the energy density of HDE is motivated by Loop Quantum Gravity (LQG), which is one of the most promising theories of quantum gravity. Using the expression of this modified energy density, we obtained the EoS parameter, the deceleration parameter and the evolution of the energy density parameter for the interacting R-ECHDE model.
For both non-interacting Dark Sectors, we derived the expressions of the equation of state (EoS) parameter \(\omega_{D}\) and the deceleration parameter \(q\). Moreover, we established a correspondence between the DE model considered and some scalar fields like the Generalized Chaplygin Gas (GCG), the Modified Chaplygin Gas (MCG), the Modified Variable Chaplygin Gas (MVCG), the New Modified Chaplygin Gas (NMCG), the Viscous Generalized Chaplygin Gas (VGCG), the Dirac-Born-Infeld, the Yang-Mills and the Non-Linear Electro-Dynamics ones. We obtained the final expressions of some parameters which characterize the model we decided to study. These correspondences are essential to understand how various candidates of DE are mutually related to each other. The limiting case of a flat Dark Dominated Universe without entropy correction was studied for each scalar field. Moreover, we calculated the quantities we obtained in the limiting case of the flat Dark Dominated Universe, i.e., \(\Omega_{D}=1\), \(\Omega_{m}=\Omega_{k}=0\). We have also calculated the expressions of the quantities derived in this paper for \(c^{2}=0.46\), as found in the paper of Gao.
## Acknowledgment
Financial support under the CSIR Grant No. 03(1420)/18/EMR-II is thankfully acknowledged by Surajit Chattopadhyay. Surajit Chattopadhyay acknowledges IUCAA, Pune for hospitality during a scientific visit when a part of the work was initiated. AT acknowledges the generous support from the Egyptian Center for Theoretical Physics (ECTP) at the Future University in Egypt (FUE).
|
2307.04343 | Hierarchical Semantic Tree Concept Whitening for Interpretable Image
Classification | With the popularity of deep neural networks (DNNs), model interpretability is
becoming a critical concern. Many approaches have been developed to tackle the
problem through post-hoc analysis, such as explaining how predictions are made
or understanding the meaning of neurons in middle layers. Nevertheless, these
methods can only discover the patterns or rules that naturally exist in models.
In this work, rather than relying on post-hoc schemes, we proactively instill
knowledge to alter the representation of human-understandable concepts in
hidden layers. Specifically, we use a hierarchical tree of semantic concepts to
store the knowledge, which is leveraged to regularize the representations of
image data instances while training deep models. The axes of the latent space
are aligned with the semantic concepts, where the hierarchical relations
between concepts are also preserved. Experiments on real-world image datasets
show that our method improves model interpretability, showing better
disentanglement of semantic concepts, without negatively affecting model
classification performance. | Haixing Dai, Lu Zhang, Lin Zhao, Zihao Wu, Zhengliang Liu, David Liu, Xiaowei Yu, Yanjun Lyu, Changying Li, Ninghao Liu, Tianming Liu, Dajiang Zhu | 2023-07-10T04:54:05Z | http://arxiv.org/abs/2307.04343v1 | # Hierarchical Semantic Tree Concept Whitening for Interpretable Image Classification
###### Abstract
With the popularity of deep neural networks (DNNs), model interpretability is becoming a critical concern. Many approaches have been developed to tackle the problem through post-hoc analysis, such as explaining how predictions are made or understanding the meaning of neurons in middle layers. Nevertheless, these methods can only discover the patterns or rules that naturally exist in models. In this work, rather than relying on post-hoc schemes, we proactively instill knowledge to alter the representation of human-understandable concepts in hidden layers. Specifically, we use a hierarchical tree of semantic concepts to store the knowledge, which is leveraged to regularize the representations of image data instances while training deep models. The axes of the latent space are aligned with the semantic concepts, where the hierarchical relations between concepts are also preserved. Experiments on real-world image datasets show that our method improves model interpretability, showing better disentanglement of semantic concepts, without negatively affecting model classification performance.
Explainable AI (XAI), hierarchical tree of semantic concepts, image embedding, image interpretation.
## I Introduction
Machine learning interpretability has recently received considerable attention in various domains [1, 2, 3, 4]. An important challenge that arises with deep neural networks (DNNs) is the opacity of semantic meanings of data representations in hidden layers. Several types of methods have been proposed to tackle the problem. First, recent works have shown that some neurons could be aligned with certain high-level semantic patterns in data [5, 6]. Second, it is possible to extract concept vectors [7] or clusters [8] to identify semantic meanings from latent representations. However, these methods are built upon the assumption that semantic patterns are already learned by DNNs, and the models would admit the post-hoc method of a specific form. There is no guarantee that the assumption holds true for any model, especially when meaningful patterns or rules may not be manifested in the model, thus leading to over-interpretation [9, 3]. Meanwhile, although many post-hoc explanation methods are proposed with the expectation of improving or debugging models, it is challenging to achieve this goal in practice. Although we could collect human annotations to guide prediction explanations and improve model credibility [10, 11], manually labeling or checking semantic concepts is rather difficult. Unlike explaining individual predictions, which is a local and instance-level task, extracting concepts provides a global understanding of models, where manual inspection of such interpretation is time-consuming and much harder, if not impossible.
Instead of relying on post-hoc approaches, we aim to instill interpretability as a constraint into model establishment. For example, explanation regularization is proposed in [12], but it constrains gradient magnitude instead of focusing on semantic concepts. Meanwhile, \(\beta\)-VAE and its variants [13, 14] add independence constraints to learn disentangled factors in latent representations, but it is difficult to explicitly specify and align latent dimensions with semantic meanings. Ideally, we want to construct DNNs whose latent space could tell us how it is encoding concepts. The recent decorrelated batch normalization (DBN) method [15] normalizes representations, providing an end-to-end technique for manipulating representations, but it is not directly related to interpretability.
In this work, we propose a novel Hierarchical Semantic Tree Concept Whitening (HaST-CW) model to decorrelate the latent representations in image classification for disentangling concepts with hierarchical relations. The idea of our work is illustrated in Fig. 1. Specifically, we define each concept as one class of objects, where the concepts are of different granularities and form a hierarchical tree structure. We decorrelate the activations of neural network layers, so that each concept is aligned with one or several latent dimensions. Unlike the traditional DBN method (Fig. 1a), which treats different concepts as independent, our method is able to leverage the hierarchically related organization of label concepts inherent in domain knowledge (Fig. 1b). The consideration of relations between different concepts is crucial in many real-world applications [16, 17]. For example, in the healthcare domain, the relationship of different disease stages (concepts) may reflect the progression of the disease, which is significant for reversing pathology [18, 19, 20, 21]. Also, in the precision agriculture domain [2, 22], real-time monitoring of interactions of multiple agricultural objects (concepts) with each other and with the environment are crucial in maintaining agro-ecological balance [2]. In our model, a novel semantic constraint (SC) loss function is designed to regularize representations. As a result, the data representations of two concepts with higher semantic similarity will be closer with each other in the latent space. Moreover, a new hierarchical
concept whitening (HCW) method is proposed to decorrelate different label concepts hierarchically. We evaluated the proposed HaST-CW model using a novel agriculture image dataset called Agri-ImageNet. The results suggest that our model could preserve the semantic relationship between the label concepts, and provide a clear understanding of how the network gradually learns the concept in different layers, without hurting classification performance.
## II Related Work
**Post-Hoc Interpretation.** Post-Hoc interpretation can be divided into approaches that explain predictions or models [1, 3]. Prediction-oriented interpretation aims to develop faithful and robust measures to quantify feature importance towards individual predictions for identifying those features (e.g., pixels, super-pixels, words) that made most contributions [23, 24, 25, 26, 27, 28]. Model-oriented interpretation analyzes behaviors of neural networks either by characterizing the function of model components [29, 5, 30] or analyzing semantic concepts from latent representations [7, 8, 31, 32]. The proposed method also targets concept-level interpretation in deep neural networks. Different from post-hoc techniques that focus on discovering existing patterns in models, the newly proposed HaST-CW proactively injects concept-related knowledge into training and disentangles different concepts to promote model interpretability.
**Inherently Interpretable Models.** Another school of thought favors building inherently explainable machine learning models [33, 34]. Some approaches design models that highlight prototypical features of samples as interpretation. For example, Chen et al. [35] classifies images by dissecting images into parts and comparing these components to similar prototypes towards prediction. Li et al. [36] designs an encoder-decoder framework to allow comparisons between inputs and the learned prototypes in latent space. Some other works such as \(\beta\)-VAE and its variants [13, 14] regularize representation learning for autoencoders to produce disentangled factors in representation dimensions, but the semantic meaning of each dimension remains unknown without further manual inspection. In contrast, our method attempts to explicitly align latent dimensions with specific semantic concepts contained in external knowledge. A recent technique called Concept Whitening (CW) [34] constrains the latent space, after revising Batch Whitening [37, 38], such that it aligns with predefined classes. Our method attempts to infuse more complex knowledge of concept relations into representation learning.
**Applying Whitening to Computer Vision.** Whitening is a standard image preprocessing technique, which refers to transforming the covariance matrix of input vectors into the identity matrix. In fact, the well-known Batch Normalization [39] can be regarded as a variant of whitening where only the normalization process is retained. There are many works in deep learning that describe the effectiveness of whitening [40, 41, 42] and the process of finding the whitening matrix [43]. Our work further takes semantics into consideration during the whitening process towards more interpretable representation learning.
## III Methodology
### _Overview_
The proposed HaST-CW model aims to preserve the underlying hierarchical relationship of label concepts, as well as to disentangle these concepts by decorrelating their latent representations. To achieve this goal, we leverage the hierarchical tree structure of the label concepts extracted from specific domain knowledge (Sec. III-B). Then, the obtained structure of label concepts is used as prior knowledge to be instilled into the model for guiding the representation learning process. There are two key components in the knowledge instillation process - the hierarchical concept whitening (HCW) module and the semantic constraint (SC) loss, which will be elaborated in Sec. III-C and Sec. III-D, respectively.
### _The Hierarchical Semantic Tree of Concepts_
In this work, we used a newly collected and curated Agri-ImageNet dataset to develop and evaluate the HaST-CW model. There are 9173 high quality images in Agri-ImageNet, covering 21 different types of agricultural objects. Taking each type of agricultural object as one class, we have 21 label concepts in total. Some pairs of agriculture objects have the supertype-subtype relationship between them, so we obtain the parent-child relationship between the corresponding labels. As a result, a tree structure is built to represent the underlying hierarchically related organization of label concepts, which is shown in Fig. 2. Two concepts being connected in the tree structure means they have a parent-child relationship, where the parent is located at the lower hierarchy level. Besides the parent-child relation, we further introduce two notions - brother and cousin. If two concepts have the same parent, then they are brothers. If the parents of two concepts are brothers, then the two concepts are cousins. According to the laws of inheritance: (1) objects with the parent-child relation should be more similar than those with the uncle-child relation (vertical parent-child relationship); and (2) the traits of brothers should be more similar than cousins (horizontal brother-cousin relationship). An effective model should be able to capture both the vertical and the horizontal relationship, so that the representation of any concept in the latent space should be closer to its parent than uncles, and closer to brothers than cousins. For our HaST-CW model shown in Fig. 3, a new HCW module (Sec. III-C) is proposed to preserve the vertical relationship, and a novel SC loss (Sec. III-D) is proposed to preserve the horizontal relationship.

Fig. 1: The intuition behind HaST-CW. (a) Distribution of discrete concepts in the latent space after applying concept whitening. (b) Distribution of hierarchical concepts after applying HaST-CW.

Fig. 2: Hierarchical Tree Structure of Concepts.
### _Hierarchical Concept Whitening_
The hierarchical concept whitening (HCW) module is one of the key components in the HaST-CW model, which aims to disentangle different label concepts while preserving their underlying hierarchical relationship. Specifically, in this work, the set of label concepts is denoted by \(C=\{C_{i}\}_{i=1}^{N_{c}}\), where \(C_{i}\) represents the \(i^{th}\) concept and \(N_{c}=21\) is the number of concepts. For \(C_{i}\), its parent, children, brothers and cousins are denoted as \(C_{i\cdot\mathcal{P}}\), \(\{C_{i\cdot children}\}\), \(\{C_{i\cdot B}\}\) and \(\{C_{i\cdot c}\}\), respectively. A dataset is denoted as \(\mathcal{D}=\{\textbf{x}_{i},y_{i}\}_{i=1}^{n}\). We use \(\textbf{X}^{C_{i}}=\{\textbf{x}_{j}^{C_{i}}\}_{j=1}^{n_{i}}\) to denote the set of \(i^{th}\)-class samples labeled by \(C_{i}\).
In traditional whitening transformation [34], during the training process, data samples are first fed into the model in mini-batches to obtain the latent representation matrix \(\textbf{Z}_{d\times n}\), where \(n\) is the mini-batch size and \(d\) is the dimension of latent representation. We use ResNet as the model backbone in this work. Then a transformation \(\psi\) is applied to decorrelate and standardize \(\textbf{Z}_{d\times n}\):
\[\psi(\textbf{Z})=\textbf{W}(\textbf{Z}-\mu\textbf{I}_{n\times 1}{}^{T}), \tag{1}\]
where \(\textbf{W}_{d\times d}\) is the orthogonal whitening matrix, and \(\mu=\frac{1}{n}\sum_{i=1}^{n}\textbf{z}_{i}\) is the sample mean. A property of representation whitening is that \(\textbf{Q}^{\textbf{T}}\textbf{W}\) is still a valid whitening matrix if **Q** is an orthogonal matrix. We leverage this property for interpretable representation learning. In our model, besides decorrelation and standardization, we expect that the transformed representation of samples from concept \(C_{i}\), namely \(\textbf{Q}^{\textbf{T}}\psi(\textbf{Z}^{C_{i}})\), can align well with the \(i^{th}\) axis of latent space. Meanwhile, the underlying hierarchical relationship of concepts should also be preserved in their latent representations. That is, we need to find an orthogonal matrix \(\textbf{Q}=[\textbf{q}_{1},\textbf{q}_{2},\ldots,\textbf{q}_{N_{c}}]\) with two requirements: (1) \(\textbf{Z}^{C_{i}}\) should be most activated by \(\textbf{q}_{i}\), i.e., the \(i^{th}\) column of **Q**; (2) \(\textbf{Z}^{C_{i}}\) should also be activated by \(\{\textbf{q}_{c}\}\), where \(c\in\{C_{i\cdot children}\}\) is the child of concept \(C_{i}\). The first constraint makes the representation align together with the corresponding concept dimension, and the second one maintains the vertical parent-child relationship between concepts. To this end, the optimization problem can be formulated as:
\[\max_{\textbf{q}_{1},\cdots,\textbf{q}_{N_{c}}} \sum_{i=1}^{N_{c}}[\frac{1}{n_{i}}\textbf{q}_{i}^{T}\psi(\textbf{ Z}^{C_{i}})\textbf{I}_{n_{i}\times 1}+ \tag{2}\] \[\sum_{c\in\{C_{i\cdot children}\}}\frac{1}{n_{i}\times N_{cd}}( \textbf{q}_{c})^{T}\psi(\textbf{Z}^{C_{i}})\textbf{I}_{n_{i}\times 1}],\] \[s.t. \textbf{Q}^{T}\textbf{Q}=\textbf{I}_{d},\]
where \(N_{cd}=|\{C_{i\cdot children}\}|\) is the number of child concepts of \(C_{i}\). To solve this optimization problem with the orthogonality constraint, a gradient descent method with the curvilinear search algorithm [44] is adopted. With the whitening matrix **W** and rotation orthogonal matrix **Q**, HaST-CW can replace any batch normalization layer in deep neural networks. The details of representation whitening for HaST-CW are summarized in Algorithm 1.
The operation of \(\textbf{Q}^{T}\psi(\cdot)\) forms the HCW module. During the first training stage, **Q** will be fixed and other parameters (\(\theta,\omega,\textbf{W},\mu\)) will be optimized according to Eq. (3) to minimize the classification error. The first stage will take \(T_{thre}\) mini batches (we set \(T_{thre}=30\) in experiments). After that, **Q** will be updated by the Cayley transform [44]:
\[\textbf{Q}^{\prime}=(\textbf{I}+\frac{\eta}{2}\textbf{A})^{-1}(\textbf{I}- \frac{\eta}{2}\textbf{A})\textbf{Q}, \tag{4}\]
\[\textbf{A}=\textbf{G}\textbf{Q}^{T}-\textbf{Q}\textbf{G}^{T}, \tag{5}\]
where **A** is a skew-symmetric matrix. **G** is the gradient of the concept alignment loss, which is defined in Algorithm 2. \(\eta\) is the learning rate. At the end of the second stage, an updated \(\textbf{Q}^{\prime}\) will participate in the first training stage of the next iteration.
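As an illustration of Eqs. (4)-(5), a minimal PyTorch-style sketch of the Cayley update of **Q** is given below; the gradient `G` of the concept alignment loss and the learning rate `eta` are assumed to be supplied by the surrounding training loop, and the function name is ours.

```python
import torch

def cayley_update(Q: torch.Tensor, G: torch.Tensor, eta: float) -> torch.Tensor:
    """Update the rotation matrix Q while keeping it (approximately) orthogonal.

    Q: current d x d orthogonal matrix.
    G: gradient of the concept alignment loss with respect to Q (d x d).
    eta: learning rate.
    """
    d = Q.shape[0]
    I = torch.eye(d, dtype=Q.dtype, device=Q.device)
    A = G @ Q.T - Q @ G.T                    # skew-symmetric matrix, Eq. (5)
    lhs = I + (eta / 2.0) * A                # (I + eta/2 * A)
    rhs = (I - (eta / 2.0) * A) @ Q          # (I - eta/2 * A) Q
    return torch.linalg.solve(lhs, rhs)      # Q' = (I + eta/2 A)^{-1} (I - eta/2 A) Q, Eq. (4)
```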
```
1:Input: mini-batch input \(\textbf{Z}\in\mathbb{R}^{d\times n}\)
2:Optimization Variables: orthogonal matrix \(\textbf{Q}\in\mathbb{R}^{d\times d}\)
3:Output: whitened representations \(\hat{\textbf{Z}}\in\mathbb{R}^{d\times n}\)
4:The batch mean: \(\mu=\frac{1}{n}\textbf{Z}\cdot\textbf{I}\)
5:The centered representations: \(\textbf{Z}_{\textbf{C}}=\textbf{Z}-\mu\cdot\textbf{I}^{T}\)
6:Calculate ZCA-whitening matrix **W**
7:Calculate the whitened representation: \(\hat{\textbf{Z}}=\textbf{Q}^{T}\textbf{W}\textbf{Z}_{\textbf{C}}\)
```
**Algorithm 1** Forward Pass of HCW Module
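A minimal PyTorch sketch of Algorithm 1 is shown below; it computes the ZCA-whitening matrix from the regularized batch covariance, which is one standard choice, and it omits the running statistics that a batch-normalization-style layer would keep for inference. Function and variable names are illustrative.

```python
import torch

def hcw_forward(Z: torch.Tensor, Q: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Forward pass of the HCW module on a mini-batch.

    Z: d x n matrix of latent representations.
    Q: d x d orthogonal rotation matrix aligning axes with concepts.
    Returns Q^T W (Z - mu), the whitened and rotated representations.
    """
    d, n = Z.shape
    mu = Z.mean(dim=1, keepdim=True)                       # batch mean (step 4)
    Zc = Z - mu                                            # centered representations (step 5)
    cov = Zc @ Zc.T / n + eps * torch.eye(d, dtype=Z.dtype, device=Z.device)
    lam, U = torch.linalg.eigh(cov)                        # eigendecomposition of the covariance
    W = U @ torch.diag(lam.clamp_min(eps).rsqrt()) @ U.T   # ZCA-whitening matrix (step 6)
    return Q.T @ W @ Zc                                    # whitened, rotated output (step 7)
```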
### _Semantic Constraint Loss_
Besides preserving the vertical parent-child relationship of concepts, we further model the horizontal relation between concepts that are at the same hierarchy level (i.e., brothers or cousins). Different from the HCW in Eq. (2) that focuses on concept alignment, here we directly control the distance between representations of different concepts with the horizontal relation [45, 46]. To this end, we propose a Semantic Constraint (SC) loss to model the horizontal brother-cousin relationship as below:
\[\mathcal{L}_{SC}=\alpha\mathcal{L}_{\mathcal{B}}+\beta\mathcal{L}_{\mathcal{C}}, \tag{6}\]
\[\mathcal{L}_{\mathcal{B}}=\sum_{j}\sum_{\textbf{S}_{i}\in\{C_{i,\mathcal{B}}\}}\sum_{k}\max\{0,\;m_{\mathcal{B}}-d(\textbf{z}_{j}^{C_{i}},\textbf{z}_{k}^{\textbf{S}_{i}})\},\]
\[\mathcal{L}_{\mathcal{C}}=\sum_{j}\sum_{\textbf{S}_{i}\in\{C_{i,\mathcal{B}}\}}\sum_{\mathcal{C}_{i}\in\{C_{i,\mathcal{C}}\}}\sum_{k}\sum_{l}\max\{0,\;d(\textbf{z}_{j}^{C_{i}},\textbf{z}_{k}^{\textbf{S}_{i}})-d(\textbf{z}_{j}^{C_{i}},\textbf{z}_{l}^{\mathcal{C}_{i}})+m_{\mathcal{C}}\}.\]
There are two components in the SC loss and their contributions are controlled by two hyperparameters - \(\alpha\) and \(\beta\). The first term \(\mathcal{L}_{\mathcal{B}}\) is a contrastive loss, which takes a pair of image representations labeled by two brother concepts as input and enlarges the distance between them. It uses a hyperparameter \(m_{\mathcal{B}}\) to control the distance. The distance between two concepts increases when \(m_{\mathcal{B}}\) is set larger. \(\textbf{S}_{i}\in\{C_{i,\mathcal{B}}\}\) denotes one of the brothers of concept \(C_{i}\). The second term \(\mathcal{L}_{\mathcal{C}}\) is a triplet loss. It takes three inputs: the anchor image representation \(\textbf{z}_{j}^{C_{i}}\), the image representation \(\textbf{z}_{k}^{\textbf{S}_{i}}\) labeled by a brother concept of the anchor, and the image representation \(\textbf{z}_{l}^{\mathcal{C}_{i}}\) labeled by a cousin concept of the anchor. \(\mathcal{C}_{i}\in\{C_{i,\mathcal{C}}\}\) denotes the cousins of concept \(C_{i}\). The triplet loss encourages the anchor-brother distance to be smaller compared with the anchor-cousin distance in representation space. In this way, the distance of image representations from brother classes tends to be smaller than the distance of image representations from cousin classes. The gap between the two types of distance is controlled by the margin value \(m_{\mathcal{C}}\). Consequently, the hierarchical concept whitening module, together with the SC loss, enables the latent representations of concepts with similar semantics to be close to each other in the latent space.
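A simplified PyTorch sketch of the SC loss in Eq. (6) is given below; it assumes that matched anchor, brother, and cousin representations have already been gathered for a mini-batch and uses the Euclidean distance for \(d(\cdot,\cdot)\), which is one natural choice. Names and default margins are illustrative.

```python
import torch
import torch.nn.functional as F

def sc_loss(anchor, brother, cousin, m_b=1.0, m_c=1.0, alpha=1.0, beta=1.0):
    """Semantic Constraint loss, sketched for pre-gathered (anchor, brother, cousin) triplets.

    anchor, brother, cousin: (n, d) tensors of image representations.
    """
    d_ab = F.pairwise_distance(anchor, brother)              # anchor-brother distances
    d_ac = F.pairwise_distance(anchor, cousin)               # anchor-cousin distances
    loss_b = torch.clamp(m_b - d_ab, min=0.0).sum()          # contrastive term: separate brothers by margin m_B
    loss_c = torch.clamp(d_ab - d_ac + m_c, min=0.0).sum()   # triplet term: brothers closer than cousins by m_C
    return alpha * loss_b + beta * loss_c
```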
### _Latent Feature Maps Activation_
The proposed HaST-CW model can generate latent representations (\(\hat{\textbf{z}}_{i}\)) for input images (\(\textbf{x}_{i}\)) at each neural network layer by \(\hat{\textbf{z}}_{i}=\textbf{Q}^{T}\psi(\Phi(\textbf{x}_{i};\theta);\textbf{W},\mu)\). The latent representation can be used to assess the interpretability of the learning process by measuring the degree of activation of \(\hat{\textbf{z}}_{i}\) at different concept dimensions (i.e. \(\{\textbf{q}_{i}\}\)). In the implementation, \(\Phi(\cdot)\) is a CNN-based deep network, whose convolution output \(\textbf{z}_{i}=\Phi(\textbf{x}_{i};\theta)\) is a tensor with the dimension \(\textbf{z}_{i}\in R^{d\times h\times w}\). Since \(\hat{\textbf{z}}_{i}\) is calculated by \(\hat{\textbf{z}}_{i}=\textbf{Q}^{T}\psi(\textbf{z}_{i})\) where \(\textbf{Q}^{T}\in R^{d\times d}\), we obtain \(\hat{\textbf{z}}_{i}\in R^{d\times h\times w}\), where \(d\) is the channel dimension and \(h\times w\) is the feature map dimension. The hierarchical concept whitening operation \(\textbf{Q}^{T}\psi(\cdot)\) is conducted upon the \(d\) feature maps. Therefore, different feature maps contain the information of whether and where the concept patterns exist in the image. However, as a tensor, the feature map cannot directly measure the degree of _concept activation_. To solve this problem and at the same time to preserve both the high-level and low-level information, we first apply max pooling on the feature map and then use the mean value of the downstream feature map to represent the original one. In this way, we reshape the original feature map \(\textbf{z}_{i}\in R^{d\times h\times w}\) to \(\textbf{z}_{i}^{\prime}\in R^{d\times 1}\). Finally, \(\textbf{z}_{i}^{\prime}\) is used to measure the activation of image \(\textbf{x}_{i}\) at each concept dimension.
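The pooling step described above can be sketched as follows; the max-pooling window size is an assumption, since the text does not specify it.

```python
import torch
import torch.nn.functional as F

def concept_activation(z_hat: torch.Tensor, window: int = 3) -> torch.Tensor:
    """Reduce whitened feature maps (d x h x w) to a d-dimensional activation vector.

    Max pooling keeps the strongest local responses; averaging the pooled map then
    summarizes each channel with a single concept-activation score.
    """
    pooled = F.max_pool2d(z_hat.unsqueeze(0), kernel_size=window, stride=window)  # (1, d, h', w')
    return pooled.mean(dim=(2, 3)).squeeze(0)                                     # (d,)
```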
## IV Experiments
In the experiments section, we first visually demonstrate how our method can effectively learn and hierarchically organize concepts in the latent space (Sec. IV-B). We also show (Sec. IV-C) that, compared to existing concept whitening methods, HaST-CW not only separates concepts, but can also separate groups of semantically related concepts in the latent space. After that, we discuss the advantages offered by our method with quantitative results and intuitive examples (Sec. IV-D) compared with baselines, including the CW module and ablated versions of our method.
### _Experimental Setting_
#### Iv-A1 Data Preparation
In this work, we use a newly collected and curated Agri-ImageNet dataset to evaluate the proposed HaST-CW model. In total, 9173 images from 21 classes are used in our experiments. Each image is labeled with the class at the highest possible hierarchy level. For example, an image of Melrose apple will be labeled as "Melrose" rather than the superclass "Apple". Then we divide images per class into three parts by 60%/20%/20% for a standardized
training/validation/test splitting. Because the resolution of the original images can range from 300 to 5000, we adopt the following steps to normalize the image data: 1) we first lock aspect ratio and resize the images to make the short edge to be 256; 2) During each training epoch, the images in the training and validation datasets are randomly cropped into 224x224; 3) During testing process, images in the test dataset are center cropped to be of size 224x224; 4) After cropping, the pixel values of images are normalized to [0,1]. Then, the whole training dataset is divided into two parts (\(\mathcal{D}_{T}\) and \(\mathcal{D}_{C}\) in Algorithm 2). \(\mathcal{D}_{C}\) is the concept dataset used to update the matrix \(\mathbf{Q}\) in the second stage (Eq. (4)). It is created by randomly selecting 64 images from each class in the training dataset. The remaining images in the training dataset \(\mathcal{D}_{T}\) are used in the first stage to train the model parameters (Eq. (3)).
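The normalization steps 1)-4) can be approximated with standard torchvision transforms, as in the sketch below; this is not the authors' exact implementation.

```python
from torchvision import transforms

# Training/validation: short edge to 256, random 224x224 crop, pixel values scaled to [0, 1].
train_transform = transforms.Compose([
    transforms.Resize(256),        # keeps the aspect ratio, resizes the short edge to 256
    transforms.RandomCrop(224),
    transforms.ToTensor(),         # converts to a float tensor in [0, 1]
])

# Testing: short edge to 256, center 224x224 crop, pixel values scaled to [0, 1].
test_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
```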
#### Iv-A2 Model Setting
In this work, we use several ResNet structures [47] to extract features from images, including ResNet18 and ResNet50. During the training process, the two-stage training scheme adopts a 30-to-1 ratio to alternately train the whole framework. In this case, after 30 mini-batches of continuous training, the model will pause and the rotation orthogonal matrix \(\mathbf{Q}\) will be optimized at the next mini-batch. The two hyper-parameters \(\alpha\) and \(\beta\) in the SC loss are set to 1.0. The Adam optimizer is used to train the whole model with a learning rate of 0.1, a batch size of 64, a weight decay of 0.01, and a momentum rate of 0.9.
### _Visualization of Semantic Map_
To illustrate the learned semantic hierarchical structure, we show the representations extracted from the latent hidden layer of all the samples in Figure 4. For better visualization, we use Uniform Manifold Approximation and Projection (UMAP) [48] to project the representations to a two-dimensional space. All the images are color coded using the 17 sub-concepts which are defined on the left of Figure 4. The top panel shows the result using CW method. In general, all the concepts are assembled as small groups, but neither semantic relations nor hierarchical structures have been learned. We highlight the super-concept of "Weed" (black) and three sub-concepts ( "Apple Golden" - green, "Apple Fuji" - red and "Apple Melrose" - blue) in the right column. We can see that the three types of apple (sub-concepts) are evenly distributed along with other fruits samples. The bottom panel shows our HaST-CW results. All the different concepts successfully keep their distinct cluster patterns as CW result. After our two-stage training process to instill the semantic and hierarchical knowledge, the three types of apple images have been pulled together and form a new concept ("Apple" with orange circle) at a higher level. Moreover, the newly learned concept of "Apple" simultaneously possesses sufficient distance to "Weed" (different super-concept) and maintains relatively close relations to "Strawberry", "Orange", "Mango" as well as other types of "Fruit". This result demonstrates the effectiveness of our hierarchical semantic concept learning framework, without negatively affecting the overall classification performance.
### _Efficiency and Accuracy of Concept Alignment_
In this section, we compare the learning efficiency and accuracy of the proposed HaST-CW with that of the conventional CW method. We track the alignment between image representations and their corresponding concepts at each layer. Specifically, we randomly select six concepts, and for each concept we sort and select the top five images whose representations show the strongest activation at the corresponding concept axis. We show the results at both shallow and deep layers (layer 4 vs. layer 8) in Figure 5. From the results of layer 4 (the left column) we can see that most of the top five images obtained by conventional CW (the rows marked by
green box) are mismatched with the corresponding concepts. For example, the five images under the concept of "Apple-Melrose" obtained by CW are from the "Weed" class. The five images under the concept of "Snake Weed" are actually from other subclass of "Weed". Moreover, this situation continues in the following layers and has not been changed until layer 8. On the contrary, with the help of our designed semantic constraint loss, our HaST-CW (the rows marked by orange boxes) can learn the intrinsic concept faster and achieves the best performance at an earlier training stage (e.g., at a shallow layer). This result demonstrates that by paralleling multiple HCW layers the proposed HaST-CW model can capture the high-level features more efficiently.
To further demonstrate the alignment between images and the corresponding concepts, we project each image in the test dataset into a latent space where each concept can be represented by an axis. To visualize the alignments at different concept hierarchies (Figure 2), we show three pairs of concepts which belong to different hierarchical levels as examples: "Apple-Melrose"-"Apple-Fuji" is from hierarchy 3 (H-3), "Snake Weed"-"Parkinsonia" is from hierarchy 2 (H-2), and "Weed"-"Apple" crosses hierarchies 1 and 2 ("Weed": H-1, "Apple": H-2). Within each concept pair, a two-dimensional space has been built by taking the two concepts as axes. Thus, each image can be mapped into the space by calculating the similarity between image representation and the two concept representations. The results are shown in Figure 6. Different rows correspond to different methods and the concept axes (space) are defined at the bottom.
The first column of Figure 6 shows the data distribution in the two-dimensional space of "Apple-Melrose"-"Apple-Fuji" concept pair. The images belonging to Apple-Melrose class should have the highest similarity with the concept of "Apple-Melrose", and thereby they should be located at the right-bottom corner. Similarly, the images of Apple-Fuji class should be located at the left-top corner. The other images should distribute in the space according to the similarity with the two concepts. For example, compared to images of fruit-related classes, images of weed-related classes will have lower semantic similarity with the two concepts, so they should locate near the origin point (left-bottom corner). As shown in the first column, the two models which adopt the HaST-CW method (the second and third rows) can better follow the above-mentioned patterns. While in the CW model (the first row), nearly all the images are gathered at the right-bottom corner. This may be due to the high similarity between the two concepts considered, since they share the same super-class of "Apple". As a result, CW model may be limited in distinguishing different classes with high semantic similarity. A similar situation happens in the second column with the concept pair of "Snake Weed"-"Parkinsonia". These results suggest that compared to CW method, HaST-CW can better capture the subtle differences of semantic-related classes.
The third column shows the results of the concept pair of two super-classes: "Weed" and "Apple". As each super-class concept contains multiple sub-classes, the intra-class variability is greater. Our proposed HaST-CW, together with the SC loss (the third row), can effectively capture the common visual features and project the "Weed" and "Apple" images to the left-top and right-bottom, respectively. At the same time, the images belonging to different sub-classes under "Weed" and "Apple" are assembled as blocks instead of being scattered along the diagonal line. In the other two methods, especially in the CW method (the first row), the images of the "Weed" class spread out over a wide range along the vertical axis. This result suggests that the proposed HaST-CW with SC loss can effectively model both the inter- and intra-class similarity.

Fig. 4: UMAP visualization of the latent space embedding with Agri-ImageNet images, colored according to the legend of image labels on the left. The top panel shows the results of the CW method and we highlight the super-concept “Weed” (black) and three sub-concepts. As shown in the bottom panel, we apply the same rules to the output of HaST-CW and visualize the results. In addition, we draw an orange circle that encapsulates three types of apples to represent the super-concept “Apple”.

Fig. 5: Top 5 activation images of each concept. The image panel is divided into two sets of columns: the left set of columns contains the results of layer 4 (a shallow layer), whereas the right set of columns holds the results of layer 8 (a deeper layer). Each concept covers two rows that correspond to the results of the conventional CW (marked by green boxes) and the proposed HaST-CW (marked by orange boxes), respectively.
### _Interpretable Image Classification_
In this section, we compare the classification performance of the proposed HaST-CW method and the SC loss function with the conventional CW method using different backbones: ResNet18 and ResNet50. The results are summarized in Table I. Different rows correspond to different model settings. Within each model setting, we repeat the experiments five times to reduce the effect of random noise. The mean and variance of accuracy (ACC.) are reported in the fourth column. From the results, we can see that the classification performance of the full HaST-CW model with the SC loss is slightly better than that of the other three model settings. This result indicates that the proposed HaST-CW model can improve the interpretability without hurting predictive performance.
To track and visualize the classification process, we randomly select two images from Apple-Melrose class and Snake Weed class. The activation values between each image with the six relevant concepts are calculated and normalized to [0, 1]. The images, concepts and activation values are organized into a hierarchical activation tree. The results are shown in Figure 7. We could observe that the activation values of each image correctly represent the semantic relationship between the images and the concepts. For example, in Figure 7 (a), the image located at the root is from Snake Weed class which is a subclass of Weed. The activation values of the image are consistent with this relationship and possess the highest activation values on the two concepts - "Weed" and "Snake Weed".
## V Conclusion and Future Work
In this study, we propose a new HaST-CW and demonstrate its superiority over Concept Whitening [34]. HaST-CW decorrelates representations in the latent space and aligns concepts with corresponding dimensions. In addition, it correctly groups concepts at different granularity levels in the latent space and preserves hierarchical structures of concepts of interest. By doing so, we can interpret concepts better and observe the semantic relationships among concepts.
We believe there are many possibilities for future work. One promising direction is automatically learning concepts from data. In this scenario, we can jointly learn possible concepts from common abstract features among images and how to represent these learned concepts in the latent space.
Fig. 6: Data distribution in the concept latent space. Three pairs of concepts corresponding to different semantic hierarchy levels are selected. For each concept pair, a two-dimensional space is built by taking the concepts as axes. To visualize the alignments between images and the concepts, the images are projected into the two-dimensional space by similarity values between image representations and the two concept representations. Different rows in the figure panel correspond to different methods and the concept axes (space) are defined at the bottom.
Fig. 7: Hierarchical activation tree. We randomly select two images from the Apple-Melrose class and the Snake Weed class. For each image, activation values corresponding to the 6 concepts are calculated and normalized to [0, 1]. The highest activation values are highlighted with red along the hierarchical path.
For example, it might be possible to develop unsupervised or weakly-supervised methods to automatically learn the concept tree from data. By jointly learning concepts, their representations, and relations, the model may discover more data-driven semantic structures.
HaST-CW can also be extended with post-hoc interpretability strategies (such as saliency-based methods that highlight focused areas used for classification). Such explanations at the concept level can provide a more global view of model behaviors.
In addition, while this work focuses on the natural image domain, the idea of leveraging hierarchical knowledge to guide representation learning is generalizable to other domains such as natural language processing [22, 49, 50] and medical image analysis [51, 52, 53, 54, 55, 56, 57, 58, 59]. Exploring knowledge-infused learning in different domains [60, 61, 62, 63, 64, 65, 66] and tasks [67, 68, 69, 70, 71], including innovative applications [72, 73, 74], is an interesting future direction.
In conclusion, as deep learning models become increasingly complex, model interpretability is crucial for understanding behaviors, gaining trust, and enabling human-AI collaboration. Our work complements previous work and lays a solid foundation for further exploration.
|
2310.16388 | Deepfake Detection: Leveraging the Power of 2D and 3D CNN Ensembles | In the dynamic realm of deepfake detection, this work presents an innovative
approach to validate video content. The methodology blends advanced
2-dimensional and 3-dimensional Convolutional Neural Networks. The 3D model is
uniquely tailored to capture spatiotemporal features via sliding filters,
extending through both spatial and temporal dimensions. This configuration
enables nuanced pattern recognition in pixel arrangement and temporal evolution
across frames. Simultaneously, the 2D model leverages EfficientNet
architecture, harnessing auto-scaling in Convolutional Neural Networks.
Notably, this ensemble integrates Voting Ensembles and Adaptive Weighted
Ensembling. Strategic prioritization of the 3-dimensional model's output
capitalizes on its exceptional spatio-temporal feature extraction. Experimental
validation underscores the effectiveness of this strategy, showcasing its
potential in countering deepfake generation's deceptive practices. | Aagam Bakliwal, Amit D. Joshi | 2023-10-25T06:00:37Z | http://arxiv.org/abs/2310.16388v1 | # Deepfake Detection: Leveraging the Power of 2D and 3D CNN Ensembles
###### Abstract
In the dynamic realm of deepfake detection, this work presents an innovative approach to validate video content. The methodology blends advanced 2-dimensional and 3-dimensional Convolutional Neural Networks. The 3D model is uniquely tailored to capture spatiotemporal features via sliding filters, extending through both spatial and temporal dimensions. This configuration enables nuanced pattern recognition in pixel arrangement and temporal evolution across frames. Simultaneously, the 2D model leverages EfficientNet architecture, harnessing auto-scaling in Convolutional Neural Networks. Notably, this ensemble integrates Voting Ensembles and Adaptive Weighted Ensembling. Strategic prioritization of the 3-dimensional model's output capitalizes on its exceptional spatio-temporal feature extraction. Experimental validation underscores the effectiveness of this strategy, showcasing its potential in countering deepfake generation's deceptive practices.
Deepfake detection, Ensemble models, Convolutional Neural Network, Spatiotemporal Features, Voting Ensembles
## I Introduction
The accelerating pace of digital innovation has propelled us into an age where content generation and distribution are more powerful than ever. While these advancements have enriched the creative landscape and expanded avenues for communication, they have also given rise to issues that jeopardize the integrity of factual information. Chief among these concerns is the advent of deepfake videos--hyper-realistic (see Fig. 1) yet entirely fabricated visual narratives capable of causing misinformation, character assassination, and a host of other societal perils [1][2].
With the increasing sophistication of deepfakes, the demand for robust detection techniques has never been higher. Convolutional Neural Networks (CNNs), particularly 2D variants, have long been the cornerstone of image recognition tasks but are increasingly viewed as insufficient for capturing the temporal nuances inherent in video data. This realization has catalyzed the development of 3D CNNs, specifically designed to integrate both spatial and temporal dimensions, making them ideal for video-based analysis [3].
While various methods based on fingerprinting and chrominance properties have shown success in identifying GAN-based deepfakes [4][5][6], this research aspires to build a universally applicable detection model. To this end, we introduce a cutting-edge ensemble model that synergizes the strengths of both 2D and 3D CNNs. The proposed approach melds the efficient spatial feature extraction capabilities of 2D CNNs--particularly focusing on the EfficientNet architecture--with the spatiotemporal capabilities of 3D CNNs. By doing so, this research aims for a more comprehensive analysis of video data that can unearth subtle patterns from both individual frames and their temporal relationships. In addition, this model incorporates advanced ensemble techniques such as Voting Ensembles [7] and Adaptive Weighted Ensembling [8] to optimize detection accuracy.
This work serves as a comprehensive guide for those looking to navigate the multifaceted challenges of deepfake detection which aims to strengthen the defenses against digital falsehoods and ensure that the digital landscape remains a bastion of veracity.
The remainder of this paper is organized as follows: Section II provides an overview of related work in the domain. The methodology adopted for deepfake detection is elaborated in Section III. Section IV describes the experiments conducted, followed by a detailed discussion of results in Section V. The paper concludes with insights and future directions in Section VI.
## II Related Work
The field of deepfake detection has been enriched with numerous techniques, from both deep learning and traditional realms. While various non-deep learning approaches have been widely explored in past studies, deepfakes generated by Generative Adversarial Networks (GANs) have particularly dominated this arena, and many detectors analyze the artifacts such deepfakes produce, aiming to spot these minute inconsistencies. Studies have revealed unique chrominance properties of deepfake images distinct from camera-produced images [4][5]. Their analysis, especially in the chrominance components and the residual domain, can differentiate between GAN-produced imagery and authentic camera images. Furthermore, peculiarities in deepfake outputs were studied, and different GAN sources were identified [6]. This underscores the significance of anomalies in images produced by GANs for deepfake detection. However, these methods primarily target GAN-generated deepfakes, overlooking other types of deepfakes, thereby compromising their robustness.

Fig. 1: Paired fake samples generated from original pristine faces in the DFDC dataset
Support Vector Machines (SVMs) have also found extensive application in deepfake detection. For instance, facial geometry has been employed alongside SVM classification to detect anomalies like unnatural reflections and detailed discrepancies in eyes and teeth. Using distinct feature vector sets, this line of work exposed deepfakes' challenges in achieving lifelike outputs [9]. Two other studies utilized SVM-based classification, the former leveraging a pixel anomaly-based feature extraction technique and the latter employing the Speeded Up Robust Features (SURF) algorithm with the Bag of Words (BoW) model for detecting face swaps [10][11].
Frequency-based techniques have been another promising direction. A compression-oriented approach using a self-supervised decoupling network [12] and a method harnessing a ResNet-based Pixel CNN [13] have been proposed. However, their performance fluctuated with different compression rates, indicating a potential challenge in maintaining consistent detection accuracy across various data settings. The potential of the phase spectrum, particularly its sensitivity to upsampling, was explored using the Discrete Fourier Transform (DFT) [14]. Several studies have also explored the utility of frequency-aware frameworks and high-frequency noise features for the detection of deepfakes [15][16]. While these methods have demonstrated efficacy in well-lit scenarios, they exhibit sensitivity to environmental noise that adversely impacts the high-frequency components crucial for detection.
Various innovative deep learning techniques have also been proposed for deepfake detection. A mouth-centric approach was the focus of one study, although it encountered difficulties with videos where the mouth was obscured [17]. Another research effort highlighted the employment of biological signals [18]. Principal component analysis (PCA) and Convolutional LSTM Residual Network (CLRNet) were explored in other studies [19][20]. Yet another study combined eye sequences with a hybrid network that integrates VGG and LSTM architectures, although it faced challenges related to the natural variability in human eye blinking [21].
Additionally, methods centered around neural networks have also been developed. FakeSpotter was introduced in one such study; it used a shallow-layered architecture to scrutinize neuron behavior and outperformed other detectors in terms of accuracy [22]. However, its shallow architecture may limit its ability to capture more complex features of deepfakes. Other research made use of pre-trained networks, with one specifically focusing on the XceptionNet and distortions in facial regions in deepfakes [23][24]. Inflated 3D ConvNets (I3D) [25] have also been used, which capture important spatio-temporal features of videos that can be used to identify deepfakes.
Building upon the methodologies presented in [26] and [25], the aim of this work is to formulate a more robust architecture for deepfake detection. The model is engineered for resiliency, capable of identifying a wide array of deepfakes beyond those generated through GANs or other deep learning techniques. By integrating a 3D CNN into our framework, we strive to enhance the overall performance of deepfake detection.
## III Proposed Method
This section elucidates our approach for deepfake detection, rooted in the principle of ensembling. The power of model ensembling, renowned for its potential to amplify prediction performance, is harnessed in this method. Our central aim is to explore the feasibility and methodologies of training diverse CNN-based classifiers--encompassing both 2D and 3D - to extract complementary high-level semantic information. Such harmonized information sources are anticipated to synergistically enhance the ensemble's performance for deepfake detection. Concurrently, we place an emphasis on devising a model that remains both nimble in design and straightforward to train.
### _Attention2D_
This method's 2D model capitalizes on the strengths of the EfficientNet model series, a groundbreaking approach for CNN automatic scaling, known for its high accuracy and efficiency [15]. We specifically selected the EfficientNetB4 architecture due to its optimal balance between model parameters, computational cost, and classification process, as highlighted in [15]. In comparison to XceptionNet, a face manipulation detection standard presented in [27], EfficientNetB4 achieved higher top-1 accuracy with 4 million fewer parameters and a reduction of 4.2 Billion FLOPS. This led to the preference for the EfficientNetB4.
The EfficientNetB4 structure is outlined within the blue block of Fig. 2, consistent with the definitions provided in [15]. The input to the network is squared facial images taken from individual video frames, efficiently extracted through robust face detectors mentioned in [28][29].
Expanding upon this foundation, this method implements modifications drawing inspiration from both the realms of natural language processing and computer vision by incorporating attention mechanisms. Influential works such as the residual attention networks [30] and transformers [16] have showcased the ability of neural networks to focus on the most pertinent parts of their input, be it imagery or textual sequences. This integrated attention approach merges the inherent attention mechanism of EfficientNet with self-attention techniques from previous studies [31][32]. This procedure is as follows:
1. Extract feature maps, sized 14 x 14 x 112, from EfficientNetB4 up to its fourth MBConv block.
2. Process these feature maps with a singular convolutional layer of kernel size 1, subsequently applying a Sigmoid
activation function to yield a single attention map, as advised in [26].
3. Multiply the resulting attention map element-wise with the feature maps from the designated layer.
For visualization, this attention methodology is depicted in the yellow block of Fig. 2.
This attention mechanism has a two-fold advantage. It not only guides the network to emphasize significant areas of the feature maps but also reveals which parts of the input it deems most valuable. The ensuing attention map can overlay the input, marking areas deemed critical by the network. The attention block's output is then directed to the rest of the EfficientNetB4 layers. The augmented network is termed as Attention2D and undergoes end-to-end training.
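A minimal PyTorch sketch of the attention block described in steps 1-3 is given below; the channel size follows the 14 x 14 x 112 feature maps mentioned above, while how the block is spliced back into EfficientNetB4 is left abstract.

```python
import torch
import torch.nn as nn

class AttentionBlock(nn.Module):
    """Self-attention gate applied to EfficientNetB4 feature maps (steps 1-3)."""

    def __init__(self, channels: int = 112):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)  # single convolution with kernel size 1
        self.sigmoid = nn.Sigmoid()

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, 112, 14, 14) feature maps from the fourth MBConv block
        att = self.sigmoid(self.conv(feats))   # (batch, 1, 14, 14) attention map
        return feats * att                     # element-wise gating, broadcast over channels
```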
### _Ensemble3D_
For each face within the video, this method specifically focuses on the smallest region of interest, ensuring that our model emphasizes facial features that could be indicative of deepfake manipulations. This methodology is particularly essential given that deepfake techniques often exhibit subtle temporal inconsistencies, which might be missed in standard 2D convolution approaches.
The 3D convolution technique stands out from its 2D counterpart primarily because of its capacity to slide across an additional dimension--time (or depth) alongside height and width. While 2D convolutions operate on individual frames, 3D convolutions understand videos as volumetric data, adding the crucial temporal dimension. Consequently, the filters in a 3D convolution evolve to be three-dimensional, thereby allowing the model to recognize patterns that transcend individual frames, capturing anomalies over a sequence of frames, and offering a more holistic view of the video [33].
This method leverages four distinct 3D convolution models. The first is the I3D model [25], an innovation that ingeniously employs sets of RGB frames as input. This model is a modification of the acclaimed Inception model, with its 2D convolutional layers transformed into 3D to tap into the potential of spatio-temporal modeling. To kickstart training, the pre-trained weights of the Inception model trained on ImageNet are inflated. Prior research has substantiated that such weight inflation provides a significant boost to the performance of 3D models [34].
Supplementing the I3D model, three other models are deployed: 3D ResNet34, MC3 and R(2+1)D [35]. To fortify the models and ensure they don't develop a dependency on specific facial features or regions, the CutMix data augmentation technique is employed with I3D and R(2+1)D. CutMix's prowess lies in its ability to compel models to focus on diverse regions of the input, thereby fostering a more robust recognition system [36].
Finally, rather than depending on a singular model, this method amalgamates the strengths of all trained models into an ensemble. This approach capitalizes on the unique strengths and mitigates the individual weaknesses of each model. The predictions of this ensemble are then averaged across all faces within a video, ensuring a comprehensive and reliable evaluation.
### _Ensembling Techniques_
In this approach, we advocate for a synergistic fusion of Voting Ensembles [7] and Adaptive Weighted Ensembling [8]. This blend aims to harness the strengths of individual ensemble techniques for robust prediction performance. Within this ensemble framework, the results derived from the 3D model are assigned a higher weight, recognizing its inherent ability to discern spatio-temporal features, which are paramount in video-based deepfake detection. Delving deeper into the internal workings of both our 3D and 2D models, Voting Ensembles are employed as the cornerstone mechanism to aggregate and finalize predictions. This ensemble-within-an-ensemble strategy not only fortifies the prediction robustness but also ensures that the intricacies of the data are well-captured and represented in the final outcomes.
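A minimal sketch of how the per-video scores of the two branches could be fused is shown below; the specific weights are illustrative assumptions, since the text only states that the 3D branch receives the higher weight.

```python
import torch

def soft_vote(member_scores: torch.Tensor) -> torch.Tensor:
    """Soft voting inside a branch: average the fake probabilities of its member models.

    member_scores: (num_models, num_videos) tensor of per-video probabilities.
    """
    return member_scores.mean(dim=0)

def fuse_branches(scores_2d: torch.Tensor, scores_3d: torch.Tensor,
                  w_2d: float = 0.4, w_3d: float = 0.6) -> torch.Tensor:
    """Weighted fusion of the Attention2D and Ensemble3D per-video probabilities.

    The 3D branch is weighted more heavily for its spatio-temporal modelling.
    """
    return w_2d * scores_2d + w_3d * scores_3d
```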
The proposed workflow is shown in Fig. 2.
## IV Experiments
This section reports all the details regarding the datasets used and the experimental setup.
### _Dataset_
The method under consideration is evaluated using the DFDC dataset, which was specifically released for a corresponding Kaggle competition. Comprising over 119,000 video sequences designed for this challenge, the dataset includes both genuine and manipulated videos. Authentic videos feature a diverse range of actors in terms of gender, skin tone, age, and so on, and are recorded against a variety of backgrounds to add visual complexity. On the other hand, the manipulated videos are generated from these real videos through various DeepFake techniques, such as multiple face-swapping algorithms. Each sequence has an approximate length of 300 frames. Notably, the dataset is heavily skewed towards fake videos, containing around 100,000 fake sequences and 19,000 real ones.
### _Network Comparison_
The experimental setup considers a variety of neural networks for benchmarking:
1. As the top-performing model cited in [19], XceptionNet serves as an essential point of reference for our tests.
2. EfficientNetB4 stands out for its superior accuracy and efficiency relative to other methodologies, as documented in [15].
3. EfficientNetB4Att excels as the highest-performing model in the scope of this research, as mentioned in [26].
4. A streamlined 3D convolutional neural network specifically designed for deepfake identification is introduced in [37].
For each network, individual training and evaluation are conducted using the DFDC dataset.
### _Setup_
From each video, 10 frames are selected. This choice is grounded in the understanding that increasing the frame count per video can lead to overfitting, and any additional frames don't significantly enhance performance as per [26].
The analysis primarily centers on areas featuring the subject's face. As a preliminary step, the BlazeFace extractor [29] is employed to isolate faces from each frame, which proves to be speedier than the MTCNN detector [28] utilized in [19]. In cases where multiple faces are detected, the face with the highest confidence rating is retained. The subsequent input to the neural network is a squared color image with dimensions of 224 x 224 pixels.
To bolster the resilience of the models during training and validation, data augmentation techniques are implemented on the extracted faces. Specifically, random downscaling, horizontal flipping, adjustments in brightness and contrast, hue saturation, shearing, rotations, noise addition, and normalization are employed by leveraging Albumentation [38] for these tasks.
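A minimal Albumentations pipeline covering the listed transformations might look as follows; the probabilities and parameter ranges are illustrative guesses rather than the settings used in these experiments:

```python
import albumentations as A

train_transform = A.Compose([
    A.Downscale(scale_min=0.5, scale_max=0.9, p=0.3),              # random downscaling
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.5),                             # brightness/contrast jitter
    A.HueSaturationValue(p=0.3),                                   # hue saturation
    A.ShiftScaleRotate(scale_limit=0.1, rotate_limit=10, p=0.3),   # rotations; shear could be added via A.Affine
    A.GaussNoise(p=0.2),                                           # noise addition
    A.Normalize(),                                                 # ImageNet mean/std by default
])

# Usage on an extracted 224 x 224 face crop (uint8 RGB array `face`):
# augmented_face = train_transform(image=face)["image"]
```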
For the training phase, the Adam optimizer [39] is chosen with tailored hyperparameters: \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\epsilon=10^{-8}\), with an initial learning rate at \(10^{-5}\). This combination of optimizer and hyperparameters plays a pivotal role in the successful training of the models.
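A minimal PyTorch sketch of this optimisation setup is shown below; only the optimizer hyperparameters are taken from the text, while the stand-in model and random mini-batch are placeholders:

```python
import torch
import torch.nn as nn

model = nn.Linear(224 * 224 * 3, 1)                 # stand-in for the actual 2D/3D backbone
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5,
                             betas=(0.9, 0.999), eps=1e-8)
criterion = nn.BCEWithLogitsLoss()                  # LogLoss applied to raw (pre-sigmoid) scores

# One illustrative optimisation step on a random mini-batch.
x = torch.randn(8, 224 * 224 * 3)
y = torch.randint(0, 2, (8, 1)).float()             # 0 = real, 1 = fake
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```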
### _Metrics_
#### Iii-D1 Log Loss
Given an input sample face, the network produces a score, \(\hat{y}_{i}\), correlated with the face. It's important to note that this score hasn't been subjected to a Sigmoid activation function. The process of weight updating employs the widely-accepted LogLoss function [40]:
\[\mathcal{L}_{\text{log}}=-\frac{1}{N}\sum_{i=1}^{N}\left[y_{i}\log(S(\hat{y}_ {i}))+(1-y_{i})\log(1-S(\hat{y}_{i}))\right] \tag{1}\]
In the expression labeled as 1, \(\hat{y}_{i}\) signifies the score attributed to the face indexed by \(i\), and \(y_{i}\in\{0,1\}\) indicates the label assigned to that specific face. To elaborate, a label of 0 pertains to faces derived from authentic videos, while a label of 1 is linked to faces sourced from manipulated videos. \(N\) corresponds to the total count of faces utilized in the training process, and \(S(\cdot)\) represents the Sigmoid function.
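For illustration, Equation 1 can be reproduced directly from the raw scores; the numbers below are invented:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_loss(raw_scores, labels):
    """Eq. (1): binary cross-entropy on the sigmoid of the raw network scores."""
    p = sigmoid(np.asarray(raw_scores, dtype=float))
    y = np.asarray(labels, dtype=float)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

print(log_loss([4.0, -3.0, 5.0], [1, 0, 1]))   # confident, correct predictions -> ~0.02
```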
#### Iii-D2 Area Under the Curve (AUC)
AUC is another vital indicator for assessing how well our deepfake detection models function. In contrast to LogLoss, which calculates the degree of divergence between predicted values and actual labels, the AUC quantifies the model's competence in distinguishing real faces from manipulated ones. With a scale that varies between 0 and 1, an AUC score of 0.5 indicates that the model's performance is equivalent to random chance, whereas a score of 1 signifies flawless categorization.
Let \(T=\{t_{1},t_{2},\ldots,t_{N}\}\) represent the true labels and \(P=\{p_{1},p_{2},\ldots,p_{N}\}\) be the predicted probabilities after applying the Sigmoid function \(S(\hat{y}_{i})\) to the raw scores \(\hat{y}_{i}\). The AUC [41] can be computed as follows:
\[\text{AUC}=\frac{1}{N_{0}\times N_{1}}\sum_{i=1}^{N}\sum_{j=1}^{N}\mathbb{I}(t_{i}<t_{j})\,\mathbb{I}(p_{i}<p_{j}) \tag{2}\]
In the equation labeled as 2, \(N_{0}\) and \(N_{1}\) denote the count of authentic and altered facial instances, respectively. The symbol \(\mathbb{I}(x)\) represents the indicator function, yielding a value of 1 when condition \(x\) is satisfied and 0 otherwise. This metric holds an advantage in its resistance to alterations in thresholds and its ability to offer a comprehensive assessment of the model's effectiveness across various classification thresholds.
Fig. 2: Proposed Model Workflow
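To make Equation 2 concrete, it can be evaluated as a pairwise count over (real, fake) pairs; the following NumPy sketch uses made-up labels and probabilities and should agree with standard routines such as sklearn.metrics.roc_auc_score, up to the handling of ties:

```python
import numpy as np

def pairwise_auc(labels, probs):
    """Eq. (2): fraction of (real, fake) pairs in which the manipulated face
    (label 1) receives a higher predicted probability than the authentic one."""
    t = np.asarray(labels)
    p = np.asarray(probs, dtype=float)
    real, fake = p[t == 0], p[t == 1]
    wins = (fake[:, None] > real[None, :]).sum()   # correctly ranked pairs
    return wins / (len(real) * len(fake))

labels = [0, 0, 1, 1, 1]                  # 0 = real, 1 = fake (toy example)
probs  = [0.10, 0.40, 0.35, 0.80, 0.90]   # sigmoid outputs
print(pairwise_auc(labels, probs))        # 5 correctly ranked pairs out of 6 ~ 0.833
```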
## V Results and Discussion
This section presents a comprehensive overview of the results obtained during experimentation.
### _Attention2D_
Attention2D produces an attention map, similar to [26]. The output from the Sigmoid layer within the attention block is extracted, producing a 2D map of size 14 x 14, which is then upscaled to 224 x 224. The proposed attention mechanism adeptly accentuates intricate facial features like eyes, mouth, nose, and ears. Conversely, regions with subtle gradients which don't offer much valuable information to the network are attenuated. As multiple studies, including [12], indicate that deepfake generation often leaves artifacts around facial features, our model selectively concentrates on these pivotal areas. This selective focus curtails redundant computations, enhancing the model's efficiency.
### _Detection Capability_
This section encapsulates the findings from the foundational XceptionNet network juxtaposed with the proposed model, drawing comparisons with other studies in the field as well.
In Table I, the performance metrics across various model architectures are delineated. For clarification: XN symbolizes XceptionNet, E4 represents EfficientNetB4, E4A is for EfficientNetB4Att, E4AS denotes EfficientNetB4AttST [26]. 3D signifies the Ensemble3D, and 2D stands for Attention2D.
A discerning look at these outcomes reveals the inherent benefits of model ensembling with regard to performance metrics. As anticipated, optimal results often emanate from integrating two or more networks, implying that such combinations bolster both deepfake detection accuracy, which is measured via AUC, and detection quality, which is assessed through LogLoss. Notably, for the dataset in consideration, both LogLoss and AUC metrics consistently surpass the baseline values. The proposed method's Ensemble3D and Attention2D models marginally outpace other individual models, and some of their ensembles as well. Crucially, the fusion of the proposed method's 2D and 3D models eclipses prior models, testifying to its robust nature.
However, every model has its limitations. This model may require careful integration of both 2D and 3D components, making the training process potentially more complex than single-model systems. Moreover, while the ensembling approach reduces overfitting risks and captures diverse patterns, it also demands more computational resources and may be challenging to deploy in real-time scenarios or on devices with limited processing capabilities. The model's dependency on voting ensembles and weighted ensembling might also make it susceptible to potential inefficiencies if one component model underperforms.
## VI Conclusion and Future Scope
This work puts forth an advanced system for deepfake detection that synergistically combines the capabilities of 2D and 3D CNNs. The proposed ensembling strategy focuses on harnessing the robust spatiotemporal features that 3D models are proficient at capturing, giving them higher weight in the ensemble. This focus amplifies the system's overall performance. Additionally, attention mechanisms are integrated into the 2D EfficientNet architecture, contributing not just to enhanced predictive power but also to greater model interpretability. The empirical evaluation reveals that the proposed hybrid approach significantly outpaces existing benchmarks. It achieves this by harmonizing disparate models and leveraging their complementary functionalities. It is noticeable that the performance experiences a significant enhancement when placing greater emphasis on 3D models, owing to their proficiency in handling intricate spatiotemporal relationships.
As the landscape of deepfake technology continues to grow more complex, it becomes imperative that existing detection methodologies adapt and evolve concurrently. Looking ahead, future studies could focus on incorporating even more advanced attention mechanisms, exploring a wider variety of models for ensembling, or venturing into the realm of real-time deepfake detection applications. In summary, this work offers a balanced, efficient, and interpretable avenue for tackling the evolving threat posed by deepfakes.
|
2308.05800 | JWST observations of galaxy damping wings during reionization
interpreted with cosmological simulations | Spectra of the highest redshift galaxies taken with JWST are now allowing us
to see into the heart of the reionization epoch. Many of these observed
galaxies exhibit strong damping wing absorption redward of their Lyman-$\alpha$
emission. These observations have been used to measure the redshift evolution
of the neutral fraction of the intergalactic medium and sizes of ionized
bubbles. However, these estimates have been made using a simple analytic model
for the intergalactic damping wing. We explore the recent observations with
models of inhomogeneous reionization from the Sherwood-Relics simulation suite.
We carry out a comparison between the damping wings calculated from the
simulations and from the analytic model. We find that although the agreement is
good on the red side of the Lyman-$\alpha$ emission, there is a discrepancy on
the blue side due to residual neutral hydrogen present in the simulations,
which saturates the intergalactic absorption. For this reason, we find that it
is difficult to reproduce the claimed observations of large bubble sizes at z ~
7, which are driven by a detection of transmitted flux blueward of the
Lyman-$\alpha$ emission. We suggest instead that the observations can be
explained by a model with smaller ionized bubbles and larger intrinsic
Lyman-$\alpha$ emission from the host galaxy. | Laura C. Keating, James S. Bolton, Fergus Cullen, Martin G. Haehnelt, Ewald Puchwein, Girish Kulkarni | 2023-08-10T18:00:04Z | http://arxiv.org/abs/2308.05800v1 | JWST observations of galaxy damping wings during reionization interpreted with cosmological simulations
###### Abstract
Spectra of the highest redshift galaxies taken with JWST are now allowing us to see into the heart of the reionization epoch. Many of these observed galaxies exhibit strong damping wing absorption redward of their Lyman-\(\alpha\) emission. These observations have been used to measure the redshift evolution of the neutral fraction of the intergalactic medium and sizes of ionized bubbles. However, these estimates have been made using a simple analytic model for the intergalactic damping wing. We explore the recent observations with models of inhomogeneous reionization from the Sherwood-Relics simulation suite. We carry out a comparison between the damping wings calculated from the simulations and from the analytic model. We find that although the agreement is good on the red side of the Lyman-\(\alpha\) emission, there is a discrepancy on the blue side due to residual neutral hydrogen present in the simulations, which saturates the intergalactic absorption. For this reason, we find that it is difficult to reproduce the claimed observations of large bubble sizes at \(z\sim 7\), which are driven by a detection of transmitted flux blueward of the Lyman-\(\alpha\) emission. We suggest instead that the observations can be explained by a model with smaller ionized bubbles and larger intrinsic Lyman-\(\alpha\) emission from the host galaxy.
keywords: dark ages, reionisation, first stars - galaxies: high-redshift - intergalactic medium - methods: numerical
## 1 Introduction
Absorption lines observed in the spectra of luminous objects, blueward of the Lyman-\(\alpha\) (Ly\(\alpha\)) wavelength at the redshift of the source, indicate the presence of intergalactic neutral hydrogen. At redshifts approaching the epoch of reionization, this absorption is observed to saturate and regions with no significant transmission are detected on scales of tens of comoving Mpc (Becker et al., 2015; Zhu et al., 2021). However, it is difficult to use these observations to infer the presence of completely neutral gas, as for gas at the mean cosmic density, complete saturation will occur already at a volume-weighted average neutral hydrogen fraction \(\langle x_{\rm HI}\rangle_{\rm v}\sim 10^{-5}\)(McQuinn, 2016).
More compelling evidence for a significantly neutral intergalactic medium (IGM) is an observation of Ly\(\alpha\) absorption that extends to wavelengths redward of the source redshift (Miralda-Escude, 1998). When the diffuse IGM has a high volume filling factor of neutral gas, its optical depth becomes large enough that the absorption profile is dominated by the natural line broadening described by the Lorentzian component of the Voigt profile (Meiksin, 2009). This means that there is a reasonable probability for photons to scatter in the wings of the line, even several thousands of kilometres away from line centre. This results in an absorption profile known as a damping wing. The strength of the damping wing is proportional to the optical depth of the diffuse IGM and hence the volume-averaged neutral fraction (Gunn and Peterson, 1965; Chen, 2023), allowing for constraints on the reionization history from individual bright objects.
Multiple quasars observed at \(z=7\) and above display the signature of IGM damping wings (Mortlock et al., 2011; Banados et al., 2018; Wang et al., 2020; Yang et al., 2020). By comparing the observed damping wings with theoretical models that also take local ionization by the quasar into account, the neutral fraction of the average IGM at the source redshift can be inferred (Bolton et al., 2011; Greig et al., 2017, 2019; Davies et al., 2018). A challenge in these measurements is the determination of the intrinsic Ly\(\alpha\) emission of the quasar, with different reconstructions leading to different constraints on the neutral fraction (e.g., Greig et al., 2022). Despite this, there is consensus from the different analyses of the observed damping wings that reionization is incomplete above \(z=7\)(Fan et al., 2022). The constraints are further limited by the redshifts of the most distant quasars currently known, although this is expected to improve with the Euclid wide survey, with more than \(60\ z>8\) quasars with magnitude \(H<24\) predicted to be discovered (Schindler et al., 2023).
The presence of damping wings in the spectra of long-duration gamma-ray burst (GRB) afterglows can also be used to constrain the timing of reionization (Totani et al., 2006; Chornock et al., 2013; Hartoog et al., 2015). This has some advantages over studies with quasars (Barkana & Loeb, 2004). The spectra of GRB afterglows close to the Ly\(\alpha\) wavelength can be described as a power law, sidestepping the issue of reconstructing an intrinsic emission line profile. The hosts of GRBs are further expected to be star-forming galaxies, which are more numerous than quasars and which likely live in less biased environments of the Universe. The main disadvantage is that these GRB afterglows frequently show evidence for damped Ly\(\alpha\) systems (DLAs) at the redshift of the host galaxy (e.g., Jensen et al., 2001), which will also produce a damping wing that may eclipse the intergalactic signal. There are also relatively few high signal-to-noise GRB afterglow spectra currently available, but detections by future missions are predicted to provide competitive constraints on the evolution of the IGM neutral fraction (Lidz et al., 2021; Tanvir et al., 2021).
The impact of IGM damping wings can also be detected statistically from populations of galaxies. The luminosity function of Ly\(\alpha\) emitting galaxies is observed to decline above \(z=6\) more rapidly than expected from comparison with UV luminosity functions (Konno et al., 2014, 2018), indicating that absorption from the IGM may be obscuring the Ly\(\alpha\) emission of the galaxies. This is complemented by the declining fraction of Lyman-break galaxies that show Ly\(\alpha\) emission above \(z=6\)(Stark et al., 2010; Pentericci et al., 2011, 2014; Schenker et al., 2012, 2014). Through comparison with theoretical models, these observations can be translated into constraints on the progress of reionization (Mason et al., 2018, 2019; Hoag et al., 2019; Jung et al., 2020; Bolan et al., 2022; Wold et al., 2022; Jones et al., 2023). Detections of Ly\(\alpha\) emission in individual galaxies can also be used to estimate the sizes of ionized bubbles surrounding the host galaxies (Mason & Gronke, 2020; Whitler et al., 2023; Wistok et al., 2023).
JWST is now opening up a new window in the study of IGM damping wings in galaxies, with damping wings being routinely observed in the spectra of individual galaxies (Curtis-Lake et al., 2023). By comparing the strength of these damping wings with the analytic model of Miralda-Escude (1998), the spectra can be used to place constraints on the IGM neutral fraction (Curtis-Lake et al., 2023; Hsiao et al., 2023). Umeda et al. (2023) used the same model to fit the strength of damping wings in stacks of galaxy spectra in different redshift bins, placing constraints on the evolution of the volume-weighted average neutral fraction and ionized bubble size out to \(z=12\). They found evidence for an IGM neutral fraction that is increasing with redshift, and that is consistent with empirical models of galaxy-driven reionization (Ishigaki et al., 2018; Finkelstein et al., 2019; Naidu et al., 2020). Interestingly, the bubble sizes that Umeda et al. (2023) measured were towards the larger end of what is predicted by theoretical models (Lu et al., 2023). However, as with the GRB afterglows, the interpretation of these results may be complicated by DLAs associated with the host galaxies. Indeed, Heintz et al. (2023) recently found evidence for three \(z>8.8\) galaxies hosting DLAs with column densities \(N_{\rm HI}>10^{22}\,{\rm cm}^{-2}\).
Interpretation of these observations requires a model that captures all of the relevant physics, such as residual neutral gas in ionized regions of the IGM (Mesinger & Haiman, 2004; Laursen et al., 2011; Bolton & Haehnelt, 2013; Mesinger et al., 2015; Mason & Gronke, 2020), the impact of infalling gas onto the host halo (Dijkstra et al., 2007; Sadoun et al., 2017; Weinberger et al., 2018; Park et al., 2021) and the inhomogeneity of reionization (Mesinger & Furlanetto, 2008; McQuinn et al., 2008; Garel et al., 2021; Gronke et al., 2021; Smith et al., 2022; Qin et al., 2022; Chen, 2023). In this paper, we explore recent JWST observations of IGM damping wings in the context of these issues, analysing damping wings constructed from lines of sight through simulations of inhomogeneous reionization from the Sherwood-Relics simulation suite (Puchwein et al., 2023). Specifically, we critically evaluate the use of the Miralda-Escude (1998) model to constrain reionization with recent JWST observations.
We describe our models of the IGM damping wing in Section 2. In Section 3, we compare the results of the damping wings constructed from the reionization simulations to the commonly used Miralda-Escude (1998) model. Section 4 presents a comparison of our results with damping wings observed in JWST galaxies, focusing specifically on the recent results of Umeda et al. (2023) and Heintz et al. (2023). Finally, in Section 5, we summarise our results and present our conclusions. Throughout the paper, we assume the Planck Collaboration et al. (2014) cosmological parameters, namely \(\Omega_{\rm m}=0.308\), \(\Omega_{\Lambda}=0.692\) and \(h=0.678\).
## 2 Modelling the IGM damping wing
### The Sherwood-Relics simulations
To generate mock IGM damping wings, we analyse simulations from the Sherwood-Relics simulation suite1(Puchwein et al., 2023). These are a set of cosmological hydrodynamical simulations that build upon the original Sherwood suite (Bolton et al., 2017) and which were designed to model the evolution of the IGM during and after the epoch of reionization. The simulations were performed with a modified version of the smoothed particle hydrodynamical code p-gadget-3 (Springel, 2005). The fiducial simulation we analyse has a box size of \(40\,h^{-1}\) cMpc and \(2\times 2048^{3}\) gas and dark matter particles, resulting in particle masses of \(M_{\rm gas}=9.97\times 10^{4}\,h^{-1}\,M_{\odot}\) and \(M_{\rm dm}=5.37\times 10^{5}\,h^{-1}\,M_{\odot}\). The gravitational softening used was \(l_{\rm soft}=0.78\,h^{-1}\) kpc. We further analyse the larger, lower resolution volume with box size \(160\,h^{-1}\) cMpc and \(2\times 2048^{3}\) gas and dark matter particles. This simulation has \(M_{\rm gas}=6.38\times 10^{6}\,h^{-1}\,M_{\odot}\), \(M_{\rm dm}=3.44\times 10^{7}\,h^{-1}\,M_{\odot}\) and \(l_{\rm soft}=3.13\,h^{-1}\) kpc. Star formation in these simulations is treated using a computationally efficient subgrid model, where any gas with temperature \(T<10^{5}\) K and density exceeding 1000 times the cosmic mean density is immediately converted into star particles. Although this approach is highly simplified, it should not affect the properties of the low-density IGM in which we are interested (Viel et al., 2004).
Footnote 1: [https://www.nottingham.ac.uk/astronomy/sherwood-relics](https://www.nottingham.ac.uk/astronomy/sherwood-relics)
To account for the patchy nature of reionization, we utilise a subset of the Sherwood-Relics simulations that were performed with an inhomogeneous UV background. Although it is now becoming feasible to perform fully-coupled radiation-hydrodynamic galaxy formation simulations of cosmic reionization (see Gnedin & Madau, 2022, for a recent review), a different approach has been taken in the Sherwood-Relics project. We give only a brief overview of the method here, but note that it is described in greater detail in Puchwein et al. (2023). We first perform a cosmological simulation with p-gadget-3 using the uniform UV background of Puchwein et al. (2019), saving snapshots of this simulation every 40 Myr. We then map these snapshots onto \(2048^{3}\) Cartesian grids and post-process them with the GPU-powered radiative transfer code ATON (Aubert & Teyssier, 2008, 2010). Multiple iterations of the radiative transfer simulations are performed, varying the redshift evolution of the volume emissivity assumed in the simulation until the desired reionization history is reached. We use a simple model for the sources, such that their emissivity is proportional to their halo mass (Iliev et al., 2006; Chardin et al., 2015). For each of the \(2048^{3}\) cells on the grid, the reionization redshift and
redshift evolution of the amplitude of the UV background are calculated from the outputs of the radiative transfer code. These data are then used as inputs to a new p-gadget-3 simulation, to capture the effect of a spatially fluctuating UV background due to patchy reionization in the hydrodynamic simulation.
The advantages to this hybrid approach are that it is computationally much cheaper than performing a full radiation-hydrodynamic simulation, while still allowing us to track the hydrodynamic response of the gas to reionization. We can also guarantee the reionization history of our final patchy simulation, which can be difficult to calibrate in self-consistent galaxy formation simulations of reionization. There are of course also some disadvantages to this method, such as the finite spatial and temporal resolution of our ionizing background, but we do not expect this to have a significant effect on the diffuse intergalactic gas that is our primary focus here.
Our fiducial reionization history is shown in Figure 1 (blue line) and compared against different probes of the IGM neutral fraction. Our preferred reionization history is a model where reionization ends at \(z=5.3\) (where we define the "end" of reionization as the redshift where the IGM first becomes 99.9 per cent ionized by volume). This is in agreement with the tight constraints on the end of reionization obtained from the opacity of the Ly\(\alpha\) forest (Bosman et al., 2022). This model is 50 per cent ionized at \(z=7.1\), which sits within the scatter of the various probes of the IGM neutral fraction at that redshift. To explore the effect of changing the reionization history on our simulated damping wings, we also analyse an alternative reionization history (shown by the pink line in Figure 1). In this early reionization model, the IGM is 99.9 per cent ionized at \(z=6.6\) and 50 per cent ionized at \(z=7.8\). As this early model is in tension with many of the constraints shown in Figure 1, we present it only as an example of how our results would change in a more extreme scenario, and do not advocate it as a preferred reionization history.
### Simulated absorption spectra
To construct our mock damping wings, we extract lines of sight from simulation snapshots at \(z=7,8,9\) and 10. As we are interested in the IGM absorption arising in gas in the foreground of galaxies, we take these lines of sight through the 100 most massive haloes at each redshift, extracting the gas neutral hydrogen density, temperature and peculiar velocity. We orient these lines of sight in the \(\pm x,\pm y\) and \(\pm z\) directions around the halo and continue them for 20 \(h^{-1}\) cMpc from the halo centre. This results in 600 sightlines in total. The width of each pixel in the sightline is 19.53 \(h^{-1}\) ckpc for our fiducial 40 \(h^{-1}\) cMpc 2048\({}^{3}\)-particle simulation volume and 78.13 \(h^{-1}\) ckpc for our larger, lower resolution 160 \(h^{-1}\) cMpc 2048\({}^{3}\)-particle volume.
As neutral gas at large distances can also contribute to the strength of the IGM damping wing, we stitch on five additional random sightlines of length 40 \(h^{-1}\) cMpc through the simulation volume. As we would like to account for the redshift evolution of the IGM along the line of sight, we make use of the on-the-fly sightlines that were extracted from the Sherwood-Relics simulation at redshift frequency \(\Delta z=0.1\). Using these data, we stitch on subsequent sightlines taken from increasingly lower redshifts, consistent with the redshift interval corresponding to the comoving distance between the halo position and the beginning of the new sightline. This results in final lines of sight that are each 220 \(h^{-1}\) cMpc long.
From these sightlines, we then sum up the contributions of the gas in front of the halo to the Ly\(\alpha\) optical depth using the analytic approximation to the Voigt profile presented in Tepper-Garcia (2006). Due to the simplified star formation prescription used in our simulations, we do not expect to recover the distribution of neutral gas within the circumgalactic medium of the halo. We further aim to model only the contribution of the IGM to the damping wing. We therefore exclude the contribution of any gas within the virial radius of the halo by default (see also Weinberger et al., 2018). As the redshift of the observed galaxies is known from their emission lines, we renormalise the gas peculiar velocity along the line of sight such that the halo has a velocity of 0 km s\({}^{-1}\). As the damping wing extends redward of the Ly\(\alpha\) source frame redshift, we also record the optical depth in the mock spectra redwards of the halo location up to a velocity corresponding to 200 \(h^{-1}\) cMpc "behind" the halo, but of course only accounting for absorption by gas that is in front of the halo in position space. Examples of our simulated IGM damping wings are shown in Figure 2.
### The Miralda-Escude model for the IGM damping wing
As the goal of this work is to contrast the IGM damping wings constructed from cosmological simulations with the analytic Miralda-Escude (1998) model, we recap the details of that model here for completeness.
Figure 1: Comparison of the reionization histories analysed in this paper to a selection of constraints from the literature on the redshift evolution of the volume-averaged neutral fraction of the IGM. The solid blue line shows our fiducial model, where reionization ends at \(z=5.3\). The solid pink line shows a more extreme early model, where reionization ends at \(z=6.6\). For comparison, we show reionization constraints from observations of galaxy damping wings (Curtis-Lake et al., 2023; Hsiao et al., 2023; Umeda et al., 2023), Ly\(\alpha\) equivalent widths (Mason et al., 2018, 2019; Bolan et al., 2022; Bruton et al., 2023; Jones et al., 2023; Morishita et al., 2023), Ly\(\alpha\) emitter luminosity functions (Houe et al., 2018; Morales et al., 2021), Ly\(\alpha\) emitter clustering (Sobacchi & Mesinger, 2015; Ouchi et al., 2018), quasar damping wings (Davies et al., 2018; Wang et al., 2020; Greig et al., 2022) and dark pixel fractions of the Ly\(\alpha\) forest (McGreer et al., 2015; Jin et al., 2023). The colour of the points highlights the spectacular redshift reach of JWST (black) compared to other sources (grey).
It is useful to first define the Gunn & Peterson (1965) optical depth
for a uniform IGM at redshift \(z\),
\[\tau_{\rm GP}(z)=\frac{3\lambda_{\alpha}^{3}\Lambda_{\alpha}n_{\rm HI}}{8\pi H(z)}, \tag{1}\]
\[\tau_{\rm GP}(z)\simeq 5.62\times 10^{5}\,x_{\rm HI}\left(\frac{\Omega_{\rm b}h^{2}}{0.022}\right)\left(\frac{\Omega_{\rm m}h^{2}}{0.142}\right)^{-1/2}\left(\frac{1+z}{9}\right)^{3/2}, \tag{2}\]
where \(\Lambda_{\alpha}\) is the Ly\(\alpha\) decay constant, \(\lambda_{\alpha}\) is the Ly\(\alpha\) rest frame wavelength and \(H(z)\) is the Hubble parameter. At high redshift, this can be approximated as \(H(z)\simeq H_{0}\Omega_{\rm m}^{1/2}(1+z)^{3/2}\). The background neutral hydrogen density is defined as \(n_{\rm HI}=x_{\rm HI}\langle n_{\rm H}\rangle\), where \(\langle n_{\rm H}\rangle=\rho_{\rm crit}\Omega_{\rm b}(1-Y)(1+z)^{3}/m_{\rm H}\). Here, \(\rho_{\rm crit}\) is the critical density at \(z=0\), \(\Omega_{\rm b}\) is the baryon density, \(Y\) is the helium mass fraction and \(m_{\rm H}\) is the mass of a hydrogen atom.
Using Equation 1, we can then define the optical depth of the IGM damping wing,
\[\tau_{\rm D}(z)=\frac{\tau_{\rm GP}(z_{\rm s})R_{\rm\alpha}}{\pi}\left(\frac{ 1+z}{1+z_{\rm s}}\right)^{3/2}\left[I\left(\frac{1+z_{\rm b}}{1+z}\right)-I \left(\frac{1+z_{\rm n}}{1+z}\right)\right], \tag{3}\]
where \(\tau_{\rm D}\) is the optical depth along the line of sight evaluated at redshift \(z\), \(\tau_{\rm GP}\) is the Gunn-Peterson optical depth defined above, \(z_{\rm s}\) is the redshift of the source, \(z_{\rm b}\) is the redshift of the edge of the ionized bubble and \(z_{\rm n}\) is the redshift where reionization is defined to end. \(R_{\alpha}\) is defined as \(\Lambda_{\alpha}\lambda_{\alpha}/(4\pi c)\), where \(c\) is the speed of light.
Figure 2: Examples demonstrating the diversity of our simulated IGM damping wings at \(z=8\). The map on the top shows the neutral fraction of the gas in a 19.53 \(h^{-1}\) ckpc slice through our simulation. The white circles mark the locations of the six haloes plotted below, with the numbers identifying the different haloes, connecting them to the large-scale ionization field of our reionization simulations. The arrow in the bottom right shows the direction along which the spectra were calculated. The six subplots on the bottom two rows show the results from lines of sight through different haloes. The top panel of each subplot shows the neutral gas fraction along the line of sight. The regions intersecting islands of neutral gas (which we define as having a neutral fraction \(x_{\rm HI}>0.5\)) are highlighted in red. The bottom panel of each subplot shows the corresponding Ly\(\alpha\) absorption spectrum. The solid black lines show the spectra calculated taking the contribution of all gas (both in ionized bubbles and neutral islands) into account. The dashed red line shows the spectra calculated assuming the gas in the bubbles is completely ionized, and only the neutral islands contribute to the Ly\(\alpha\) optical depth.
Finally, the function \(I(x)\) is given by
\[I(x)=\frac{x^{9/2}}{1-x}+\frac{9}{7}x^{7/2}+\frac{9}{5}x^{5/2}+3x^{3/2}+9x^{1/2}-\frac{9}{2}\ln\left(\frac{1+x^{1/2}}{1-x^{1/2}}\right). \tag{4}\]
Note this function is only well defined for \(x=(1+z_{\rm b})/(1+z)<1\). We therefore only compute the optical depth of the damping wing for redshifts \(z>z_{\rm b}\), and at lower redshifts set the IGM optical depth to \(\tau_{\rm GP}(z)\).
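For reference, Equations 1-4 are straightforward to evaluate numerically. The sketch below implements the approximate Gunn-Peterson optical depth of Equation 2 together with Equations 3 and 4; all input redshifts and the neutral fraction are illustrative, and \(R_{\alpha}\simeq 2.02\times 10^{-8}\) follows from the constants quoted above:

```python
import numpy as np

R_ALPHA = 2.02e-8   # Lambda_alpha * lambda_alpha / (4 pi c), dimensionless

def tau_gp(z_s, x_hi, omega_b_h2=0.022, omega_m_h2=0.142):
    """Approximate Gunn-Peterson optical depth of Equation 2."""
    return (5.62e5 * x_hi * (omega_b_h2 / 0.022)
            * (omega_m_h2 / 0.142) ** -0.5 * ((1.0 + z_s) / 9.0) ** 1.5)

def I(x):
    """Equation 4; only valid for 0 < x < 1."""
    sx = np.sqrt(x)
    return (x ** 4.5 / (1.0 - x) + 9.0 / 7.0 * x ** 3.5 + 9.0 / 5.0 * x ** 2.5
            + 3.0 * x ** 1.5 + 9.0 * sx - 4.5 * np.log((1.0 + sx) / (1.0 - sx)))

def tau_damping_wing(z, z_s, z_b, z_n, x_hi):
    """Equation 3, evaluated at observed redshifts z > z_b."""
    prefactor = tau_gp(z_s, x_hi) * R_ALPHA / np.pi * ((1.0 + z) / (1.0 + z_s)) ** 1.5
    return prefactor * (I((1.0 + z_b) / (1.0 + z)) - I((1.0 + z_n) / (1.0 + z)))

# Illustrative values only: a z_s = 7.1 source, bubble edge at z_b = 7.0,
# reionization ending at z_n = 5.3 and a neutral fraction of 0.5.
z = np.linspace(7.05, 7.4, 8)
print(np.exp(-tau_damping_wing(z, z_s=7.1, z_b=7.0, z_n=5.3, x_hi=0.5)))
```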
## 3 Comparison of IGM damping wing models
### Comparison with late reionization model
We show the median and scatter of the IGM damping wings computed from Sherwood-Relics with the fiducial late reionization history with the blue lines and shaded regions in Figure 3. Each panel corresponds to a different redshift. We recover the expected result that as we move to higher redshifts and further into the epoch of reionization, the IGM damping wings become stronger due to the increasing volume-weighted average neutral fraction of the IGM.
Figure 3 further shows the results of our comparison between the Sherwood-Relics late reionization model and the Miralda-Escude (1998) analytic model (denoted by the dashed black line). To make a fair comparison between the two models, we compute the Miralda-Escude (1998) model assuming parameters from our simulation. We measure the volume-weighted average neutral fraction from the simulation output at the redshift of the halo. Note that this will not take any evolution of the IGM along the line of sight into account, which is taken into account in an approximate manner in our simulated sightlines. We further assume the end of reionization to be the redshift where the IGM is 99.9 per cent ionized by volume. For the size of the ionized bubble, we measure the distance between the position of the halo where we begin our sightlines and the point where the neutral fraction of the gas first exceeds \(x_{\rm HI}=0.5\) along each line of sight. We then use the redshift corresponding to the median of these bubble sizes along all lines of sight as input to Equation 3. The values used are indicated on the bottom right of each panel of Figure 3.
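The bubble-size measurement described above amounts to locating the first pixel along each sightline that crosses the \(x_{\rm HI}=0.5\) threshold; a minimal sketch, using an artificial sightline and the fiducial 19.53 \(h^{-1}\) ckpc pixel width, is:

```python
import numpy as np

def bubble_size(x_hi, pixel_ckpc=19.53, threshold=0.5):
    """Comoving distance (in h^-1 cMpc) from the halo to the first pixel whose
    neutral fraction exceeds the threshold; falls back to the full sightline
    length if no neutral island is encountered."""
    neutral = np.asarray(x_hi) > threshold
    first = int(np.argmax(neutral)) if neutral.any() else len(x_hi)
    return first * pixel_ckpc / 1000.0

# Artificial sightline: highly ionized gas for ~10 h^-1 cMpc, then a neutral island.
sightline = np.concatenate([np.full(512, 1e-3), np.full(100, 0.9)])
print(bubble_size(sightline))   # ~10 h^-1 cMpc; the median over many sightlines enters Equation 3
```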
We find that while the Miralda-Escude (1998) model does an excellent job of recovering the median IGM damping wing predicted by the Sherwood-Relics simulation at velocities a few 100 km s\({}^{-1}\) redward of the systemic velocity, it overpredicts the IGM transmission otherwise. The deviation between the analytic model and the simulations begins slightly redward of the systemic velocity and grows towards bluer wavelengths. The discrepancy is small at higher redshift when the ionized bubble sizes are small and the volume-weighted neutral fraction of the IGM is large, but becomes increasingly apparent towards lower redshift. For example, the Miralda-Escude (1998) model at \(z=7\) calculated using the parameters from our simulation predicts that the IGM transmission should be 50 per cent at a velocity of \(-500\) km s\({}^{-1}\), while in Sherwood-Relics no transmitted flux is expected at all at that velocity.
### Comparison with additional reionization models
We next explore the IGM damping wings generated from different patchy reionization simulations from the Sherwood-Relics simulation suite. The results are shown in Figure 4 for simulation snapshots at redshifts \(z=7\) and 10. In each case, we overplot the results from the Miralda-Escude (1998) damping wing model, calculated from the properties of the corresponding simulation. For easy comparison, the first column repeats the results from the fiducial late reionization model as already discussed in Section 3.1 and shown in Figure 3. The second column contrasts this with simulated IGM damping wings computed from the early reionization model. At fixed redshift, this early model has both lower volume-weighted average neutral fractions and larger bubble sizes than the late model, as indicated in the lower right of each panel of Figure 4. This results in weaker IGM damping wings in the early reionization model than in the late reionization model at a given redshift. The other notable difference is that at \(z=7\) in the early reionization model, there is some transmitted flux visible in the Ly\(\alpha\) forest blueward of the systemic redshift. However, even in this extreme reionization model, this flux is still much lower than the corresponding prediction for the IGM transmission of the Miralda-Escude (1998) model.
We next explore how our results depend on the volume of our reionization simulation, as it has been shown that volumes of several hundred cMpc are required to fully capture the patchiness of reionization (Iliev et al., 2014), in contrast to the fiducial 40 \(h^{-1}\) cMpc volume we analyse here. The third column of Figure 4 shows the simulated IGM damping wings computed from a 160 \(h^{-1}\) cMpc simulation volume, with a near-identical reionization history to the late reionization 40 \(h^{-1}\) cMpc volume plotted in the first column
over the redshift range probed here. We find that at a given redshift, although the volume-weighted average neutral gas fraction is the same in the two simulations, the bubbles are larger in the larger volume by approximately a factor of two. The result of this is that the sightlines generated from the larger volume show more transmission in the IGM damping wing profile at lower wavelengths, with even a small amount of transmission occurring just blueward of the systemic velocity. However, this effect is somewhat offset by the larger infall velocities associated with the more massive halo masses that are found in the larger volume. At \(z=7\), the median mass of the 100 most massive haloes in the 40 \(h^{-1}\) cMpc volume is 9.0\(\times 10^{10}h^{-1}M_{\odot}\), compared with \(4.3\times 10^{11}h^{-1}M_{\odot}\) in the 160 \(h^{-1}\) cMpc volume. This results in a larger offset between the systemic velocity and the median of the sharp cutoff in the Ly\(\alpha\) transmission in the larger volume than in the smaller volume.
Figure 3: Comparison of the simulated Sherwood-Relics damping wings for the late reionization history with the Miralda-Escudé (1998) model. The different panels show results at different redshifts. The blue solid line shows the median Ly\(\alpha\) transmission for the simulated sightlines as a function of velocity. The light and dark shaded regions encompass the 68 and 95 per cent scatter of the transmission, respectively. The dashed black line is the Miralda-Escudé (1998) model computed assuming the volume-weighted H i fraction and the median bubble size in the simulation as indicated on each panel.
To investigate how the two simulations compare for a more similar population of haloes, we take a different set of lines of sight from the 160 \(h^{-1}\) cMpc volume. This time we select a sample of haloes that have a distribution of masses similar to the 100 most massive haloes in the 40 \(h^{-1}\) cMpc volume. The results of this test are shown in the fourth column of Figure 4. We find that the median Ly\(\alpha\) transmission profile now looks more similar in the 160 \(h^{-1}\) cMpc and 40 \(h^{-1}\) cMpc volumes, as the velocity offset is now much closer between the two models. We also find a median bubble size that is smaller around the lower mass haloes. However, the scatter in bubble sizes is quite different, with the 68 (95) per cent range in bubble size a factor of two (three) larger in the larger volume simulation, most likely because these lower mass haloes are clustered around more massive haloes that can carve out larger ionized bubbles. The result of this is slightly weaker IGM damping wings in the 160 \(h^{-1}\) cMpc compared to the 40 \(h^{-1}\) cMpc volume, even when probing haloes of the same mass at fixed volume-weighted average neutral fraction.
However, in all cases, we still recover the trend that our simulated IGM damping wings are not described well by the Miralda-Escude (1998) analytic model blueward of the systemic redshift, independent of the reionization history, simulation volume or host halo mass.
### Reasons for difference between analytic and simulated damping wing models
We investigate the root of the difference between the IGM damping wings from the Sherwood-Relics simulations and the Miralda-Escude (1998) model in Figure 5. The first column of this figure shows the IGM damping wings at \(z=10\) and \(z=7\). These are exactly the same as displayed in Figure 3, and are calculated as described in Section 2.2. The second column shows the IGM damping wings calculated for the same sightlines, but now we compute the Ly\(\alpha\) optical depth without taking the peculiar velocity of the gas into account. The effect of this is to produce a sharp cutoff in the IGM transmission at a velocity of 0 km s\({}^{-1}\), whereas previously this cutoff occurred at a range of different velocities redward of the systemic velocity, due to the range of motions of the local gas falling into the host halo. By disregarding this infalling gas and shifting the cutoff in the IGM transmission to bluer wavelengths, we find that we improve the agreement between the Sherwood-Relics damping wings and the Miralda-Escude (1998) damping wing model such that there is near perfect agreement for all transmission redward of the halo redshift. However, we still find conflicting predictions for the level of transmission expected blueward of the halo redshift.
Next, we investigate the impact of residual neutral gas in the ionized bubbles. We recompute the Ly\(\alpha\) optical depth along our lines of
Figure 4: Comparison of the simulated damping wings from different simulations of inhomogeneous reionization from the Sherwood-Relics suite with the Miralda-Escude (1998) model. The different rows show results at different redshifts (_top:_\(z=10\), _bottom:_\(z=7\)). _First column:_ Absorption spectra as shown in Figure 3, calculated from the fiducial late reionization simulation in the high resolution 40 \(h^{-1}\) cMpc volume. _Second column:_ Absorption spectra calculated from the early reionization simulation in the high resolution 40 \(h^{-1}\) cMpc volume. _Third column:_ calculated from the fiducial late reionization simulation in the lower resolution 160 \(h^{-1}\) cMpc volume. _Fourth column:_ Absorption spectra calculated from the fiducial late reionization simulation in the lower resolution 160 \(h^{-1}\) cMpc volume for a similar halo mass range as in the 40 \(h^{-1}\) cMpc volume.
sight, but now assume that all of the gas within the ionized bubble surrounding the host halo is completely ionized. We achieve this by finding the first neutral island along our line of sight (which we define as a pixel with \(x_{\rm HI}>0.5\)), and then setting the neutral fraction in all pixels between the halo and that first neutral island to have \(x_{\rm HI}=0\). The reason that we do not just remove the residual neutral gas in all bubbles, rather than just the host bubble, is that this will produce additional transmission along the line of sight. See, for example, the additional transmission peak separated from the IGM damping wing in Halo 6 of Figure 2. This would confuse our damping wing signal when we stack our absorption spectra, and hence our comparison with the Miralda-Escude (1998) model. We therefore chose to isolate the effect of the residual neutral gas only in the vicinity of the host halo.
We show the median transmission profiles and their scatter in the third column of Figure 5, where we have now excluded both the residual neutral gas and neglected the peculiar velocities of the gas. We find that this makes a large difference to the predicted transmission blueward of the host halo redshift, and there is now very good agreement between the damping wings calculated from the Sherwood-Relics simulation and the Miralda-Escude (1998) model. As has previously been noted in other works, residual neutral gas inside the ionized bubbles should play a large role (see, e.g., Mesinger & Haiman 2004 and Bolton & Haehnelt 2007 for a discussion in the context of quasar proximity zones, and Mason & Gronke 2020 in the context of Ly\(\alpha\) emitting galaxies). Although the gas in the ionized bubbles in the Sherwood-Relics simulation is highly ionized, it still has a neutral fraction of order \(x_{\rm HI}\sim 10^{-3}\) (see Figure 2). From Equation 1, this will result in a Ly\(\alpha\) optical depth \(\tau_{\rm Ly\alpha}\sim 500\), more than enough to completely saturate the IGM absorption.
After removing both the effects of peculiar velocities and residual neutral gas, we find much better agreement between our simulated IGM damping wings and the Miralda-Escude (1998) model. There still remains a small difference between the analytic model and the median IGM damping wing predicted from our simulations, with the simulations predicting slightly more transmission at velocities blueward of the host halo redshift. As a final test, we therefore investigate the effect of neglecting evolution of the ionization state of the gas along the line of sight. As described in Section 2.2, we account for this in an approximate way by stitching together lines of sight from simulation outputs at different redshifts. We check what effect this has by recomputing the lines of sight, but this time now stitching together lines of sight from a fixed redshift, such that there is no evolution in the average ionization field across the 220 \(h^{-1}\) cMpc line of sight, although individual pixels will still be ionized or neutral depending on their location within the simulation volume. We find however that neglecting the evolution of the IGM along the line of sight has only a small effect on our simulated damping wings.
In summary, we find that the most significant reason for the difference between the predictions for the IGM damping wing from the Sherwood-Relics simulations and the Miralda-Escude (1998) model is the residual neutral gas within the ionized bubbles. While the effects of peculiar motions of infalling gas and evolution of the IGM along the line of sight towards the observer also play a small role, these are subdominant to the resonant absorption caused by the gas within the bubbles.
## 4 Comparison of models and observations
We next seek to make contact between the IGM damping wings constructed from our simulations of inhomogeneous reionization, and the first observations of galaxy damping wings performed with JWST. In particular, we investigate the recently inferred 100 cMpc-sized ionized bubbles during reionization (Umeda et al., 2023) and the observations of proximate DLAs in high-redshift galaxies (Heintz et al., 2023).
Figure 5: Comparison of the simulated Sherwood-Relics damping wings with the Miralda-Escude (1998) model. The different rows show results at different redshifts (_top:_\(z=10\), _bottom:_\(z=7\)). The different columns compute the absorption spectra with and without different physical effects. _First column:_ Absorption spectra as shown in Figure 3, calculated with all the relevant physical effects. _Second column:_ Absorption spectra calculated without the effect of gas peculiar velocities. _Third column:_ Absorption spectra calculated without any residual neutral hydrogen inside the ionized bubbles and neglecting the gas peculiar velocities. _Fourth column:_ Absorption spectra calculated without accounting for any redshift evolution of the ionization state along the line of sight, without any residual neutral hydrogen inside the ionized bubbles and neglecting the gas peculiar velocities.
### Large ionized bubbles during reionization
We first turn our attention to the IGM damping wings measured from stacks of galaxies presented in Umeda et al. (2023). The sample consists of spectra of 26 galaxies at \(7<z<12\) obtained from multiple JWST programs (Arrabal Haro et al., 2023; Hsiao et al., 2023; Finkelstein et al., 2023). These galaxies were divided into four redshift bins with median redshifts \(z=7.140,7.452,7.960\) and \(9.801\). To see the imprint of the IGM damping wing, it was necessary to compare the observed spectrum with a model for the intrinsic spectrum. To properly account for absorption in the circumgalactic medium of the galaxies and isolate the effect of the IGM, Umeda et al. (2023) used stacks of galaxies in bins of stellar mass at \(2.5<z<5\) from the VANDELS survey (Cullen et al., 2019). By choosing the template in the stellar mass bin that best matched the observed galaxy continuum far redward of the Ly\(\alpha\) source frame wavelength, and finding the best-fitting Miralda-Escude (1998) damping wing model in the wavelength region close to Ly\(\alpha\), they obtained constraints on the redshift evolution of both the volume-weighted average neutral fraction of the IGM and the sizes of the ionized bubbles.
As we wish to compare the IGM damping wings generated from the Sherwood-Relics simulations, we first repeat the process to generate the template intrinsic spectra. We use the stacked spectra from the stellar mass bins that Umeda et al. (2023) found to best fit their stacked spectra. Namely, we use the log\((M_{\star}/M_{\odot})=8.16-8.70\) stack at \(z=7.140\), the log\((M_{\star}/M_{\odot})=8.70-9.20\) stack at \(z=7.452\) and \(7.960\) and the log\((M_{\star}/M_{\odot})=9.50-9.65\) stack at \(z=9.801\). We renormalise these spectra by a factor \(A(\lambda/1800\AA)b\), where \(A\) and \(b\) are parameters that can be varied until the \(\chi^{2}\) between the template stacks and the observed stacks is minimised. Following Jones et al. (2023), to account for the low spectral resolution of the NIRSpec PRISM mode2, we convolve the spectra with a Gaussian of an appropriate width. We further bin the template spectra to pixels of 25 A in the rest frame, to match the observed stacks. As in Umeda et al. (2023), we perform this fitting the wavelength range between the wavelength where an IGM damping wing generated with the Miralda-Escude (1998) model for a neutral IGM with no ionized bubble first reaches a transmission of 90 per cent at the blue end, and 2200 A rest frame wavelength at the red end. The resulting template spectra are shown in orange in the left panel of Figure 6 (the thin lines are before convolution and binning, and the thick lines are afterwards) and can be compared with the stacked observed spectra from Umeda et al. (2023) shown in black. Having generated the template intrinsic spectra, we then convolve these with models for the IGM damping wing. We first use the best-fitting parameters measured in Umeda et al. (2023), which are volume-weighted average neutral fractions \(\langle x_{\rm H}\rangle_{\rm v}=(0.46,0.54,0.63,0.83)\) and bubble sizes \(R_{\rm b}=(149,96.1,16.5,5.04)\) cMpc at redshift \(z=(7.140,7.452,7.960,9.801)\). We show the resulting curves generated from the galaxy template, convolved with these damping wing models, as the dark red curves in Figure 6.
Footnote 2: We took the resolving power as a function of wavelength from [https://jwst-docs.stsci.edu/jwst-near-infrared-spectrograph/nirspec-instrumentation/nirspec-dispersers-and-filters](https://jwst-docs.stsci.edu/jwst-near-infrared-spectrograph/nirspec-instrumentation/nirspec-dispersers-and-filters)
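To illustrate the degradation applied to the templates, the sketch below smooths a toy rest-frame spectrum with a Gaussian kernel set by an assumed constant resolving power (a stand-in for the wavelength-dependent PRISM curve referenced in the footnote) and rebins it to 25 A pixels; the amplitude renormalisation and the \(\chi^{2}\) fit to the observed stacks are omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def degrade_to_prism(wave, flux, resolving_power=35.0, pixel_width=25.0):
    """Gaussian-smooth a spectrum to FWHM = lambda / R and average it onto coarse pixels.
    A uniform wavelength grid and a single, constant R are assumed for simplicity."""
    dlam = wave[1] - wave[0]
    sigma_pix = np.mean(wave) / (resolving_power * 2.355 * dlam)   # FWHM -> sigma, in pixels
    smoothed = gaussian_filter1d(flux, sigma_pix)

    edges = np.arange(wave[0], wave[-1] + pixel_width, pixel_width)
    idx = np.digitize(wave, edges) - 1
    binned = np.array([smoothed[idx == i].mean() for i in range(len(edges) - 1)])
    return 0.5 * (edges[:-1] + edges[1:]), binned

wave = np.arange(1100.0, 2200.0, 1.0)            # rest-frame wavelength in Angstroms
flux = np.where(wave < 1215.67, 0.2, 1.0)        # toy spectrum with a break at Ly-alpha
centres, binned = degrade_to_prism(wave, flux)
print(binned[:6])
```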
We next repeat this process, but instead now use the simulated IGM damping wings generated from the Sherwood-Relics late reionization simulation. We generate spectra from the snapshots at \(z=7,7.5,8\) and \(10\), but rescale the densities and velocities to match the median redshift in each bin of the Umeda et al. (2023) sample. The resulting galaxy templates convolved with the simulated damping wings are shown by the blue curves and light (dark) shaded regions in Figure 6, which represent the median and 68 (95) per cent scatter of the distribution. We find that in the two highest redshift bins, at \(z=9.801\) and \(7.960\), there is good agreement between the predictions from the best fit IGM damping wing from Umeda et al. (2023) and the Sherwood-Relics simulations. We do find that the Sherwood-Relics simulation seems to underpredict the strength of the damping wing at \(z=9.801\), but this is also seen in the Umeda et al. (2023) IGM damping wing model, and may be due to the influence of proximate absorbers as discussed in Section 4.3.
We find, however, that there is a significant difference in the expected transmitted flux between the Sherwood-Relics damping wings and the best fit Umeda et al. (2023) IGM damping wing model at \(z=7.140\) and \(7.452\). The intrinsic galaxy template convolved with the Sherwood-Relics model produces too little transmitted flux in the pixels close to the Ly\(\alpha\) source frame wavelength. At first glance, this suggests that the damping wings in Sherwood-Relics may be too strong when compared with the observations. Indeed, when comparing the best fit ionized bubble sizes from Umeda et al. (2023) with the median bubble sizes around the hundred most massive haloes in Sherwood-Relics, we find that at \(z=7.140\) (\(7.452\)) the bubble sizes are a factor 13 (11) smaller than the measurements from Umeda et al. (2023). The large bubble sizes measured in that work are driven by the significant detections of transmitted flux in pixels blueward of the Ly\(\alpha\) source frame wavelength. Fitting for this with the Miralda-Escude (1998) model for the IGM damping wing allows for this detected flux when the ionized bubbles around the host galaxies are large.
However, based on the results of Section 3, we do not expect to observe any transmitted flux blueward of the Ly\(\alpha\) source frame wavelength, independent of the size of the ionized bubble, as a result of the residual neutral hydrogen in the ionized IGM. Furthermore, if there were indeed large, 100 cMpc-scale regions of the IGM that were transparent to Ly\(\alpha\) scattering already around \(z\sim 7\) galaxies, one would expect to see these transmissive regions in the \(z>6.5\) Ly\(\alpha\) forest of the highest redshift quasars, but this is not the case (Jin et al., 2023). An exception would be the highly ionized proximity zones observed around high-redshift quasars, but even around the brightest \(z\sim 6\) quasars, the sizes of these proximity zones are smaller than the sizes of ionized bubbles measured by Umeda et al. (2023) at \(z\sim 7\)(Eilers et al., 2017). The amplitude of the transmitted Ly\(\alpha\) forest flux proposed by Umeda et al. (2023) is also unexpected. Using their best-fit parameters in the Miralda-Escude (1998) model at \(z=7.140\), we can measure the mean flux in the galaxy's Ly\(\alpha\) forest in two 50 \(h^{-1}\) cMpc chunks that contain sections of the ionized bubble. We measure a mean flux (\(F\)) = (0.97, 0.69) in the two chunks, as we move blueward from the galaxy. This can also be expressed as an effective optical depth \(\tau_{\rm eff}\) = (0.03, 0.37). Values as high as this are not observed in the Ly\(\alpha\) forest of quasars until below \(z=3\)(Becker et al., 2013).
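For reference, the quoted effective optical depths follow directly from \(\tau_{\rm eff}=-\ln\langle F\rangle\):

```python
import numpy as np

mean_flux = np.array([0.97, 0.69])     # <F> in the two 50 h^-1 cMpc chunks quoted above
print(-np.log(mean_flux))              # ~[0.03 0.37], matching the tau_eff values in the text
```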
It is therefore of interest to investigate other scenarios that could produce transmitted flux blueward of the galaxy's Ly\(\alpha\) emission. One possibility is that the flux and wavelength calibration of JWST PRISM spectra still present challenges. Residual errors from these may result in additional spurious flux blueward of Ly\(\alpha\). The choice of
the intrinsic galaxy spectrum may also play a role, and this possibility is discussed in Section 4.2 below.
### Influence of the intrinsic spectrum on inferred bubble sizes
An alternative explanation for these observations may arise from the chosen intrinsic galaxy template spectrum. If the assumed intrinsic spectrum is not representative of the true intrinsic spectrum, this may lead to a change in the reionization parameters recovered from the observations. In particular, an underestimate of the strength of the Ly\(\alpha\) emission line will affect not just the pixel that line falls in, but also the neighbouring pixels. This is a result of the low resolution of the NIRSpec PRISM mode, which has \(R=\frac{\lambda}{\Delta\lambda}\sim 30-40\) in the redshift range \(z=7-10\). We investigate the effect of underestimating the intrinsic Ly\(\alpha\) emission of the galaxy in the right column of Figure 6. We repeat the process of convolving the Sherwood-Relics IGM damping wings with a template galaxy spectrum. We again assume the same stacks of lower redshift spectra from Cullen et al. (2019), but we now adjust the strength of the Ly\(\alpha\) line such that it is a factor of three stronger.
This factor of three was chosen arbitrarily as a value to demonstrate the effect of changing the intrinsic spectrum. The Ly\(\alpha\) equivalent width distribution in the sample used to make the low-redshift stacks has a large scatter, with not all galaxies showing Ly\(\alpha\) emission and some showing absorption instead. In Cullen et al. (2020), which analysed an expanded sample compared to Cullen et al. (2019), the median rest-frame equivalent width of the sample was \(W_{\rm Ly\alpha}=-4\) A. The strongest Ly\(\alpha\) emitting galaxies had \(W_{\rm Ly\alpha}=110\) A. Shibuya et al. (2018) presented observations of \(z\sim 6.6\) galaxies with equivalent widths \(W_{\rm Ly\alpha}>200\) A. These observations will also contain the effects of attenuation by the IGM, so the intrinsic Ly\(\alpha\) emission of these galaxies will be stronger. Multiplying the Ly\(\alpha\) emission in these stacked spectra by a factor of three thus produces intrinsic Ly\(\alpha\) equivalent widths of at most \(W_{\rm Ly\alpha}\sim 330\) A in individual galaxies.
Figure 6: _Left column: Comparison of modelled IGM damping wings and stacks of galaxy spectra from Umeda et al. (2023). The stacked observed galaxy spectra are shown in black. Each row corresponds to a different redshift, at \(z=7.140,7.452,7.960\) and \(z=9.801\). The thin orange lines show stacked spectra at lower redshift from the VANDELS survey (Cullen et al., 2019), rescaled to match the continua of the higher redshift galaxies. The thick orange lines show the same spectra but after convolution and binning to coarser pixels. The thick red line shows the template intrinsic galaxy spectra convolved with the best-fitting IGM damping wing model of Umeda et al. (2023). The blue lines and shaded region show the intrinsic galaxy spectra convolved with absorption spectra generated from the Sherwood-Relics (late) simulation. Right column: As before, but now using a template spectrum where we have changed the strength of the Ly\(\alpha\) emission line by hand, such that it is now increased in amplitude by a factor of three._
ies, and lower values in the stacked spectra, is therefore not totally unreasonable. Indeed, a Ly\(\alpha\) emitting galaxy with \(W_{\rm Ly\alpha}\approx 400\) A at \(z=7.278\) has been discovered with JWST (Saxena et al., 2023). We note that we only increase the amplitude of the line, and do not change its shape, but this should not make a difference given the wide pixels used in the stacked spectra.
The resulting template is shown by the thin purple lines in Figure 6 before convolution and binning, and by the thick purple curves afterwards. Comparing the thick orange and thick purple curves in the left and right columns of the figure, we find that, as expected, increasing the intrinsic Ly\(\alpha\) emission enhances the intrinsic flux not only in the pixel that corresponds to the source frame Ly\(\alpha\) wavelength, but also in adjacent pixels, due to the broad instrument profile of the NIRSpec PRISM mode (see also Jones et al., 2023). We further find an excess of flux in the pixel containing the Ly\(\alpha\) emission line in our template intrinsic spectrum compared to the observations at \(z=7.140\) and \(7.452\). This is in contrast to the template used in Umeda et al. (2023), where the expected and observed flux at Ly\(\alpha\) were nearly identical, due to the large ionized bubbles that were assumed. If that were indeed the case, it would be a different scenario from the one observed in the spectra of \(z=7-7.5\) quasars, where a significant fraction of the Ly\(\alpha\) emission line is expected to be absorbed (e.g., Davies et al., 2018; Wang et al., 2020; Greig et al., 2022).
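To make this pixel-spreading effect concrete, the following minimal sketch (with purely illustrative numbers of our own choosing, rather than the templates and binning used in this work) boosts a narrow Ly\(\alpha\) line by a factor of three, convolves it with a Gaussian approximation to the PRISM line-spread function at \(R\sim 35\), and rebins onto coarse pixels; the boosted line then raises the flux in pixels adjacent to the Ly\(\alpha\) pixel as well.

```python
import numpy as np

# Minimal sketch: boost a narrow Ly-alpha emission line in a toy template,
# smooth it with a Gaussian approximating the NIRSpec PRISM line-spread
# function (R ~ 35 near Ly-alpha), and rebin onto coarse pixels. All numbers
# here are illustrative assumptions, not the values used in the analysis.

lam = np.arange(1150.0, 1400.0, 0.5)                 # rest-frame wavelength [Angstrom]
continuum = np.ones_like(lam)                         # flat continuum for illustration
lya = 40.0 * np.exp(-0.5 * ((lam - 1215.67) / 2.0) ** 2)   # narrow Ly-alpha line

def observe(spectrum, lam, R=35.0, pix_width=25.0):
    """Convolve with a Gaussian LSF of resolving power R and rebin onto
    pixels of width pix_width (rest-frame Angstrom, illustrative)."""
    sigma = (1215.67 / R) / 2.355          # rest-frame LSF width: FWHM = lambda / R
    dlam = lam[1] - lam[0]
    kernel_x = np.arange(-5 * sigma, 5 * sigma + dlam, dlam)
    kernel = np.exp(-0.5 * (kernel_x / sigma) ** 2)
    kernel /= kernel.sum()
    smoothed = np.convolve(spectrum, kernel, mode="same")
    # Rebin by averaging within coarse pixels
    edges = np.arange(lam[0], lam[-1] + pix_width, pix_width)
    idx = np.digitize(lam, edges)
    return np.array([smoothed[idx == i].mean() for i in np.unique(idx)])

weak = observe(continuum + lya, lam)
strong = observe(continuum + 3.0 * lya, lam)          # Ly-alpha boosted by a factor of three
print(strong - weak)   # extra flux appears in pixels adjacent to the Ly-alpha pixel too
```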
When we convolve the Sherwood-Relics IGM damping wings with this new template for the intrinsic galaxy spectrum, we now find significant transmitted flux blueward of the Ly\(\alpha\) source frame wavelength, mimicking the effect of invoking the large ionized bubbles in the Miralda-Escude (1998) model. This effect is largest in the lowest redshift bins, as the IGM damping wing in the simulations is strong enough at \(z\gtrsim 8\) to absorb the bulk of the Ly\(\alpha\) emission. Of course, we have here arbitrarily changed the strength of the intrinsic Ly\(\alpha\) emission by hand to demonstrate the expected effect. In practice, it should be possible to obtain an estimate of the expected intrinsic Ly\(\alpha\) emission from the Balmer emission lines of the galaxy (e.g., Hayes, 2015).
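As a rough illustration of how such an estimate could be made, the sketch below converts observed Balmer line fluxes into an expected intrinsic Ly\(\alpha\) flux, assuming Case B recombination ratios (Ly\(\alpha\)/H\(\alpha\approx 8.7\), H\(\alpha\)/H\(\beta\approx 2.86\)) and a simple dust correction from the Balmer decrement; the attenuation-curve values and input fluxes are assumptions for illustration, not measurements used in this work.

```python
import numpy as np

# Minimal sketch (not the procedure used in the text): estimate the expected
# intrinsic Ly-alpha flux from observed Balmer lines assuming Case B
# recombination. This gives the recombination expectation before any
# resonant-scattering losses, i.e. an upper envelope on the emitted Ly-alpha.

def intrinsic_lya_from_balmer(f_ha_obs, f_hb_obs, k_ha=2.53, k_hb=3.61):
    """f_ha_obs, f_hb_obs : observed H-alpha and H-beta fluxes (same units).
    k_ha, k_hb            : assumed attenuation-curve values at H-alpha / H-beta."""
    # Balmer decrement gives the nebular colour excess E(B-V)
    ebv = 2.5 / (k_hb - k_ha) * np.log10((f_ha_obs / f_hb_obs) / 2.86)
    ebv = max(ebv, 0.0)                              # no negative dust correction
    f_ha_int = f_ha_obs * 10 ** (0.4 * k_ha * ebv)   # dust-corrected H-alpha
    return 8.7 * f_ha_int                            # Case B Ly-alpha / H-alpha ratio

# Example with made-up fluxes (arbitrary units)
print(intrinsic_lya_from_balmer(f_ha_obs=12.0, f_hb_obs=3.5))
```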
We further note that we found near-indistinguishable results when comparing the Umeda et al. (2023) observations to the Sherwood-Relics late and early models (plotted in Figure 1). Although there are differences in the damping wings we compute from these two simulations, as shown in Figure 4, the broad instrument profile of the NIRSpec PRISM mode and the large pixel size of the stacked spectra make the two models difficult to differentiate when comparing to the JWST data; the effect of changing the reionization history is much smaller than that of changing the intrinsic spectrum. This suggests that it may be difficult to constrain reionization with the currently published spectra, and that larger samples of galaxies will be required.
### Proximate high column density absorbers
We next analyse our simulated IGM damping wings in the context of the detection of damped Lyman-\(\alpha\) absorbers in three galaxies at \(z=8-11\) (Heintz et al., 2023). The spectra of these three galaxies were compared with IGM damping wings calculated using the Miralda-Escude (1998) model with volume-weighted average neutral fractions \(\langle x_{\rm HI}\rangle_{\rm v}=0.1\), \(0.5\) and \(1.0\), and assuming an ionized bubble radius \(R_{\rm b}=0\) (i.e., setting \(z_{\rm b}\) equal to \(z_{\rm s}\) in Equation 3). Under these assumptions, it was shown that the IGM damping wing profiles were not strong enough to explain the observed damping wings. Further comparison with simulated IGM damping wings from Laursen et al. (2019) also indicated that the IGM alone could not explain the observations. However, as noted in that work, the neutral fraction in that simulation is somewhat low, with \(\langle x_{\rm HI}\rangle_{\rm v}=0.13\) at \(z=8.8\). Heintz et al. (2023) concluded that including strong proximate DLAs with H i column densities in the range \(N_{\rm HI}=10^{22.1}-10^{22.4}\,{\rm cm^{-2}}\) provided a much better fit to the observations.
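For reference, a minimal sketch of the analytic damping-wing optical depth in the spirit of the Miralda-Escude (1998) model is given below, for a uniformly neutral IGM between a lower redshift \(z_{\rm n}\) and the bubble edge \(z_{\rm b}\) (with \(z_{\rm b}=z_{\rm s}\) corresponding to \(R_{\rm b}=0\)); the Gunn-Peterson normalisation and cosmology adopted here are illustrative assumptions rather than the values used in the text.

```python
import numpy as np

# Minimal sketch of an analytic IGM damping wing for a uniform neutral IGM
# between z_n and z_b, following the functional form of Miralda-Escude (1998).
# The tau_GP normalisation and implied cosmology are illustrative assumptions.

R_ALPHA = 2.02e-8  # Lambda_alpha * lambda_alpha / (4 pi c), dimensionless

def wing_integral(x):
    """Auxiliary function appearing in the damping-wing optical depth."""
    return (x**4.5 / (1.0 - x) + 9.0/7.0 * x**3.5 + 9.0/5.0 * x**2.5
            + 3.0 * x**1.5 + 9.0 * x**0.5
            - 4.5 * np.log((1.0 + np.sqrt(x)) / (1.0 - np.sqrt(x))))

def tau_gp(z, x_hi=1.0):
    """Assumed Gunn-Peterson optical depth normalisation (cosmology dependent)."""
    return 7.16e5 * x_hi * ((1.0 + z) / 10.0) ** 1.5

def damping_wing_tau(lam_obs, z_s, z_b, z_n=6.0, x_hi=1.0):
    """Damping-wing optical depth at observed wavelength lam_obs [Angstrom],
    for a source at z_s with neutral gas between z_n and z_b (z_b <= z_s)."""
    lam_alpha = 1215.67
    one_plus_delta = lam_obs / (lam_alpha * (1.0 + z_s))   # = (1 + z_obs) / (1 + z_s)
    pref = tau_gp(z_s, x_hi) * R_ALPHA / np.pi * one_plus_delta ** 1.5
    xb = (1.0 + z_b) / ((1.0 + z_s) * one_plus_delta)
    xn = (1.0 + z_n) / ((1.0 + z_s) * one_plus_delta)
    return pref * (wing_integral(xb) - wing_integral(xn))

# Example: R_b = 0 corresponds to z_b = z_s
z_s = 8.76
lam = np.linspace(1216.0, 1260.0, 5) * (1.0 + z_s)   # redward of source-frame Ly-alpha
print(np.exp(-damping_wing_tau(lam, z_s=z_s, z_b=z_s, x_hi=0.5)))
```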
We test this interpretation using the IGM transmission curves generated from the Sherwood-Relics late reionization simulation, which, as described in Section 3.3, includes several physical effects that are not present in the Miralda-Escude (1998) model, and which has a reionization history in good agreement with estimates of the evolution of the IGM neutral fraction (Figure 1). We note that we do not expect to produce DLAs in our simulations, due to our simplified star formation model and lack of galactic feedback. We further note that, as described in Section 2.2, we are also not including the contribution of any gas inside the virial radius of the galaxy when we compute our IGM damping wings. We focus our comparison on CEERS-43833, a galaxy at redshift \(z=8.7622\) (Arrabal Haro et al., 2023; Finkelstein et al., 2023), as Heintz et al. (2023) show a version of this spectrum where the transmission has been normalised by the intrinsic transmission, and so it can be easily compared with our simulated damping wings.
We compute our absorption spectra starting from haloes extracted from a simulation snapshot at \(z=9\), but rescale the gas density and velocity assuming the host halo is at the redshift of CEERS-43833. The simulated IGM transmission curves we calculate are shown in blue in the top panel of Figure 7 and can be compared with the normalised spectrum of CEERS-43833 shown in black. As in Section 4.1, we further convolve the spectra with a Gaussian with a width corresponding to the resolution at the Ly\(\alpha\) wavelength at the redshift of CEERS-43833.
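One simple way such a rescaling could be implemented is sketched below, assuming only the cosmological scalings of proper density and Hubble flow with redshift; this is an assumption about the form of the rescaling for illustration, not the exact procedure applied to the simulation outputs.

```python
import numpy as np

# Minimal sketch (an assumed, simplified rescaling): take a skewer extracted
# at z_snap and rescale proper densities and the Hubble-flow velocity to a new
# redshift z_new, keeping comoving quantities fixed.

def hubble(z, h=0.678, om=0.308):
    """H(z) in km/s/Mpc for an assumed flat LCDM cosmology."""
    return 100.0 * h * np.sqrt(om * (1.0 + z) ** 3 + (1.0 - om))

def rescale_skewer(n_h, v_pec, x_comoving, z_snap, z_new):
    """n_h: proper hydrogen density along the skewer [cm^-3];
    v_pec: peculiar velocities [km/s]; x_comoving: comoving positions [Mpc].
    Returns the rescaled density and total line-of-sight velocity at z_new."""
    n_h_new = n_h * ((1.0 + z_new) / (1.0 + z_snap)) ** 3   # proper density scaling
    v_hubble_new = hubble(z_new) * x_comoving / (1.0 + z_new)   # proper Hubble flow
    return n_h_new, v_hubble_new + v_pec

# Illustrative call with made-up numbers
n_h = np.full(10, 2.0e-4)              # cm^-3
v_pec = np.zeros(10)                    # km/s
x = np.linspace(0.0, 1.0, 10)           # comoving Mpc
print(rescale_skewer(n_h, v_pec, x, z_snap=9.0, z_new=8.7622))
```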
Figure 7: Comparison of the normalised spectrum of CEERS-43833 (a galaxy at redshift \(z=8.7622\)) taken from Heintz et al. (2023) and the IGM damping wings from the Sherwood-Relics late reionization simulation. The black line in both panels shows the spectrum of CEERS-43833 and the grey shaded regions show the corresponding error. _Top_: The blue line and shaded regions are the median and scatter in IGM transmission curves calculated from the simulation. The red line and shaded regions are the same curves, but after convolution with the NIRSpec PRISM instrument profile. _Bottom_: As above, but now the simulated damping wings are computed for both the contribution from the IGM and for a proximate DLA with column density \(N_{\rm HI}=10^{22.1}\) cm\({}^{-2}\), as estimated by Heintz et al. (2023).
The resulting transmission curves are shown in red in Figure 7. We find that even after convolution, the absorption from the IGM alone is not enough to reproduce the damping wing observed in CEERS-43833. This is similar to the simulated IGM damping wings of Laursen et al. (2019), despite the later reionization model we analyse here. This supports the claim of Heintz et al. (2023) that there are proximate DLAs associated with these high-redshift star-forming galaxies. Indeed, when we include the contribution of a DLA with column density \(N_{\rm HI}=10^{22.1}\) cm\({}^{-2}\) at the redshift of the host galaxy, we find a much better fit to the observed spectrum, as shown in the bottom panel of Figure 7.
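As an illustration of how a proximate absorber can be combined with the IGM transmission, the sketch below multiplies a toy IGM transmission curve by the transmission of a single DLA with an assumed column density, evaluating the Ly\(\alpha\) Voigt profile with the Faddeeva function; the Doppler parameter, the toy IGM curve and the column density are assumptions for illustration rather than the values and procedure used here.

```python
import numpy as np
from scipy.special import wofz

# Minimal sketch: add the absorption of a proximate DLA (assumed column
# density) to a toy IGM transmission curve. The atomic constants are standard
# Ly-alpha data; the Doppler parameter and IGM curve are illustrative.

LAM0 = 1215.67e-8         # Ly-alpha wavelength [cm]
NU0 = 2.998e10 / LAM0     # Ly-alpha frequency [Hz]
F_OSC = 0.4164            # oscillator strength
GAMMA = 6.265e8           # damping constant [s^-1]
SIGMA0 = 0.02654          # pi e^2 / (m_e c) [cm^2 Hz]

def tau_dla(lam_obs_angstrom, z_abs, log_nhi, b_kms=30.0):
    """Ly-alpha optical depth of a single absorber at z_abs with column
    density 10**log_nhi cm^-2, evaluated at observed wavelengths [Angstrom]."""
    nu = 2.998e18 / lam_obs_angstrom * (1.0 + z_abs)     # absorber rest-frame frequency [Hz]
    dnu_d = NU0 * (b_kms * 1.0e5) / 2.998e10             # Doppler width [Hz]
    a = GAMMA / (4.0 * np.pi * dnu_d)
    x = (nu - NU0) / dnu_d
    phi = np.real(wofz(x + 1j * a)) / (np.sqrt(np.pi) * dnu_d)   # normalised profile
    return 10.0 ** log_nhi * SIGMA0 * F_OSC * phi

# Illustrative combination with a toy IGM transmission curve
z_gal = 8.7622
lam = np.linspace(1.18e4, 1.25e4, 500)                         # observed wavelength [Angstrom]
t_igm = np.where(lam > 1215.67 * (1.0 + z_gal), 0.9, 0.0)      # toy IGM transmission
t_total = t_igm * np.exp(-tau_dla(lam, z_gal, log_nhi=22.1))   # IGM + DLA damping wing
print(t_total[::100])
```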
### Implications
It is clear from these JWST observations that IGM damping wings seen in high-redshift galaxies open up many exciting possibilities for studying the first half of the reionization epoch. However, this observable comes with its own set of challenges. Galaxy spectra are expected to be somewhat simpler to model than quasar spectra, as, e.g., quasar spectra can be complicated by strong N v emission close to Ly\(\alpha\) (Davies et al., 2018; Greig et al., 2022). However, the strength of the intrinsic Ly\(\alpha\) emission will still be important to capture, especially in analyses of low resolution spectra. Moving forward, it may be useful to marginalise over grids of galaxy template spectra convolved with a separate grid of IGM damping wing models, in a spirit similar to what has been carried out in analyses of quasar damping wings. However, as pointed out by Umeda et al. (2023), it will be necessary to understand how to incorporate absorption by the circumgalactic medium of the host galaxy in such a setup.
The incidence rate of proximate DLAs in these galaxies will also add a source of uncertainty. Indeed, two of the three galaxies shown to host DLAs in Heintz et al. (2023) (MACS0647-JD at \(z=10.170\) and CEERS-16943/Maisie's Galaxy at \(z=11.409\)) are also present in the sample analysed by Umeda et al. (2023). In the future, it will be important either to allow for the contribution of local, high-density neutral hydrogen to the observed damping wings when fitting for the IGM component, as has been done for GRBs (Totani et al., 2006), or to remove the contaminated objects from the sample by searching for metal lines redward of Ly\(\alpha\) that are associated with the DLA. However, depending on the metallicity of these objects, this may have to wait for the upcoming spectrographs on 30-metre class telescopes, such as ANDES or HARMONI. It may also be possible to remove the DLA hosts statistically based on the properties of the galaxy, once we have accumulated a large enough sample of spectra from these \(z>7\) galaxies.
At the same time, it will be important to continue to make advances on the theoretical side to properly interpret these results. For example, it is likely that these simulations would struggle to reproduce observations of the (rare) double-peaked Ly\(\alpha\) emitting galaxies at \(z\sim 6.5\) (Hu et al., 2016; Songaila et al., 2018; Meyer et al., 2021), which may call for more efficient production and escape of ionizing photons than we have assumed in the source model for our simulations, or for a contribution from AGN in locally enhancing the ionization state of the IGM (Bosman et al., 2020). It will further be important to increase the dynamic range of the simulations, such that larger volumes can be modelled while still resolving the small-scale structure of the IGM. This would bridge the gap between our models and the bubble sizes predicted in large semi-numerical simulations of reionization (Lu et al., 2023), and allow the implications for Ly\(\alpha\) transmission to be fully understood.
## 5 Conclusions
We have presented here an analysis of mock IGM damping wings generated from the Sherwood-Relics simulation suite. We have compared our simulated damping wings to the analytic IGM damping wing model of Miralda-Escude (1998). We found excellent agreement between the simulated and analytic damping wings redward of Ly\(\alpha\), but poor agreement on the blue side of Ly\(\alpha\), with the analytic model predicting much more IGM transmission than expected from the simulated damping wings. We showed that this is a result of the residual neutral hydrogen inside the ionized bubbles in our simulations of inhomogeneous reionization, which is enough to saturate the IGM absorption.
We further compared the simulated damping wings against recent observations of damping wings in high-redshift galaxies seen with JWST. We found that the simulated damping wings were unable to reproduce the significant detections of transmitted flux blueward of the galaxy Ly\(\alpha\) emission, which have led to claims of 100 cMpc-sized ionized bubbles at \(z\sim 7-7.5\). We suggest that as the residual neutral gas in the IGM at \(z\sim 7\) is expected to saturate the IGM absorption, the flux that has been observed blueward of Ly\(\alpha\) may instead be a result of stronger intrinsic Ly\(\alpha\) emission, which is spread into nearby pixels by the instrument profile of the NIRSpec PRISM mode. We also investigated claims of observed proximate DLAs in high-redshift galaxies, and confirmed that the strong damping wings observed in these galaxies cannot be reproduced by the IGM damping wings generated from our simulations.
Of course, we are only witnessing the first attempts at constraining the first half of reionization with galaxy damping wings. The first results are already incredibly impressive, and show huge promise for revealing the timing of reionization, even before the presence of this neutral gas is directly detected through its 21cm transition in upcoming experiments. However, this work has demonstrated that the exquisite quality of the new JWST data demands comparison with models that include all of the relevant physics. Future progress in this area will come from advances on both the observational and theoretical sides in understanding the interactions between galaxies and the intergalactic medium in the reionization epoch.
## Acknowledgements
The simulations used in this work were performed using the Joliot Curie supercomputer at the Tres Grand Centre de Calcul (TGCC) and the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). We acknowledge the Partnership for Advanced Computing in Europe (PRACE) for awarding us time on Joliot Curie in the 16th call. The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P002307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. This work also used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility. The equipment was funded by BEIS capital funding via STFC capital grants ST/P002293/1 and ST/R002327/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. JSB is supported by STFC consolidated grant ST/T000171/1. FC acknowledges support from a UKRI Frontier Research Guarantee Grant (PI Cullen; grant reference EP/X021025/1). MGH is supported by STFC consolidated grant ST/S000623/1. GK is partly supported by the Department of Atomic Energy (Government of India) research project with Project Identification Number
RTI 4002, and by the Max Planck Society through a Max Planck Partner Group. Support by ERC Advanced Grant 320596 "The Emergence of Structure During the Epoch of Reionization" is gratefully acknowledged. We thank Kasper Heintz for answering a question about the spectrum of CEERS-43833. We thank Volker Springel for making p-gadget-3 available. We also thank Dominique Aubert for sharing the atom code. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
## Data Availability
All data and analysis code used in this work are available from the first author on reasonable request.